How to Use NotebookLM Better than 99% of People

By Parker Prompts

Summary

Topics Covered

  • Deep Research Automates Sourcing
  • Validate Sources Pre-Query
  • Configure Settings for Precision
  • Filter Sources Surgically
  • Mix Formats Comprehensively

Full Transcript

Right now, the average user treats Notebook LM like any basic chatbot. They upload a file or two, ask a question, and maybe click around hoping to find something useful. But that is a massive underutilization of the tool, because Notebook LM isn't just a chatbot. It is a complete research intelligence system. We are talking about autonomous sourcing that finds data for you, validation methods that ensure your research is reliable, and even generative workflows that turn that

data into slide decks and infographics instantly. So, in this video, I'm going to walk you through the complete advanced workflow to get you using this tool better than 99% of people. And by the end of this, you'll know exactly how to use Notebook LM the way it is actually meant to be used. Let's get started. First, I'll head over to the Notebook LM website. And on the homepage, you'll see your notebook list if you've created any before or an empty state if this is your first time. So, we

can create a new notebook here. But before we do, here is a critical detail that often gets overlooked: notebooks should be topic-specific. Don't create a notebook called "research" or "general notes." Create focused notebooks like "competitive analysis Q1 2025" or "AI video generation research." That might seem insignificant, but it actually matters, because Notebook LM performs better when sources are related and focused on a single topic or project. I'll go ahead and create a new

notebook here by pressing this plus icon. And this immediately takes you to the source upload screen. This is where the quality of your output gets decided. The common mistake here is simply uploading one or two files without thinking strategically about what sources you actually need. Instead, take a step back. Think about the complete information landscape for your topic. What formats does your information exist in? PDFs, YouTube videos, websites, Google Docs? The power of Notebook LM

comes from combining multiple formats and sources, creating a web of information rather than just a stack of isolated documents. And later in this video, I will show you exactly how to professionally execute that multi-format strategy. So, make sure you stick around for that. For this example, I'm going to create a research notebook on AI alignment. Let's hit escape, and I'll go right up here and title this AI alignment research. Now, here's a feature that was added quite recently,

which is the deep research source discovery. Previously, there was only fast research, which was good but limited. But now, you can access a much more powerful agent by clicking this drop down arrow and selecting deep research. So, I'm going to select that and write AI alignment and safety challenges. Hit submit. And here's what happens. Notebook LM launches an agentic AI tool that autonomously researches your topic. It doesn't just do keyword matching like a basic search. It

actually analyzes the topic, finds sources, evaluates them, adapts its search strategy to fill gaps, and generates a comprehensive research report. In practice, that means Deep Research can discover around 50 sources related to your topic. It generates a detailed research report synthesizing those sources. Then it selects the most relevant sources and imports them directly into your notebook. What you get is both a curated research report and high-quality sources already loaded

and ready to work with. And that matters a lot, because most people spend hours manually searching for sources, evaluating quality, and uploading them one by one. Deep Research does that work in minutes and often finds sources you wouldn't have discovered manually. All right, it's finished. I now have a curated list of citations imported automatically, plus a full research report that's also added as a source. Scrolling down, you'll notice some sources might fail to import if they're

behind paywalls. There's a remove all failed sources button that cleans those up in one click instead of deleting them individually, which I'm going to do right now. Now I have a strong foundation of sources to work with. Before we start asking questions or generating content, here's a step that gets skipped nine times out of 10, which is source validation. Notebook LM is extremely good at reducing hallucinations because it grounds everything in your sources. But that

only works if your sources are reliable and current. If your sources are outdated, biased toward one perspective, or mixing primary research with opinion pieces, Notebook LM will give you answers based on flawed information without distinguishing between them. So here's the validation framework I use for every single notebook. Go to the chat interface in the center of the screen. Before asking any topic questions, I run through these checks. First, I ask, create a table showing

each source with its publication date, author credentials, and whether it's a primary source, secondary analysis, or opinion piece. This gives me a clear view of what I'm actually working with. If I see that most of my sources are from 2020 or earlier on a fast-moving topic like AI, I know I need newer material. If everything is opinion pieces with no primary research, that's a problem. Let me ask that now. Notebook LM is generating a table analyzing all of the sources. I can immediately see

the spread, when these were published, who wrote them, and what type of source each one is. In this case, I'm seeing a good mix of recent academic papers, industry reports, and technical documentation. Most of them are pretty recent, which is what I want for current AI alignment research. Second, I ask which of these sources are most frequently cited or referenced by other sources in this notebook. This shows me which sources are foundational to the topic versus which ones are peripheral.
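If you want a mental model of what that second check is doing, here is a minimal sketch of the idea in Python. This is purely illustrative: the `sources` list, its fields, and the `rank_by_citations` helper are invented example data, and Notebook LM performs this kind of analysis for you in the chat, so you never write this yourself.

```python
# A toy sketch of the "most frequently cited" check described above.
# The `sources` list is invented example data, not NotebookLM's internal
# format; the tool runs this kind of analysis for you.
from collections import Counter

def rank_by_citations(sources):
    """Rank sources by how many OTHER sources in the notebook cite them."""
    counts = Counter()
    for src in sources:
        for cited in src["cites"]:
            if cited != src["title"]:  # ignore self-references
                counts[cited] += 1
    # Foundational sources float to the top; peripheral ones score zero.
    return counts.most_common()

# Hypothetical notebook: two surveys both cite one foundational paper.
sources = [
    {"title": "Concrete Problems", "cites": []},
    {"title": "Survey A", "cites": ["Concrete Problems"]},
    {"title": "Survey B", "cites": ["Concrete Problems", "Survey A"]},
]
print(rank_by_citations(sources))
# [('Concrete Problems', 2), ('Survey A', 1)]
```

A source that nothing else references isn't necessarily bad, but it's the first candidate to uncheck when you narrow a query.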

The highly cited sources are usually the ones I should prioritize when I'm filtering sources later. And third, I ask, summarize the primary perspective or bias of the top five most substantial sources. This tells me whether I'm looking at this topic from multiple angles or whether all my sources share the same viewpoint. For controversial or evolving topics, you want diverse perspectives. For technical documentation, perspective matters less. These three checks take about 5 minutes

total, but they give me a complete picture of my source quality before I build my entire workflow on top of it. With our sources validated, the next critical step is configuration. This is something the vast majority of users ignore, but it dramatically improves response quality. In the top right corner, click right here. This opens settings that control how Notebook LM responds to you. First, set your conversational goal. You have three options. Default for general research,

learning guide for educational content, or custom for specific use cases. For this research notebook, I'm choosing custom, and I'll define the role as "research analyst focused on AI safety and alignment debates." This tells Notebook LM to frame all responses from that perspective instead of giving generic answers. Next, choose response length. You have default, longer, or shorter. For research work, I typically choose longer because I want detailed analysis, not brief summaries. Click

save. These settings now apply to every chat in this notebook. You set them once and forget about them, but they shape every interaction from this point forward. The majority of people use notebooks in default mode and wonder why responses feel generic. Configured settings give you targeted, role-specific answers optimized for your exact use case. Now let's look at how to work with sources strategically instead of just accepting all of the sources for every

query. On the left side you'll see your source list with these checkboxes next to each file. And a very common mistake that people make is that they leave everything checked all the time. When you ask a question with all of the sources selected, Notebook LM tries to synthesize an answer from every single document. This dilutes your results. It forces the AI to generalize, giving you a vague surface level summary instead of a deep answer. So let's say I want to

focus specifically on existential risk. If I leave the mechanistic interpretability papers checked, I am confusing the model by forcing it to look at conflicting topics. So I'm going to uncheck everything. Then I will go through and select only the three technical papers that contain the actual code logic. Now, effectively, the other documents do not exist to the AI. It can only see what is checked. When I ask, "How do these agents handle memory management?" Notebook LM creates the

answer exclusively from those three technical papers. The answer comes out sharper, more technical, and completely free of irrelevant information. This gives you surgical control over your research. You can keep one massive master notebook with 50 sources, but by toggling these checkboxes, you can instantly turn it into a focused sub-notebook for any specific query. All right, now let's generate some content from our sources. We'll start with an audio overview, which is one of Notebook LM's

signature features. On the right side, you'll see the studio panel. Click on audio overview. Now, don't just click generate yet. Most people blindly hit generate and accept whatever random conversation the AI spits out. If you want a result you can actually use for work, you need to take control of the conversation first. The instruction input box below is where you tell Notebook LM exactly what to focus on, what tone to use, and how long the overview should be. For this research

notebook, I don't need a balanced overview of all of the sources covering every aspect of AI alignment. I need the podcast to focus specifically on the key debates and disagreements we identified earlier. So, I'll write, "Focus exclusively on the main disagreements between AI safety researchers regarding alignment approaches. Explain each perspective clearly and keep the discussion under 15 minutes. Use accessible language, avoiding unnecessary jargon." Above the

instruction box, you have two critical settings, which are format and length. For format, you aren't limited to the standard deep dive option. You can switch to brief if you need a quick summary, or select critique, which essentially turns the AI into a strict editor that reviews your material for gaps and weaknesses. But since our prompt is specifically asking to uncover disagreements, I'm actually going to switch this to debate. This instructs the host to actively illuminate

conflicting perspectives rather than just having a friendly chat. For length, you can choose short or default. I'll keep this on default, which usually gives us a solid 10-minute discussion, perfect for digging into the details without broadening the topic too much. Now, click generate. Notebook LM will take a few minutes to create a custom podcast with two AI hosts discussing your sources based on those specific instructions. The difference between default audio and customized audio is

massive. The default version covers everything equally. The customized version becomes a targeted research brief focused on exactly what you need to understand. And here's a pro tip. Do not hesitate to regenerate. Think of the first pass as a rough draft. If it came out too technical, regenerate it with instructions to simplify the language. If it wasted time on background history, tell it to cut the intro and focus only on current debates. Most people generate

once and just accept whatever they get. But the top users iterate on these instructions until the output matches their specific research goals perfectly. While that audio overview is generating, let's create visual content using another brand-new feature on Notebook LM: infographic generation, powered by Nano Banana Pro, Google's advanced image generation model. To access it, click infographic in the studio panel. You'll see three main settings to configure here. First

is orientation, where you can choose landscape, portrait, or square. Next is level of detail, which ranges from concise to detailed. And finally, you have the custom instruction field. For most use cases, I recommend standard detail level and landscape orientation. The detailed option can introduce minor text errors with complex topics, and concise sometimes oversimplifies. In the instruction field, I'll write, "Create a professional infographic mapping the different AI alignment approaches and

the key researchers associated with each approach. Use clean design with a blue and gray color scheme." Then hit generate. This will take a couple of minutes, and what comes back is a fully designed infographic pulling information directly from your sources, including charts, diagrams, text hierarchies, and visual layouts, everything you'd normally need a designer to create. The quality is legitimately publication-ready. Minor spelling errors can appear in detailed mode with complex topics,

but standard mode is consistently accurate. All right, here's the result. This is a clean, well-designed visual representation of AI alignment approaches with key researchers mapped to different strategies. The design is professional. The information is accurate and cited from my sources, and this would have taken hours to create manually. You can also regenerate this with different instructions if you want to adjust the style or focus. Next, let's create a presentation deck, which

is the other new Nano Banana Pro feature. In the studio panel, click slide deck. You'll see two deck types: detailed deck, which creates comprehensive slides with full text suitable for sending as a standalone document, or presenter slides, which creates clean visual slides with minimal text designed to support you while speaking. For most presentations, presenter slides is better because it keeps slides visual and text minimal. For length, you have two main choices.

Short for a 10-slide summary or default for a full 15-to-20-slide deck. I want just the key points, so I'm going to choose short. In the instruction field, I'll write, "Create a presentation explaining the three main schools of thought in AI alignment for a technical audience. Focus on key differences and trade-offs." Click generate. This will take a few minutes to create a fully designed slide deck. While it's generating, let me explain why this is powerful. Most people spend hours

building presentations from research. They read through sources, extract key points, design slides, find or create visuals, and structure the narrative. Notebook LM does all of that automatically. It pulls information from your sources, structures it logically, designs professional slides, and creates supporting visuals. And just like audio overviews, you can regenerate with different instructions if the first version isn't quite right. All right, the deck is ready. Let's take a look.

This is a clean, professionally designed presentation. Each slide has a clear visual hierarchy, supporting graphics, and text pulled directly from my sources with proper structure. Slide one introduces the topic. Slide two breaks down the three main approaches. Each subsequent slide explores one approach in detail with visuals that illustrate the key concepts. This is presentation-ready output, generated in minutes from your sources, that would normally take several hours to build manually. The audio

overview we generated earlier should be ready now. So, let's open it. You'll see a standard podcast player with two AI hosts discussing AI alignment based on our custom instructions. Let me play a bit of it. >> Welcome to the debate. We're diving into what I think is probably the most consequential question of our time. How do we make sure that these incredibly powerful AI systems we're building are, you know, fundamentally aligned with human values? >> Audio is great for understanding the big

picture, but for precision work, we need the chat interface. In the center panel, you can ask any question about your sources. The key is asking precise questions instead of vague ones. Instead of asking, "What does this say about AI alignment?" ask, "Compare the three main technical approaches to AI alignment and explain the key trade-off each approach makes." That specific question gets you a structured, useful answer. You'll also notice little numbers scattered through the text.

Those are citations. When you click one, it highlights the exact passage in the original document, letting you verify the accuracy of the text instantly. But if you need something more engaging than just audio, there is the video overview. This just got a major upgrade with custom visual styles. In the studio panel, click video overview. This creates a narrated explainer video with AI generated visuals based on your sources. It's similar to audio overview but with slideshow style visuals that

illustrate the concepts as they're explained. You'll see two content options. Explainer, which creates a comprehensive overview connecting concepts from your sources, or brief, which gives you a quick bite-sized summary of core ideas. For most use cases, explainer is better because it provides depth and proper context. Below is an option to choose custom visual styles powered by Nano Banana Pro. You can choose auto select to let notebook LM pick a style from their preset

library. Or you can choose custom and describe your own visual aesthetic. Let me try custom. I'll write clean, modern design with blue and white color scheme, minimalist graphics, and professional typography. You can also guide what the AI host should focus on in the instruction field, similar to audio overviews. Click generate. This takes a few minutes to create the full video with narration, visuals, and transitions. All right, it's finished processing. Let's play a quick clip to

see how it handled our custom design request. You know, this isn't just a technical puzzle. It's a whole series of really deep debates about the very nature of these artificial minds. See, to make an AI safe, you first have to understand it. That seems obvious, right? But that opens up this truly fascinating question. We can see what an AI does, but what's actually happening on the inside? And here's the core of the problem. Our most powerful AI models are basically black boxes.

>> And look at that. It didn't just grab random stock footage. It actually followed my prompt for a clean blue and white color scheme with minimalist graphics. The narration is synced perfectly with the visuals and the structure follows the logical flow of our source documents. This is perfect for creating educational content, presentation materials, or sharable explanations of complex research. All right, let's wrap up the studio panel by looking at the remaining tools, which

are reports, flashcards, quiz, and mind maps. These are all found in the studio panel on the right, and each serves a specific organizational purpose. Let's start with reports. Click reports, and you'll see several options here. The first one we're going to look at is the briefing doc. This creates a several-page executive summary of your entire knowledge base, featuring key insights and quotes from your sources. I'll click to generate one now. And here's the result. This is a clean,

professionally structured document summarizing the key findings from all of the sources. I can export this to Google Docs, edit it if needed, and use it as a foundation for reports or presentations. But here's the feature a lot of people miss. You aren't limited to these defaults. You can click create your own to specify the exact structure, style, and tone you want. Let's try that. I'll write, "Create a technical white paper analyzing the three main approaches to

AI alignment, written for researchers. Include a methodology comparison and future research directions." Hit generate, and look at this result. Unlike the generic briefing doc, this is highly technical. It actually followed my structure. It gave me the specific methodology comparison and the future research directions section I asked for. This essentially did 90% of the drafting work in seconds. Next, you have the flashcards and quiz sections. Flashcards generate quick Q&A pairs for memorization.
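To make the "pull directly from your sources" idea concrete, here is a toy sketch of a source-grounded flashcard, where every answer must be traceable to an uploaded snippet. The `make_card` helper and the example card are invented for illustration; this is not how Notebook LM is implemented.

```python
# A toy sketch of a source-grounded flashcard. Each card keeps the snippet
# it was derived from, so every answer is traceable to uploaded material.
# The helper and the example card are invented for illustration.

def make_card(question, answer, source_snippet):
    # Refuse cards whose answer is not actually supported by the snippet.
    if answer.lower() not in source_snippet.lower():
        raise ValueError("answer is not supported by the cited snippet")
    return {"q": question, "a": answer, "evidence": source_snippet}

card = make_card(
    "What does RLHF stand for?",
    "reinforcement learning from human feedback",
    "The model is fine-tuned with reinforcement learning from human feedback (RLHF).",
)
print(card["a"])  # reinforcement learning from human feedback
```

The point of the `evidence` field is the same as Notebook LM's citations: you test yourself only on material you actually uploaded, and you can always trace an answer back to its source.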

The quiz tool, meanwhile, builds a full interactive test. The value here is that they pull directly from your sources, so you aren't testing yourself on general knowledge. You are testing yourself on the specific data you just uploaded. And finally, there is the mind map. If you click this, Notebook LM generates an interactive diagram showing how the key concepts in your sources actually connect to each other. You can click any node to expand it into subtopics or click it again to trigger a detailed

chat response about that specific idea. This is massive for visual learners because it helps you spot connections between files that you would definitely miss just by reading them linearly. And that is the key takeaway here. It is a mistake to limit yourself to just the chat and audio overview. These organizational tools are what actually transform raw information into a structured knowledge system. Now, to bring this full circle, I want to deliver on that promise I made at the

start of the video. We need to talk about source strategy, specifically how to mix different formats to create a truly comprehensive research system. The vast majority of users upload one type of source. Maybe they add five PDFs or maybe they add three YouTube videos, but they don't think strategically about combining formats. Here's what you should do. Notebook LM accepts PDFs, websites, YouTube videos, audio files, Google Docs, and plain text. The power comes from mixing these formats to cover

your topic from multiple angles. For example, in this notebook, I can layer YouTube lectures for accessible explanations on top of company blog posts for industry perspective and even add podcast transcripts for conversational insights. This creates a 360 degree view of the topic that you just can't get from a single file type. Let me add a YouTube video to demonstrate. Click add source. Click YouTube and paste a video URL. I'm adding a lecture on AI alignment from a

recent conference. Notebook LM pulls the transcript and adds it as a source. Now I can ask questions that synthesize across formats: compare the technical approaches discussed in the research papers with the practical concerns raised in the YouTube lecture. Notebook LM will analyze both the written research and the video transcript and create a synthesis you couldn't get by analyzing each format separately. This multi-format approach is especially powerful because different formats offer

different value. Academic papers give you rigor. Videos give you accessible explanations. Blog posts give you industry context. Podcasts give you conversational insights. The typical user stays within one format while advanced users strategically mix every format to build comprehensive knowledge bases. So, at this point, you've seen the complete workflow for using Notebook LM, the way research professionals actually use it. We started with deep research to automatically build a

comprehensive source base. We validated those sources to ensure quality and identified gaps. We configured notebook settings for targeted responses. We used source filtering for focused analysis. We generated custom audio overviews, professional infographics, and presentation ready slide decks. We used mind maps for active learning. And we built a living research system using multi-format source mixing. The difference between someone who uses Notebook LM as an amateur and someone

who uses it at a professional level isn't just knowing these features exist. It's following the complete workflow from source discovery through validation, configuration, content generation, and organization. If you found this video valuable, you can click right here to check out another video I posted. It's a master class on using Gemini 3.0 Pro at an elite level. You'll see that these strategies, like source validation and structured prompting, don't just work in notebooks.

They are the secret to getting the most out of Google's flagship AI as well. Thank you so much for watching and I'll see you in the next one.
