[Day 4] NotebookLM: The All-in-One Learning Tool That Replaced Other Apps
I uploaded one PDF chapter on biomechanics and five minutes later had a polished infographic, a 12-question quiz, 51 flashcards, and a podcast episode—all generated by AI, all grounded exclusively in my source material with zero hallucinations. No Canva, no Anki, no quiz app, no piecing together outputs from different tools. Today’s experiment explores NotebookLM, Google’s quietly powerful (and still free) learning tool that handles production while you handle curation, and what it reveals about the shifting role of humans in AI-augmented learning.
What if a single AI tool could replace your flashcard app, your quiz generator, your infographic designer, your podcast player, and your research assistant—all while drawing exclusively from sources you trust? Today I discovered that NotebookLM does exactly that, and it’s transformed how I approach learning complex technical material.
Today’s Experiment
I’m currently studying for the NSCA CSCS (Certified Strength and Conditioning Specialist) certification—a credential that requires deep understanding of exercise physiology, biomechanics, and program design. As someone without a formal background in sports science, I needed a way to efficiently absorb and retain dense technical content. My experiment: Could NotebookLM serve as my complete learning ecosystem for mastering biomechanics fundamentals?
The Process
I started by uploading a single PDF—Chapter 2 from my CSCS study materials, covering biomechanics. In NotebookLM, you create a ‘notebook’ from your sources (PDFs, documents, audio clips, even URLs), and the AI grounds all its responses exclusively in that material. No hallucinations about facts that aren’t in your sources.
NotebookLM’s Sources panel with my biomechanics chapter uploaded
The Chat interface provides an intelligent overview of your sources
From there, I explored the ‘Studio’ features—and this is where the magic happened. With a few clicks, I generated:
An infographic summarizing key biomechanics concepts with visual diagrams
A 12-question multiple-choice quiz testing my understanding
51 flashcards covering anatomical terms, physics formulas, and exercise classifications
An AI-generated podcast where two hosts discuss the material in an engaging, conversational format
Outputs
The Infographic
The quality of the generated infographic genuinely surprised me. NotebookLM (powered by Google’s Gemini models) created a visually coherent overview covering how the body moves through the different planes of motion and the physics of force, work, and power, and even included a practical comparison of two athletes demonstrating the difference between strength and explosiveness.
AI-generated infographic on biomechanics fundamentals. This replaces the need for other tools like Canva.
The Quiz
The quiz feature generated questions that genuinely test comprehension, not just recall. Here’s an example: calculating total work performed across 8 squat reps with a 150kg barbell. These aren’t trivial questions—they require applying formulas and thinking through real scenarios.
A sample quiz question testing application of the Work formula
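The quiz’s work calculation comes down to the basic formula W = F × d, applied per rep and summed. Here is a minimal sketch of that arithmetic; the 0.5 m bar displacement per rep and g = 9.81 m/s² are my illustrative assumptions, not values from the chapter or the quiz:

```python
# Worked example: total work across multiple squat reps.
# ASSUMED values (not from the source): 0.5 m vertical bar travel
# per rep and g = 9.81 m/s^2.

G = 9.81               # gravitational acceleration, m/s^2
mass_kg = 150          # barbell mass from the quiz scenario
displacement_m = 0.5   # assumed vertical displacement per rep
reps = 8

force_n = mass_kg * G                       # F = m * g
work_per_rep_j = force_n * displacement_m   # W = F * d
total_work_j = work_per_rep_j * reps        # sum over all reps

print(f"Total work: {total_work_j:.0f} J")
```

With those assumptions the total comes out to roughly 5,886 J; the point of the quiz question is exactly this kind of substitute-and-multiply reasoning rather than rote recall.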
The Flashcards
51 flashcards from a single chapter. The flashcard format is ideal for drilling anatomical terminology—words like ‘proximal’ and ‘distal’ that have specific meanings I need to internalize before the exam.
Flashcard question: anatomical terminology
Flashcard answer with clear definition
What I Learned Today
Source grounding eliminates hallucination anxiety. NotebookLM only draws from materials you provide. When I’m studying for a certification exam, I need to know the AI is referencing the same textbook my exam is based on—not pulling from random internet sources that might contradict the official curriculum.
Multi-modal learning is now automated. Previously, if I wanted to study a topic visually (infographic), through testing (quiz), via memorization (flashcards), and through audio (podcast), I’d need Canva, Anki, a quiz app, and probably a podcast app. Now I upload one source and generate all four formats in minutes.
The human role shifts to curation. My value-add isn’t in creating the flashcards or designing the infographic—it’s in selecting the right source material, evaluating whether the generated content accurately represents the concepts, and identifying gaps. I’m the curator and quality controller; the AI handles production.
Free tools won’t stay free forever. The quality of these outputs is remarkable for a free tool. I’m using this aggressively now because I suspect a premium tier is coming. If you’re considering learning a new domain, now is the time to experiment.
The Bigger Picture
This experiment reinforces a theme I keep returning to: AI doesn’t replace human judgment—it amplifies it. Without my decision to pursue CSCS certification, my selection of reputable study materials, and my assessment that the generated content accurately reflects the source, these AI outputs would be meaningless. The human provides the intention, context, and quality control. The AI provides scale and format flexibility.