My 10-Year-Old Vibe Codes. She Also Does Math by Hand. Why That's the Only Strategy That Works.
By AI News & Strategy Daily | Nate B Jones
Summary
Topics Covered
- AGI Here, Education Stuck in 20th Century
- AI Tutors Double Learning Speed
- Calculators Freed Math for Concepts
- Specification Quality Powers AI
- Metacognition Defines AI Age Competence
Full Transcript
According to one of the most popular scientific magazines in the world, artificial general intelligence is here now, today. But none of us have figured out what we're going to teach our kids.
This is a real issue. We can use a tool like Claude Code to build an entire medical school curriculum in just two weeks. And I'm not saying that because it's a hypothetical. Someone shared that directly on X. 450 lectures, 16,000 figures, roughly 100 million tokens of automated work with multiple rounds of error checking, and 99% of it was flawless. Work that normally takes hundreds of faculty years to produce was done in just a couple of weeks by one person. Meanwhile, two billion kids around the world are enrolled in schools that have no idea any of this is possible, that are running on an educational philosophy designed at best for a 20th century industrial economy.
That economy is not going to exist when these kids get to adulthood. And this is personal to me. I have kids, too.
Nature, the journal, not the concept, published a peer-reviewed argument that said that artificial general intelligence has indeed arrived. And I quote, "The machines Turing envisioned 75 years ago have arrived." Globally, 86% of students report using AI in their learning, according to the Digital Education Council. In the UK, the picture is even more dramatic: usage surged from 66% in 2024 to 92% just the next year in 2025, per HEPI's annual student survey. The point is growth, right? AI tutors outperform human tutors in controlled studies. And nobody, not schools, not governments, not parents, has figured out what to teach our kids and what comes next in a world that's
changing this fast. I have three kids. I work in AI every day. I've spent weeks figuring out how to talk about the latest in AI. From autonomous agents that negotiate car purchases to dark factories where code writes itself to enterprise adoption curves that frankly look like hockey sticks drawn by a drunk person who just wants to draw the line straight up. That's the world we're living in. And yet in the evening I go home to the same question every parent is facing, whether we realize it yet or not. What does education mean when machines can do most of what we spent the last century teaching humans to do?
My 10-year-old sat at the kitchen table last month working through long division by hand with a pencil because I asked her to do that. I'm also teaching her to vibe code with Claude. These are not contradictory positions. They're the only positions that make sense together, and the reasoning behind them is what I want to talk about in this video. Before
we get into that, let's talk about the world that our kids are going to inherit. A Harvard study published last year found that students using AI tutors learned more than twice as much material in less time than students in traditional settings. A collaboration
between Eedi and Google DeepMind showed AI tutoring systems outperforming human tutors on problem-solving tasks, 66% versus about 60%. When you combine human teachers with AI tutoring, the knowledge transfer doubles. So Benjamin Bloom
transfer doubles. So Benjamin Bloom established decades ago that one-on-one tutoring produces a significant improvement about two sigas in standard deviation. That's a massive effect. The
deviation. That's a massive effect. The
constraint was never whether personalized tutoring works. We figured
that out decades ago. The constraint was always that you can't give every child a personal tutor. AI is removing that
personal tutor. AI is removing that constraint today. And that is part of
constraint today. And that is part of what our kids are inheriting. Khan
Academy's AI tutor, Khanmigo, went from 68,000 users to 1.4 million in just a year. 266 school districts in the US are being served by Khan Academy today. Sal Khan calls it probably the biggest positive transformation that education has ever seen. An 8-year-old can build video games with Claude today by typing instructions like make the bad guys
tigers. Make them move slower. A mom
with no coding background can build a personalized AI tutor for her dyslexic son using vibe coding. Just natural
language and iteration, nothing else.
Zach Yadegari, 18 years old, is the CEO of Cal AI, and he's pulling down $1.4 million a month right now with 8.3 million app downloads, because he decided to be entrepreneurial and use AI to build an app that serves customers. A 13-year-old from Toronto met Sam Altman at a tech conference because he'd built something worth showing at 13. This kind of environment is the water every kid on Earth is swimming in. Pretending it's not there doesn't make them better swimmers. And handing them a jet ski before they've learned to swim doesn't make sense either. This is like the calculator moment, except it's for everything. Back in the 1970s, when
electronic calculators became more affordable, the education establishment panicked. Calculators in classrooms were considered cheating. Full stop. They would destroy children's ability to do arithmetic. They would produce a generation that could not think mathematically. So, schools banned them. Parents protested them. And while we may not remember it now, the debate consumed education policy for over a decade. We
know how that ended. Calculators did not destroy mathematical thinking. They
changed what mathematical thinking meant. Once students didn't need to spend 20 minutes on long division, they could spend that time on the concepts long division was supposed to serve.
Proportional reasoning, algebraic thinking, problem decomposition. So the
tool ended up freeing the learner from the mechanical to engage with the meaningful. And schools typically have
decided they want a foundation in mechanics and then to move kids onto calculators over time. Here's the part of the calculator story that gets left out. The transition worked because students did still learn those mechanics first. They understood what the calculator was doing. They could estimate whether an answer was reasonable or not. They could catch errors. They had the foundation, and the tool extended it. So the parents who said calculators would make our kids stupid, and there were lots of parents in the 70s who thought that, they were wrong. The parents who said just give them calculators and skip the math, they would have been wrong, too, if anyone had been extreme enough to suggest that. The right answer turned out to be both.
Build the foundation and then give them the tool. We're in that calculator moment again, except it's not just arithmetic. It's reading, it's writing, it's research, it's analysis, it's coding, it's creative work, it's communication, it's problem solving.
Every single cognitive task that AI can now perform competently. The scope is fundamentally different from 1975, but I think the principle is not. So, let's go
back to basics. Why am I insisting on my kid doing long division by hand in a world of AI? My kids read real books, physical books, not screens. We do math by hand before we do math with tools. We
write with pencils on paper. The
reasoning connects directly to everything I've been writing about all year. The single most important finding from watching autonomous agents operate in the real world is that the quality of the output is determined by the quality of the human specification. I've talked about that a lot just in the last couple of weeks, with the OpenClaw moment with Opus 4.6. Human specification matters
more and more. I wrote about a Moltbot agent that negotiated $4,200 off a car purchase while its owner was in a meeting. That same week, a different agent sent 500 unsolicited messages to friends, family, and the developer's wife. Same technology, same architecture. And what I called out is that the difference is the human's ability to specify. If you have clear objectives, if you have defined constraints, if you have a bounded communication channel, then you are in business. If you have broad access and vague boundaries, if you can't specify, you're in trouble.
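To make that distinction concrete, here's a minimal sketch, in Python, of what "clear objectives, defined constraints, bounded communication channel" could look like as an agent task spec. Every field name and value is invented for illustration; no real agent framework's API is implied.

```python
# A hypothetical agent task spec. The point is not the exact fields but that
# the objective, the hard limits, and the communication channel are explicit.
task_spec = {
    "objective": "Negotiate the listed price of the car down by at least $3,000",
    "constraints": {
        "max_offer": 27_000,    # hard price ceiling the agent may never exceed
        "deadline_days": 14,    # stop negotiating after two weeks
    },
    # The bounded channel: the only address the agent is ever allowed to message.
    "allowed_contacts": ["sales@dealership.example"],
}

def may_send(spec, recipient):
    # An agent that checks every outbound message against its spec cannot
    # spam friends and family: the channel is bounded by construction.
    return recipient in spec["allowed_contacts"]
```

The second story, the 500 unsolicited messages, is the inverse of this sketch: broad access with no contact allowlist and a vague objective.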
That's a human skill, it's practiced manually, and it's something we can teach our kids. That insight scales to education really, really easily. You don't get to write a good spec for something you don't understand. You can't evaluate an AI's output in a domain where you have no knowledge. You cannot exercise good judgment, things like taste, like discernment, like critical thinking, about work you've never engaged with deeply enough to internalize. So, when my kid asks Claude
to help with a math problem, I want her to know enough to recognize when Claude is wrong. Last month, Claude confidently walked her through a word problem and arrived at an answer that did not pass a sanity check for me. When my kid is older and she can use Claude for math, I want her to know enough to recognize when Claude is wrong. When she uses Claude for coding, I want her to know enough to recognize good separation of concerns as an architectural principle.
These are human skills first, and we leverage them with AI tools later. The Harvard tutoring study, the one that showed AI tutors can double learning outcomes, found that the best results come from human-AI collaboration, not from replacing the human with AI. The human needs to bring something to that collaboration, and that something is the foundation I'm talking about here.
Reading physical books builds mental models that no AI can build for you passively. Not because AI can't explain what Moby Dick means, but because the cognitive work of reading, of struggling with the text, of rereading, of integrating the ideas, is itself the learning a human brain needs. The struggle is the point. Math by hand builds a sense of numbers you don't get any other way. An intuitive feel for magnitude, for proportion, for relationships that no shortcut can give you, not even talking with ChatGPT about statistical distributions. Writing
by hand builds the connection between thinking and expression that typing and dictation tend to compress in ways that affect our ability to remember and our ability to comprehend. None of what I'm saying here means AI is bad for
learning. Quite the opposite. The evidence suggests it's great. The evidence shows that if AI can help extend one-on-one tutoring principles, we should do it. But it also means the foundation comes first, and the foundation is built through effort with our human brains, through learning discipline as kids, not through efficiency. Look, I am not in the
protect-the-children-from-AI camp. I have watched my kids vibe code websites, and I love it. What I see when kids use AI is not intellectual laziness. It's a different kind of intellectual work, and a genuinely valuable one that we should encourage. Last week my kid wanted enemies in a game that she's building.
So, she typed "add enemies." Claude added enemies. Enemies that spawn off screen, move in the wrong direction, can't be hit. "It doesn't work," she said. So, we talked about it, and I asked her what she really wanted the enemies in the video game to do. And she thought about it, and then she said, "Add three enemies that spawn from the right side of the screen, move them left at about a medium speed, and make them disappear when the player touches them." Suddenly, she got the behavior she was looking for. And that little conversation taught her more about spec quality than any lesson I could have scripted. When
a kid vibe codes, they're doing several things at once. They're specifying
requirements in natural language for something they are interested in doing.
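That skill, turning a vague wish into a precise spec, maps almost line for line onto code. Here's a hypothetical Python sketch of her second, precise spec; the names, screen size, and speeds are all invented for illustration:

```python
from dataclasses import dataclass

SCREEN_WIDTH = 800   # hypothetical screen width, in pixels

@dataclass
class Enemy:
    x: float            # horizontal position; the player stands near x = 0
    speed: float        # pixels moved left per tick ("about a medium speed")
    alive: bool = True

def spawn_enemies(count=3, speed=4.0):
    # Spec: "three enemies that spawn from the right side of the screen"
    return [Enemy(x=SCREEN_WIDTH, speed=speed) for _ in range(count)]

def tick(enemies, player_x=0.0, touch_radius=10.0):
    # Spec: "move them left ... make them disappear when the player touches them"
    for enemy in enemies:
        if enemy.alive:
            enemy.x -= enemy.speed
            if abs(enemy.x - player_x) < touch_radius:
                enemy.alive = False

enemies = spawn_enemies()
for _ in range(200):    # run the loop long enough for enemies to cross the screen
    tick(enemies)
```

The vague spec "add enemies" left count, spawn edge, direction, speed, and collision behavior for the tool to guess; the precise spec pins each one down as an explicit parameter.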
They're decomposing a complicated, vague desire into discrete tasks, and they're learning to iterate: test the result, see what doesn't match, refine the specification. They're not really debugging code. They're debugging their own intent. And these are skills that transfer. They map directly onto what professional software development and building products for customers increasingly look like, regardless of what job title you have. And they develop a capacity for precise thinking that transfers well beyond coding. Andrej
Karpathy, Tesla's former head of AI and one of the architects of the deep learning revolution, founded Eureka Labs specifically to build what he calls an AI-native school. His stated goal is to raise young people who are proficient in the use of AI but can also exist without it. That formulation is one that I keep coming back to. You need to be proficient and also independent, not one or the other. Karpathy also said something that I think every parent and teacher needs to hear right now. Quote, "You will never be able to detect the use of AI in homework." Full stop. He's
right. The arms race between AI writing detection and AI writing generation was over before it started. And the people who are selling AI detection to schools are wrecking outcomes for kids. They are judging kids based on a heuristic that is mathematically impossible to implement correctly. You cannot detect AI in homework. Full stop. The educational response cannot be better detection. And I see too many cases where students are being pushed out of school for something they did not do, because of a tool that administrators blindly believe is accurate just because of the messaging on the tin. You can't detect AI writing. Full stop. The response has to be a fundamental rethinking of what we're measuring and why, which is long overdue anyway. If a mother with no
programming background can build a personalized AI tutor for her son, which is a true, real example, if she can tailor the reading experience to that child's specific needs with no coding experience, she's not replacing education. She's extending education to a child the traditional system is failing. The tools can reach children that our current institutions cannot, or won't, or don't know how to reach. And that is a much more worthwhile use of AI for our education system than purchasing from vendors selling snake oil, promising they can detect AI homework. The
families pretending AI doesn't exist are making the same mistake as the schools that banned calculators in 1975. Technology will not go away. The children who don't learn to use it critically, skillfully, as a tool under their direction rather than as a crutch, will fall behind in ways that compound every single year. The skill connecting
foundation to AI fluency is metacognition. The ability to think about your own thinking, to know what you know, to know what you don't, and to make deliberate decisions about when to rely on yourself versus when to delegate to a tool. Researchers increasingly call this the defining competence of the AI age. Not what you know, not what the machine knows, but your capacity to move between the two: strategically allocating your cognitive effort, coordinating AI-assisted tasks, and evaluating results against your own understanding. In practice, it's the
difference between a kid who asks ChatGPT to write the essay and a kid who drafts the essay, uses AI to identify the weak arguments, strengthens them with her own thinking, and produces something neither she nor the AI would have created alone. The first kid completed an assignment. The second kid learned something and created something genuinely new. Same tool, different metacognitive skill. This maps directly
to the themes I've been writing about all month. Agency: the ability to direct AI rather than be directed by it. Taste: knowing what good looks like when the machine can produce infinite mediocrity at zero cost. Specification quality: articulating what you want precisely enough that a system can execute it well. These aren't actually technical skills. They're cognitive skills with technical application. And they develop the same way every other cognitive skill develops: through practice, struggle, feedback, and gradually increasing challenge. Singapore's AI education
challenge. Singapore's AI education framework captures this as a progression. Learn about AI, learn to
progression. Learn about AI, learn to use AI, learn with AI, learn beyond AI.
That last step, learn beyond AI, matters most, and I don't think anybody has figured out how to teach it systematically yet. is where the student
systematically yet. is where the student doesn't just use the tool, but transcends the tools limitations through their own judgment and creativity. I
don't think that step gets solved in a classroom very well. I think it gets solved at kitchen tables in conversations with our kids about what AI got right and what it got wrong and why. in the moments where we ask them to
why. in the moments where we ask them to try it themselves before asking the machine. There's a concept in psychology
called learned helplessness: a person repeatedly experiences situations where their own effort doesn't matter, where outcomes are determined by forces outside their control, and they
eventually stop trying. Not because
they're lazy, but because their brain has learned the effort doesn't matter.
The AI version of this plays out through what researchers call cognitive offloading. You delegate a mental task to a tool. The tool takes care of it. Over time, the neural pathways that would have handled the task don't really develop, or if they existed, they weaken. The offloading becomes a kind of dependence. The dependence becomes helplessness. And that's not necessarily a dramatic moment. It happens gradually. It's not sudden. It's a quiet erosion of capability that comes from never needing to exercise that skill.
This is not theoretical. Educators are
reporting it in real time. College
professors describe students arriving in the classroom who can no longer read a full chapter, who can no longer synthesize an argument from multiple sources or sit with a difficult text long enough to extract meaning from it. High school teachers report that writing quality has absolutely collapsed. Not just because students submit AI-generated work, although many do, but because even the students who aren't using AI have lost the habit of struggling through a draft. The muscle has atrophied before we really noticed it was weakening. A
growing number of faculty are redesigning their courses around in-class work and oral exams because take-home assignments have become functionally meaningless as a measure of capability. The phrase I keep hearing from educators is "they can't do it anymore." Not won't, can't. That is the evidence I'm responding to when I sit my kids down with pencils and paper. It's not about nostalgia. It's a direct response to what's happening to the first generation of students who had AI available and never built the foundation to function without it. I'm
not willing to let that happen at my kitchen table. And this foundational gap
extends into the emotional domain. Three-quarters of teenagers are now using AI companion chatbots for emotional support. Not as a supplement to human relationships, but in some cases as a primary source of emotional connection. The chatbot is of course always available. It's always
patient. It never judges. It never makes demands. It also isn't real. It cannot
teach conflict resolution because there's no genuine conflict. It can't
build relational resilience because it never pushes back when the stakes are real. It can't model empathy because it has no experience to draw upon. Multiple
tragedies have been linked to these kinds of parasocial relationships that teenagers are having more and more across the country and the world. That's
the extreme. The everyday version is subtler and it's much more pervasive.
The AI is so helpful, so frictionless, so immediately gratifying that reaching for it becomes the default before a child has even tried to think through the problem on their own. That's true for
math as much as for emotions. Every time they choose the easy path, the harder path gets a little bit more difficult. Not because AI is actively making our kids dumber, but because it makes it easy not to take the difficult route where the actual learning lives. And so it
stops feeling worth the effort. So the
trap is not really that AI will be too powerful or that it will take over. The trap is that it will be so seamlessly and perpetually helpful that your kids and mine never develop the tolerance for difficulty that real learning requires. They're going to end up producing impressive-looking work without understanding it deeply enough to defend it, to extend it, to know when it's wrong. I am convinced the answer is not going to be withholding these tools. The answer is going to be sequencing AI tooling directly and deliberately into the education system: foundation first, learn with your head first, struggle first, and build the muscle before you add the AI exoskeleton that extends your capabilities. And once that AI exoskeleton is on, you have to keep exercising without it so the muscles don't atrophy. And that's true for us
adults, too. So, what am I actually doing with my kids? I don't have a curriculum for this. Neither does
anybody else really. The world is moving faster than any educational framework can really track. Singapore is trying.
They're rolling out AI training for teachers at all levels next year.
Finland has national recommendations.
44% of homeschool parents are already using ChatGPT in their teaching.
Frankly, a higher rate than classroom educators. Everybody's improvising.
Here's how I approach it. As I've been sharing, the basics are non-negotiable.
You've got to read real books, not summaries, not audiobooks at 2x speed. Real reading for kids, with real cognitive effort. Math by hand until the concepts get internalized and not just performed, until you understand numeracy. Writing with a pencil, because the physical act of forming letters engages the brain differently than typing, and that difference matters for memory and comprehension. These are not romantic preferences for an old-timey wood-cabin lifestyle. They're investments in the cognitive infrastructure that makes everything else possible down the road. And then, yeah, I introduce AI
tools. I introduce them deliberately, not as a default, but as an extension. I talked about vibe coding together. We code games. We solve problems. We iterate on designs. I ask my kids to explain what they want before they ask the AI. I make them articulate the goal, the constraints, what good will look like, and then I ask them to critically evaluate what comes back. I ask them to be directors, not audience. My
10-year-old is going to need to get into agents. I know this because I've spent weeks writing about what agents can do, and the capability curve is accelerating, not flattening. Her world is going to be agents. But agents are the most spec-dependent technology we humans have ever built. The entire value proposition depends on the human's ability to define a task precisely, set appropriate constraints, and evaluate the output critically. Teaching a kid to use agents before she can think clearly about what she wants would be like handing her the keys to the car before she can read the map. So I think about it in terms of a readiness model.
So the progression looks like this.
Build cognitive foundations. Introduce AI tools with guidance. Practice directing AI with increasingly clear specs. And then graduate to agent-level autonomy as judgment starts to develop.
That's not really a timeline. It's more
of a readiness model. And I'll be honest, my 10-year-old right now is between steps two and three. She's building games with Claude and learning to articulate what she wants, but she's not at a point yet where she would be ready for agent-level autonomy. These
are real and valuable skills. We're not
going to rush through them, and we're in the middle of the progression, not the end. I watch for signs of cognitive
end. I watch for signs of cognitive offloading. When one of my kids asks what AI would say before trying to think through a problem, I redirect, not with a lecture, but with a question: What do
you think the answer might be? Go
research in the encyclopedia. Not
because I don't want them using AI, but because I want the AI to extend their thinking, not replace it. Seymour
Papert, the MIT researcher who pioneered computational thinking in education, wrote in 1968 that computer programming gives children a way to think about their own thinking. He called it
constructionism, the idea that people build knowledge most effectively when actively making things in the world. Not
consuming information, but constructing with it. His vision took 50 years to reach the mainstream. And by then, tech had outrun his framework, and we're not really talking about computing the way he talked about it at the time. But his core insights hold now like they did then. Children learn by building. The act of creation through computing helps kids develop the cognitive architecture that matters. An 8-year-old building a game with Claude is taking constructionism to a scale that Papert never envisioned. The creation is real, even if the code was written by a machine, because the thinking is the child's. I've done some work to distill
all of this down into seven principles that I think scale for parents who are struggling with education, and I want to share them with you here, because I want us to find ways to help our kids be human even in a world where technology is changing faster than ever. Principle number one is foundation before leverage. I've talked about that a lot in this video. Reading, math, writing.
Lean into the effort here. Not because
AI can't do these things, but because your kid can't evaluate what AI produces without understanding the domain.
Principle two is specification is the new literacy. The gap between a good AI outcome and a catastrophe is the quality of the human specification. Teach kids to say what they want: the goal, the constraints, what done looks like. If you can write a clear spec for an AI, you are exercising the same muscle as when you learn to write an essay. Principle three, be a director,
essay. Principle three, be a director, not a passenger. When your kid uses AI, they should be defining the ask, the task, the output, what to keep, revise,
and reject. If they're just passively consuming what the AI produces, they're not learning, they're outsourcing.
Principle number four, sequence the autonomy. Start with bounded educational tools that have guardrails and graduate eventually to open-ended tools with guidance. You can vibe code together.
You can build projects side by side, then get into agent-level autonomy later. This progression should follow a degree of cognitive readiness. It's not really age-gated, right? Some
12-year-olds might be ready for agents.
Sometimes adults are not. Principle
number five, teach kids to catch the machine. AI will be wrong. Confidently, fluently wrong. Train kids to sanity-check outputs against their understanding. When the AI makes an error and the kid catches it, that's not a tool failure. The conversation needs to be about how we built a foundation so we can critique and understand and
evaluate machine outputs together.
Principle six, build, don't browse.
Making things with AI develops cognition in ways that consuming AI output does not. Vibe coding a game, designing an app, creating art, these are active choices. Asking the AI to summarize a chapter, that's passive. But Papert was right.
Construction is how kids learn.
Prioritize creation over consumption every single time. Principle seven, last but not least, attempt before augmenting. I think this is the most important habit you can build. Try it yourself first and then use AI to extend what you've started. Ask, "What do you think the answer is?" and then go, "Well, what does ChatGPT think?" The kid who drafts before she edits is going to be learning in a way that the kid who prompts before she thinks is not. I don't think these are permanent answers. I think all of them will probably need to evolve as the tech does, but they're the best and most stable operating principles I've found for raising kids who can direct intelligence rather than depend on it. And to be honest, these principles are useful for learning AI as an adult as well. Look, I don't know the right ratio between foundation and fluency. Nobody does, because the AI is evolving so fast. The foundation obviously matters more at six than 16.
Tool fluency is going to matter more at 16 than six. The parents who ban AI outright and the parents who hand over the iPad and walk away are both choosing comfort. One is the comfort of familiar education and the other is the comfort of not engaging with a problem that doesn't have a great answer. This AI problem is not going to get easier. The world is changing faster than any curriculum can track. And anyone who tells you they've figured out what to teach the kids is selling you something. What I have are the principles and daily practices I've rooted in as a parent. You've got to build the brain first. You've got to give that brain leverage with AI. You've got to teach your kids to think and then teach them to direct. Give them the struggle, the gift of struggle that develops real capability, and challenge yourself in the same way. Watch closely enough to know when the tools are helping and when they're replacing the work that needed to happen in their heads. That's where I'm at. I'm at the kitchen table. I'm doing long division worksheets. I'm using my brain, too, on that. By the way, Claude might be open on the laptop, but Claude doesn't get asked first. I try to raise humans who can thrive in a world that none of us fully understands yet. And it's the hardest problem that I have to work on, to be honest. It doesn't resolve easily. I have to keep going after it day by day. The machines that Turing envisioned have arrived. Nature magazine was right. And our kids have to be able to do the work in the moment to get their brains ready to partner with machines that are truly intelligent, because anything else would be a disservice to them and, frankly, a disservice to us as their parents, as the ones charged with helping them find their way in a world that is going to look nothing like the world where you and I were kids.
Everyone on Earth is living through this transition. And if you can realize it and help your kids navigate it, you may just be the person who helps someone navigate the AI transition with so much less angst than you yourself feel.
That's one of the gifts we can give our kids. I look at my kids and I know they are not going to feel the same angst transitioning through this world as I do and you do. They never knew a world before this one. Our gift to them is to take the best of what we know in terms of cognitive architecture, cognitive skills, and culture, and make sure that transitions to them. Make sure they learn that. Make sure they put the effort in. Frankly, it's a gift to ourselves. If you haven't picked up a book lately, pick one up. It's good for your brain. Make sure you're putting in the time to keep your own brain healthy so that you're using AI effectively. One more note. If you are not a parent, this applies to you as an aunt or an uncle of the kids in your life. And this applies to you as a learner.
Don't forget, we all have the responsibility to bring up the next generation in a way that can effectively partner with machine intelligence. It really will take a village, more than it has in any previous generation. Best of luck out there.