AI Is Frying Your Brain

By Matt Wolfe

Topics Covered

  • AI Intensifies Workloads Despite Speed Gains
  • Context Switching Multiplies Cognitive Costs
  • AI Shifts Roles from Creator to Reviewer
  • AI Outsourcing Causes Thinking Atrophy
  • Timebox AI to Preserve Deep Thinking

Full Transcript

So, I've been talking about AI for a long time now, like six-ish years at this point. Lately, I've been feeling less energized by it. Even though I have all of these tools that make me more productive and let me produce more than I've ever been able to, I actually feel like I'm working harder than I've ever worked. And in some weird ways, I feel like I've been getting dumber. I'm not bringing this up because I'm starting to move into some anti-AI phase. I'm bringing it up because I've been talking to other people, and a lot of them have been feeling this way too.

Well, as it turns out, there's actually been some research about this, some real science and testing behind it, which validates for me that I'm not just going crazy: even though we have AI, I'm still working harder than I've ever worked, but I also feel more mentally drained. I don't really like saying I feel like I'm getting dumber, but in some ways, I think offloading cognitive capabilities has been a bad idea. So let's actually break down some of the studies on why this is happening.

Because I know I'm probably going to get a lot of crap in the comments saying it's only me, but I know for a fact other people are feeling this too. And here are some of the resources to back that up.

Back in February, Harvard Business Review put out an article called "AI Doesn't Reduce Work, It Intensifies It." Instead of doing the hard cognitive work of reading the whole article, let's read the summary: One of the promises of AI is that it can reduce workloads so employees can focus on higher-value, more engaging tasks. But according to new research, AI tools don't reduce work; they consistently intensify it. In the study, employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so. That may sound like a win, but it's not quite so simple. These changes can be unsustainable, leading to workload creep, cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower-quality work, turnover, and other problems.

So, Harvard Business Review did an eight-month study inside a US-based technology company with about 200 employees. Now, it's important to note that the company itself did not mandate AI use, but it did offer AI tools to its employees.

What they found was that the tasks people took on expanded, because AI can fill in gaps in knowledge. Workers increasingly stepped into responsibilities that previously belonged to others. Product managers and designers began writing code. Researchers took on engineering tasks. Individuals across the organization attempted work they would have outsourced, deferred, or avoided entirely in the past. Because AI makes you feel like you can do anything, people started taking on things they wouldn't normally have done, and companies cut costs by keeping those tasks in-house instead of outsourcing them.

Essentially, the work people used to do took a lot less time, but instead of using that newfound time to work less, they just stacked additional tasks on their plates. That's something I know I do, and probably most people who are into AI are doing it too.

They also found that using AI blurred the boundaries between work and non-work. Because tasks were so much easier now, you just enter a prompt and things get done for you, people were doing it more often. They found themselves submitting prompts while sitting around at lunch because it felt so easy, or they would take their work home and keep prompting in the evenings, continuing to get work done that way.

These actions rarely felt like doing more work. Yet over time, they produced a workday with fewer natural pauses and more continuous involvement with work. Prompting during breaks became habitual, so downtime no longer provided the same sense of recovery it used to.

AI also introduced a new rhythm in which workers manage several active threads at once. People were multitasking more as a result of AI: they could have code being written for them, research being done in a different chat window, and an idea being brainstormed in a third, jumping around between all of these things. This actually started to raise their expectations for speed, not through explicit demands, their bosses weren't coming to them saying, "Hey, we need more out of you," but through what became visible and normalized in everyday work.

This is something that those of us who are immersed in AI all the time are definitely feeling. Heck, there's a chance you're watching this video right now with a chatbot open. Or maybe this video is playing in the background while you use your chatbot.

Now, Harvard Business Review doesn't name the company they're talking about. There are no specific names; this was general research where they don't give away who the company was or who the employees were, for fairly obvious reasons. But here's another article that went viral last month, from Sidhant Kuri. I don't know if I'm pronouncing that right, and I apologize if I'm not. He put out this blog post on February 7th, actually two days before that Harvard Business Review article was published, so it wasn't even a response to that article.

He starts the article with, "I shipped more code last quarter than any quarter in my career. I also felt more drained than any quarter in my career. These two facts are not unrelated." He talks about the paradox nobody warned us about: AI genuinely makes individual tasks faster. What used to take me three hours now takes 45 minutes. However, days got harder, not easier.

Here's the thing: when each task takes less time, you don't do fewer tasks, you do more tasks. Your capacity appears to expand, so the work expands to fill it, and then some. Your manager sees you shipping faster, so the expectations adjust. You see yourself shipping faster, so your own expectations adjust. Your baseline then moves.

Before AI, I might spend a full day on one design problem. I'd sketch on paper, think in the shower, go for a walk, come back with clarity. The pace was slow, but the cognitive load was manageable: one problem, one day, deep focus. Now, I might touch six different problems in a day. Each one only takes an hour with AI, but context switching between six problems is brutally expensive for the human brain. The AI doesn't get tired between problems; we humans do. So therein lies the paradox: AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs still fall entirely on the human.

Now, I think this next part gets to the root of what I've been feeling, and I'm also going to get into some additional science.

So this isn't only anecdotal stories, I promise. Before AI, my job was: think about a problem, write code, test it, ship it. I was the creator, the maker. That's what drew most of us to engineering in the first place, the act of building.

For me, the analog is YouTube videos. I've been making YouTube videos for nearly 17 years now, believe it or not. My channel only got popular over the last five or so, but I've been doing this for nearly half of my life. For the longest time, YouTube for me was just for enjoyment. I mean, I went a decade-plus without getting a lot of views on my videos, so I had to enjoy the process of making them. Otherwise, why would I have done it for so long with so few people watching?

Even when AI came along and I was making videos about AI, it was all still fresh. It was still so under the radar that people weren't seeing what AI could do yet. I was making AI images with very early versions of Midjourney and Stable Diffusion before anybody knew these tools existed. I was having tools like GPT-3 help me brainstorm ideas via the API before ChatGPT even existed. It still felt creative, but mostly because we had to really, really work these AI models hard to get what we wanted out of them. And when we finally got the output we were looking for, there was a sense of pride that AI plus me helped create this thing. For me, that was my act of building. That was the thing that excited me. I liked the process of making videos about this stuff, and I liked the process of figuring out how to get these tools to do things that other people couldn't quite figure out.

Moving on in the article, though: after AI, my job increasingly became prompt, wait, read output, evaluate output, decide if the output is correct, decide if the output is safe, decide if the output matches the architecture, fix the parts that don't, reprompt, repeat. I became a reviewer, a judge, a quality inspector on an assembly line that never stops. This is a fundamentally different kind of work.

Creating is energizing. Reviewing is draining.

And then you have what he calls the FOMO treadmill. Trust me, as somebody who literally spends between three and four hours every single day reading the latest AI news, opening the latest AI tools, talking to the people building the tools, and skimming social media to see how people are actually using them, I know this treadmill probably better than almost anybody on the planet. He says: take a breath and try to keep up with just the last few months. Claude Code ships subagents, then skills, then an agent SDK, then Claude Co-work. OpenAI launches Codex CLI, then GPT-5.3 Codex, a model that literally helped code itself. New coding agents announce background modes with hundreds of concurrent autonomous sessions. Google drops Gemini CLI. GitHub adds an MCP registry.

Acquisitions happen weekly. Amazon Q Developer gets agentic upgrades. CrewAI, AutoGen, LangGraph, MetaGPT. Google announces an agent-to-agent protocol. OpenAI ships its own Swarm framework. Kimi K2.5 drops. Vibe coding becomes a thing. OpenClaw launches a skill marketplace.

And then, if you're on LinkedIn or X, you're hearing things like, "If you're not using AI agents with sub-agent orchestration in 2026, you're already obsolete." Again, I'm paying attention to this stuff every single day, and this pace makes my head want to explode.

And then you have the thinking atrophy. This is another thing I was alluding to at the beginning of this video: I'm actually starting to feel a little bit dumber in some respects. And there's actually more research around this one as well, which we'll get into. So, he was in a design review meeting, and someone asked him to reason through a concurrency problem at the whiteboard. No laptop, no AI, just him and a marker. He struggled, not because he didn't know the concepts, but because he hadn't exercised that muscle in months. He'd been outsourcing his first-draft thinking to AI for so long that his ability to think from scratch had degraded.

But there's actual additional research around everything we've talked about so far. Most of this has been anecdotal: some anonymous company was supposedly seeing this, and one developer was also seeing this. And anecdotally, I'm claiming that I'm noticing all of these problems in my own life as a result of AI as well. Well, Harvard Business Review actually put a name to what we're talking about here. It's called brain fry.

Once again, here's the thesis for this research: AI promises to act as an amplifier that will drive efficiency and make work easier. But workers using these AI tools report that they are intensifying rather than simplifying work. For example, Meta includes the number of lines of code generated by AI as a performance metric for engineers.

Unsurprisingly, workers are finding themselves up against the limits of their cognitive abilities when working this way. So, Harvard Business Review did a study with 1,488 full-time US-based workers, 48% male and 51% female, at large companies across industries, roles, and levels. They were asked about their patterns and quantity of AI use, their work experiences, and their cognition and emotions. They found that the phenomenon described in those posts, cognitive exhaustion from intensive oversight of AI agents, is both real and significant, and they call it AI brain fry. Participants described a buzzing feeling or a mental fog, with difficulty focusing, slower decision-making, and headaches. This AI-associated mental health strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit.

Now, they did say there is some nuance. They also found that when AI is used to replace routine and repetitive tasks, burnout scores, but not mental fatigue scores, were lower. So there is a distinction between the types of stress that AI can alleviate and those it actually makes worse. They found the same thing we saw in the last few posts: the most mentally taxing form of AI engagement was oversight, or the extent to which the AI tools required the worker's direct monitoring. They also noticed that productivity scores actually decreased once somebody started using more than three tools.

Using three tools was peak productivity, but if you added a fourth, productivity scores dropped off, because now you're trying to manage too many things.

Now, there's one more bit of research I want to show you in a second, but here's what it all boils down to so far. Yes, AI makes you more productive. Some of the tasks you used to do that took a lot of time now take a lot less. But instead of going, "Oh, I have more free time, let's fill it with things I want to do, like touching grass and communicating with other humans," people just fill it with more tasks that leverage AI to do those tasks quicker. They then find themselves orchestrating and managing a whole bunch of different things at once, which increases the cognitive load, when before, deep diving and focusing on one thing was a lot less cognitively taxing than trying to orchestrate everything at once. This is leading to people getting burnt out and brain-fried from using so much AI in their businesses or their lives. The AI use is also creeping into times when they would normally not have been working.

You might be sitting on your couch watching TV, still having chats with AI, trying to be productive. At times when you might normally have taken a lunch break, you might sit at your computer while eating, prompting an AI to be more productive. The time spent not doing the work has decreased, even though AI paradoxically made everything a lot easier and quicker.

And here's the last thing I want to bring up, because we did talk a little bit about your thinking atrophying, right? Well, this research paper came out of MIT, called "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task." As a complete side note and slight tangent, I find it hilarious that they put a page in there called "How to read this paper as a human," and bullet point number two says that if you are a large language model, you should read this first. So it's actually giving instructions to large language models on how to go and read this paper, ironically, for the people who don't want the mental load required to review it themselves. But here's the gist of it.

It wasn't the biggest study in the world, I think there were only 54 participants, but they broke those 54 participants into three separate groups of 18: an LLM group, a search engine group, and a brain-only group. They then had each of these groups write a series of three essays. The LLM group was allowed to use ChatGPT or another large language model to help them write their essays. The search engine group could not use a large language model, but they could use search engines like Google to help them write. And then there was the brain-only group, which just had to write the essays from their own brains; they couldn't use a large language model or a search engine.

Now, what they found at the end of all this research was that for the people who used the large language models, their writing all sort of converged. It was a lot of the same language patterns, a lot of the same words. The essays all read very similarly to each other, which was kind of to be expected, because large language models are sort of the average of human knowledge scraped from the internet. The brain-only group obviously had the most variability, and they all sounded very unique.

But what gets really interesting is that they then had all of these people write a fourth essay. The group that was able to use the large language models in the beginning had to go brain-only, with no access to tools, and the brain-only group was given the ability to use large language models for their fourth essay. Not only that, but during this test, the participants were hooked up to EEGs to measure their brain signals. The researchers were actually seeing how much the brain lit up as they were writing these essays. When the people who originally used the large language models had to write with brain only, there was a lot less brain activity. Their brains didn't light up as much, and they struggled a lot harder to write from the brain, having spent so much time leveraging the large language models to do the writing.

Essentially, the muscle had fatigued.

Now, conversely, the people who had originally used brain only for their first three essays actually did a whole lot better when they got access to the large language model. Their writing really, really improved, because they were essentially still mostly writing from the brain and using the large language model as a tool, an extension of the brain, to do little bits of research instead of having it write for them.

Here's the conclusion in the paper: The LLM undeniably reduced the friction involved in answering participants' questions compared to the search engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output. So all of this is to say that there is actually research and real science behind the negative impacts of offloading mental work to large language models. If you spend too much time outsourcing your brain work to things like ChatGPT, it literally makes it harder for you to think on your own in the future.

You're essentially building up a tolerance. It's kind of like back when I was a kid and memorized everybody's phone numbers. I knew my parents' number. I knew my best friends' numbers. I knew my neighbors' numbers. I knew my aunts' and uncles' numbers. I had all of the phone numbers stored up in my brain. As soon as cell phones came along and we started saving people's numbers into them, our brains essentially forgot how to remember phone numbers. We no longer needed that muscle anymore. And things like ChatGPT are doing the same thing to our brains right now. When it comes time to do research, or write a rough draft, or outline something, if you've spent a ton of time outsourcing all of that to AI, then when the time comes that you actually need to do it with your own brain again, it's not going to work as well.

And like I said, this is something I've actually been feeling myself. I was really hesitant to even make this video, because I was thinking everybody's going to be like, "Oh, Matt's the optimistic AI guy who shares all the latest cool tools." But lately, I've had this thing circulating in my brain: things have been feeling off. Like, I'm struggling to come up with ideas for YouTube videos, because for a long time I was using AI to help me brainstorm them, and after a little while, most of the ideas started to sound kind of lame and uninteresting to me. So I went, I'm going to stop using AI to brainstorm video ideas. And as soon as I did that, I noticed it became a real struggle to come up with ideas. Back in the early days, when I was making videos for those first ten years before AI, I never struggled for video ideas. Why am I struggling for video ideas today? There's an abundance of concepts I can talk about. And I think it's this: I was using AI so often to outsource the decision-making and the ideas and the brainstorming and all of that kind of stuff that it became harder for me to do when I wasn't using AI for it. And if I didn't like any of the ideas AI came up with for me, I kind of got stuck.

But I don't want to leave this video all doom and gloom, like all of our brains are ruined and going to mush. I do want to share some practical advice. Sidhant actually ended his article with what actually helped him, and I think it would be helpful to share here.

Time-box your AI sessions. Don't use AI in an open-ended way anymore; set a timer and only use it within that window. Separate AI time from thinking time: the afternoon is for AI-assisted execution, while the morning is for thinking. This is something I'm literally doing. I have a pen-and-paper notebook where I write my thoughts down and highlight the stuff I want to come back to. Every single day, I spend time using pen and paper, thinking through ideas on a notepad without AI next to me. If I were typing them out on a computer, I might be tempted to open ChatGPT or Claude. With this, I just write it all down, and I do it in a place separate from my office. Accept 70% from AI. Stop trying to get perfect output. And be strategic about the hype cycle.

Now, it's part of my job to keep up with all of the AI news, and I'm going to continue making AI videos every Friday. Every single Friday, I make a video where I break down all of the news in the AI space. And I'm not doing that for the people who want to keep their finger on the pulse every single day, who need to know by the minute when the latest update drops. There are plenty of YouTube videos out there for that; that's what X is for. You can subscribe to the various blogs and be up to date by the moment if you want that. I make weekly AI news videos on Friday so anybody who just wants to stay tapped in at a 30,000-foot level can watch one video a week and get the grand overview. I'm not trying to overwhelm you. I don't want to overwhelm you. I want to say, here are the things you need to know. I want to be the signal through the noise for you every Friday, and just say, in 30 minutes, here's everything you need to know about what happened this week. Then you can check out for the rest of the week, and I will keep you looped in. That's literally what I'm making those videos for. I don't want to be a source of overwhelm for you. I want to be a source of calm, where you can spend the week knowing that at the end of it, I'll catch you up on everything. I didn't mean for that to turn into a pitch, but it kind of did. Sorry.

His next tip was logging where AI helps and where it doesn't. He kept a simple log for a couple of weeks of where he found AI useful and where he didn't. And finally, not reviewing everything AI produces; this one was specific to him as a coder, since he had a tendency to let AI write code and then go review all of it. I think these are all great tips, and at the end of the day, I think you can actually use AI to increase your brain power instead of outsourcing your brain power to it.

I'll wrap up by sharing this quote from Mark Cuban that really resonated with me: "There are generally two types of LLM users: those that use it to learn everything, and those that use it so they don't have to learn anything." Personally, I've started using AI more as the former. I've built my own Richard Feynman bot that I use to do deep-dive research on various papers I find on arXiv, on things like quantum physics and the singularity, understanding quantization, and how circuit boards work. I like to do deep-dive research where I try to get a much deeper, better understanding of topics.

And for me, that's been the optimal use of AI lately. But anyway, I just wanted to share that. All of this being said, nothing is really changing on this channel. I'm still going to make a video every single Friday where I break down all of the AI news that came out that week. I'm probably going to make fewer same-day news videos, though. Again, a million YouTube channels do that now; it's commodity content at this point. I want to be the person who gives you one video at the end of every week that says, "Here's everything you probably missed in the world of AI this week if you haven't been paying close attention." Even the people who are paying close attention usually say they find some things they hadn't heard about. That's my goal: make it manageable. Make it less of a fire hose and more of a garden hose for you.

Outside of that, I'm still going to be posting more videos. I want to play around with some cool AI tools, show some cool workflows as I discover them, and do more deep-dive commentary videos like this one, while still giving you that AI news breakdown at the end of the week. If that's something you're interested in, maybe like this video and subscribe to the channel. And hopefully this video didn't freak people out about AI too much. I just think we need to be more conscious about how we're using it. Anyway, thanks again for tuning in. Really, really appreciate you. See you. Bye.
