4 Skills I’m Learning that AI Can’t Replace (backed by data)
By Jeff Su
Summary
Topics Covered
- Cockpit Rule Guides AI Delegation
- Build AI Rails for Workflow Speed
- Storytelling Turns Data into Impact
- Manual Override Prevents Thinking Atrophy
Full Transcript
Saying you know how to use AI nowadays is kind of like putting proficient at Microsoft Word on your resume. It's no longer a differentiator, right? It's a baseline expectation.
Just like adding AI to your dating, I mean, LinkedIn profile. And that means being good at ChatGPT is now the bare minimum. So in this video, we'll cover four skills you need to build on top of that to actually get ahead. Let's get
started. Beginning with the most important skill to develop, the cockpit rule. Put simply, this is a mental model for deciding when to delegate to AI, when to collaborate with it, and when to avoid it entirely. Think of it like a pilot in the cockpit. At cruising altitude on a clear day, you engage autopilot and let the plane
fly itself. During takeoff and landing, you and the systems work together because there are more variables. And in an emergency where sensors fail, you take over full manual control.
The exact same logic applies to AI. Autopilot mode is when you hand the task to AI with clear instructions and trust the output with minimal review. The AI handles everything on its own. Collaboration mode is where you and AI iterate together through multiple rounds until the output meets your standard. Neither you nor AI could have produced the result alone. Manual mode is when you do the work yourself because AI either can't
do it well, or the risk of getting it wrong is too high. Now, the
real skill is knowing which mode to pick for any given task. And Professor Ethan Mollick of Wharton has a useful framework for this called the agentic cost-benefit framework.
And it comes down to three factors. First, human baseline time. How long would this take you to do manually? Second, probability of success. How likely is AI to get it right? And third, AI process time. How long does it take to prompt, wait,
and check the output? Diving right into an example. You have a messy spreadsheet that needs to be restructured and formatted for a presentation. Human baseline: two hours of tedious spreadsheet work. Probability of success: high, because AI is great at structured data manipulation. AI process time: maybe 15 minutes to upload the data, write the prompt, and spot-check. Result: autopilot mode. 15 minutes is much shorter than two hours, and you know this domain well enough to catch any major errors at a glance. Example two. When
I was at Google preparing a client pitch deck, AI could handle the research and draft talking points, but it didn't know my client's risk tolerance or Google's priorities for that quarter. Human baseline: about 10 hours to build the presentation. AI's probability of success on any single attempt: medium, because the AI needs my direction and domain expertise. AI process time per round: maybe 45 minutes of prompting, checking sources, and fixing hallucinations.
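By the way, if you like thinking in code, the three factors reduce to simple arithmetic: the expected AI time is roughly the process time per round divided by the probability of success, compared against your manual baseline. Here's a rough sketch of that logic. The function, the 0.8 cutoff, and the exact probability numbers are my own illustrative assumptions, not an official formula:

```python
def pick_mode(baseline_min, p_success, process_min_per_round):
    """Rough sketch of the agentic cost-benefit framework.

    baseline_min:          minutes to do the task manually
    p_success:             0-1 chance the AI nails it in one round
    process_min_per_round: minutes to prompt, wait, and check one round
    """
    # Expected rounds until success is roughly 1 / p_success,
    # so expected total AI time = process time per round / p_success.
    expected_ai_min = process_min_per_round / p_success

    if expected_ai_min >= baseline_min:
        return "manual"        # AI costs more than just doing it yourself
    if p_success >= 0.8:
        return "autopilot"     # hand it off, spot-check the result
    return "collaboration"     # iterate together; still cheaper than manual

# The spreadsheet example: 120 min manual, high success, ~15 min per round
print(pick_mode(120, 0.9, 15))  # -> autopilot
```

Plugging in rough numbers for the three examples here (0.9 for "high", 0.5 for "medium", 0.2 for "low") reproduces the same three verdicts.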
Result, collaboration mode. Even if I iterated five times and spent four hours total managing the AI, that's still less than half the manual baseline. Example three. Your VP sends an angry Slack message questioning your team's approach on a project. Human baseline. Three minutes,
since you already know the backstory and the politics. Probability of success. Low, because AI doesn't know your boss's personality. AI process time. 20 to 30 minutes since you'd have to explain all the context you already have in your head. Result, manual. So as
a rule of thumb, the best tasks to delegate to AI are those that take you a long time to do. The AI tool itself is very capable in that domain to increase probability of success. And you can easily evaluate the output to decrease AI process time. Now, regular viewers know I've taken quite a few AI courses on Coursera before. So when I received early access to their latest Google AI professional certificate,
I honestly expected more of the same. But I was pleasantly surprised by the labs feature. Basically, throughout the course, instead of just watching videos, there are these standalone lessons where you open up Gemini, follow along with a step-by-step video, and work with downloadable documents. So it's sort of like a self-contained mini project. For example,
this lab walked me through a product ideation process with Gemini. The video and written instructions stay within this page. And we actually go through an end-to-end brainstorming exercise in another tab, like running a cost-benefit analysis and scheduling recurring reports. The
certificate covers seven courses, including brainstorming, research, writing for content creation, data analysis, and more. So it pairs well with the skills we're talking about today. You can
get 40% off three months of Coursera Plus right now. I'll leave a link in the description. Thank you, Coursera, for sponsoring this portion of the video. Next
up, build the rails. Put simply, now that AI has become so capable, your competitive advantage is no longer doing the work. It's designing the process so AI can do it for you. Think of it like this. A bullet train needs a lot of heavy lifting up front to lay the tracks, right? But once those rails are in place, the train glides over them at over 300 kilometers per hour with almost no
friction. For our American friends, that's around 200 hamburgers per unit of freedom. It's the
same thing with AI. Designing a workflow is like laying the tracks. It's tedious at first, but once the system is in place, AI can just do its thing. Here's
a simple example. I used to have a single prompt to polish the subject line and body content for my newsletters, and the output was fine. But when I created a separate prompt optimized just for the subject lines, my click-through rates went up.
Andrew Ng had a famous example where he found that using a single prompt to write code gave him a 48% success rate. But when he designed a workflow to write, run, and troubleshoot the code using the same AI model, that jumped to 95%. And in a study from Harvard and BCG, they tested 758 consultants and found the top performers fell into two groups: Centaurs, who divided tasks between themselves and AI with clear handoff points, and Cyborgs, who integrated AI into every step of their workflow.
The third group, let's call them peons, used AI with no structured process, and they performed 19 percentage points worse. Again, the variable wasn't the AI model. It was the process. So how do we actually redesign our workflows to be AI-first? I've talked
about this in other videos, so I won't waste your time here. But in a nutshell, you want to first take a recurring deliverable you produce, like a weekly report, and break it into its component steps. Second, apply the cost-benefit framework I mentioned earlier to each step: which steps are autopilot, which are collaboration, and which should stay manual? Third, prioritize redesigning the autopilot steps first, since that's where you get the biggest return for the least amount of effort. Obviously, this is a very dense topic. I'm
probably going to have to dedicate an entire lesson to it in my upcoming AI course. I'll leave a link to that waitlist down below. Skill number three, the storytelling
mode. So first, let's check out this ad from Anthropic. "How do I communicate better with my mom?" "Find emotional connection with other older women on Golden Encounters, the mature dating site that connects sensitive cubs with roaring cougars. Would you like me to create your profile?"
I'm not surprised that ad went semi-viral, because AI companies have been aggressively hiring heads of content and storytellers; even they understand that AI models, as powerful as they are, cannot generate meaning. I still remember this crazy meeting when I was still at Google where all the managers were asking for more budget. And on paper, our team had the weakest case by far, right? But my manager, instead of focusing on the data, talked about how her project would benefit other countries and would become an Asia-wide case study, thereby making our big boss look good. And she ended up getting most of the budget. Now, am I saying I learned all my bullshitting skills from her? No, I'm naturally gifted. But that's not the point here. The point is, in the world of AI, information is a commodity. And so the real skill is turning that information into something people actually care about. Put simply, if you can turn data into a story that moves people, you're safe. If you just pass along the data, you're replaceable. So how do we actually get better at this? I'm still working
on this myself, so I recommend checking out Philipp Humm's storytelling video and Vinh Giang's content. He's an absolute monster at storytelling. That said, I've been practicing two frameworks since my management consulting days. First, the "and, but, therefore" (ABT) framework developed by Randy Olson. Here's how it works. Your manager asks, "Hey, how's the launch going?" Instead of
listing facts, you answer: "We're on track and adoption is rising. But one client paused spending due to technical issues. Therefore, I'm preparing a follow-up call to troubleshoot his account." "And" sets the stage: here's where we are. "But" introduces the conflict and makes people lean in, because something's wrong. Something didn't go according to plan. (Because it's all part of the plan.) "Therefore" delivers the resolution and a clear next step. Second, the
tried-and-true SCQA framework from McKinsey, Bain, and BCG. Situation: here's where we are. Complication: here's the obstacle, aka the conflict. Question: what do we need to answer to move forward? Answer: here's the resolution. You've probably already noticed a common denominator across both frameworks.
They introduce conflict, then resolve it. That's what makes people care. And to show you how big a difference this makes, here's the same story told both ways. Version 1: Frodo volunteered to take the Ring to Mordor. And he was joined by a fellowship. And after a long journey, he destroyed the Ring. The end. Version 2: Frodo was entrusted with the One Ring, and he was the only one who could resist its corruption. But the journey nearly broke him. By the time he reached Mount Doom, the Ring had won. Therefore, it was only through Gollum's obsession, not Frodo's strength, that the Ring was accidentally destroyed. The hero failed at the finish line, and the quest was saved by the villain. Same story, completely different impact. And that's a perfect segue into
skill number four, manual override. This is about intentionally choosing not to use AI for certain tasks so that your critical thinking doesn't atrophy. Put simply, if you let AI write every email, outline every strategy, and summarize every meeting, you gradually lose the ability to synthesize information yourself. Think of it like this. A weightlifting belt helps you lift heavier, right? But if you wear it for every single rep, your stabilizer muscles, like your abs, weaken. After a year, you're only strong with a belt on. And the
science backs this up. Researchers at McGill found physical changes in the brains of drivers who relied heavily on GPS that decreased their ability to navigate on their own. A
Microsoft and Carnegie Mellon study found that knowledge workers who over-relied on AI gradually stopped doing key cognitive steps themselves, like questioning assumptions, checking sources, and weighing trade-offs.
As a result, they became less prepared for unexpected edge cases. And a study of 2,760 decisions from radiologists found that those who used AI as a first opinion often anchored themselves onto the AI's answer and stopped looking for other signs. By contrast, those who formed their own opinion first and used AI as a second check maintained their accuracy. So how do we protect our thinking while still benefiting from AI? There
are two habits we can develop. Number one, and this is very simple: Professor Mollick recommends "think first, prompt second." For example, I'll still use AI to summarize reports. I'll leave a link to my essential prompts below. But I always write my own so-what analysis first before asking AI for its take. Basically, for analytical tasks, spend a few minutes forming your own position before engaging AI. Second, interrogate the output.
When AI gives you an answer, don't accept it right away. Instead, ask yourself: how would I verify this? What's the counter-argument? For instance, I recently asked AI about refinancing my mortgage and challenged it with a series of what-ifs, since this kind of active debate forces my brain to engage rather than passively consume. Now,
at this point, it's easy for skeptics to say, "See, I told you so, AI is making us all dumber." But that's simply not true. Yes, an MIT study found students who used ChatGPT were less engaged, but those findings are about habits, not neurological brain damage. And for context, Plato worried that writing would erode wisdom, and people worried that cell phones would kill our ability to remember phone numbers.
So here's a nuance those click-baity bullshit articles will never tell you. AI will only hurt us if we allow it to change our habits. It's just like giving a toddler unlimited access to TV. The TV isn't the problem, it's the behavior. Back to the learning example, students using ChatGPT without guidance scored 17% worse on exams, yes. But with structured guidance, a World Bank study found that six weeks of AI
tutoring produced learning gains equivalent to two years of traditional schooling. Ethan Mollick sums it up perfectly: there's plenty of work worth handing off to AI. We rarely mourn the math we do with calculators. But there's also a lot of work where our thinking is important. Your brain is safe. Your thinking, however, is up to you. If you
enjoyed this, you might want to check out this video where I share my favorite AI tools. See you there and in the meantime, have a great one.