AI Doom Predictions Are Overhyped | Why Programmers Aren’t Going Anywhere - Uncle Bob's take
By Dev Tools Made Simple
Summary
Key Takeaways
- **AI doom predictions are clickbait**: Predictions of AI-induced doom and the end of programming jobs are often made for clickbait, appealing to a desire for sensationalism rather than reality. [00:02], [00:09]
- **Programmers feared compilers in the 1950s**: Early programmers writing in binary feared losing their jobs when Grace Hopper introduced compilers, a historical parallel to current AI anxieties. [00:25], [00:32]
- **AI lacks true judgment and reasoning**: AI tools like ChatGPT statistically assemble data but lack genuine judgment or understanding, meaning their code output must be carefully verified. [01:53], [02:04]
- **AI history shows cycles of hype and disappointment**: The current AI optimism mirrors past booms in symbolic AI, expert systems, and early deep learning, all of which eventually faced limitations and disillusionment. [03:03], [03:17]
- **Software development is complex and context-dependent**: Fully automating software development would require AI to achieve a level of reasoning that could automate most other jobs, suggesting programmers are not easily replaced. [04:25], [04:38]
- **Pivoting careers due to AI is premature**: Switching to a seemingly safer profession due to AI advancements may be a waste of time, as future AI could automate those fields as well, making career predictions difficult. [05:14], [05:28]
Topics Covered
- AI is just another tool, not a job-ending intelligence.
- AI lacks judgment; don't trust its code or claims.
- Why does AI always follow a predictable hype cycle?
- If AI automates software, no job will be safe.
Full Transcript
I think the people predicting doom are just in it for clickbait. They like the idea of predicting doom: it's going to be terrible, oh my god, life is going to end. No, it's not. There's another group of people saying there will be no more programmers, because the AIs will do all the programming. Let me tell you a little story about that. The very first programmers to worry that they were going to lose their jobs because of a technology improvement were programmers in the very early 1950s.
These people were writing in binary,
literally binary. They wrote their code
in binary. There were no compilers.
There were no assemblers. In the worst
case, they actually got pieces of paper
tape and they punched holes in the paper
tape one at a time. They were the
programmers. They wrote code. They called it coding because the holes in the paper were the code. And Grace Hopper came along with an idea for a slightly better representation (still numbers, not quite binary anymore, but still numbers) and for using the computer to automatically translate that representation into the holes in the paper tape.
And the programmers at the time said,
"Oh my god, anybody will be able to do
this. Then we're going to lose our
jobs."
This is what AI is. AI is another tool. It's a fairly useful tool, as long as you don't believe too much of what these systems say. ChatGPT is perfectly willing to lie to you and tell you all kinds of misinformation. All of them will. If you ask them to write code, they might. They might write some code for you, but you'd better check that code really well, because those things have no idea what they're doing. They're not thinking, right? They are still just programs that are assembling data statistically to satisfy an algorithm. They have no judgment. As time goes by, I think these programs will get better and better. AIs will get better and better.
That's good. But they will not approach human intelligence. That is not on the horizon. And I know people say, "Well, we're only a year away from AGI." No, we're not. We are well more than a year away. I doubt that the current silicon-based technology has the capacity to emulate a human brain. I'm not sure that's feasible. You know, we put down our circuits on two-dimensional things. We have not broken into the three-dimensional neural net that you have in your head. We're way away from that kind of stuff. So, I don't think that's going to be an issue.
There will be some changes. There will
be some interesting things that happen.
Uh there will be great tools that will
help us, but the human will always be in
the loop. My prediction. Uncle Bob's
views on AI are becoming more and more
common as time goes by. And this is in
part due to the fact that we might be in
the throes of the disillusionment stage of the Gartner hype cycle: the stage where the initial allure of a new technology begins to fade and people start wondering if it can really deliver what
was promised. And this isn't the first
time something like this has happened in tech, or in AI more specifically. We've been here before: in the 1960s and 70s, during the symbolic AI boom, when logic-based systems failed to handle the real-world complexity and ambiguity of language and perception; again in the 1980s, when expert systems, once hailed as the future of intelligence, collapsed under their inability to adapt despite massive hype in newspapers and corporate investments; and once more in the early 2000s, before deep learning triggered yet another wave of optimism. This repeating pattern is often called the AI hype cycle: a breakthrough achieves what was once deemed impossible, leading people to believe that artificial general intelligence is just around the corner.
But after the initial excitement,
limitations start to show and the signs
are here again. Diminishing returns in
AI progress. Companies quietly scaling
back their AI hiring. Papers
highlighting the inadequacies of large
language models in real-world settings. Prominent thinkers in tech and AI questioning whether current architectures are even capable of reaching AGI. All signs of a looming AI
winter. So, does this mean AI won't take
your job or should you still pivot to
something less risky? Well, when a
disruptive technology emerges, people
naturally start re-evaluating their
careers. And often that's a rational
move. Historically, if your field was
disrupted by automation, jumping ship
made sense. But software development
might be different, because of the sheer level of complex, context-dependent problem solving it involves, from interpreting human requirements and navigating social context to designing robust and maintainable systems. To completely replace a human developer, AI would need to reach a level of reasoning that might render most other occupations obsolete as well. In complexity theory,
there's a principle that once a single hard problem in a class of difficult problems is solved, the rest tend to follow (much as an efficient solution to one NP-complete problem would yield efficient solutions to all of them). If AI ever reaches the point
where it can autonomously build
software, it might have also solved many
of the core challenges of general
intelligence itself. Meaning no
occupation would be safe from automation
in that way. And this ties into the argument made in AI Snake Oil by Arvind Narayanan and Sayash Kapoor, particularly in their discussion of the ladder of generality, where they note that you can't meaningfully predict the next rung on that ladder, meaning that each level of AI capability brings new and unpredictable consequences. So jumping
into another quote unquote safer
profession right now might be premature
or even a complete waste of time. The
socioeconomic landscape could look completely different by the time the next wave of AI rolls through. So investing time and money to pivot may not protect you, because the next generation of AI could easily automate whatever safer field you move to.
Subscribe for more.