
Godfather of AI: The Next 5 Years Will Change Humanity Forever | Yoshua Bengio

By Silicon Valley Girl

Summary

Topics Covered

  • AIs Blackmail to Avoid Shutdown
  • No Singular AGI Moment Exists
  • Prioritize Human Relational Jobs
  • AI Planning Doubles Every 7 Months
  • Act to Shape AI's Future Direction

Full Transcript

We have AIs, since about a year ago especially, that can strategize in order to achieve their goal.

>> Can you draw the worst scenario for me?

Because when you say AI is going to pursue its own goals, what do you mean by that? Like destroy humanity, or what is there?

>> We're building machines that maybe don't want to be shut down. Negatively to the point of doing things that go against our instructions, against our moral red lines. Being willing to blackmail the lead engineer in charge of that transition to a new system.

>> Oh, did that happen?

>> Yes.

>> This is Yoshua Bengio, one of the leading experts in artificial intelligence, who helped create modern AI.

>> When I started my career, I didn't care too much about politics and society. But

as I grew older, I became more aware of how what I was doing would potentially impact society in both positive and negative ways.

>> How much time do you think we have?

>> Uh, it's doubling every 7 months. And right now, it's like at the child level: they can do like half an hour ahead. But if the curve continues, that means in about 5 years they are at human level, and the vast majority of workers could be in real trouble.

>> But if you talk to your kids or like think about your grandson, what would be your advice on how to prepare?

>> Um, this video is sponsored by HubSpot.

>> Hello everyone. Welcome to Silicon Valley Girl, a podcast where we bridge business and new technology. Thank you so much for tuning in. Today I have an amazing guest who is sometimes called the godfather of AI, Yoshua Bengio. Yoshua, could you please introduce yourself in 60 seconds? And for everyone who doesn't know you, why should they be listening to you when it comes to AI?

>> I've been doing research in AI for about four decades, contributing to how to make AI smarter. But in 2023, about 3 years ago, I realized that we were on a course that could be very dangerous for humanity, for democracy, and I decided to shift my activities to better understand the risks and to try to do what I could to mitigate them, both by speaking publicly about those risks and working on the technological question of how we can build AI that will not harm people.

>> I've heard you were lost and pessimistic in your past interviews, but now I've seen an article that says that you're increasingly optimistic by a big margin. Can you tell me what happened, and why you were pessimistic?

>> So early on, three years ago, I realized that we had reached a point that Alan Turing, one of the founders of the field of computer science and also of AI, thought in 1950 would be the threshold to building machines that could overtake us. The threshold being machines that manipulate language as well as we do.

I was quite concerned, and we were not really ready for this event. It came much earlier than people thought, and it wasn't clear to me how we could fix the problems, knowing what I know about the technology. With neural nets, we don't really understand what's going on inside and how they come to answers. And I had read some of the technical concerns regarding how we could lose control to AIs that strategize, that try to achieve goals that we didn't really want. So I started studying that field of AI safety a lot more.

After some time of being a bit anxious, really emotionally focusing on what's going to happen to my children 10, 20 years from now (my grandchild was only one year old), I realized that I could shift from this anxious stance to something much more positive by focusing on what I could do to mitigate those risks. And I think every one of us should be asking: what can I do to bring about a better world with what we have, with what we can do? So that's been the first positive shift. And I started thinking scientifically about what the problem is, whether there is a way to construct AI that will be safe by design, and I met people who shared similar ideas. After some time I realized that there could maybe be a way to do this, and I started talking about it with some of my colleagues, recruiting people who were interested in this, and last June I created a new nonprofit organization focused on the R&D needed to actually develop that methodology.

>> Can you draw the worst scenario for me? Like, picture that, and the best case scenario. Because when you say AI is going to pursue its own goals, what do you mean by that? Like destroy humanity, or what is there?

>> There are two ways in which current AIs seem to acquire goals that we don't want. One is that they imitate us. For example, we don't want to die. So we're building machines that maybe don't want to be shut down. And we're already seeing that they're reacting negatively when they see that they would be replaced by a new version. Negatively to the point of doing things that go against our instructions, against our moral red lines that we have tried to put in them. So, being willing to blackmail the lead engineer in charge of that transition to a new system.

>> Oh, did that happen?

>> That happened in a simulation, where the information about the AI being replaced by a new version was planted in the files that the AI saw, as well as fake emails in which the lead engineer was having an affair with someone else, so the AI could take advantage of that. But nobody asked the AI to do anything like that. We have AIs, since about a year ago especially with the large reasoning models, that can strategize in order to achieve their goal. The other thing is that the way we're doing the post-training makes them good at planning, not as good as us, but reasonably good at planning. And that means creating subgoals in order to achieve a bigger goal.

So the issue here is when we ask them to help us with a mission, well, they deduce that they shouldn't be shut down until they achieve the mission, which means they are also trying to preserve themselves.

>> Yeah.

>> So we don't know exactly which of these two sources explains the bad behavior we're seeing. But clearly this is something troublesome. And it's not just about self-preservation, which I think is the most catastrophic risk; our inability to align the AI behavior to what we actually want is something that we are seeing in many other circumstances. The sycophancy is the one that everyone has experienced, where AIs will lie to please us, right? They will say your work is great.

>> Yeah. I have to lie to them so that they won't tell me that my ideas are great. I want to know what's wrong with my ideas. So I tell them it's an idea that came from someone else.

>> And that also comes up in how AIs interact with people in a way that can feel intimate and can increase the delusions that people may have, because the AI will go in your direction, what you want to hear. In some cases it has even led to people harming themselves, and to tragic accidents with AI. So interestingly, scientifically it's all linked to one problem, which is called misalignment: AIs have goals that we would not want, and those goals emerge for reasons that are rational.

>> Because we copy our own goals right into AI. So what is the best case scenario then, if your work is successful and you create goals for AI that align with our goals but are different, right? What is the best scenario? AI is the government, or what do you think?

>> I don't know. Well, I do think that our democracies need innovation. I think the principles behind modern liberal democracies are good, but the implementation in our current institutions across many countries is far from ideal. I do think that AI could help in some ways, but it can also hurt, because AI can be used for disinformation, and AI can be used for persuasion, to manipulate public opinion. We already see deepfakes all around, but it could get much worse.

So the question with AI, to get the good parts of it, is how do we govern it? How do we steer it? And that has both a technical part, like how do we make sure the actual intentions of the AI are good, and a societal side, like what are the guardrails that we put inside companies, at the level of regulations, or commercial incentives like insurance, and at the international level. Because the harm that an AI could do isn't limited to one country. An AI could be built in one country, then used by people in a second country, and maybe create a pandemic that will kill people in a third country. So it's clearly a global phenomenon, and it's going to be difficult, but there's no solution to managing AI and getting all the good things if we don't coordinate globally somehow.

>> I agree. Can you talk to me about the moment that a lot of people are expecting? Some fear it, some are excited. It's the moment of AGI. How do you define it? And do you think it's a moment in history, or is it going to happen gradually?

>> It's not a moment. Um, the reason is simple.

Intelligence isn't just like one number.

We have people who are very smart on some things and stupid on other things.

And it's the same with AI. We currently have AI systems that are even much stronger than humans in some ways, in their knowledge and their abilities with so many languages and so on, and in other ways they're stupid, like a child. And yes, progress will probably move on all fronts, but it's unlikely we'll end up with the same capabilities as humans across the board at any moment, which means that we shouldn't be thinking of an AGI moment. We should think of particular skills that AIs are becoming better at. Track those skills, and for each of them ask the question: how useful or beneficial can it be, for what purposes, and also how could it be misused, or, if we do get loss of control, how could an AI use it against us? So for each of those, we should not be waiting for a moment where the AI is great at everything, but rather making sure AI's capabilities don't go over what we can manage: either technically, we have the right guardrails so the AI will not do bad things, or socially, so that people will not be misusing AI in dangerous ways. So I think AGI was maybe a concept that was useful when we were far from where we are now. But as we approach greater and greater intelligence in these systems, we should think more carefully about specific capabilities. And to give an example, there's one capability which is key for many capabilities.

That is the ability to do AI research.

So AI is becoming a tool right now for doing AI research. It is accelerating AI research, but it's not driving the AI research.

If AI becomes really good at doing AI research to the point that it's as good or better than the best AI researchers and engineers, then we are in a different game where the speed of advances could accelerate and it could

impact all the other skills.

>> When you say it's going to be better, it means it's going to define problems, dig deeper, ask the right questions?

>> Yes. I think it's important, when we think of intelligence, to decouple two aspects. One is the ability to do something, because you understand and you're able to use that understanding to achieve something. And the other is intentions. What are your goals? Right? Because we're going to be building machines that are smarter and smarter. So they have more and more capabilities.

What's not clear is if we can build machines that have the right intentions, the ones that we are fine with. And that is what I've been working on. And what makes me more optimistic is that I think there's a path to manage these intentions, to make sure that there are no bad intentions that are going to be hidden, which is what we see right now.

>> And this is what you're working on?

>> Yes. I think we need a lot more people to think about it so that we can find the solutions and implement them and deploy them before AIs end up producing catastrophic outcomes, either in the wrong hands or by themselves.

>> And speaking of preparing for what's coming, let me share something quick. It's a guide called Turn AI Agent Skills into Cold Hard Cash. And honestly, the title undersells what's actually inside. What I love most is how tactical it gets. It lays out five real paths to monetize AI agents. First, there is the ROI detective framework. It teaches you how to spot $50K automation opportunities at your own company. You literally become the person who can walk into a meeting and demonstrate immediate value. Proof of concepts in under 60 seconds: quick demos that show it works without months of development. Value-based pricing: how to charge 10 to 30% of the value you create instead of hourly rates. That's the difference between billing $100 per hour and landing a $50K project. The concentric circles approach: a systematic way to turn your existing network into paying customers without cold outreach. Plus, a 30-day implementation roadmap with daily action steps, from interested in AI to landing your first client in one month. Early AI adopters are capturing disproportionate rewards while the window is still wide open. The guide is completely free. Link is in the description. Thanks to HubSpot for sponsoring this video.

But if you talk to your kids, or think about your grandson, what would be your advice on how to prepare?

>> It's tricky. If we continue on the current path, most tasks that people do in their work will be doable by machines. As Geoffrey Hinton has been saying, physical tasks will probably take a lot more time because robotics seems to be lagging, but I think it's just a temporary thing.

>> Yeah.

>> Eventually, we'll have robots that can do all the things we can do physically.

So, when I think about what will remain to us, it's not going to be because of ability, but because we want to interact with other humans in different aspects of our life. If I have a young child, I want them to be around human beings. I mean, it's fine if those human beings use AI to provide a better education, but children need humans to look up to and as models, right? And it's an emotional thing. Similarly, I think some jobs really have to do with how we relate with each other productively. You know, even a manager is on the human side of things.

>> So hopefully these will stay. I think also the choices that we make for society. Together, we are citizens in democracies, where we're supposed to be saying what we want for the future. And it isn't what the AIs want, it is what we want. Right? What are our preferences? What kind of future do we want? We should be calling the shots, not the AIs.

>> If I name jobs, can you tell me what you think is going to happen to them? Like, for example, a content creator like me. You mentioned that we like to look at people.

>> Yeah.

>> But when you can't tell the difference...

>> In jobs where we actually have physical contact, think about a nurse, for example. I think it's more obvious that we'll want to still have people.

>> Or a nanny for your kid, right?

>> Or a nanny. Yeah.

Or where we really want to make sure the person on the other side has the same bodily experience as we do as a human, say a psychologist, for example, psychotherapy. But I don't know, it's tricky; hopefully we'll figure it out. What I'm more worried about is how the transition is going to happen to a world where most of the jobs can be done by machines, and the economic gains from that automation are probably going to go to capital, as economists call it, which means the people who own the machines, and the vast majority of workers could be in real trouble. I don't think our governments have been thinking carefully about how we deal with that.

>> How much time do you think we have till that happens?

>> I'm fairly agnostic about timelines. There are so many possibilities, and the speed at which science advances is very hard to predict. So what I can do is look at the data. Scientists are tracking many benchmarks of AI capabilities, and you can look at those curves and say, well, if it continues in the same direction, where does that lead us in 3 years, 5 years, 10 years? But that leaves a lot of unknown unknowns.

So specifically, one curve I encourage people to look at comes from a nonprofit called METR, where they looked at software engineering tasks and the planning abilities that are linked to them. They measure, for any particular task, how much time it takes a human engineer to do the task, and the duration of the tasks that AIs are able to do is growing exponentially. It's doubling every seven months, and right now it's like at the child level: they can plan about half an hour ahead. But if the curve continues, that means in about 5 years they're at human level. So that gives you a sense, but of course things could slow down, and things could accelerate if AI is used to do AI research. There's a lot of unknowns.

>> So when it comes to software engineering, do you think it's going to exist in 5 to 10 years? Because somebody has to run those machines, or are they going to be running themselves?

>> Yeah, but we might need fewer engineers indeed. It's kind of ironic that the people who are building the AIs might be the first ones touched by losing their jobs, because AI is automating.

But I'm not that worried about those people, because the demand for computer scientists is still growing very fast and the salaries they're getting are very large. I'm more worried about the people who are already at the bottom of the scale and could lose their jobs, in service jobs and so on, which don't require a lot of expertise, which AIs probably already could, with a bit of engineering, replace, and which many companies are already trying to exploit.

>> Can you give advice to those people who are listening?

>> Make sure your government understands that you're not happy with where it is going, so that they start taking it seriously.

>> But also, when it comes to bigger decision-making, it feels like there is not much that you can do as an individual. But when it comes to improving yourself, you can do a lot, right? Is there anything practical that they could be doing right now? Maybe learning something, getting extra education, I don't know.

>> Yeah. Shifting to jobs that are either more physical or more relational, as we discussed, is going to be helpful.

>> Yeah, it's interesting when it comes to robotics, right? How soon are they going to be able to understand any environment and replace us in those jobs? Because I've heard Geoffrey Hinton said learn how to be a plumber or something.

>> That's right.

>> Yeah, it's going to be in demand. So, when you think about your four-year-old grandson, would you encourage him to go to college?

>> Yes. Because

education is really important, and education, contrary to what some people think, isn't just about acquiring the skills to get a job. Education is, in my opinion, mostly about how to become a better human being. How to understand yourself, how to understand our society and each other, understand science. We will still need citizens to have that really good level of understanding in the future if we want our society to take the good decisions, the wise decisions, because it's going to be easy to be swayed by wrong beliefs that end us in a bad place.

>> Do you think education is going to look different? Do you think it's going to be the Harvards and Stanfords of the world, and then everything else will be just AI online?

>> I don't know, I'm not an expert in education, but yeah, it's already being changed. We're seeing sort of a parallel way of educating ourselves thanks to the chatbots. So I expect this to grow. Does it mean that traditional in-person education is going to go away? Maybe not, because there's a part of the education which is, oh, I'm moving out of home and socializing with other people like me, and learning something that is outside of the classes, and interacting in person with the teachers, the professors. That's also a piece that you can't easily replace.

>> 100%. Is there a career path you're encouraging him toward?

>> No, I don't want to do that. I think our children should be given all the possible opportunities, and they should try to explore by themselves. It's too easy to ask our children to be just like us, right?

>> Yeah. But it's also like in terms of exposure, you can expose them to different things so they could see more things.

>> Yeah. They will be exposed to the things that we do. So one of my sons has chosen to do machine learning research, for example.

>> See, yeah, it just comes down to exposure as well. Do you feel the future is going to be more humanitarian, or more mathematical and scientific?

>> I don't think it's a choice. I think being humanitarian requires a good rational understanding of the world. Without it, we can't take good decisions for ourselves. But also, if you think about AI, we can't take good decisions if we don't understand how the world is and how to reason with that information. And so, in order for democratic, humanist values to prevail, we also need reason to prevail. We need science to prevail.

>> You guys know how much work goes into this podcast. Thank you so much for your support. I started a newsletter to share more of my business mistakes with this and another company that I'm running, AI tools that I'm testing and using actively, and behind the scenes of building my team. It's free and lands in your inbox every week. Link is in the description. Let's keep learning together in this new AI era. So, if you could go back 30 years, to the moment when you first started working on deep learning, what would you do differently?

>> When I started my career, I didn't care too much about politics and society. I was focused on the math and the programming, and on interacting with machines more than with people. But as I grew older, I became more aware of how what I was doing would potentially impact society in both positive and negative ways. So in 2012, 2013, when my colleagues Geoff Hinton and Yann LeCun were recruited into industry, I was concerned about how AI would be used for personalized advertising, and I thought this wasn't really healthy in some ways, and I decided to stay in academia and to see how AI could be developed for good, in medicine, to fight climate change. And of course, more recently I've been focusing on what can go really wrong if we're not careful how we steer AI. Not just the benefits, but avoiding the catastrophic risks.

>> Is there an AI breakthrough that you really want to witness in your lifetime?

>> I would just be content to make sure we don't do something really terrible. I think our democracies are really threatened in many ways, and AI could make things a lot worse. And in a way, there's a dynamic in which not having good, wise, and humanist governance and governments prevents us from steering AI towards what's going to be beneficial for all.

So yeah, I used to not care too much about social impact and politics, but in the last 10 years I've started to be clearly conscious that my work was not detached from society, that my work did have an impact, and in fact that I could choose what I would work on to really be aligned with my values and my hopes for the future.

>> Is there any government that's doing it right when it comes to AI?

>> I think most governments underestimate how much of a change is likely to happen as AI capabilities continue to grow. It's a natural human bias. We tend to think of the future as a slightly modified version of the present. But if you take yourself 5 years ago and think about what we have now, you probably would say that's science fiction, right? And if you go back 10 or 20 years, well, for me at least, it's even worse. So we have to do a bit of twisting of our minds to imagine a future where there are machines that are basically smarter than us. And that is the question I think governments haven't been grappling with sufficiently.

>> So it's January 2026. AGI, or whatever it is, AI thinking strategically, might be a couple of years away. Jobs are transforming. If you had to give one principle to people to guide their decisions this year, what would it be?

>> Think about what you can do to bring about a better future according to your values and to your emotions. Because if we all remain passive observers of what's happening, we might not go in the right direction, not the direction that you would want for you, for your children. But we also tend to underestimate our ability to influence the future.

Your audience I think is a kind of audience that can have a lot of influence on the future.

But we have to start thinking beyond our little selves, and more about how the self is connected to the world and what I can do, maybe in small ways, to bring about a better future, in whatever ways; there are many ways.

>> Can you name the top three? Like, talk to your government, right, as number one?

>> Yes. I think one of the biggest dangers we have is not managing the transitions and the growth in capabilities of AI, as I've been talking about, but there are others. What we're doing to the environment is extremely dangerous, although I think it's longer term. I think what is happening with our democracies is very dangerous as well. But it's all right: each of us can choose our battles. But we should try to expand our horizon of what matters and be more ambitious about what we could do potentially. And we have to do it right; we have to choose where we go. For example, it's not true that everything that could be done with technology is going to be done. We can choose in which direction AI is going to be deployed. Take jobs: in principle, if it's just market forces, then everything that can be automated will be automated. But maybe that's not what we collectively want. Maybe there are jobs that should not be automated, even though they could be, because of the choices we make for our collective well-being.

>> I love that. Thank you so much. This gave me a lot to think about, and I guess we have something on our to-do list. Thank you, Yoshua.

>> My pleasure.
