LongCut logo

Next Phase of Intelligence | World Economic Forum Annual Meeting 2026

By World Economic Forum

Summary

Topics Covered

  • Train AI Like Physics Laws
  • Humans Learn Continuously
  • AI Needs Physical Intelligence
  • AI Follows Different Trajectory
  • Build Self-Correcting AI Society

Full Transcript

Heat.

Heat.

in AI. The premise is that most of the progress in AI up to now has been through scaling, more data, more compute, and that that is still useful,

but there are other better things. So,

I'm going to ask each of our three wonderful panelists to talk a little bit about what they're working on now. By

the time we're done with that, our fourth panelist, you've all Noah Harrari will arrive and he'll join in and try to catch up. So Yosua, you're working on

catch up. So Yosua, you're working on scientist AI, which is incredible.

Explain what it is and how it's different from previous paradigms of AI.

>> Thank you. Thank you. So what's

motivating the scientist AI and also the new uh nonprofit I created to uh engineer it called LA zero is um how it

it addresses the question of reliability of the AI systems we're building especially the Gent systems uh how uh it

deals with the issue that current AI systems can have goals sub goals that we did not choose use and that can go against our instructions and this is

something that's already been observed and it's uh you know even more prevalent in the last year across a number of experimental studies but also in the deployment of AI for example with cy

fency uh it's an issue uh that is uh kind of very concerning when you look at behavior of self-preservation where AIs don't want to be shut down and want to evade our oversight be willing to do

things like blackmail in order to escape our control so even uh things like preventing uh misuse. The the companies

put monitors and guardrails, but somehow this still doesn't work really well enough. And the core of our thesis is

enough. And the core of our thesis is that >> we can change the way that AIs are trained. So it could be the same kind of

trained. So it could be the same kind of architecture but the training objective and the way we message the data >> uh is going to be such that we obtain uh

guarantees that the system will be honest in in a probabilistic sense.

>> Okay. So how do you do that?

>> How do you do that? So the the core of the idea which is connect >> I'm trying to do it with my kids.

>> Yes. So the core of the idea which is behind the name is take as an inspiration not to imitate people but to imitate what science at an ideal level

is trying to do. So think about the laws of physics. The laws of physics physics

of physics. The laws of physics physics can be turned into predictions and those predictions will be honest. They don't

care about whether the prediction is going to help one person or another person. So it turns out that it is

person. So it turns out that it is possible to define training objectives for uh neural nets so that they will converge to what something like you know

scientific laws would predict and then we get something that we can rely for example we can rely on to uh create technical guard rails around agents that

we don't trust. So if an agent is proposing an action uh for each action that the agent proposes uh a honest predictor could tell us whether that action has some probability of creating

a particular kind of harm and of course veto that action if that's the case.

>> But you still are then going to be required to put in some threshold of when it will take that action. Right? If

it has a percentage odds of harm of more than one in 10 or one in a thousand wherever you put it, you still have some human concern, you still have some potential harm to create.

>> Absolutely. So when we build a nuclear plant, we have to decide where we put the threshold.

>> Oh, so we're okay.

>> Right. And uh for nuclear plants, it might be, you know, one in a million years that something bad is going to happen because it's so severe. Depending

on the kind of harm that we're trying to prevent, society, not AIS, have to decide where we put those thresholds, >> right?

>> I've always thought it was interesting that uh for most things, we'll accept like a one in a 10 million chance of nuclear plant exploding, but we continue to build AI even though general predictions that it might wipe out humanity are like 10%. Um, all right.

Ejen, why don't you talk a little bit about some of your work in continual learning? And you, of course, have been

learning? And you, of course, have been a brilliant critic of scaling laws for a long time, including on a panel last year with Yoshua. So, tell us what you're working on now.

>> All right. So, uh, let me step back a little bit before I do continue learning. Um, you know, right now AI is

learning. Um, you know, right now AI is like a super impressive, but it's a little bit jagged intelligence, right?

in that it's amazing at bar exams and you know some of these like really difficult uh international math Olympia problems yet uh you know you're not

going to rely on it uh for uh you know doing your tax return or even like uh making some important transactions because you may not be able to click the

right button on your computer. Um so why is it that is because right now the way that we train AI LLMs general AI is too data dependent and it's one time

training and then you deploy it and it may or may not make a mistake. So here

are a few really important things that I think we need to solve in order to get to the next next intelligence. So the

first first of all um continual learning. So

learning. So you know like 101 machine learning 101 is that you separate out training from testing. It's almost a scene to mix the

testing. It's almost a scene to mix the two. But human intelligence is not like

two. But human intelligence is not like that. From the day one a baby is born

that. From the day one a baby is born it's in the deployment mode. It has to you know figure things out. It's a real life. So humans can learn during the

life. So humans can learn during the deployment time and we need to somehow figure out how to ensure AI can learn continuously during the test time. So

it's test time training that I'm working on. Another angle that's really

on. Another angle that's really important is that currently the reason one of the key reasons in my mind why AI is unreliable sometimes and you know we

need to worry about the safety concerns as well that you know for example paperclip uh you know scenario where you you ask LLMs to generate you know as many paper

clips as possible it might kill all of us in order to produce one more paperclip right so in order to avoid that kind of a silly situation that's harmful for humans. Uh AI should really

figure out how the world works for the sake of learning how the world works as opposed to just passively learning you know whatever uh data that's even given to us. So I think a fundamental

to us. So I think a fundamental challenge here is that LLMs learn passively as opposed to proactively.

It's not really thinking for itself. is

just trying to memorize all the texts given to us and then try to solve all the math problems given to us as opposed to, you know, us humans being curious

about how the world works and trying to think for ourselves. And then lastly, it's way too data dependent. Wherever

data, you know, wherever data is rich, it works. Wherever data is not rich, it

it works. Wherever data is not rich, it doesn't work. That's how things are

doesn't work. That's how things are right now. And then you know safety is

right now. And then you know safety is hard because we have to create all the safety data and you know red teaming jailbreaks these are not area of domain

where there's a lot of data. So um in order to fix this problem I think we need a entirely different learning paradigm where it's really about

thinking for itself almost trading off data with compute. So you know learning with way less data but with more mental efforts >> quickly eging but if you have continual

learning doesn't that open up a whole new spectra of problems like right now you build a model you run a bunch of tests eventually you refine it few months later you do it again if it's continuously learning how does that not become just suddenly infinite right I

mean if you are learning from every answer and you're giving feedback and like a baby's like in its crib it's walking around it's contained but if you have you know a few billion people using a model at any time and it's learning

constantly. Doesn't that open up whole

constantly. Doesn't that open up whole whole new vectors of wonder but whole new vectors of problems?

>> Uh yes and no. So it could in theory in in a you know long long term but uh it's so far off in my mind in the sense that

humans can also continually learn but there's a limit as to how much we can really reach. But another problem is

really reach. But another problem is that um after the system has evolved sufficiently through this continual learning all the safety tests that we

did previously may not be valid anymore.

So I think there's a real safety risk that you're pointing to.

>> Uh yes. So my hope is that uh if AI is trained correctly from day one uh so that it really understand human norms

and values not just math problem solving but human norms and values such that uh it will build its worldview and everything else on top of it and then

it's going to behave based on that.

>> And how do you deal with reward hacking?

In other words, even if it understands human values, it might >> have optimized something that is not quite what we want. We know what that gives right?

>> So reward hacking implies that, you know, we just go with reinforcement learning and that's all we got.

>> No, it shouldn't be all we got. No human

being is uh optimizing for one reward for the rest of their life, right? like

we have so many different goals that are at ours and we make some sacrifice you know I might want to do something but I might not do it because you know I respect other people right so AI should

be exactly this that they should understand that values are at odds in real life in human life and that it needs to know uh how to make the trade-offs such that it's going to not

violate laws it's not going to harm people and whenever it's not clear what to do Because there are always situations in which it's not clear what the gold answer is. It should consult

with humans and release the decision making to humans.

>> All right. But we're going to move to Eric. Eric, you have recently built a

Eric. Eric, you have recently built a big new big new model K2 think you've had a whole series of innovations in it.

Explain what's you've done what you've done that's novel and what's different from the amazing stuff that Ejing and Yosua are working on.

>> Well, I have the rest of my boot. So uh

yeah as uh Nick just mentioned uh we at MPCI uh are among a few maybe the only university that is actually building those uh foundation models from scratch.

From scratch meaning that you know you gather your own data you implement your own algorithm you build your own machine and then you train from top to bottom and then you release and serve the whole

process. I thought that is important for

process. I thought that is important for academic to be a player like this so that uh we can share the knowledge to the public so that people can study many of the nuances you know in building this

and also understand the safety and risky issue in bed. In fact I want to say that it is by no means easy it's very very difficult. In fact uh I almost want to

difficult. In fact uh I almost want to say that AI systems and the softwares are actually very vulnerable. They are

not very robust and they are not very powerful. you remove one machine from

powerful. you remove one machine from the cluster, you can crash the whole thing already. Now, what I'm building

thing already. Now, what I'm building right now and know uh is of course to improve uh AI performance. But um I want to maybe add on your question uh a

comment you know on uh what do we mean by intelligence and how to break it down because if I tell my engineer say hey build a software that is intelligent they don't know what to do. So many

people have different opinions on intelligence. They are Nobel Prize

intelligence. They are Nobel Prize winners, you know, in economy who may not do very well in their stock inspection. You know, their wife may do

inspection. You know, their wife may do better than them. You know, that's actually reflecting already different level of intelligence and different utilities. In my opinion, what LM right

utilities. In my opinion, what LM right now is delivering is a limited form of intelligence. I would call them maybe

intelligence. I would call them maybe textual intelligence or maybe visual intelligence which is actually on a piece of paper in the form of language

or maybe video but uh these are like book knowledge if you want to put it on action. I was actually hiking a week ago

action. I was actually hiking a week ago in in the Austria Alps. I do the GPTs, I do the Google, I got all the train, you know, guides and even Google map in my

hand. When I walk to the mountain, you

hand. When I walk to the mountain, you still cannot rely on paper. You have to rely on yourself. You know, you you have all these unexpected situations. Snow is

too deep and the weather is no good and uh you cannot see the past anymore. What

do you do? So, this requires already a new type of intelligence that is not available right now in which we call physical intelligence and that's actually where people hear about the

topic of war models. Word model is about understanding the world able to generate the plans and the strategies and the sequence of actions purposefully so that

you can execute it and you can actually deploy it and also you can adapt to changing environment but still this is uh not necessarily the smartest thing

that we could imagine because uh you know uh I would call the next level beyond physical intelligence would be social intelligence you know right now we don't actually see two LMS

collaborating yet they don't really understand each other in the form that we human do right there is no definition of a self what is my limitation what is your limitation how can we divide the

job into two or 100 so that we can you know break them into parts therefore you can never you can never ask LM you know our model to help to run a company or run a country because they don't

understand this kind of nuances of interactive behaviors I would put in fact also a last layer of intelligence that is still even further which I would

for the sake of for the lake of good name I call them philosophical intelligence which is that is LM or AI models itself curious to discover the

world to look for data and to learn things and then to explain without you know uh being asked to explain that's probably where Josh is very very uh

concerned about because uh that's where you start to see definitively some sign of uh identity and agency I want to say that we are not there yet.

We are very far from there. Even the

current physical model uh the war model is very primitive because it is primarily rely on a run architecture that is directly a offspring of the LM.

So what my work is involving right now is to come up with new architectures which uh represents the data do the reasoning and also do the learning using

different ideas. People may heard about

different ideas. People may heard about the Yanakun's architecture of Japa, right? It is a architecture behind many

right? It is a architecture behind many of the current world models. We have

alterative model called the JP which does the following. First of all, your representation of knowledge needs to be richer, need to be containing both continuous and symbolic signals so that

you can reason at different level of granularity. And secondly, you need to

granularity. And secondly, you need to have the right architecture which can carry a long way. People play with the Sora probably have that experience. How

many seconds of video can generate?

Maybe 10 seconds, maybe a minute. It's

not because they run out of memory. It's

because going beyond one minute or 10 minute, you don't have the ability to track consistency, to reason consistently. In fact, you can try a

consistently. In fact, you can try a very interesting experiment. You just

ask the Soro or maybe Jimny to generate you 360 degree of a round view around you and then turn back to your degree zero. Did you see the same thing or not?

zero. Did you see the same thing or not?

it is not guaranteed. That's actually a lack of consistency already in the system. And then state for

system. And then state for representations you know and also um uh there are things like a continuous learning paradigm is a problem right now

all models uh in the form of what we call passive learning. You feed the data and uh and then the model will be trained on those data. In machine

learning, we knew in the past a new paradigm called the active learning or proactive learning where the system should hopefully be able to identify where they want to learn uh more you

know by using asking for more data but we are not yet there not alone go out and looking for data and create data themselves. So I think you know uh AI uh

themselves. So I think you know uh AI uh as of now in my opinion still is a very primitive age. We have a lot to do to

primitive age. We have a lot to do to really get it to work.

>> Well, that you've all you this is the handsome man at the end of the panels.

You've all know Har. You probably

already know that. Welcome. How are you?

He's late for the same reason that like everything is complicated in Davos, which is geopolitics. Apparently, Macron

went late. I pushed his panel back. So,

here we are. Um, you've all been talking about different paths, new research has been doing that Eugene's been doing. You

just walked in, what Eric has been doing. Lots of different promising ways

doing. Lots of different promising ways to make AI go faster. I'm going to just ask you a philosophical question which is do you think that as we look for new models of AI

we should be trying to make it more like the human mind or less like the human mind? This is something you've written

mind? This is something you've written about beautifully but I haven't heard you talk about this in the last while.

>> No I think it's completely different from the human mind. The whole question of when will AI reach the same level of human as human intelligence. This is

ridiculous. It's like asking when will airplanes finally be like birds. They

will never ever be like birds >> and they shouldn't >> and they shouldn't be and they can do many many things that birds can't. And

this will be the same with AIs and humans. They are not on the same

humans. They are not on the same trajectory behind us. They're on a completely different trajectory for better or for worse. I'm very happy to

hear that it is still that AIS I'm not again I'm not sure to what extent we can rely on it how long it will continue but the fact that AIs for instance cannot cooperate so far this is wonderful news

I hope it's true I hope it will remain like that otherwise we are in very very deep trouble for me the lesson from from history about intelligence you don't

need a lot of intelligence to change the world and potentially to cause havoc uh you can change the world with relatively little intelligence. And the

other thing we've learned from human history about intelligence, I'm not referring to anybody in particular.

And uh the other thing we've learned about intelligence is that the most intelligent entities on the planet can also be the most deluded. Human beings

are by far so far the most intelligent entities on the planet and the most deluded. We believe ridiculous things

deluded. We believe ridiculous things that no chimpanzeee or dog or pig would ever dream of believing like that if you uh uh go and kill other people of your

species after you die you go to heaven and there live blissfully ever after because of of the wonderful thing you did that you killed the these other members of your species. No chimpanzeee

will believe that but many humans do at least where I come from. And

um you can again when I say that you can change the world with relatively little intelligence. Humans have already done

intelligence. Humans have already done much of the of the hard work for the AIs. Like if you drop an AI in the

AIs. Like if you drop an AI in the middle of the African savannah and tell it take over the world. It can't. How it

will do it? Impossible. But if first you have these apes who build all these bureaucratic systems like the financial system and then you drop the AI into the

existing financial system and you tell it okay now take this over that's much much easier the financial system you don't need motor skills you don't need

even to understand the world and AI can understand the financial system is the ideal playground for AI it's a purelyformational like to train Train

AIS to make a million dollars. Create a

million AIs, give them some some seed money, let's see you make a million dollars. Now, if you have a few AIs that

dollars. Now, if you have a few AIs that succeeded in doing that, replicate them.

What happens to the world if um more and more of the financial system is shaped by AIs that developed bet even though

they can't walk down the street they know how to invest money better than humans it's a very very limited intelligence

but again think about social media um social media is run to some extent by extremely primitive AIS these algor algorithms that control our our news

feed and so forth. Look what they did in 10 year. We created a human system media

10 year. We created a human system media and then we introduced the AIS into our system and it's anformational system and

they took it over and they to a large extent wrecked the world.

They are not the only reason for for for the mess now in the world. But if you think about what extremely primitive AIs did within the human created system of

of media then >> well I'm going to I'm going to move this to Yasha because he in fact has invented AIS or is working on inventing AIS that if dropped into the financial system and

told to wreck it would not be able to correct >> that's the hope respond to you all. Um I

I I want to add something going back to your first question connecting humans and AIs and whether we should build AIs at our image.

>> Yeah.

>> Um and indeed they're quite different from us. The problem is we interact with

from us. The problem is we interact with them. Many people interact with them

them. Many people interact with them with the false belief that they are like us and uh the smarter we make them >> um the more it's going to be like this and and there will be people who want to

make them even look like us. So it's

going to be video first eventually maybe physical form but it's not clear that it's it's good in in many ways in terms of uh how u you know humanity has

developed norms and uh expectations and psychology that work because we interact with other humans but AIs are not really humans for example they could be

immortal right once an AI is created uh in principle you could just you know copy it on more computers and we can't do that with our brain as Jeff Hinton has been highlighting many times and many other differences like they can

communicate with each other a billion times faster than we can do with between each with with us with ourselves. And so

there, you know, there's going to be this illusion that we build machines that are like us, but they're not. And

and this is a dangerous illusion that could lead us to take wrong decisions.

Um the problem part of the problem is the scientists themselves like in in the last 40 years that I've been working on AI uh in the whole community really we

took inspiration from uh human intelligence right like you were talking about continue learning because we're good at that and we see that it's lacking in in AI and and that's fine

that's how research has been moving but I think we also have to think of what's going to happen when this uh gets to be deployed in society more and more and people will anthropomorphize and and do weird things.

>> It's it's an amazing question. Let's

let's let's move this to a to a topic that I think connects to this pretty well which is uh back to the architectural questions or you know foundation questions which is the question of open source and there's actually been more and more discussion here in Davos in part because Europe is

recognizing they need a counterweight to the USAA models. Eric you're building open source models Egene you have strong views on them. Yosua you have strong views on them. Um Eugene why don't we start with you? What do you think of the

notion that it would be good if there were many more open source models that we all started to use as much as we use the large foundation models?

>> Yeah. So the way that I like to think about open source is democratization of generative AI which is a powerful powerful tool. And what I mean by

powerful tool. And what I mean by democratization of generative AI is that it should be AI should be of human for human by humans. AI is of human because

it's really drawing from the internet data. That's the artifact of human

data. That's the artifact of human intelligence. It reflects our values. It

intelligence. It reflects our values. It

reflects our knowledge. By the way, values including horrible value, you know, that we do to each other. It

happens to be on the internet and so AI picks up on that. Uh there are sci-fi movies in which, you know, AI kills us all and as a result, that's what AI, you know, might actually say because it's uh

written in the internet. Um AI should be for humans in that uh you know humanity at large and all of the humans not just

some humans who happen to be in power.

I deeply care about this that AI should be really for all humans. And by the way worse than AI for some humans is AI for humans

or humans for AI even worse. It's really

good to think about how we build and design AI so that we work on problems that could really make humanity better as opposed to you know just uh increase

subscriptions and uh win the leaderboard. And then lastly AI by

leaderboard. And then lastly AI by humans. What I mean by that is that AI

humans. What I mean by that is that AI should be created AI should be able to created by you know different countries and different uh not just the private

sectors but public sectors and nonprofit or and even academia. The reason why I think about this way is that well I'm US citizen now but I used to be a Korean

person and it's a very wonderful thing if we we know how to create this even from Korea or from other countries as opposed to them having to just rely on a country or two providing all the

services for them.

>> But would your would your goals be satisfied if a Korea had a closed foundation model or do you want there to be a universal open model that everybody is able to contribute to? Eric, you

agree with >> people can choose to close or open. But

the reason why for the time being I really support open source is because uh it takes too much of resources to build

something really really good fast and so unless you're capable of you know really uh making very large data centers and

own lots of GPUs really fast it really helps to help each other to uh share the scientific knowledge and everything so that uh the development goes much faster

and By doing so, by the way, we can make small models much more powerful. So that

uh a lot of organizations who cannot afford as much can build LLMs that serve just their their needs, not like general LLMs that can do everything, but

something that really serves a business need really well.

>> All right, Eric, so you nodded one point and shook your head at another point, so I need you to uh respond quickly here.

>> Yeah, I I think open source, you know, isn't really the goal. It is basically you know a philosophy or a way of doing things which come very naturally with science with any of the scientific

>> what do you mean it's not the goal like it's not like you're not doing it for the sake of open source you're doing it because it's a more efficient way to reach the outcome >> no no it is really a almost like a

responsibility or a natural style of doing the research of AI you know uh in fact there also pragmatic values for example I often ask do you prefer there is only one car maker in the world that

makes you feel safer or you actually see 10 or 100 is better, right? Open source

basically is about sharing knowledge to the general public so that people can use it also people can study it and understand it and improve it. Of course,

the assumption is that this technology you know uh is not a evil okay it's not you know something that you really want to get rid of. I don't think technology itself by definition any technology is evil. It's really about the people who

evil. It's really about the people who use it in a wrong way. But uh by closing sourcing it you don't actually stop that. So open sourcing you know over the

that. So open sourcing you know over the the the benefit from open sourcing in my opinion overweights closing it because uh you know first of all you cannot stop you know the way of using it and

secondly by open it you actually you know are promoting more adoption and more understanding of that. I also want to go back to uh the issue that Josh uh

just mentioned about the the the the impersonalization you know of human technology creates the risk. Well, that

is how we see it now. It's it's kind of also you know implicitly assuming that we human being you know do not learn you know from the new experiences. In the

past if you look at the history there are many magical inventions which may make certain population godlike but then after some time people actually get

comfortable with it and stop to form better judgment and also better understanding. I think the way of really

understanding. I think the way of really you know making people safe and comfortable and coexistent nicely with AI is to use more AI and also get

quickly adapt to it. It's like you are in the natural environment you had the virus and so forth. Of course you want to uh think about stopping it but

sometimes in the nature choose to let you coexist with the virus so that you become stronger. you know there are some

become stronger. you know there are some risk some casualty but as a population as a society we together evolve stronger.

>> Yeah sure I have a question for you.

>> So as a university professor you know I've been promoting open source and of course open science for all my life but uh if you start asking ethical questions

then you you you know at some point you start hitting a a problem which is some knowledge can be dangerous when it is available to everyone. So, I'm going to

give you a simple example. Uh,

biologists are working on how to create new DNA sequences that can actually create new uh, viruses that don't exist.

>> And if you know a sequence that gives rise to a virus that could kill half of the planet, should you publish it?

>> And and the answer should be obvious in this case, right? So, current AI systems that are open sourced are net positive.

It helps um it helps safety, it helps democratization of AI and I'm as worried as you are about concentration of power.

Uh I'll come back to that. Uh the

problem is if the capabilities of AI continue to grow along the directions that we've been talking about, uh at some point we end up with AI systems that are like well not the sequence

itself but the machine that can generate the sequence that can kill half of the population. So when AI reaches that

population. So when AI reaches that stage um we should not just you know give it to everyone because there are a lot of crazy people there are dangerous people there are people who want uh to

use it you know for uh destroying their enemies and military ways. So we should be very careful when we reach a level of capability where AI can be weaponized.

Now I agree about the issue of concentration of power but there are other ways than open sources. When we

get to that point where AI could be weaponized, I think and before we get there, we need to think about it. We

should really think of how we can manage um a few uh not just one uh a few AI systems that will be dangerous in the

wrong hands and where the power to control these things will be decentralized. Right? So what we don't

decentralized. Right? So what we don't want is one entity, one government, one corporation to dictate, you know, what the world should be. Um but I think that there are solutions to this and we have

experienced this sort of thing in the uh the international arena uh with international treaties what we've done with nuclear weapons what Europe has done with the EU and so on. I think that

there are solutions and we should think about ways to both avoid catastrophic use and uh abuse of power in in the hands of just >> this is I I want to bring in you all here because this is like this is an

amazing philosophical question right there's the you know incredibly powerful technology are we safer if everybody's contributing to it and everybody has a say over it but everybody kind of has

access to it or are we safer if a relatively small number of people who can be controlled are answerable to governments and are all here in you know this somewhere in Congress center um have control of it. Have we ever faced

this historically you've all has there ever been a moment like this? And what

was uh what happened?

>> I welcome Okay. Sorry.

>> Um I think the main point is that we just don't know. We are at a point when we are conducting this huge historical experiment and we just don't know. The

key question for me, how do we build a self-correcting mechanism into it? How

do we make sure that if we get the answer wrong, we'll have a second chance? And the model for me is the last

chance? And the model for me is the last big technological revolution, which is the industrial revolution. When the

industrial revolution begins in the early 19th century, nobody has an idea how to build a benign a good industrial society. This immense new power, steam

society. This immense new power, steam engines, railroads, steam ships, how do you use them for good? And different

people have different ideas and they experiment. And European imperialism was

experiment. And European imperialism was one experiment. Some people say the only

one experiment. Some people say the only way to build an industrial society is to build an empire. You cannot build an industrial society on the level of one

country because you must control the raw material and the markets. You must have an empire. Then you have people who say

an empire. Then you have people who say it must be a totalitarian society. Only

a totalitarian system like bullsheism or like Nazism, the immense powers of industry can only be controlled by a totalitarian society. Now looking back

totalitarian society. Now looking back from the early 21st century where we can say, "Oh, we know what the answer was.

We think we know. It took 200 years of terrible wars and hundreds of millions of of casualties and you know injuries that are not healed even today

to find out how to build a benign industrial society. And this was just

industrial society. And this was just steam engines.

Now we are dealing with potentially super intelligent agents. Nobody has any experience with building a hybrid human

AI society. We should be a lot more

AI society. We should be a lot more humble in the way that we think we know how to build it. No, we don't. How do we make sure I don't know what the answer?

The question is how do we build a self-correcting mechanism? So if we take

self-correcting mechanism? So if we take the wrong bet, this is not the end.

>> I want to bring the conversation from philosophical back to more like a practical part because u it's about where the checkpoint should be, right?

You talk about a dangerous virus.

Financing a virus is actually not easy.

You know the idea of a nuclear bomb for example is published somewhere. You can

Google it but you cannot build it because you need to get the materials.

You need to get the labs. There are a lot of checking points already. You

know, you know we learned from generations and centuries of governance and regulation and the human practices were set in many places already. After

all AI is a piece of software. It is

software living in the computers and uh when it does the physical harm it need to go out of computer that's already one extra checkpoint >> humans can do it for for the AI and eventually there will be robots that

will do it >> and humans are subject to checkpoints as well right virus on the other hand does not >> but let me let me ask you this since since this is all this panel is all about like how to best construct the

next generation of AI probably all agree here on this panel that we we want lots of checkpoints and good checkpoints we disagree on whether we have enough right now. What is the sort of methodology or architecture of

AI that has the most checkpoints?

>> Eugene, you got one right there.

>> Yeah. Um I I mean I have a proposal uh to handle this situation better. I think

fundamentally the the problem is that AI is too dumb. It's going to learn on any data that you give to it. And if you happen to give data about how to do

cyber attacks or how to generate bioweapons, it's just go ahead and you know learn from it, right? That's the

fundamental challenge we're dealing with. On the other hand, if we build AI,

with. On the other hand, if we build AI, maybe following Yosha's AI scientist direction that really learns, think for

itself and really acquire human norms, understand that that's what it should really abide by. And then when it reads the training data given by some other

human, it refuses to learn. When it

knows that this is illegal, it refuses to learn. And by the way, that's what

to learn. And by the way, that's what humans also do. Like a lot of us of course there are you know people who want to do bad things but a lot of us if I give you how to kill humans I mean

like you know through bioweapons would you you know internalize it for yourself no because you you don't want to act on it. So I think we may need to rethink

it. So I think we may need to rethink about how we design AI training algorithms such that it it has more agency about like how to choose what to learn >> so it should just not train on Reddit at

all. I just want to mention that there

all. I just want to mention that there because we've been talking about the technical aspects uh of these questions right now the way we design AI systems

there is no boundary between uh data and instruction. So in normal programming

instruction. So in normal programming it's two different things right? So a

programmer will read files and then there's the code itself and the programmers write the code and they know that whatever is in the files the behavior is going to be according to the code

>> with the way that we're building our AIS there's no distinction and so that's the reason why it's so easy to in the data put instructions that's how you get

jailbreaks right and and other security issues that we have with AIS um and so I think that in order to get more safety

from the AIS, we we need them to understand the distinction between what we want and what is instructed in a way that's been socially kind of regulated.

So, who decides what the norms are and so on, um, and what it reads as data, what when it has an interaction with a user, we don't want the user to be able to make the AI do anything that it that

they want, for example.

>> So, is that like a set of like master controls you're trying to build? Is that

like a >> No. So in the scientist AI the the the

>> No. So in the scientist AI the the the way that we're doing this is we're training the AI to make the difference between what people will say will write

uh which could be motivated which uh the AI should not take as truth or what it should be necessarily doing and um other

forms of information which uh contains underlying truths or underlying causes of what is being seen. and and that second channel is one that is

trustworthy where we you know we don't necessarily give that access to uh anybody using the AI for example but it's also a way to make sure we get AIs that understand the difference between

uh what people will say and what is actually you know the cause of what they say and if they what they say is true so they get hotesty so we have just a few minutes left we've talked about a lot of new architectures we've talked about

some new agent systems we've talked a lot about open source we've talked about continual learning we've talked about different ways is looking at data and we've kind of talked about all the new systems as though they're good. Are

there any sort of new architectures or methodologies that people are excited about maybe ones that we've talked about on stage that you think are actively bad and that we should not pursue? Maybe

Eric and Een what do you mean by a bad architecture that the consequence are bad or the performance are bad?

>> Either either works fine.

I think you know in fact uh maybe that is even compatible with what Josh is worried about building a system that is uh not in a closed loop fashion that you

purely do uh thought experiments and uh embedding internally in some kind of a latent representations and uh complete all the training uh before emerging to

the real world to validate in my opinion is a bad system >> because first of all >> performance- wise uh there isn't really uh enough

checkpoints to even uh control uh and uh visualize or understand any of the uh risking points and also it is very hard

to connect system to a uh uh action conditioning points that you can steer it, you can navigate, you can manipulate. on the other hand, you know,

manipulate. on the other hand, you know, um it is going to consume, you know, uh data and energy and the resource and the money, you know, uh for too long before

you actually see the end outcome. uh I'm

not going to name any specific uh instance of this architecture but it's actually pretty prevalent that people sometimes believe that I don't need to

really uh you know uh compare you know uh the content from AI system with real world data constantly before you know I achieve a super intelligence and

secondly I also think the current learning paradigm which I totally agree with Yash and Yin about is uh a very very primitive and maybe a uh

unproductive one you know uh the data you know right now you know is uh uh really you know uh the master of the algorithm and of the system and the

system itself uh is basically oneshot learning in a sense you train it and then when now I'm using GPTs or any models they don't actually learn from

that experiences and just like oursel when we in the conversation I'm already learning from both of you and all of you you new points. I enjoy that. And the AI system isn't built for that kind of

functionality yet. And can you imagine

functionality yet. And can you imagine that a system of that kind of dumbness can become super intelligent and come back and go after us. I just don't feel

the dots can connect you know it doesn't have that kind of a task oriented type of data that guides you, you know, beyond just pattern matching but actually do the reasoning and so forth.

So if our goal is to build smarter and more powerful system, there are needs to explore new architectures. Of course,

there is a separate issue about how do we measure the risk? I don't really know the exact answer but uh I want to actually uh hear Josh uh your opinion is

the solution to not doing that or do that with a very very uh conscious and quantitative kind of approach to measure the risk to experiment with all the

scenarios very quickly.

>> Yes, we we need to measure the risk on the fly not just once when we evaluate those models. Um and uh we need to make

those models. Um and uh we need to make sure that we also have the right societal infrastructure. So even if we

societal infrastructure. So even if we knew how to build really safe systems, uh there are lots of bad things that can happen because humans are humans. And so

you know we need technical guardrails and we need societal guardrails.

>> Absolutely. Yeah.

>> All right, let's wrap this up. Nuvall,

um your book came out, Nexus came out about a year and a half ago. You had

some real concerns about AI. You've just

been here with three of the smartest, most influential AI, you know, researchers and uh in the world. They've

won every prize imaginable. Do you feel like we're getting onto the right track or do you not?

>> I think we are thinking on different time scales that when people a lot of the conversations here in Davos when they think when they say long term they

mean like two years.

When I say long term I mean like 200 years. It's like again it's an

years. It's like again it's an industrial revolution. The first

industrial revolution. The first commercial railway has been opened between Manchester and Liverpool in 1830. This is now 1834, 1835. And we are

1830. This is now 1834, 1835. And we are having this discussion. People saying

the industrial revolution is moving so slowly. They told us that railways and

slowly. They told us that railways and steam engines will change the world. So

what? So a few people are going between Manchester and Liverpool. Didn't change

anything. This is all science fiction because the time scale that we have no idea even if the all progress in AI

stops today. The stone have been thrown

stops today. The stone have been thrown into the pool but it just hit the water.

We have no idea what are the waves created even by the AIs that already have been deployed say a year or two ago.

Social consequences are a completely different thing. You cannot run history

different thing. You cannot run history in a laboratory and see what are the social consequences of invent. You can

test for accidents. You create the first h steam engine. You can test for accidents. You cannot test what will be

accidents. You cannot test what will be the geopolitical implications or the cultural implications of steam engine in a laboratory. It's the same with AI.

a laboratory. It's the same with AI.

So um it's just far too soon to know and uh I'm mainly concerned about the lack of concern that you know we are creating we are deploying the most maybe the most

powerful technology in human history and a lot of very smart and powerful people are worried about uh you know

what will the investors say in the next quart quarterly report.

they they think in terms of a few months or a year or two.

>> Um Joshua >> just just quickly uh I want to thank youall because uh he's talking about lack of concern and I've started a new organization a nonprofit that's trying

to implement the scientist AI and Uvel has graciously accepted to be on the board. uh we need people like him to

board. uh we need people like him to look uh with a independent oversight on on what we be be doing with AI on with society in the coming years.

>> All right, time scale 000000.

Thank you so much. This was an amazing panel. You're all absolutely wonderful.

panel. You're all absolutely wonderful.

Thank you for the work you do and thank you for participating here.

>> Thank you.

Heat. Heat. Heat.

Heat.

Heat.

Heat. Heat.

Heat. Heat.

Heat. Heat.

Heat. Heat.

Heat.

Heat.

Loading...

Loading video analysis...