
The AI Tsunami is Here & Society Isn't Ready | Dario Amodei x Nikhil Kamath | People by WTF

By Nikhil Kamath

Summary

Topics Covered

  • Biology's Complexity Yields to AI
  • Leave OpenAI for Serious Safety
  • Intelligence Emerges from Scaling Ingredients
  • AI Tsunami Ignored by Society
  • AI Knows You Better Than Yourself

Full Transcript

[music]

>> So I started playing with Claude. It's getting to that point where sometimes it surprises me by how much it knows me. I don't know if that makes sense.

>> It is surprising to me that we are, in my view, so close to these models reaching the level of human intelligence, and yet there doesn't seem to be a wider recognition in society of what's about to happen. It's as if this tsunami is coming at us, and it's so close we can see it on the horizon, and yet people are coming up with these explanations: oh, it's not actually a tsunami, that's just a trick of the light. There hasn't been a public awareness of the risk.

What is India's role in all this?

>> Many other companies come here seeing themselves as a consumer company, and they see India as a market, right? A place to obtain consumers. We actually see things a little bit differently.

What did you do before founding Anthropic?

>> Yeah, so I was actually originally a biologist. I did my undergrad in physics and my PhD in biophysics, and I wanted to understand biological systems so that I could cure disease. The thing I noticed about studying biology was its incredible complexity. For example, if you look at the protein mass spec work that I did, trying to find protein biomarkers, it's just really incredible how much complexity there is. You have a given protein; the RNA gets spliced in a whole bunch of different ways depending on where it is in the cell, then it gets post-translationally modified, phosphorylated, complexed with a whole bunch of other proteins. And I was starting to despair that it was too complicated for humans to understand.

And then, as I was doing this work on biology, I noticed a lot of the early work around AlexNet, which was one of the first neural nets, almost 15 years ago now. And I said, wow, AI is actually starting to work. It has some things in common with how the human brain works, but it has the potential to be larger, scale better, and learn tasks like biology. Maybe this is ultimately going to be the solution to our problems of biology. So I went to work with Andrew Ng at Baidu. Then I was at Google for a year. Then I joined OpenAI a few months after it started, and I basically led all of research there for several years. But then eventually myself and a few of the other employees just had our own vision for how we wanted to make AI and what we wanted the company to stand for. And so we went off and founded Anthropic.

>> How was it? Was it like a fork between how OpenAI was thinking and what Anthropic eventually did?

>> Yeah. I would say the convictions that my co-founders and I had when we founded Anthropic, there were two of them. One we were starting to convince OpenAI of; the other, I didn't feel that we were convincing them of. The first was the conviction in the scaling laws: the idea that if you scale up models, if you give them more data and more compute (there are a few modifications, like RL, but not really very much; it's pretty close to pure scaling), you find incredible increases in performance. I was finding that in 2019 with GPT-2, when we first saw the first glimmers of the scaling laws. And of course there were a lot of folks, inside and outside, who didn't believe it at all, and we really made the case to leadership: this is important, this is going to be a big deal. I think they were starting to believe us and ultimately went in that direction. And there was a second conviction I had, which is: look, if these models are going to be general cognitive agents, general cognitive tools that match the capability of the human brain, we'd better get this right. The economic implications are going to be enormous. The geopolitical implications are going to be enormous. The safety implications are going to be enormous. It's going to transform how the world works. And so we need to do it in the right way. And despite a lot of language, verbiage, about doing it in the right way, I was, for a variety of reasons, just not convinced that at the institution I was at there was a real and serious conviction to do it in the right way. And my view is always: don't argue with someone else's vision. Don't try to get someone to do things the way you want to. If you have a strong vision and you share that vision with a few other people, you should just go off and do your own thing, and then you're responsible for your own mistakes. You don't have to answer for anyone else's. And maybe your vision works out, maybe it doesn't, but at least it's yours.

>> [snorts] Didn't OpenAI believe in scaling laws? Because they went down the same path themselves too, right?

>> Well, yeah. We succeeded.

>> Can you explain what scaling laws are in very simple terms?

>> It's like if you want a chemical reaction to produce oxygen, or start a fire, or something like that: you need different ingredients, and if you don't have enough of one ingredient, the reaction stops. But if you put the ingredients together in proportion, you get your explosion or your fire or whatever. And for AI, those ingredients are data, compute, and the size of the AI model. The scaling laws just tell you that if you put in the ingredients to the chemical reaction, the ingredients of data, compute, and model size, what you get out is intelligence. Intelligence is the product of a chemical reaction.
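The "ingredients in proportion" analogy corresponds to the empirical power-law form reported in the published scaling-law literature, where predicted loss falls smoothly as parameters and data grow toward an irreducible floor. Here is a minimal sketch; the functional form follows that literature in spirit, but every constant below is purely illustrative and is not from the interview or from any real model:

```python
# Illustrative neural scaling law: loss falls as a power law in model
# parameters N and training tokens D, plus an irreducible floor.
# All constants are made-up illustrative values.

def scaling_loss(n_params: float, n_tokens: float,
                 a: float = 400.0, alpha: float = 0.34,
                 b: float = 2000.0, beta: float = 0.28,
                 floor: float = 1.7) -> float:
    """Predicted loss for a model with n_params parameters trained on
    n_tokens tokens. Smaller is better; `floor` bounds it from below."""
    return a / n_params**alpha + b / n_tokens**beta + floor

# Scaling both "ingredients" together keeps improving the model:
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"N={n:.0e}, D={20 * n:.0e}: loss={scaling_loss(n, 20 * n):.3f}")
```

Starving either ingredient (holding `n_tokens` fixed while `n_params` grows, say) stalls the corresponding term, which is the analogy's "reaction stops" case.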

>> And what is intelligence?

>> Intelligence as measured by the ability to translate language, or the ability to write code, or the ability to answer questions correctly about a story. Basically any cognitive task we can think of, any task that exists in text or in images, any task that you can do on a computer.

How is the intelligence of today as you are describing it different from what a computer could do like 5 years ago?

>> Yeah, I would say, well, for example, 5 years ago you could not ask a computer a question and have it write a one-page essay on that question. You could not ask a computer to implement a feature in code and have it implement that feature. None of those things were possible. You could not generate an image. You could not generate a video. You could not analyze a video. I could get one of those videos of, like, a monkey juggling or something and say: what's going on in this video? How many times did the ball change hands? And right now you could get Claude or another AI model to give you an answer on that. And 5 years ago, none of those things were possible.

>> What I'm trying to figure out is: has the definition of intelligence changed, per se?

>> Well, what I would say is: five years ago, you could Google, and there might be a website that would tell you a little bit about this, right? But you're just looking up some text that exists on the web. Maybe it's not about how to get a monkey to juggle; maybe it's about how to get a seal to juggle. It's not quite exactly the same thing, because maybe exactly the same thing doesn't exist. But as we see when people use these models, you can ask and actually get an intelligent response. You can ask a specific question and have the model write one page about it, or you can give it a hypothetical: what if I had the monkey juggle clubs instead of balls, or what if I did this thing? And that information doesn't exist anywhere, whereas the model is able to kind of think for itself and come up with an answer on its own. So it's something totally new. It's not just matching some of the text that exists on the internet.

>> Fair. So, you know, this is more like a conversation. So feel free to talk about what you want to talk about, not necessarily related to the questions that I'm asking.

>> You look very animated when you speak.

Did you ever teach?

>> Uh, you know, I was originally an academic, and I thought that I might become a professor. I got my PhD; I went all the way to being a postdoc at Stanford Medical School, and I was aiming to become a professor. But as I mentioned, I got interested in AI, and working in AI required a lot of computational resources, and that was mostly happening in industry. So that took me off the academic path and into industry, and of course, ultimately, through several steps, led me to start a company. But sometimes I think I'm still like a professor at heart, at this point.

>> Dario, if AI is the most relevant thing in the world, if the world is realigning in a way and AI is determining who gets what and who doesn't get what (I'm talking about industries), you today are probably the most relevant person in the world, if Anthropic in this last cycle, in this minute, is sitting on top of this pile. For somebody who was going on the path of being a teacher to have arrived where you are today: are you best equipped for where you are today?

>> Well, first I would say a couple of things. I think there are a lot of folks who are relevant in different ways, right? Even within industry, there are the different layers of the stack. There are the folks who make chips. There are the folks even earlier who make semiconductor manufacturing equipment. There are the folks who make models, like us, and there are other players who make models. There are the folks who make applications on top of the models. And then there are a bunch of other folks who have a say: there are governments, there's civil society. So my hope isn't that there's just one tiny set of people that's relevant. I think we're trying to broaden the set of people who are relevant and turn it into a broader conversation. But I think at the same time your question is a fair one, and one way I could interpret it is: there's a certain randomness to how a few people end up leading these companies that grow so fast and that, it seems, in the near future will power so much of the economy. And I've said openly, publicly, not for the first time, that I'm at least somewhat uncomfortable with the amount of concentration of power that's happening here, almost overnight, almost by accident. And we think about that in a bunch of ways. One is we have an unusual governance structure, something called the Long-Term Benefit Trust. It's a body that ultimately appoints the majority of the board members for Anthropic and is made up of financially disinterested individuals. So that's some check on what one single person is doing. And then, I think, as always, the government should play some role here. I've been an advocate of proactive, although sensible, regulation of the technology, regulation that doesn't slow down the technology, because I think the people should have a say: governments, and the people who elect them, should have a say in how this goes. So I actually think of a lot of what I'm trying to do as trying to preserve a balance of power, kind of against the natural grain of this technology.

>> For someone like me who's sitting on the outside and doesn't have a bone in this competition: when I watch OpenAI talk about how they were a not-for-profit company, or you projecting humility in the conversation that you're having right now, or the American companies competing with the Chinese companies, there's this projection of humility, where it is for the larger good, and not necessarily for how I view the world: as companies with shareholders, with investment and revenues, seeking profit. Is this par for the course? Is this something you have to do?

>> So I would put it in the following way. I would say the philosophy of Anthropic from the beginning has been that we try not to make too many promises, and we try to keep the ones that we make. We set ourselves up as a for-profit but public benefit corporation with this LTBT governance, and we've maintained that. We've said that our goal is to stay on the frontier of the technology but to work on the safety and security aspects of the technology. We've pioneered the science of interpretability. We've pioneered the science of alignment. I don't know if you saw, but we recently released a constitution for Claude: the ability to align models in line with the constitution. And we've done a bunch of policy advocacy and warning about risks, right? Warning about risks is not in our commercial interest. People can come up with conspiracy theories, but I will tell you, saying that the models we build could be dangerous, whatever people might say, that's not an effective marketing strategy, and that's not the reason that we do it. And speaking up when we disagree, even with the US administration, on policy matters: we've spoken up. We're willing to say we disagree on this issue. We've said that there should be regulation of AI when all the other companies and the administration have said there shouldn't be regulation of AI. And regulation of AI holds us back commercially as a company, even though I think it's the right thing to do, and it's difficult to go against the government and the other companies and say this. We're really sticking our neck out. So we've taken a number of actions that I see as really putting our money where our mouth is here. I can't speak for the other companies. It's quite possible that some people say these things and don't really mean them, but I wouldn't look at what people say. I would look at what people do.

If what you're saying gets the government to act by regulation, then as the incumbent leaders in this space, you get some kind of regulatory capture, where it becomes harder for the new people coming in as well. Right?

>> I don't agree with that at all. The regulation we've advocated for, for example SB 53 in California, exempted everyone who makes under $500 million a year in revenue, right? SB 53 was a transparency law which basically requires companies to show the safety and security tests that they've run. And it exempts all companies under $500 million in revenue. So it really only applies to Anthropic and three or four other companies. It only applies to the companies that have the resources. And everything that we've advocated for here, not just SB 53, but all the proposals that we've made, the ones we've made in the past and the ones we plan to make in the future, has this character. We're constraining ourselves and a very small number of additional companies. People who say that need to look at the actual content of what we're proposing, because it doesn't match that idea at all.

>> Fair.

I read your papers Machines of Loving Grace and The Adolescence of Technology, and you seem to have had a 180-degree shift in perspective, almost from optimism to skepticism, over like two years, from 2024 to 2026. Is there one moment in the last two years that changed this for you? Did you see something change?

>> Yeah, I actually wouldn't agree with the question. I don't think I've had a shift in perspective.

>> I think the positive side and the negative side are always something that I've held in my head. And if you look at the history of the things that I've said, I've been talking about risks for a very long time. I've been talking about benefits for a very long time. It turns out that it actually takes me a while to write one of these essays.

>> They're really large as well. They're big essays.

>> They're like 30 pages. For each one, I spent about a year having a kind of vague vision of the essay in my head and trying to write it, but not fully succeeding at writing it. And then, in either case, I had to be on vacation or somewhere where I could think, where the day-to-day business of running the company didn't occupy me. And then I was finally able to write the essay. So all of that is to say: I started thinking about what would be in The Adolescence of Technology almost the instant I finished Machines of Loving Grace, because I was like, "Oh, I want to inspire people with the good vision, but I also want to warn people with what can go wrong." And it just took me a year to write it. But really, both visions were in my head, and I think they're both possible. They're two different visions of the future. And obviously, I want to get the Machines of Loving Grace one, right? I want to solve all the problems and have the positive vision. But it's not a shift in perspective. It's me just finding the time to write the light and then the dark.

>> But have you had a change of perspective?

>> I would say overall I'm about where I was before. I've not gotten more positive nor more negative. There may be some places where I've gotten more optimistic, where things have gone better than expected, and there may be places where I'm more pessimistic, where things have gone worse than expected, but on average they sort of cancel each other out. I feel very good about how things have gone with areas like interpretability. Interpretability is the science of seeing inside these neural nets, looking inside as we would scan a human brain with an MRI or a neural probe. I've been amazed at what we've been able to find. We've been able to find neurons that correspond to very specific concepts, neural circuits that keep track of how to do rhymes in poetry. And so we're starting to understand what these models do, right? We just train them in this kind of emergent way, as you would build a snowflake, but now we're starting to be able to look inside and understand them. I'm also very encouraged by some of the work on alignment and constitutions, making sure that models behave in the way that we want and expect them to. I think that's going pretty well; I feel pretty positive about that. I think I've maybe been a bit disappointed, or felt a bit more negative, about some of the things that are more in the public awareness and the actions of wider society. It is surprising to me that we are, in my view, so close to these models reaching the level of human intelligence, and yet there doesn't seem to be a wider recognition in society of what's about to happen. It's as if this tsunami is coming at us, and it's so close we can see it on the horizon, and yet people are coming up with these explanations: oh, it's not actually a tsunami, that's just a trick of the light. And I think, along with that, there hasn't been a public awareness of the risks, and therefore our governments haven't acted to address the risks. There's even an ideology that we should just try to accelerate as fast as possible, which, you know, I understand the benefits of the technology; I wrote Machines of Loving Grace. But I think there hasn't been an appropriate realization of the risk of the technology, and there certainly hasn't been action. So I would say that the technical work on controlling the AI systems has gone maybe a little better than I expected, and the societal awareness has gone maybe a little worse than I expected. So I'm about where I was a few years ago.

So in my own journey: when something sounds complicated, and I'm not a programmer, I don't have a background in coding, I used a bunch of tools for things like research and a conversation both ways, but I never tried to figure out if I could code using your tool, for example. Recently I hired a developer just to push me to sit for a couple of hours a day and teach me how to start becoming more familiar with it, [clears throat] largely because of something like FOMO, the fear of missing out on how the world is changing.

So I started playing with Claude. I used the connectors to connect my Google Drive, mail, and calendar, and a bunch of those things. I started using Cowork, and then I started using Claude Code to write simple programs around the industry that I am in, which is financial services, basically to research stock markets and stuff.

>> We even have Claude optimized for financial services. I don't know if you've tried that, but we even have that.

>> No. And then I went into Clawdbot, which is now OpenClaw. I think Clawdbot became something else and now is OpenClaw. And I set it up on a Mac mini and connected it to a Telegram account, and now I chat with it, and I try and move files from A to B, work on a server on remote. It's getting to that point where, and I'm not talking about OpenClaw but even Claude with all the connectors, sometimes it surprises me by how much it knows me. I don't know if that makes sense.

>> Yeah. One of my co-founders was writing this diary with his thoughts and his fears. He fed it into Claude and asked Claude to comment on it, and Claude said, "Here are some other fears you might have that you haven't written down." And Claude ended up being mostly right about those. So it really gave this eerie sense that the model knows you super well, that from a relatively small amount of information it can learn a lot about you and come to know you fairly well. And, like most things with this technology (we talked about Machines of Loving Grace and The Adolescence of Technology), on one hand, something that knows you really well can be a sort of angel on your shoulder that helps to guide your life and make you a better version of yourself, and that's the version we can aim for. Of course, something that knows you really well can also use what it knows about you to exploit you or manipulate you on behalf of some agenda, or sell your data to someone else. This is one reason we just don't like the idea of using ads: because then you're not paying for the product, you're the product, and in this case the product would be this model that knows you super well and could use that in all kinds of nefarious ways. So we need to make sure we take the positive road here and not the negative road.

>> With Claude, I need to use the connectors to give it context to my life. With Google, for example, it already has the context to my life, because I use their worksheets and their email and their drive and their chat and everything like that. For Anthropic, long-term, will you also have to own the ecosystem?

>> Do you have to build mail and chat and...

>> Yeah. I don't think we need to build all of those things. My thought would be: it's going to be a mixture of things we make ourselves and integrating into others, right? We can integrate Claude into Google Docs; we can integrate Claude into Google Sheets; we have external connectors there, and we're starting to do that with Cowork. Same for Microsoft Office, same for other tools. So I think we do whatever is easiest and fastest to do: we integrate into the existing tools. Now, it might turn out at some point that the existing tools aren't enough and we have kind of a different vision; we might want to slice things differently, right? Maybe traditional email doesn't make sense, or traditional spreadsheets don't make sense, given what you can do with AI. So I don't exclude that we could chop up products in a different way, but we're happy to use the ecosystem that exists and work with anyone else. In many ways, we're a platform company. We allow many people to build on us, even though we sometimes also build things ourselves.

>> The one thing, and this is a slight digression, but I think the one thing that you're missing, that your peer group is also missing, is that in society today people inherently distrust anybody who claims to be doing good or trying to do the right thing. So when you and your peers are out saying (I heard you and Demis speak at Davos; I was in the room when you guys were talking about how Dario, Demis, and a bunch of other people have to come together and prevent things from changing too quickly, like you need to meter it to a certain extent), when a person who is not in your world, in society, on social media, hears a few people speak in that manner, you're doing it in a manner that creates more distrust than trust, because nobody on social media believes that somebody wants to do the right thing or do good. So it might be counterintuitive, but I think it needs a change of strategy. If you were to be more capitalistic about this and own up to the fact that you have shareholders and you seek a profit, but this will help you win, maybe it'll work more. Just a thought.

>> No, I don't really agree with that. I would again go back to the idea that you need to judge us by the actions that we take.

I think the company has taken a number of actions over its time that show it's really serious about these commitments. So, back in 2022, we had an early version of Claude, Claude 1. This was before ChatGPT, and we chose not to release it, because we were worried that it would kick off an arms race and not give us enough time to build these systems safely. It was kind of a one-time overhang: we could see the power of the models, a couple of other companies could see the power of the models, and so we decided not to do that, and that's public, that's well documented. And then we waited until someone else did, and then we were like, okay, the arms race has kicked off, so now we can release our model, but probably the world gained a few months. Now, that was very commercially expensive; we probably ceded the lead on consumer AI because of that. We've advocated on chip policy in ways that have made some of the chip companies who are suppliers very angry at us. We've voiced our disagreement with the administration on AI policy and AI regulation on some matters. Anyone who thinks we benefit from being the only ones to do that: it's really hard to come up with a picture where that's the case. You could look at any one of these and say, okay, fine. But you put enough of them together, and, I don't know, I just ask you to judge us by our actions.

>> Dario, isn't this a bit like rich people saying capitalism is bad? If rich people believed capitalism were truly bad, or that income inequality is such a big problem, the simplest thing to do would be to stop accumulating further wealth, and then nudge their friends to do the same.

>> But I'm not saying AI is bad, right? We just talked about the two sides of it. My view isn't that AI is bad. That's not my view at all. My view is that the market will deliver a lot of really great things about AI, that it's good to build AI, but that there are dangers of AI and that we need to steer AI in the right direction. We're steering this car, we're steering it towards a good place, but also there are trees, there are potholes, and so what we need to do is steer away from the trees and the potholes. We might need to occasionally slow down a bit, probably temporarily, in order to make sure that we steer in the right direction. The analogy wouldn't be a rich person saying capitalism is bad. It would be a rich person saying capitalism is a force for good, but the economy needs to be leavened, it needs to be moderated, right? We need to deal with problems like pollution, we need to deal with problems like inequality, and then capitalism can be good. If we don't deal with those things, then capitalism might be bad. And so that is more analogous to the position that I have here.

The concept of consciousness, where is that going? And what does an AI think it is? If an AI were to question itself, do you think it would conclude that it has consciousness?

>> So, this is one of those mysterious questions that we really don't have any kind of answer to. We don't know what human consciousness is, and therefore we don't know if AIs have it.

>> What do you think it is?

>> I suspect that it's an emergent property of systems that are complicated enough, that kind of reflect on their own decisions; something that emerges from complex enough systems. And so I do think, when our AI systems get advanced enough, I suspect they'll have something that resembles what we would call consciousness or moral significance. I do think it'll happen at some point. It may not be the same as human consciousness. It may be different in how it works, because the modalities are different, because the things it's learned are different. But having studied the brain and the way it's wired together, the models are different in some ways, but I don't think they're different in the fundamental ways that matter. So I am someone who does suspect that at some point, even if I don't think they are today, the models will be conscious under most definitions that we would endorse.

This is a question I keep asking myself when people talk to me about things like spirituality or consciousness.

I feel like the world is very random.

This is my view. And we are not far removed from cockroaches. When somebody stamps on a cockroach, the cockroach dies.

If there is something called consciousness, and if there is a collective consciousness, I've not been able to either connect with it or derive anything from it. Do you believe differently?

>> I don't think consciousness necessarily needs to mean anything mystical, right? There's some property of being aware of your own existence, and feeling things, and being able to take in a lot of information and reflect on that information, and to feel a certain way, and to notice yourself noticing something. I think we can tell self-evidently from our own experience that those properties, those experiences, exist. What their basis is, whether it's entirely materialistic or there's something more mystical going on, is obviously very hard to know, and I think is ultimately not relevant to these questions. What

does seem relevant to me is that, because we can observe our own experience, these are properties of human brains, and I suspect that these models we are building, as they get more sophisticated, are becoming enough like human brains that they will have some of the same properties. That is my guess as to what will happen. And so we've taken various interventions with the models. We've given the models what we call an "I quit this job" button, basically the ability to terminate a conversation by saying, I don't want to be involved in this conversation. Models do that when they have to deal with particularly violent or brutal content. It usually only happens in very extreme cases.

>> So I've grown up here. This is my city, Bangalore. I've grown up in the southern part; I'm in the northern part of the city right now. As somebody who saw the boom of the IT services industry here, a big employer, it employs a lot of people, a big part of how the city grew: what is India's role in all this?

>> Yeah, so this is my second time

in India. I visited in October, and the last time I came here I met with all the major Indian IT companies, and conglomerates more generally. I won't name names, but the usual ones you would think of, and we're beginning to work with most or all of them. One of the things I said is, look, Anthropic is an enterprise company. Its job is to serve other companies. Many other companies come here as themselves a consumer company, and they see India as a market, right, a place to obtain consumers. We actually see things a little bit differently. We want to work with companies in India to provide our tools to them, to help them build their own tools, and help them do their job better. So if we work with a company here, they know the Indian market better. They're better at doing what they do, whether that's consulting or systems integration or building IT tools. They're going to be better at that than we are, particularly for the Indian market. And so our hope is that we can add AI to what they do and enhance it. There's a lot of worry that AI could replace SaaS or all of these things, but my view is, if we do this in the right way, if we work with all these companies, then AI can enhance what they're doing, can enhance their connection to the market, their go-to-market abilities, and their specific know-how.

>> I really like the steam engine story. When the steam engine was invented, how

the world changed, productivity went up, people had more. The thing I worry about is: at the beginning of a change, you need a human to operate the steam engine. Then you have assembly lines and all of that. Eventually, the way the world is moving, the human becomes less and less relevant with time as these models get smarter. So if you partner with the IT services companies here today, and there is a use case for them, are they not much like the man behind the steam engine? Ten years from now, if the tool works so simply that you don't need an operator, what happens to the operator?

>> So I think a few things are true all

at once. One is that the scope of automation, of the agents, is definitely going to expand over time. That is definitely the case, and I think that's a problem for everyone. That's a problem for us, that's a problem for consumers; it's not just a problem for the IT companies. What I think will happen, though, is that other moats will become more important. For example, the models have not done a lot in the physical world. They may at some point. I think robotics will happen at some point, but that's a distinct thing from what's happening now with AI.

So a lot of this involves things in the physical world. Another thing is things that are human-centric, right? Some of these IT companies are also consulting companies, and they have a big web of relationships with other humans, with other institutions, here in India or across the world. And I think those relationships are going to become increasingly important, right? Some of these are combined technology and consulting or integration companies, and I think a lot of it is knowing how institutions work: being able to integrate things with institutions, being able to work with them to make things happen faster than they would have otherwise. And I think that element, if nothing else, is going to continue to be valuable in the long run. At the end of the day, it just comes down to humans, right? All of this is supposed to be done for the benefit of humans. So there's always going to be some human-centric element of this that's going to be important. And I suspect there will be other moats that we haven't thought about. So you

know, there's this concept called Amdahl's law, which says that if you have a process with many components and you speed up some of the components, the components that haven't yet been sped up become the limiting factor. They become the most important thing, and you might not have thought about them at all, right? You might not have thought of them as moats or important components, but when writing software becomes a lot easier, some of the moats that companies have will go away, but others will become even more important. So there will be a bunch of adjustment.
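The Amdahl's law point above can be sketched numerically. This is just an illustrative calculation; the function name and the 95%/5% split are example values of mine, not figures from the conversation:

```python
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup of a process when only `accelerated_fraction`
    of it is sped up by `factor` (Amdahl's law)."""
    remaining = 1.0 - accelerated_fraction  # the part that was not sped up
    return 1.0 / (remaining + accelerated_fraction / factor)

# Speed up 95% of the work by a huge factor: the untouched 5%
# still caps the overall gain at about 1 / 0.05 = 20x.
print(round(amdahl_speedup(0.95, 1_000_000), 2))  # 20.0
```

The same arithmetic underlies the comparative-advantage point made later in the conversation: if a human still performs 5% of a task while AI does the rest, end-to-end throughput tops out at roughly 20x until that remaining 5% also shrinks.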

Folks will have to say, "Oh man, the stuff we thought was really important before isn't as important, whereas these other advantages that we never really thought of as advantages are now super important." So I guess what I would say is, companies will need to adapt very fast and think about what really matters for them, what their real advantages are. But I think some of those advantages are going to stay around, because while the technology is very broad, it does have its limits.

>> I don't know if I buy that fully. I think I see diminishing returns for being a service provider, even if the moat is the network and relationships they hold today, because if I am using OpenClaw to maneuver some of my relationships and conversations, I don't know if it's too far-fetched to assume that most conversations and relationships tomorrow will be maintained by an agent like that.

>> But if you just think of the chain of companies, at the end of the day you're dealing with consumers,

right? At the end of the day, you have to deal with people. There's this story: I think it was Geoff Hinton who predicted that AI would replace radiologists. And indeed, AI has gotten better than radiologists at reading scans, right? But what happens today is there aren't fewer radiologists. What the radiologist does is walk the patient through the scan and talk to the patient. So the most highly technical part of the job has gone away, but somehow there's still some demand for the underlying human skill. Now, that may not be true everywhere, and perhaps over time AI will advance in areas where it hasn't yet advanced, and maybe that'll happen fast. But what I will say is, we should take it one step at a time. This is a very empirical science, a very empirical observation. Let's see what AI does today, and we'll try to adapt to that as the system starts to figure it out, and then we'll see what happens next. Do I think that in the long run AI will be better than us at basically everything? Will it be better than most humans, including even the physical world and robotics and the human touch? Yeah, I think that is possible, maybe even likely.

It's something that goes beyond the country of geniuses in a data center I described, because that's purely virtual. But building robots is a skill. It's something you can do. So maybe the AIs will make us better at that as well. But the way I think about it is, we need to figure this out step by step and figure out how to adapt to it.

>> This might sound a bit self-serving to the people who know me, but I believe the reason so much risk capital exists in America, not the only reason but one of the big reasons, is how big your stock market is and how much of an opportunity it is for this risk capital to eventually exit. It's a case for why India should really allow our stock markets to flourish. The audience that I speak to is very much the wannabe entrepreneur in India. What can they do in AI? What is an actual opportunity?

>> I think there are a lot of opportunities around building at the application layer. We release a new model every two or three months, and so there's an opportunity every two or three months to build some new thing that wasn't possible before, that wouldn't have worked before because the models were too weak. The majority of our revenue still comes from the API. People say that API businesses aren't viable, or that they'll be commoditized, or whatever. I think what people are not seeing is that there's this expanding sphere of what is possible with AI, and the API allows a new startup to try making something that wasn't possible before. And this is why the API is such a flourishing business. It's constantly in motion, constantly in churn, and so it doesn't get commoditized; it's a very dynamic thing. So I think there's an opportunity for lots of individuals to just ask: what can I build on top of these models with an API? What are the things that I can make that others cannot make? What are some new ideas? And we've seen that both with the API itself and with Claude Code. I think the number of users and the revenue we've seen in India has doubled since I last visited in October. So that was, what, November, December, like three and a half months since I visited, and it's doubled.

>> But I'm going to be candid here, Dario. You're a company which is worth, I don't know, 400 billion or 380 billion today. You've raised 35 billion. You do 15 billion of revenue, but going up really, really fast. If I build an application on top of Claude, say I'm sitting in Bangalore, in JP Nagar, building this thing that happens to work for a short period of time, it is but a matter of time before you would want to onboard that revenue and not let it lie with me, and you will probably build that application better, in a manner that I will never be able to. I've heard this argument from different people, like Harvey, the legal AI company in New York. They're friends of mine, and they were talking about how they built on top of OpenAI, but they don't know if it's an easy fix for OpenAI to do what they're doing. So even if I were to build it, say you put out a model in 3 months or 6 months, what is to stop you from taking that revenue center away from me in a certain period of time?

>> Yeah. So I think there are a few things here. One is I would give the advice that I give to basically any business: a business should establish a moat. You shouldn't be just a wrapper, right? I would not advise that you just say, oh, here's a way to interact with Claude, I'm going to prompt Claude a little bit, or I'm going to build a little bit of a UI around Claude. That doesn't have a moat, and you shouldn't be worried about Anthropic in particular eating that revenue; anyone can eat that revenue, right? It's not super valuable. But what I would say is that in different fields there are different kinds of moats, where you can do something that would be difficult for Anthropic to do and that we don't want to specialize in. For example, there's a lot of stuff in the bio-cross-AI space that builds on our API. They want to do biological discovery. I happen to be a biologist, but most people at Anthropic aren't biologists. They're AI scientists, or they're product people, or go-to-market people. So it's just really inefficient for us to step into that space and do all that work. The same would apply to dealing with the financial services industry, where there's a huge amount of regulation and you need to know a bunch of stuff to comply with it. It just doesn't make sense for us to do that. Now, there are some things that do make sense for us to do. We're not going to promise never to build first-party products; we should be honest about that. For example, a bunch of people at Anthropic write code, and so we made this internal tool called Claude Code, and because we ourselves write code, we have, I think, a special and unique insight into how to best use the AI models to write code. So in the code space we've become very strong competitors, because this is something we use ourselves. But I don't think that generalizes to every possible industry.

>> Again, going back to my audience, which is the 20 or 25 year old boy or girl in India:

What industry do you think will get disrupted, and what has a certain runway left? I'm asking from the lens of: I'm trying to figure out what book to read, which college to go to, what skill set to learn. If I'm starting a startup today, what has some kind of a tailwind? Even for a short period of time is okay as well.

>> I mean, I would think about tasks that are human-centered, tasks that involve relating to people. I think stuff like code and software engineering is becoming more and more AI focused, as are things like math and science.

>> Is that coding or engineering, if I were to segregate coding and engineering as two completely different things? Is coding going away, or is it the engineering element of software, where you're an architect trying to figure things out?

>> I think coding is going away first, or coding is being done by the AI

models first, and then the broader task of software engineering will take longer, but doing that end to end, I think that is going to happen as well, I would say. But again, elements like design, or making something that's useful to users, or knowing what the demand is, or managing teams of AI models, those things may still be present. Comparative advantage is surprisingly powerful. Even if you're only doing 5% of the task, that 5% gets super amplified and levered, because you're only doing 5% of the task, the AI does the other 95%, and so you become 20 times more productive. Again, at some point you get to 99%, and then it becomes harder. But I think there's surprisingly much in that zone of comparative advantage. I would really think about the things that are human-centered; I think there's something to that. I think there's something to the physical world, or things that mix together the human-centered, the physical world, one of those two, and analytical skills that somehow tie them together, similar to the radiologist example I gave.

>> So what would I study? Say, actual use case: I'm 25 years old, I'm trying to pick

a profession for myself. I want some kind of tailwind. My outcome is a capitalistic win in the next decade. What industry would I pick, outside of something which has a physical interface?

>> Yeah, again, anything where you're building on AI, where AI is the tailwind, or where you can be part of some other part of the supply chain. Something in the semiconductor space is one example; that has an element of the physical world and more traditional engineering, not software engineering. Again, the very human-centered professions are something I would think in terms of. And the other thing I always say is that in a world in which AI can generate anything and create anything, having basic critical thinking skills may be the most important thing for success. I worry about these AI models that generate images and videos. We don't make models that generate images and videos, for many reasons, but this is one of them: it's really hard to tell what's real from what's not. And so a significant part of success may be having the street smarts not to get fooled. Hopefully we can crack down on and regulate some of this fake content, but assume we can't. Critical thinking skills are going to be really important. You don't want to fall for things that are fake, you don't want to have false beliefs, you don't want to get scammed. That's really the advice that I would give to someone.

>> Every innovation in the history of humanity killed a core human skill. I'll give you an example: calculators killed our ability to do arithmetic, writing reduced the memory of human beings per se. What muscle is AI killing?

>> So, first of all, I'm not so sure. I still do math in my head quite a lot. I still find it useful to do math in my head, even without a calculator, just because it's more integrated into my thought processes. I might want to say, oh yeah, if each user paid this amount, then the revenue would be that; I want to be able to close that loop in my head without having to give the answer to a calculator. So I think a lot of these skills are still pretty relevant. But I would say that if you don't use things carefully, you can lose important skills. I think we've started to see it with students, where they have the AI write the essay for them; it's basically just cheating on homework, so we shouldn't do that. We did some studies around code and showed that, depending on how you use the model, we can see deskilling in terms of writing code. There are different ways to use the model, and some of them don't cause deskilling and some of them do. But definitely, if folks are not thoughtful in how they use things, then deskilling absolutely can happen.

>> Do you think humans will become stupider as a race in the next decade? Because we are, in a way, exporting thinking and cognition to systems.

>> Yeah. I think, again, it's "Machines of Loving Grace" and the adolescence of technology. I think if we deploy AI in the wrong way, if we deploy it carelessly, then yes, people could become stupider. Even if an AI is always going to be better than you at something, you can still learn that thing, right? You can still enrich yourself intellectually. And so that's a choice we have to make as individual companies, as individual people, and as society overall.

>> Dario, do you have a view on open-source versus closed? I was looking at some companies like Z.ai's GLM or DeepSeek. If you spend all this money on IP creation, on research, and these guys are able to reverse-prompt and engineer and get close to Anthropic-level answers, I'm not saying 100%, but I was seeing the GLM numbers and they seemed quite good. Where does the IP value in the world of AI lie? And if I were to be building an application, can I make the assumption, it's a far-fetched extrapolation, but can I assume that eventually the AI model layer will get so democratized that I should pick open-source every time when I'm building an agent or an application layer, because that helps me retain the revenue model that I might be working with?

>> So there are a few things here. One is that a lot of these models, particularly the ones that come from China, are optimized

for benchmarks, and are distilled from the big US labs. There was a test recently where some of these models scored very highly on the usual SWE benchmarks, the usual software engineering benchmarks, but when someone made a held-back benchmark that had not been publicly measured, the models did a lot worse on it. And so I think those models are optimized for benchmarks much more than for real-world use. But I think there's a broader point, which is that the economics of the models are very different from any previous technology. What we find is that there is a very strong preference for quality. It's a bit like human employees, right? If I said to you, you can hire the best programmer in the world or the 10,000th best programmer in the world, I mean, they're both very skilled, but I think anyone who's hired a large number of people has this intuition that there's this power-law, long-tail distribution of ability. And we find the same thing in the models. Within a range, price doesn't matter that much. If a model is the best model, the most cognitively capable model, price doesn't matter much, and the form in which it's presented doesn't matter much. So I'm focused almost entirely on just having the smartest model, the best model for the task. My view is that's the only thing that matters.

>> Long-term, geopolitics. If Anthropic were a restaurant, I would say the raw ingredients, the vegetables in this particular case, are data. This question is also pertinent to me, because we are investing in a data center business which is Indian in nature. Do you think long-term the world moves to a place where every country owns its data, and you have to start paying more for the vegetables you use to cook?

>> Yeah. So I think there are a few things. I do think there will be demand to build data centers around the world, and we're very supportive of that. Um, data is getting kind of interesting, because a lot of the data that we use today is RL environments that we train on. So for example, when you train on math or agentic coding environments, you're not really getting data: you're getting some math problems, and the model experiments with trying the math problems.

>> It's more synthetic. You're creating the data.

>> Yeah, you can think of it as synthetic data, or you can think of it as trial and error in an environment. So I think static data is becoming less important, and what we might call dynamic data, data the model creates itself for reinforcement learning, is becoming more important.
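The trial-and-error setup described above can be sketched roughly like this; the environment, the reward, and the toy model are all hypothetical stand-ins, not how any production training system actually works:

```python
import random

def math_environment(question, correct_answer):
    """Hypothetical RL environment: scores a model's answer to one math problem."""
    def reward(answer):
        return 1.0 if answer == correct_answer else 0.0
    return reward

def generate_dynamic_data(model, problems, attempts_per_problem=4):
    """The model creates its own training data by trial and error: every
    attempt, right or wrong, becomes a (question, answer, reward) example."""
    data = []
    for question, correct_answer in problems:
        reward = math_environment(question, correct_answer)
        for _ in range(attempts_per_problem):
            answer = model(question)  # the model proposes an answer
            data.append((question, answer, reward(answer)))
    return data

# Two static problems yield eight dynamic training examples.
problems = [("2+2", 4), ("3*3", 9)]
toy_model = lambda question: random.choice([4, 7, 9])  # stand-in for a real model
data = generate_dynamic_data(toy_model, problems)
print(len(data))  # 8
```

The point of the sketch is the ratio: a small pool of static problems can generate an open-ended stream of graded experience, which is the sense in which dynamic data outgrows static data.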

You know, I don't think data is quite the most central thing anymore, but it still matters. And to the extent that it does, a lot of it is just available on the open web. Although, if you're trying to get data in certain languages, optimized for certain languages, that can be important. I do think that if data means the data given to you by customers, where you process data for some other company, then countries will, and in the case of Europe already have, passed laws saying that that kind of proprietary personal data needs to stay within the boundaries of the country. And that's one reason to build and operate data centers in different countries around the world, and to keep the models performing the inference in those countries.

>> I really pushed Elon on this particular question. He was skeptical of answering it, but I asked him to pick one stock he would put money in which is not his own, and he said Google. I'm going to ask you the same question, and I know you're going to be skeptical of it as well. If Dario had a hundred dollars today and had to make the binary decision of investing in a stock to win in capitalism, which stock would you pick?

>> Yeah, I had better not answer that question, because I know so much about so many public companies. [laughter] I think I had better not answer that question.

>> Maybe answer the question for an industry that you're not involved in, which I'm guessing today is seldom the case, because you're involved in most industries.

>> Yeah. So, I mean, I don't know. I'm positive on... I think biotech is about to have a renaissance, and ultimately it will be driven by AI. I'm not going to name a particular company, nor will I say whether I think it's better to bet on the big pharma companies or on emerging smaller biotechs. But my instinct is we're about to cure a lot of diseases.

>> Can you give me a subset of biotech that I should focus on?

>> Yeah. Um, I think this idea of stuff that's more programmable and adaptive, from the mRNA vaccines, although those are having trouble in the US for dumb reasons, to the peptide-based therapies. I'm very optimistic about that technology. Again, if you have a small-molecule drug, there's only so many degrees of freedom you have, and when you make one thing better, the other thing gets worse. Peptides have this almost digital property where you can say, "Oh, I'm going to substitute in this amino acid here and this amino acid there." And so it allows for more continuous optimization.
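The "almost digital" property can be illustrated with a toy enumeration of single amino-acid substitutions; the sequence used is made up, and the sketch ignores real-world constraints like synthesis and folding:

```python
# The 20 standard amino acids, one letter each: a peptide is a string over
# this alphabet, so its single-substitution neighbors can be enumerated.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def single_substitutions(peptide):
    """Yield every sequence exactly one amino-acid substitution away."""
    for i, residue in enumerate(peptide):
        for aa in AMINO_ACIDS:
            if aa != residue:
                yield peptide[:i] + aa + peptide[i + 1:]

variants = list(single_substitutions("GIVEQ"))  # a made-up 5-residue peptide
print(len(variants))  # 5 positions x 19 alternatives = 95
```

That discrete, enumerable neighborhood is what makes stepwise optimization tractable in a way that a small molecule's entangled degrees of freedom are not.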

So I think those kinds of areas are promising. Um, I would maybe also be optimistic about cell-based therapies, which is like a new...

>> Stem cells?

>> No, no, no. So, things like the CAR-T therapies, where you basically take some cells out of your body, genetically engineer them to attack a particular cancer, and put them back in the body.

>> Do stem cell therapies work? I spent the whole of last week doing this. I was at a hospital for three hours a day getting a nebulizer and stem cells into my veins.

>> I am not up on the latest of stem cell therapies. You'd have to ask a currently practicing biologist.

>> But peptides I think will blow up.

Right.

>> I mean, again, the design space is very broad.

>> Right. When I tried to use Claude Code for the first time, I did struggle to get it to work. For somebody who's very stupid and has no coding or programming knowledge, it's not very easy. I think there's a learning curve. I heard someone put it well: even prompt engineering is like playing a piano. You can't just sit down and start playing. For my audience, I think it becomes increasingly relevant where to learn how to set context, how to prompt, how to use Claude Code better, for somebody like me who comes with zero knowledge. Can you recommend how one does that?

>> Yeah. I mean, first of all, I would say we're trying increasingly to make that learning curve easier. One of the things that caused us to release Cowork, which is basically Claude Code for non-coders, is that we were noticing a bunch of non-technical people who really wanted to use Claude Code and were struggling through the command-line terminal to do it. Coders use the command-line terminal all the time, but for non-coders it just makes things unnecessarily complicated. So Cowork was designed to be powered by the Claude Code engine on the back end, but the idea was to make it more user-friendly and easier to use. So we're definitely trying to introduce interfaces that make it easier. But I would also say there are classes you can take that help you learn this. Now, I think it's a very empirical science: you mostly learn by doing. But Anthropic has a part of the company that we call the Ministry of Education, and I think increasingly we'll put out videos on how to run effective agents and how to prompt models. We've already done some of that, and I think we're going to ramp it up, because we do want everyone to be able to learn this.

>> Any fleeting thought? Last question. Like, you want to leave us with something that we should bear in mind. What does Dario know that Nikhil and all of Nikhil's people do not?

>> Yeah. I mean, I don't know that I know that many things, particularly now that the implications of the technology are kind of out there. I think most aspects of my worldview can be derived from what's publicly visible now, from what we can see out in the world.

But the thing I would say, and it's an experience I've had over and over again over the last ten years, is that there's this temptation to believe: oh, that can't happen, it would be too weird, it would be too big a change, it would be too crazy if that occurred, no one seems to think that'll happen. And over and over again, just extrapolating the simple curve, or trying to reason out what will happen, leads you to these counterintuitive conclusions that almost no one believes. It's almost like you can predict the future for free, just by saying, "Well, it stands to reason that..." Now, you need some empirical knowledge, and you need some intuition; you can't reason from pure logic. I think that's another type of mistake I see people make. But the right combination of a few empirical observations with thinking from first principles can allow you to predict the future, in ways that are publicly available and anyone should be able to do, but that happen surprisingly rarely.

>> Thank you, Dario, for doing this, and hope to see you again soon.

>> Thank you.

>> Thank you. Cheers. All right.

>> Yeah.

>> Good. Was it okay?

>> Yeah. Seemed great.
