
Sam Altman reveals exact date of intelligence explosion

By Matthew Berman

Summary

## Key takeaways

- **AGI Timeline Revealed**: OpenAI has projected a timeline for AI development, anticipating an intern-level AI research assistant by September 2026 and a legitimate AI researcher by March 2028, marking a significant acceleration towards AGI. [00:46], [01:04]
- **The Race to Self-Improving AI**: The core objective for OpenAI and other frontier AI labs is achieving self-improving AI, as whoever reaches this milestone first is predicted to 'win' due to the recursive nature of improvement. [01:50], [02:11]
- **Chain of Thought Faithfulness**: OpenAI is researching 'chain of thought faithfulness' by allowing models to develop reasoning processes without direct supervision during training, aiming for more genuine and aligned internal thought processes. [05:13], [06:30]
- **AI Infrastructure Investment**: OpenAI is planning a massive infrastructure expansion, including a $1.4 trillion investment for current needs and a future goal of building factories to produce AI factories, aiming for gigawatt-per-week output. [09:44], [10:21]
- **OpenAI's New Structure**: OpenAI has finalized its corporate restructuring into a nonprofit foundation governing a public benefit corporation, with the foundation aiming to become the largest nonprofit ever and committing $25 billion to health and AI resilience. [11:04], [12:14]
- **Concerns on Addictive Products**: Sam Altman expressed significant worry about AI products, including Sora and ChatGPT, becoming addictive like social media, stating OpenAI would cancel problematic products if they deviate from their creation-focused mission. [12:51], [13:28]

Topics Covered

  • AGI Timeline: From Intern to Researcher by 2028
  • OpenAI's Race to Self-Improving AI: The 'Winner Takes All' Dynamic
  • Chain of Thought Faithfulness: Letting AI Think Unsupervised
  • OpenAI's Massive Infrastructure: Building AI Factories for AI Factories
  • Sam Altman's Warning on Addictive AI Products

Full Transcript

We think it is plausible that by September of next year we have sort of an intern-level AI research assistant, and that by March of 2028 we have a legitimate AI researcher. And this is the core thrust of our research program.

>> OpenAI just finished their corporate restructuring. They have a brand-new deal with Microsoft, and they're continuing their partnership. And they did a live stream, and here's the thing: Sam Altman and Jakub Pachocki gave an incredible Q&A in which they revealed the exact date for AGI. I couldn't believe they gave such a precise date, yet here we are. Let me break down the entire live stream for you. And a big thanks to Recraft for sponsoring this video; more on them later. So, first, here is their timeline.

Look at this. We are here, October 2025. In September 2026 (such a specific date): an automated AI research intern, as they describe it. Basically, a pretty good AI researcher that can help facilitate AI research. But here is where it gets crazy: March 2028. It's hard for me to imagine how they were able to come up with such a precise date for this, but: automated AI research. Now, if you remember back to the intelligence explosion timeline from the Situational Awareness paper, it actually came at almost the exact time that OpenAI is predicting. But when we have automated AI research, the acceleration of AI is only limited by how much compute we can throw at it. That is the point at which we have what's shown here, the intelligence explosion, and that is when we rapidly hit superintelligence shortly after.

And so that is really what OpenAI as a research lab is heading towards. And I think that's really what all of the frontier labs are heading towards, which is: whoever reaches self-improving artificial intelligence first just wins. Everyone else loses. And that's because once you hit self-improving AI, it's recursive: it improves on its improvements, and the rate of improvement grows. Once you hit that, how is anybody else supposed to catch up? And so that is why Mark Zuckerberg is willing to misallocate hundreds of billions of dollars: because the downside of missing the boat on AI is far greater than a few measly hundreds of billions of dollars. And obviously that's what Sam Altman believes as well.

Now, another thing they covered is the duration of automated tasks that ChatGPT, and AI in general, is able to complete. And I keep hearing frontier model companies talk about this. What happens if AI can complete tasks autonomously for 5 days, or 5 months, or 5 years? Well, that's what we're seeing here. Right now, we can do 5 seconds, 5 minutes, 5 hours, but 5 days, we're not quite there. Then from there, we're going to see five-week, five-month, and five-year tasks. But of course, as I've been saying for a while, it's not just about the duration. It's about what you can actually accomplish within that duration. It's about how efficient you can be with your token usage and your compute during that duration. So again, it's not just the duration; it is very much about the efficiency as well.

But tying it back to the intelligence explosion: remember, at the point when models can run autonomously for extended periods of time, the only limiter, the only thing preventing us from ramping up the quality and the performance of artificial intelligence, is how much compute we can actually throw at it. But of course, this type of AI doesn't only need to be applied to AI research. Imagine biomed research. Imagine new materials science and drug discovery. All of these things in which we have autonomous AI researchers just completely running on their own, discovering incredible things for humanity, and the only thing we have to do is provide them with sand and electricity. And by the way, let me just pause for a second and tell you about the sponsor of today's video, Recraft.

Are you still wasting time jumping between your AI chats and your design canvas? Recraft's new chat mode ends that for good. So, imagine this: you generate a logo, drag it into the chat, and say, "Create a full brand kit from this. Social posts, posters, mockups, all in minutes." And at the end, you have a consistent set of full brand assets that you can use immediately. This is the magic of Recraft's new chat mode. It's a powerful AI chat assistant within an infinite canvas. Start in chat for lightning-fast exploration, and then switch to canvas for pixel-perfect control. No more switching between apps: the entire creative process, from first prompt to final assets, now happens in one place. So stop just generating images. Start creating with precision. Join the chat mode beta waitlist today. Link in the description. Let them know I sent you. Recraft has been a phenomenal partner. Now back to the video.

The next thing I want to cover from this live stream is what they call chain-of-thought faithfulness. And it's super interesting, because I had not heard OpenAI's thoughts on this before. Let's watch it together, and I'll give you my thoughts along the way.

>> Starting from our first reasoning models, we've been pursuing this new direction in interpretability. And the idea is to keep parts of the model's internal reasoning free from supervision. So don't look at it during training, and thus let it remain representative of the model's internal process. We refrain from guiding the model to think good thoughts, and so let it remain a bit more faithful to what it actually thinks. Right? And this is not guaranteed to work, of course; we cannot make mathematical proofs about deep learning, and so this is something we study. But there are two reasons to be optimistic. One reason is that we have seen very promising empirical results. This is a technique we employ a lot internally; we use it to understand how our models train, how their propensities evolve over training.

>> So let me just describe what he's talking about real quick. He is talking about being able to trust the model: to have aligned models, and to look at their chain of thought, which is basically the reasoning steps the model takes before providing you with an answer, and to trust that it is, first, aligned with human incentives, and second, actually stating what it really believes rather than reacting to what we want it to believe. That would make AI in general much safer, and it would really enable that kind of insight into what the model is thinking. He's basically saying, "Let the model run. Let the model think, and we're not going to look at it along the way. We're going to look at it after and see what it thought, without any human intervention along the way."

>> And secondly, it is scalable, in the sense that we explicitly make the scalable objective not adversarial to our ability to monitor the model. And of course, an objective not being adversarial to the ability to monitor the model is only half the battle. Ideally, you want to get it to help with monitoring the model, and so this is something we're researching quite heavily.

But one important thing to underscore about chain-of-thought faithfulness is that it's somewhat fragile. It really requires drawing this clean boundary, having this clear abstraction, and having restraint in what ways you can access the chain of thought. And this is something that is present at OpenAI from algorithm design to the way we design our products. So if you look at the chain-of-thought summaries in ChatGPT: if we didn't have the chain-of-thought summarizer, if we just made the chain of thought fully visible at all times, that would make it part of the overall experience, and over time it would be very difficult to not subject it to any supervision.

And so, long term, we believe that by preserving some amount of this controlled privacy for the models, we can retain the ability to understand their inner process.

>> I find that so interesting. He's basically saying that because we're able to give the models privacy, because we leave them alone and allow them to think what they want to think, it'll actually give us more insight into how they think. And I guess that makes sense, but it's almost like treating these models like a human, and that does rub me a little bit the wrong way. Still, I find it fascinating. Let's keep going.

>> And we believe this can be a very impactful technique as we move towards these very capable, long-running systems. I'll hand back to Sam.

>> Okay, that's very hard to follow with the rest of this, and obviously that's the most important part of what we have to say. But just to reiterate: we may be totally wrong. We have set goals and missed them miserably before. But with the picture we see, we think it is plausible that by September of next year we have sort of an intern-level AI research assistant, and that by March of 2028, which I believe is almost five years to the month after the launch of GPT-4, we have a legitimate AI researcher. And this is the core thrust of our research program.

>> All right. Next, he talks about OpenAI's infrastructure plan. And to think their plans were grand before; well, let me go through it with you right now. OpenAI's current infrastructure: 30-plus gigawatts currently being built, worth $1.4 trillion. A lot of people laughed when they saw his original Stargate plan for $7 trillion in funding to build out the greatest AI infrastructure in the world. They thought: $7 trillion, that's a mistake, right? That can't be right. Well, he's already $1.4 trillion of the way there. Crazy. And so the thing he's going to talk about next is building a factory to build AI factories. It is not enough just to build the factories; you have to build the factory that builds the other factories. And what they are talking about internally, not committed to yet, is a gigawatt per week coming out of a factory that can produce that, which is insane. And here is their first true mention of robotics. Let me play this clip.

>> To do this will require a ton of innovation, a ton of partnerships, and obviously a lot of revenue growth. We'll have to repurpose our thoughts about robotics to help us build data centers instead of doing all the other things. But this is where we'd like to go, and over the coming months we are going to do a lot of work to see if we can get there. It will be some time before we're in a financial position where we could actually pull the trigger and get going on this.

>> All right. Next, he talks about the new structure of OpenAI. Remember, there was the for-profit, there was the nonprofit, there was the drama with Elon Musk, there was the public benefit corporation, and now everything is finalized. The relationship with Microsoft is finalized: we know how much they own, we know what that partnership looks like, and we know what the IP ownership looks like. So let me show you. First, it is much simpler. There's the OpenAI Foundation, which is the nonprofit, and the OpenAI Group, which is a public benefit corporation. A public benefit corporation is a company whose mission is not only to deliver shareholder value but also to deliver some other kind of mission. An example of that is Patagonia, which I believe is the most famous public benefit corporation, at least historically, until now.

Now, here are some interesting tidbits. The nonprofit governs the public benefit corporation. It owns 26% of the PBC's equity, with a little asterisk: a warrant to potentially receive more equity in the future. It uses the resources of that ownership, and the PBC can attract the resources required to succeed at OpenAI's mission. What does that mean? That means fundraising, and inevitably an IPO. And he says the OpenAI Foundation is going to be the biggest nonprofit ever. The OpenAI Foundation is also making a $25 billion commitment to two very important areas of AI: one, health and curing diseases, and two, AI resilience.

And next, he goes through a bunch of Q&A questions. Some of them are very interesting, and his answers are just as interesting. So, the first question Sam Altman gets is about advertising and the addictiveness of social products, and how Sora seems to be following the same path as products like Facebook, TikTok, and Instagram. And here's what he thinks, and he is very honest about it. He says, "Yeah, I'm very worried." Let's watch.

>> We're definitely worried about this. I worry about it not just for things like Sora and TikTok and ads and ChatGPT, which are maybe known problems that we can design around carefully, but, you know, we have certainly seen people develop relationships with chatbots that we didn't expect, and there can clearly be addictive behavior there. Given the dynamics and competition in the world, I suspect some companies will offer very addictive new kinds of products. And I think you'll just have to judge us on our actions. We'll make some mistakes. We'll try to roll back models that are problematic. If we ship Sora and it becomes super addictive and not about creation, we'll, you know, cancel the product, and you'll have to judge us on that. My hope and belief is that we will not make the same mistakes that companies before us have made. I don't think they meant to make them either; we're all kind of discovering this together. We probably will make new ones, though, and we'll just have to evolve quickly and have a tight feedback loop. We can imagine all sorts of ways this technology does incredible good in the world, and also obvious bad ones, and, you know, we're guided by a mission, and we'll just continuously evolve the product.

>> All right. Next, he gets asked if GPT-4o is going to be around for a while.

>> We have no plans to sunset 4o. We are not going to promise to keep it around until the heat death of the universe either. But we understand that it's a product that some of our users really love. We also hope other people understand why it was not a model that we thought was healthy for minors to be using. We hope that we build better models over time that people like more. You know, the people you have relationships with in your life evolve and get smarter and change a little bit over time, and we hope that the same thing will happen here. But yeah, no plans to sunset 4o currently.

>> All right. Next, Jakub is going to be asked: when will AGI happen? I love this, and I love Sam just looking at him and asking him the question. So I'm going to play that part real quick and then give Jakub the chance to answer it.

>> Here's a good anonymous question for Jakub. When will AGI happen?

>> I think in some number of years we'll look back at these years and we'll say, you know, this was kind of the transition period when AGI happened. I think, as Sam said, early on we thought about AGI kind of emotionally, as this thing that is the ultimate solution to all the problems, this single point for which there is a before and an after. I think we found that it's a bit more continuous than that. And so, in particular, for the various benchmarks that seemed like the obvious milestones towards AGI, I think we now think of them as indicating roughly how far away we are in years. And so if you look at a succession of milestones, such as computers beating humans at chess, and then at Go, and then computers being able to speak in natural language, and computers being able to solve math problems, I think they clearly get closer together.

>> I would say I think the term AGI has become hugely overloaded, and as Jakub said, it'll be this process over a number of years that we're in the middle of. But one of the reasons we wanted to present what we did today is that I think it's much more useful to say our intention, our goal, is by March of 2028 to have a true automated AI researcher, and to define what that means, than it is to try to satisfy everyone with a definition of AGI.

>> All right. So, next: when GPT-6? Let's watch.

>> Shindy says, "When GPT-6?" I think, time-wise, maybe that's more of a question for you. I don't know exactly when we'll call it that, but I think a clear message from us is: say, six months from now, probably sooner, we expect to have huge steps forward in model capability.

>> Next, Sam Altman is going to say when we're going to get a Windows version of ChatGPT Atlas.

>> "When is ChatGPT Atlas for Windows coming?" asks Lars. I don't know an exact time frame; some number of months, I would guess. It's definitely something we want to do. And more generally, this idea that we can build experiences like browsers and new devices that let you take AI with you, that get towards this sort of ambient, always-helpful assistant rather than something you just query for a response, this will be a very important direction for us to push more on.

>> And thanks once again to Recraft for sponsoring this video. I'll drop a link for them down below. Check them out; they've been a great partner to this channel. Let them know I sent you. So, that's it. Those are all the most interesting bits from this live stream. If you enjoyed this video, please consider giving it a like and subscribing, and I'll see you in the next one.
