
"I Tried To Warn You" - Elon Musk LAST WARNING (2026)

By Elon Musk Fan Zone

Summary

Topics Covered

  • AI's existential threat dwarfs nuclear war.
  • AI experts overestimate their knowledge and dismiss the possibility of machine intelligence superior to their own.
  • Exponential AI growth is making human intelligence a shrinking fraction of the total.
  • AI regulation is critically needed, but dangerously slow.
  • Unchecked AI power concentration risks a dystopian future.

Full Transcript

I think the danger of AI is much greater than the danger of nuclear warheads, by a lot. [Music]

Mark my words: AI is far more dangerous than nukes. I tried to convince people to slow down AI, to regulate AI. [Music] This was futile. I tried for years.

The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they're smarter than they actually are. [Music] This tends to plague smart people: they're defining themselves by their intelligence, and they don't like the idea that a machine could be way smarter than them, so they discount the idea, which is fundamentally flawed. That's the wishful-thinking situation.

I'm really quite close, very close, to the cutting edge in AI, and it scares the hell out of me. It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential. It feels like we are the biological bootloader for AI. Effectively, we are building it. [Music] We're building progressively greater intelligence, and the percentage of intelligence that is not human is increasing; eventually we will represent a very small percentage of intelligence.

It's going to come faster than anyone appreciates. With each passing year, the sophistication of computer intelligence is growing dramatically. I really think we're on an exponential improvement path of artificial intelligence, and the number of smart humans developing AI is also increasing dramatically. If you look at attendance at the AI conferences, it's doubling every year; they're getting full.

I have a young cousin who's graduating from Berkeley in computer science and physics, and I asked him how many of the smart students are studying AI in computer science. The answer is: all of them.

A better approach, or better outcome, is that we achieve democratization of AI technology, meaning that no one company or small set of individuals has control over advanced AI technology. I think that's very dangerous. It could also get stolen by somebody bad; some evil dictator's country could send their intelligence agency to go steal it and gain control. It just becomes a very unstable situation if you've got any incredibly powerful AI: you just don't know who's going to control that. It's not as if I think the risk is that the AI would develop a will of its own right off the bat. The concern is more that someone may use it in a way that is bad, or, even if they weren't going to use it in a way that's bad, that somebody could take it from them and use it in a way that's bad. That, I think, is quite a big danger.

We are all already cyborgs: you have a machine extension of yourself in the form of your phone and your computer and all your applications. You are already superhuman. You have by far more powerful capability than the president of the United States had 30 years ago. If you have an internet link, you have an oracle of wisdom; you can communicate to millions of people, and communicate to the rest of Earth, instantly. These are magical powers that didn't exist not that long ago. So everyone is already superhuman.

I think the singularity is probably the right word, because we just don't know what's going to happen once there's intelligence substantially greater than that of a human brain. [Music] Most of the movies and TV featuring AI don't describe it quite the way it's likely to actually take place. But you just have to consider, even in the benign scenario: if AI is much smarter than a person, what do we do? What job do we have?

I have to say that when something is a danger to the public, there needs to be some government agency, like regulators. The fact is, we've got regulators in the aircraft industry, the car industry, with drugs, with food, and with anything that's a public risk, and I think this has to fall into the category of a public risk. Usually some new technology will cause damage or death, there will be an outcry, there will be an investigation, years will pass, there will be some sort of insight committee, there will be rulemaking, then there will be oversight, eventually regulations. This all takes many years. This is the normal course of things.

If you look at, say, automotive regulations: how long did it take for seat belts to be required? The auto industry fought seat belts, I think, for more than a decade, successfully fought any regulations on seat belts, even though the numbers were extremely obvious: if you had a seat belt on, you would be far less likely to die or be seriously injured. It was unequivocal. And the industry fought this for years, successfully. Eventually, after many, many people died, regulators insisted on seat belts. This time frame is not relevant to AI. You can't take 10 years from the point at which it's dangerous; it's too late.

I'm not normally an advocate of regulation and oversight; generally, I'm on the side of minimizing those things. But this is a case where you have a very serious danger to the public, and therefore there needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads, by a lot. And nobody would suggest that we allow anyone to just build nuclear warheads if they want; that would be insane. So why do we have no regulatory oversight? This is insane.

The intent with OpenAI is to democratize AI power. There's a quote that I love from Lord Acton, the guy who came up with "power corrupts, and absolute power corrupts absolutely": freedom consists in the distribution of power, and despotism in its concentration. So I think it's important, if we have this incredible power of AI, that it not be concentrated in the hands of a few and potentially lead to a world that we don't want.

I'm not really all that worried about the short-term stuff. Narrow AI is not a species-level risk.

It will result in dislocation, in lost jobs, in better weaponry and that kind of thing, but it is not a fundamental species-level risk, whereas digital superintelligence is. So it's really all about laying the groundwork to make sure that if humanity collectively decides that creating digital superintelligence is the right move, then we should do so very, very carefully. Very, very carefully.

We're rapidly headed towards digital superintelligence that far exceeds any human. I think it's very obvious.
