Google: The AI Company. Google is amazingly well-positioned... will they win in AI? (audio)

By Acquired

Summary

Key takeaways

  • **Google's Transformer: The AI Revolution's Genesis**: Google researchers published the Transformer paper in 2017, a foundational invention that powers modern AI systems like ChatGPT and Gemini, yet the company initially treated it as just an incremental improvement for Google Translate. [02:44:44]
  • **Google's AI Talent Pool: A Decade Ago and Today**: A decade ago, Google employed nearly all leading AI talent, including future founders of OpenAI and Anthropic. Despite this concentration of expertise, the company was reportedly caught off guard by ChatGPT's launch. [06:25:29], [06:36:36]
  • **The Innovator's Dilemma: Protecting Search vs. Embracing AI**: Google faces a classic innovator's dilemma: their highly profitable search business, a near-monopoly, could be disrupted by AI chatbots that offer direct answers, potentially cannibalizing their core advertising revenue. [01:04:04], [01:10:04]
  • **DeepMind Acquisition: A Strategic Masterstroke or Missed Opportunity?**: Google's $550 million acquisition of DeepMind in 2014, a company with no products but ambitious AI goals, is now seen as a critical move that secured vital AI talent and infrastructure, though it reportedly angered investors like Elon Musk. [52:23:26], [54:48:47]
  • **The TPU: Google's Secret Weapon in the AI Chip Race**: To address the immense computational needs of AI, Google developed its own Tensor Processing Units (TPUs), custom-designed chips for neural networks that offer significant efficiency gains over GPUs and provide a crucial advantage in the AI infrastructure race. [45:31:37], [45:45:49]
  • **The 'Cat Paper' Breakthrough: Unsupervised Learning at Scale**: Google Brain's 2011 'cat paper' demonstrated that large neural networks could learn meaningful patterns from unlabeled YouTube video frames using distributed computing, a pivotal moment showing the potential of unsupervised learning and a key milestone cited by Sundar Pichai. [41:44:47], [42:17:17]

Topics Covered

  • Google is a textbook case of the innovator's dilemma.
  • The AI era began in 2012, not 2022.
  • The founder of DeepMind first warned Elon Musk about AI.
  • Google built custom AI chips to avoid doubling its datacenters.
  • The Transformer's breakthrough was its elegant, scalable simplicity.

Full Transcript

I went and looked at a studio. Well, a

little office that I was going to turn

into a studio nearby, but it was not

good at all. It had drop ceilings, so I

could hear the guy in the office next to

me. You would be able to hear him

talking on episodes.

>> Third co-host.

>> Third co-host.

>> Is it Howard?

>> No, it was like a lawyer. It seemed to

be like talking through some horrible

problem that I didn't want to listen to,

but I could hear every word.

>> Does he want millions of people

listening to this conversation?

>> Right.

>> All right.

>> All right. Let's do a podcast. Let's do

a podcast.

>> Who got the truth? Now,

is it you? Is it you? Is it you? Sit me

down. Say it straight. Another story on

the way. Got the truth.

>> Welcome to the fall 2025 season of

Acquired, the podcast about great

companies and the stories and playbooks

behind them. I'm Ben Gilbert.

>> I'm David Rosenthal, and we are your

hosts. Here's a dilemma. Imagine you

have a profitable business. You make

giant margins on every single unit you

sell and the market you compete in is

also giant. One of the largest in the

world, you might say. But then on top of

that, lucky for you, you also are a

monopoly in that giant market with 90%

share and a lot of lock in.

>> And when you say monopoly, monopoly as

defined by the US government. That is

correct. But then imagine this. In your

research lab, your brilliant scientists

come up with an invention. This

particular invention when combined with

a whole bunch of your old inventions by

all your other brilliant scientists

turns out to create the product that is

much better for most purposes than your

current product. So you launched the new

product based on this new invention.

Right.

>> Right. I mean, especially because out of

pure benevolence, your scientists had

published research papers about how

awesome the new invention is and lots of

the inventions before also. So, now

there's new startup competitors quickly

commercializing that invention. So, of

course, David, you change your whole

product to be based on the new thing,

right?

>> Uh, this sounds like a movie.

>> Yes. But here is the problem. You

haven't figured out how to make this new

incredible product anywhere near as

profitable as your old giant cash

printing business. So maybe you

shouldn't launch that new product.

David, this sounds like quite the uh

dilemma to me. Of course, listeners, this
is Google today, and in perhaps the most
classic textbook case of the innovator's
dilemma ever, the entire AI revolution
that we are in right now is predicated
on the invention of the Transformer out
of the Google Brain team in 2017. So
think OpenAI and ChatGPT, Anthropic,
NVIDIA hitting all-time highs, all the
craziness right now depends on that one
research paper published by Google in
2017. And consider this. Not only did

Google have the densest concentration of

AI talent in the world 10 years ago that

led to this breakthrough, but today they

have just about the best collection of

assets that you could possibly ask for.

They've got a top tier AI model with

Gemini. They don't rely on some public

cloud to host their model. They have

their own in Google Cloud that now does

$50 billion in revenue. That is real

scale. They're a chip company with their

tensor processing units or TPUs, which

is the only real scale deployment of AI

chips in the world besides Nvidia GPUs.

Maybe AMD maybe, but these are

definitely the top two. Somebody put it

to me in research that if you don't have

a foundational frontier model or you

don't have an AI chip, you might just be

a commodity in the AI market. And Google

is the only company that has both.

>> Google still has a crazy bench of

talent. And despite ChatGPT becoming

kind of the Kleenex of the era, Google

does still own the textbox, the single

one that is the front door to the

internet for the vast majority of people

anytime anyone has intent to do anything

online. But the question remains, what

should Google do strategically? Should

they risk it all and lean into their

birthright to win in artificial

intelligence? Or will protecting their

gobs of profits from search hamstring

them as the AI wave passes them by? But

perhaps first we must answer the

question, how did Google get here? David

Rosenthal. So listeners, today we tell

the story of Google, the AI company.

>> Woo.

>> You like that, David? Was that good?

>> I love it. Did you hire like a Hollywood

script writing consultant without

telling me?

>> I wrote that 100% myself with no AI.

Thank you very much.

>> No AI.

>> Well, listeners, if you want to know

every time an episode drops, vote on

future episode topics or get access to

corrections from past episodes, check

out our email list. That's

acquired.fm/e.

Come talk about this episode with the

entire acquired community in Slack after

you listen. That's acquired.fm/slack.

Speaking of the acquired community, we

have an anniversary celebration coming

up. We do 10 years of the show. We're

going to do an open Zoom call with

everyone to celebrate. Kind of like how

we used to do our LP calls back in the

day with LPS. And we are going to do

that on October 20th, 2025 at 4:00 p.m.

Pacific time. Check out the show notes

for more details.

>> If you want more acquired, check out our

interview show, ACQ2. Our last interview

was super fun. We uh sat down with Toby

Lutka, the founder and CEO of Shopify,

about how AI has changed his life and

where he thinks it will go from here.

So, search ACQ2 in any podcast player.

And before we dive in, we want to

briefly thank our presenting partner JP

Morgan Payments.

>> Yes, just like how we say every company

has a story, every company's story is

powered by payments. And JP Morgan

Payments is a part of so many of their

journeys from seed to IPO and beyond.

>> So, with that, this show is not

investment advice. David and I may have

investments in the companies we discuss

and this show is for informational and

entertainment purposes only. David,

Google, the AI company.

>> So Ben, as you were alluding to in that

fantastic intro, you're really
upping the game here.

If we rewind 10 years ago from today,

before the Transformer paper comes out,

all of the following people, as we've

talked about before, were Google

employees. Ilya Sutskever, founding

chief scientist of OpenAI, who along

with Geoff Hinton and Alex Krizhevsky had

done the seminal AI work on AlexNet and

just published that a few years before.

All three of them were Google employees,

as was Dario Amodei, the founder of

Anthropic,

Andrej Karpathy, director of AI at Tesla

until recently, Andrew Ng, Sebastian
Thrun, Noam Shazeer, all the DeepMind

folks: Demis Hassabis, Shane Legg, Mustafa
Suleyman. Mustafa now, in addition to
having been a founder of
DeepMind in the past, runs AI at Microsoft.

Basically,

every single person of note in AI worked

at Google, with the one exception of Yann
LeCun, who worked at Facebook.

>> Yeah, it's pretty difficult to trace a

big AI lab now back and not find Google

in its origin story. Yeah, I mean the

analogy here is it's almost as if at the

dawn of the computer era itself, a

single company like say IBM had hired

every single person who knows how to

code. So it'd be like, you know, if anybody

else wants to write a computer program

oh sorry you can't do that. Anybody who

knows how to program works at IBM. This

is how it was with AI and Google in the

mid-2010s. But learning how to program a

computer wasn't so hard that people out

there couldn't learn how to do it.

Learning how to be an AI researcher was
significantly more difficult,

>> right? It was the stuff of very specific

PhD programs with a very limited set of

advisers and a lot of infighting in the

field of where the direction of the

field was going, what was legitimate

versus what was crazy heretical

religious stuff.

>> Yeah. So then yes, the question is how

do we get to this point? Well, it goes

back to the start of the company. I

mean, Larry Page always thought of

Google as an artificial intelligence

company. And in fact, Larry Page's dad

was a computer science professor and had

done his PhD at the University of

Michigan in machine learning and

artificial intelligence, which was not a

popular field in computer science back

then.

>> Yeah. In fact, a lot of people thought

specializing in AI was a waste of time

because so many of the big theories from

30 years prior to that had been kind of

disproven at that point, or at least

people thought they were disproven. And

so it was frankly contrarian for Larry's

dad to spend his life and career and

research work in AI.

>> And that rubbed off on Larry. I mean, if

you squint, page rank, the page rank

algorithm that Google was founded upon

is a statistical method. You could

classify it as part of AI within

computer science. And Larry, of course,

was always dreaming much much bigger.

here. I mean, there's the quote that

we've said before on this show in the

year 2000, 2 years after Google's

founding when Larry says artificial

intelligence would be the ultimate

version of Google. If we had the

ultimate search engine, it would

understand everything on the web. It

would understand exactly what you wanted

and it would give you the right thing.

That's obviously artificial

intelligence. We're nowhere near doing

that now. However, we can get

incrementally closer and that is

basically what we work on here. It's

always been an AI company.

>> Yep. And that was in 2000.

Well, one day in either late 2000 or

early 2001, the timelines are a bit hazy

here, a Google engineer named Georges Harik

is talking over lunch with Ben Gomes,

famous Google engineer who I think would

go on to lead search and a relatively

new engineering hire named Noam Shazeer.

Now, Georges was one of Google's first 10

employees, incredible engineer. And just

like Larry Page's dad, he had a PhD in

machine learning from the University of

Michigan. And even when Georges went

there, it was still a relatively rare

contrarian subfield within computer

science. So, the three of them are

having lunch and Georges says

off-handedly to the group that he has a

theory from his time as a PhD student

that compressing data is actually

technically equivalent to understanding

it. And the thought process is if you

can take a given piece of information

and make it smaller, store it away and

then later reinstantiate it in its
original form, the only way that you

could possibly do that is if whatever

force is acting on the data actually

understands what it means because you're

losing information going down to

something smaller and then recreating

the original thing. It's like you're a

kid in school. You learn something in

school. You read a long textbook. You

store the information in your memory.

Then you take a test to see if you

really understood the material. And if

you can recreate the concepts, then you

really understand it.

>> Which kind of foreshadows big LLMs today

are like compressing the entire world's

knowledge into some number of terabytes

that's just like this smash down little

vector set. Little at least compared to

all the information in the world. But

it's kind of that idea, right? You can

store all the world's information in an

AI model in something that is like kind

of incomprehensible and hard to

understand. But then if you uncompress

it, you can kind of bring knowledge back

to its original form.

>> Yep. And these models demonstrate

understanding right?

>> Do they? That's the question. That's the

question. They certainly mimic

understanding.
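[Editor's note: the compression-equals-understanding idea they're debating can be poked at with an off-the-shelf compressor. Text whose structure is predictable squeezes down far more than random noise. This is a minimal sketch, with invented sample strings, not anything from Google's actual systems:]

```python
import random
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size divided by original size; lower means more predictable."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, level=9)) / len(raw)

# A string with learnable structure vs. same-length random noise.
patterned = "the cat sat on the mat. " * 40
random.seed(0)
noise = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ") for _ in range(len(patterned)))

ratio_patterned = compression_ratio(patterned)  # tiny: the repetition is fully predictable
ratio_noise = compression_ratio(noise)          # much larger: noise resists compression
print(ratio_patterned, ratio_noise)
```

A compressor that captures more of the text's structure achieves a lower ratio, which is the sense in which better compression implies deeper "understanding" of the data.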

>> So this conversation is happening. You

know, this is 25 years ago. And Noam,

the new hire, the, you know, young buck,

he sort of stops in his tracks and he's

like, "Wow, if that's true, that's

really profound."

>> Is this in one of Google's micro

kitchens?

>> This is in one of Google's micro

kitchens. They're having lunch.

>> Where did you find this, by the way? A

25-year-old conversation?

>> Uh, this is In the Plex. This is like

a small little passage in Steven Levy's

great book that's been a source for all

of our Google episodes, In the Plex.

There's a small little throwaway passage

in here about this because this book

came out before ChatGPT and AI and all

that. So Noam kind of latches on to

Georges and keeps vibing over this idea

and over the next couple months the two

of them decide in the most googly

fashion possible that they are just

going to stop working on everything else

and they're going to go work on this

idea on language models and compressing

data and can they generate machine

understanding with data and if they can

do that, that would be good for

Google. I think this coincides with that

period in 2001 when Larry Page fired

all the managers in the engineering

organization and so everybody was just

doing whatever they wanted to do.

>> Funny.

>> So there's this great quote from Gor in

the book. A large number of people

thought it was a really bad thing for

Noam and I to spend our talents on, but
Sanjay Ghemawat, Sanjay of course being Jeff

Dean's famous prolific coding partner

thought it was cool. So Georges would

posit the following argument to any

doubters that they came across. Sanjay

thinks it's a good idea and no one in

the world is as smart as Sanjay. So why

should Noam and I accept your view that

it's a bad idea?

>> It's like if you beat the best team in

football, are you the new best team in

football no matter what?

>> Yeah. So all of this ends up taking

Noam and Georges deep down the rabbit hole

of probabilistic models for natural

language. Meaning for any given sequence

of words that appears on the internet,

what is the probability for another

specific sequence of words to follow?

This should sound pretty familiar for

anybody who knows how LLMs work

today.

>> Oh, kind of like a next word predictor.

>> Yeah. Or next token predictor if you

generalized it.

>> Yep.
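[Editor's note: the next-word idea can be sketched as a tiny bigram model. The toy corpus below is invented for illustration; the real models used vastly more context and data:]

```python
from collections import Counter, defaultdict

corpus = "the dog chased the cat and the cat chased the mouse".split()

# Count how often each word follows each other word: a bigram model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probability(prev: str, nxt: str) -> float:
    """P(nxt | prev) estimated from the corpus counts."""
    total = sum(following[prev].values())
    return following[prev][nxt] / total if total else 0.0

def predict_next(prev: str) -> str:
    """The single most likely next word after `prev`."""
    return following[prev].most_common(1)[0][0]

print(predict_next("the"))                  # "cat"
print(next_word_probability("the", "cat"))  # 0.5 (2 of the 4 words after "the")
```

Generalizing from words to tokens and from one word of context to thousands is, at a very high level, the jump from this sketch to a modern LLM.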

>> So, the first thing that they do with

this work is they create the "did you
mean" spelling correction in Google

search.

>> Oh, that came out of this?

>> That came out of this. Noam created this.

>> So this is huge for Google because

obviously it's a bad user experience

when you mistype a query and then need

to type another one. But it's a tax on

Google's infrastructure because every

time these mistyped queries come in,
Google's infrastructure goes and

serves the results to that query that

are useless and immediately overwritten

with the new one,

>> right? And it's a really tightly scoped

problem where you can see like, oh wow,

80% of the time that someone types in

god groomer. Oh, they actually mean dog

groomer and they retype it. And if it's

really high confidence, then you

actually just correct it without even

asking them and then ask them if they

want to opt out instead of opting in.

It's a great feature and it's sort of a

great first use case for this in a very

narrowly scoped domain.
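[Editor's note: a hedged sketch of that reformulation-driven correction logic. The session log, "god groomer" example, and 70% threshold are invented for illustration:]

```python
from collections import Counter

# Hypothetical log of (typed query, retyped query) pairs mined from user sessions.
reformulations = [
    ("god groomer", "dog groomer"),
    ("god groomer", "dog groomer"),
    ("god groomer", "dog groomer"),
    ("god groomer", "god groomer"),  # one user really meant it
]

def did_you_mean(query: str, log, threshold: float = 0.7):
    """Suggest the dominant rewrite if enough sessions converged on it."""
    rewrites = Counter(after for before, after in log if before == query)
    total = sum(rewrites.values())
    if not total:
        return None
    best, count = rewrites.most_common(1)[0]
    if best != query and count / total >= threshold:
        return best
    return None

print(did_you_mean("god groomer", reformulations))  # "dog groomer" (3 of 4 sessions)
```

Raising the threshold is what lets the system auto-correct with confidence rather than merely suggest.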

>> Totally. So they get this win, and Noam
and Georges keep working on it, and they
end up creating a fairly large, I'm using
"large" in quotes here, you know, for the
time, language model that they
affectionately call PHIL, the
Probabilistic Hierarchical Inferential
Learner.

>> These AI researchers love creating their

uh backronyms.

>> They love their word puns.

>> Yeah.

>> Yep.

So fast forward to 2003 and Susan

Wojcicki and Jeff Dean are getting ready

to launch AdSense. They need a way to

understand the content of these third

party web pages, the publishers, in

order to run the Google ad corpus

against them. Well, Phil is the tool

that they use to do it.

>> Huh. I had no idea that language models

were involved in this.

>> Yeah. So Jeff Dean borrows Phil and

famously uses it to code up his

implementation of AdSense in a week

because he's Jeff Dean. And boom,

AdSense. This is billions of dollars of

new revenue to Google overnight because

it's the same corpus of ads that are

adwords that are search ads that they're

now serving on third party pages. They

just massively expanded the inventory

for the ads that they already have in

the system. Thanks to Phil. Thanks to

Phil. All right, this is a moment where

we got to stop and just give some Jeff

Dean facts. Jeff Dean is going to be the

through line of this episode: wait, how

did Google pull that off? How did Jeff

Dean just go home and over the weekend

rewrite some entire giant distributed

system and figure out all of Google's

problems? Back when Chuck Norris facts

were big, Jeff Dean facts became a thing

internally at Google. I just want to

give you some of my favorites. The speed

of light in a vacuum used to be about 35

mph. Then Jeff Dean spent a weekend

optimizing physics.

>> So good.

>> Jeff Dean's PIN is the last four digits

of pi.

>> Only Googlers would come up with these.

>> Yes. To Jeff Dean, NP means no

problemo.

>> Oh yeah, I've seen that one before. I

think that one's my favorite.

>> Yes.

>> Oh, man. So, so good. Also a wonderful

human being who we spoke to in research

and was very, very helpful. Thank you,

Jeff.

>> Yes. So, language models definitely

work, definitely going to drive a lot of

value for Google, and they also fit

pretty beautifully into Google's mission

to organize the world's information and

make it universally accessible and

useful if you can understand the world's

information and compress it and then

recreate it. Yeah, that fits the

mission. I think I think that checks the

box.

>> Absolutely. So Phil gets so big that

apparently by the mid 2000s Phil is

using 15% of Google's entire data center

infrastructure and I assume a lot of

that is AdSense ad serving but also did

you mean and all the other stuff that

they start using it for within Google.

>> So uh early natural language systems

were computationally expensive.

>> Yes. So okay now mid 2000s fast forward

to 2007 which is a very very big year

for the purposes of our story. Google

had just recently launched the Google

translate product. This is the era of

all the great great products coming out

of Google that we've talked about. You

know maps and Gmail and docs and all the

wonderful things that Chrome and Android

are going to come later. They had like a
10-year run where they basically
launched everything you know of at
Google except for search. And then there were about

10 years after that from 2013 on where

they basically didn't launch any new

products that you've heard about until

we get to Gemini, which is this

fascinating thing. But this '03 to 2013

era was just so rich with hit after hit

after hit,

>> magical. And so one of those products

was Google Translate. You know, not the

same level of user base or perhaps

impact on the world as Gmail or maps or

whatnot, but still a magical magical

product. And the chief architect for

Google Translate was another incredible
machine learning PhD named Franz Och. So
Franz had a background in natural
language processing and machine learning,
and that was his PhD. He was German. He
got his PhD in Germany. At the time,

DARPA,

>> the Defense Advanced Research Projects

Agency, division of the government,

>> had one of their famous challenges going

for machine translation. So Google and
Franz of course enter this, and Franz
builds an even larger language model
that blows away the competition in that
year's version of the DARPA challenge.
This is either 2006 or 2007. It gets an
astronomically high BLEU score for the

time. It's called BLEU, the bilingual
evaluation understudy, the sort of
algorithmic benchmark for judging the
quality of translations at the time.
Higher than anything else possible.

Jeff Dean hears about this and the work

that Franz and the translate team have

done and it's like this is great. This

is amazing. Uh when are you guys going

to ship this in production?

>> Oh, I heard this story.

>> So Jeff and Noam talk about this on the
Dwarkesh Podcast. Yes,

>> that episode is so, so good. And Franz is

like, "No, no, no, no, Jeffy, you don't

understand. This is research. This isn't

for the product. We can't ship this

model that we built. This is an n-gram
language model." N-grams are like the
number of words in a cluster. And we've trained

it on a corpus of two trillion words

from the Google search index. This thing

is so large it takes it 12 hours to

translate a sentence. So the way the

DARPA challenge worked in this case was

you got a set of sentences on Monday and

then you had to submit your machine

translation of those set of sentences by

Friday.

>> Plenty of time for the servers to run.

>> Yeah. They were like, "Okay, so we have

whatever number of hours it is from

Monday to Friday. Let's use as much

compute as we can to translate these

couple sentences.

Hey, learn the rules of the game and use

them to your advantage.

>> Exactly. So Jeff Dean being the

engineering equivalent of Chuck Norris,

he's like, let me see your code. So Jeff

goes and parachutes in and works with

the translate team for a few months. And

he rearchitects the algorithm to run on

the words and the sentences in parallel

instead of sequentially. Because when

you're translating a set of sentences or

a set of words in a sentence, you don't

necessarily need to do it in order. You

can break up the problem into different

pieces, work on it independently. You

can parallelize it

>> and you won't get a perfect translation,

but you know, imagine you just translate

every single word. You can at least go

translate those all at the same time in

parallel, reassemble the sentence and

like mostly understand what the initial

meaning was.
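[Editor's note: that word-level parallelization can be sketched with a thread pool. The glossary and sentence here are invented stand-ins for an expensive per-word model call, not Google's actual pipeline:]

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a slow per-word translation lookup.
GLOSSARY = {"das": "the", "haus": "house", "ist": "is", "alt": "old"}

def translate_word(word: str) -> str:
    return GLOSSARY.get(word, word)

def translate_parallel(sentence: str) -> str:
    words = sentence.split()
    # Each word is independent, so all lookups can run concurrently;
    # pool.map returns results in the original submission order,
    # which lets us reassemble the sentence correctly.
    with ThreadPoolExecutor() as pool:
        return " ".join(pool.map(translate_word, words))

print(translate_parallel("das haus ist alt"))  # "the house is old"
```

The same shape, independent chunks fanned out and reassembled in order, is what made the workload a natural fit for Google's distributed infrastructure.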

>> Yeah. And as Jeff knows very well

because he and Sanjay basically built it

with Urs Hölzle, Google's infrastructure is

extremely parallelizable, distributed.

You can break up workloads into little

chunks, send them all over the various

data centers that Google has, reassemble

the projects, return that to the user.

>> They are the single best company in the

world at parallelizing workloads across

CPUs across multiple data centers.

>> CPUs. We're still talking CPUs here.

>> Yep. And Jeff's work with the team gets

that average sentence translation time

down from 12 hours to 100 milliseconds.

And so then they ship it in Google

Translate. And it's amazing.

>> This sounds like a Jeff Dean fact. Well,

you know, it used to take 12 hours and

then Jeff Dean took a few months with

it. Now it's a 100 milliseconds.

>> Right. Right. Right. Right. Right.

Right. So this is the first large, I'm
using "large" in quotes here, language

model used in production in a product at

Google. They see how well this works

like hm maybe we could use this for

other things like predicting search

queries as you type.

That might be interesting, you know, and

of course the crown jewel of Google's

business. That also might be interesting

application for this. The ad quality

score for Adwords is literally the

predicted click-through rate on a given

set of ad copy. You can see how an LLM

that is really good at ingesting

information, understanding it, and

predicting things based on that might be

really useful for calculating ad quality

for Google.

>> Yep. Which is the direct translation to

Google's bottom line.

>> Indeed. Okay. So, obviously all of that

is great on the language model front. I

said 2007 was a big year. Also in 2007

begins

the sort of momentous intersection of

several computer science professors

on the Google campus. So in April of

2007, Larry Page hires Sebastian Thrun

from Stanford to come to Google and work

first part-time and then full-time on

machine learning applications. Sebastian

was the head of SAIL at Stanford, the
Stanford Artificial Intelligence
Laboratory. Legendary AI laboratory that

was big in the sort of first wave of AI

back in the '60s and '70s when Larry's dad was

active in the field then actually shut

down for a while and then had been

restarted and re-energized here in the

early 2000s. And Sebastian was the

leader, the head of SAIL.

>> Funny story about Sebastian, the way

that he actually comes to Google.

Sebastian was kind enough to speak with

us to prep for this episode. I didn't

realize it was basically an acqui-hire.

He and some, I think it was grad students,
were in the process of starting a
company, and had term sheets from Benchmark

and Sequoia.

>> Yes.

>> And Larry came over and said, "What if

we just acquire your company before it's

even started in the form of signing

bonuses?"

>> Yes. Probably a very good decision on

their part. So SAIL, this group within

the CS department at Stanford, not only

had some of the most incredible, most

accomplished professors and PhD AI

researchers in the world, they also had

this stream of Stanford undergrads that

would come through and work there as

researchers while they were working on

their CS degrees or symbolic system

degrees or, you know, whatever it was

that they were doing as Stanford

undergrads. One of those people was

Chris Cox who's the chief product

officer at Meta. Yeah, that was kind of

how he got his start in

>> all of this and AI and obviously

Facebook and Meta are going to come back

into the story here in a little bit.

>> Wow.

>> You really can't make this up. Another

undergrad who passed through SAIL while

Sebastian was there was a young freshman

and sophomore who would later drop out

of Stanford to start a company that went

through Y Combinator's very first batch

in summer 2005.

>> I'm on the edge of my seat. Who is this?

>> Any guesses?

>> Uh Dropbox, Reddit. I'm trying to think

who else was in the first batch.

>> Oh, no. No. But way more on the nose for

this episode.

The company was a failed local mobile

social network.

>> Oh, Sam Altman. Loopt.

>> Sam Altman.

>> That's amazing. He was at SAIL at the

same time.

>> He was at SAIL. Yep. As an undergrad

researcher.

>> Wow.

>> Wild, right? We told you that it's a

very small set of people that are all

doing all of this.

>> Man, I miss those days. Sam presenting

at the WWDC with Steve Jobs on stage

with the double popped collar, right?

>> Different time in tech.

>> Yeah, the double popped collar. That was

amazing. That was a vibe. That was a

moment. Oh, man. All right. So, April

2007, Sebastian comes over from SAIL
into Google, Sebastian Thrun. And one

of the first things he does over the

next set of months is a project called

Ground Truth for Google Maps,

>> which is essentially Google Maps.

>> It is essentially Google Maps. Before

ground truth, Google Maps existed as a

product, but they had to get all the

mapping data from a company called Tele

Atlas.

>> And I think there were two. They were

sort of a duopoly. Navteq was the other

one.

>> Yeah. Navteq and Tele Atlas.

>> But it was this like kind of crappy

source of truth map data that everyone

used and you really couldn't do any

better than anyone else because you all

just use the same data.

>> Yep. It was not that good and it cost a

lot of money. Tele Atlas and Navteq

were multi-billion dollar companies. I

think maybe one or both of them were

public at some point then got acquired

but a lot of money lot of revenue.

>> Yep. And Sebastian's first thing was

street view, right? So he already had

the experience of orchestrating this

fleet of all these cars to drive around

and take pictures.

>> Yes. So then coming into Google, ground

truth is this sort of moonshot type

project to recreate all the Tele Atlas

data

>> mostly from their own photographs of

streets from street view. And they

incorporated some other data. There was

like census data they used. I think it

was 40 something data sources to bring

it all together. But ground truth was

this very ambitious effort to create new

maps from whole cloth.

>> Yep. And just like all of the AI and AI

enabled projects within Google that

we're talking about here works very very

well.

Huge win.

>> Well, especially when you hire a

thousand people in India to help you uh

sift through all the discrepancies in

the data and actually handdraw all the

maps. Yes, we are not yet in an era of a

whole lot of AI automation. So on the

back of this win with ground truth,

Sebastian starts lobbying to Larry and

Sergey. Hey, we should do this a lot. We

should bring in AI professors,

academics, I know all these people into

Google part-time. They don't have to be

full-time employees. Let them keep their

posts in academia, but come here and

work with us on projects for our

products. They'll love it. They get to

see their work used by millions and

millions of people. We'll pay them.

They'll make a lot of money. They'll get

Google stock and they get to stay

professors at their academic

institutions.

>> Win-winwin.

>> Win-winwin. So, as you would expect,

Larry and Sergey are like, "Yeah, yeah,

yeah, that's a good idea. Let's do that.

More of that." So, in December of 2007,

Sebastian brings in a relatively little

known machine learning professor from

the University of Toronto named Jeff

Hinton to the Google campus to come and

give a tech talk, not yet hiring him,

but come give a tech talk to, you know,

all the folks at Google and talk about

some of the new work, Jeff, that you and

your PhD and postoc students there at

the University of Toronto are doing on

blazing new paths with neural networks

>> And Geoff Hinton, for anybody who doesn't know the name, is now very much known as the godfather of neural networks and really the godfather of kind of the whole direction that AI went in

>> modern AI

>> he was kind of a fringe academic

>> at this point in history I mean neural

networks were not a respected subtree of

AI

>> no totally not

>> and part of the reason is there had been a lot of hype 30-40 years before around neural networks that just didn't pan out. So it was, effectively, something everyone thought disproven, and certainly a backwater.

>> Yep. Then do you remember from our Nvidia episodes my favorite piece of trivia about Geoff Hinton?

>> Oh yes. That his grandfather... great-great-grandfather was George Boole.

>> Yep. He is the great-great-grandson of George and Mary Boole, who invented Boolean algebra and Boolean logic

>> which is hilarious now that I know more about this, because that's the basic building block of symbolic logic, of defined, deterministic computer science logic. And the hilarious thing about neural nets is it's not symbolic AI. It's not "I feed you these specific instructions and you follow a big if-then tree." It is non-deterministic. It is the opposite of that field.

>> Which actually just underscores again

how sort of heretical this branch of

machine learning and computer science

was.

>> Right.

>> So Ben, as you were saying earlier,

neural networks not a new idea and had

all of this great promise in theory, but

in practice just took too much

computation to do multiple layers. You

could really only have a single or maybe a small single-digit number of layers in a computer neural network up until this time. But Geoff and his former postdoc, a guy named Yann LeCun, start evangelizing

within the community, hey, if we can

find a way to have multi-layered,

deep layered neural networks, something

we call deep learning, we could actually

realize the promise here. It's not that

the idea is bad. It's that the

implementation which would take a ton of

compute to actually do all the math to

do all the multiplication required to

propagate through layer after layer

after layer of neural networks to sort

of detect and understand and store

patterns. If we could actually do that,

a big multi-layered neural network would

be very valuable and possibly could

work.

>> Yes. Here we are now in 2007, the mid-2000s. Moore's law has progressed enough that you could actually start to try to test some of these theories. Yep. So Geoff

comes and he gives this talk at Google.

It's on YouTube. You can go watch it.

We'll link to it in the show notes. This

is incredible. This is an artifact of

history sitting there on YouTube. And

people at Google, Sebastian, Jeff Dean, and all the other folks we're talking about, they get very, very, very excited, because they've already been doing stuff like this with Translate and the language models that they're working with, but that work isn't using the deep neural networks Geoff is working on. So

here's this whole new architectural

approach that if they could get it to

work would enable these models that

they're building to work way better,

recognize more sophisticated patterns,

understand the data better. Very, very

promising.

>> Again, kind of all in theory at this

point.

>> Yep. So Sebastian Thrun brings Geoff Hinton into the Google fold after this tech talk, I think first as a consultant over the next couple years, and then, this is amazing: later, Geoff Hinton technically becomes an intern at Google.

Like that's how they get around the

>> That's correct.

>> part-time, full-time policies here.

>> Yep. He was a summer intern somewhere around 2011 or 2012. And mind you, at this point, he's like 60 years old.

>> Yes. So in the next couple years after

2007 here, Sebastian's concept of

bringing these computer science machine

learning academics into Google as

contractors or part-time or interns,

basically letting them keep their

academic posts and work on big projects

for Google's products internally goes so

well that by late 2009, Sebastian and Larry and Sergey decide, hey, we should just start a whole new division within Google, and it becomes Google X, the moonshot factory. The first project within Google X, Sebastian leads himself.

>> David don't say it don't say it

>> I won't say the name of it; we will come back to it later. But for our purposes for now, the second project would be critically important, not only for our story but to the whole world, everything in AI, changing the entire world. And that second project is called Google Brain.

But before we tell the Google Brain

story, now is a great time to thank our

friends at JP Morgan Payments.

>> Yes. So today we are going to talk about

one of the core components of JP Morgan

Payments, their Treasury solutions. Now

treasury is something that most

listeners probably do not spend a lot of

time thinking about, but it's

fundamental to every company.

>> Yep. Treasury used to be just a back

office function, but now great companies

are using it as a strategic lever. With

JP Morgan Payments Treasury Solutions,

you can view and manage all your cash

positions in real time and all of your

financial activities across 120

currencies in 200 countries. And the

other thing that they acknowledge in their whole strategy is that every business has its own quirks. So, it's not

a cookie cutter approach. They work with

you to figure out what matters most for

you and your business and then help you

gain clarity, control, and confidence.

So whether you need advanced automation

or just want to cut down on manual

processes and approvals, their real-time

treasury solutions are designed to keep

things running smoothly. Whether your

treasury is in the millions or billions,

or perhaps like the company we're

talking about this episode, in the

hundreds of billions of dollars.

>> And they have some great strategic

offerings like Payby Bank, which lets

customers pay you directly from their

bank account. It's simple, secure,

tokenized, and you get faster access to

funds and enhance data to optimize

revenue and reduce fees. This lets you

send and receive real-time payments

instantly just with a single API

connection to JP Morgan. And because JP

Morgan's platform is global, that one

integration lets you access 45 countries

and counting and lets you scale

basically infinitely as you expand. As

we've said before, JP Morgan Payments

moves $10 trillion a day. So scale is

not an issue for your business.

>> Not at all. If you're wondering how to

actually manage all that global cash, JP

Morgan again has you covered with their

liquidity and account solutions that

make sure you have the right amount of

cash in the right currencies in the

right places for what you need. So

whether you're expanding into new

markets or just want more control over

your funds, JP Morgan Payments is the

partner you want to optimize liquidity,

streamline operations, and transform

your treasury. To learn more about how

JP Morgan can help you and your company,

just go to jpmorgan.com/acquired

and tell them that Ben and David sent

you.

>> All right, David. So, Google Brain.

>> So, when Sebastian left Stanford

full-time and joined Google full-time,

of course, somebody else had to take over SAIL, the Stanford AI Lab. And the person who did is another computer science professor, a brilliant guy named Andrew Ng.

>> This is like all the hits.

>> All the hits. This is all the AI hits on

this episode.

So, what does Sebastian do? He recruits

Andrew to come part-time, start spending

a day a week on the Google campus. And

this coincides right with the start of X

and Sebastian formalizing this division.

So, one day in 2010, 2011 time frame,

Andrew's spending his day a week on the

Google campus and he bumps into who

else? Jeff Dean. And Jeff Dean is

telling Andrew about what he and Franz Och have done with language models and what Geoff Hinton is doing in deep learning. Of course, Andrew knows all this. And Andrew is talking about what he and SAIL are doing at Stanford. And they decide,

you know, the time might finally be

right to try and take a real big swing

on this within Google and build a

massive, really large deep learning model in the vein of what Geoff Hinton has been talking about, on highly parallelizable Google infrastructure.

>> And when you say the time might be

right, Google had tried twice before and

neither project really worked. They

tried this thing called Brains on Borg.

Borg is sort of an internal system that

they use to run all of their

infrastructure. They tried the Cortex

project and neither of these really

worked. So there's a little bit of scar

tissue in the sort of research group at

Google of are large-scale neural

networks actually going to work for us

on Google infrastructure. So the two of

them, Andrew Ng and Jeff Dean, pull in Greg Corrado, who is a neuroscience PhD

and amazing researcher who was already

working at Google. And in 2011, the

three of them launch the second official

project within X, appropriately enough,

called Google Brain. And the three of

them get to work building a really,

really big, deep neural network model.

>> And if they're going to do this, they

need a system to run it on. You know,

Google is all about taking this sort of

frontier research and then doing the

architectural and engineering system to

make it actually run.

>> Yes. So Jeff Dean is working on this system, on the infrastructure, and he decides to name the infrastructure DistBelief, which of course is a pun, both on the distributed nature of the system and also, of course, on the word disbelief, because

>> no one thought it was going to work.

>> Most people in the field thought this

was not going to work and most people in

Google thought this was not going to

work.

>> And here's a little bit on why and it's

a little technical but follow me for a

second. All the research from that

period of time pointed to the idea that

you needed to be synchronous. So all the

compute needed to be sort of really

dense happening on a single machine with

really high parallelism kind of like

what GPUs do that you really would want

it all sort of happening in one place so

it's really easy to kind of go look up

and see hey what are the computed values

for everything else in the system before

I take my next move. What Jeff Dean wrote with DistBelief was the opposite: it was distributed across a whole bunch

of CPU cores and potentially all over a

data center or maybe even in different

data centers. So in theory, this is

really bad because it means you would

need to be constantly waiting around on

any given machine for the other machines

to sync their updated parameters before

you could proceed. But instead, the

system actually worked asynchronously

without bothering to go and get the

latest parameters from other cores. So

you were sort of updating parameters on

stale data. You would think that

wouldn't work. The crazy thing is it

did. Yes. Okay. So you've got DistBelief.
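The asynchronous scheme described above can be sketched in a few lines. To be clear, this is a minimal illustration in the spirit of DistBelief's lock-free updates, not Google's actual code; the model (fitting y = 3x), the data, and the learning rate are all invented for the example. Four workers each read a possibly-stale copy of the shared weight, compute a gradient, and write back without waiting for each other:

```python
# Minimal sketch of asynchronous SGD (illustration only, not DistBelief itself).
import threading
import random

params = {"w": 0.0}                      # shared parameter, deliberately unlocked
data = [(x, 3.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]  # exact samples of y = 3x

def worker(steps, lr=0.02):
    for _ in range(steps):
        x, y = random.choice(data)
        w = params["w"]                  # read: may already be stale
        grad = 2.0 * (w * x - y) * x     # gradient of the squared error (w*x - y)^2
        params["w"] -= lr * grad         # write: can clobber a concurrent update

threads = [threading.Thread(target=worker, args=(2000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(round(params["w"], 2))             # converges to ~3.0 despite stale reads
```

Every read can be stale and every write can stomp on a concurrent one, yet because each gradient step still points roughly toward the optimum, the shared weight converges anyway, which is the surprise being described here.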

What do they do with it now? They want

to do some research. So they try out can

we do cool neural network stuff? And

what they do in a paper that they

submitted in 2011 right at the end of

the year is, I'll give you the name of the paper first, "Building High-Level Features Using Large Scale Unsupervised Learning." But everyone just calls it the

cat paper.

>> The cat paper.

>> You talk to anyone at Google, you talk

to anyone in AI, they're like, "Oh yeah,

the cat paper." What they did was they

trained a large nine-layer neural

network to recognize cats from unlabeled

frames of YouTube videos using 16,000

CPU cores on a thousand different

machines. And listeners, just to like

underscore how seminal this is, we

actually talked with Sundar in prep for

the episode. And he cited seeing the cat

paper come across his desk as one of the

key moments that sticks in his brain in

Google's story.

>> Yeah. A little later on, they would do a

TGIF where they would present the

results of the cat paper and you talk to

people at Google, they're like, "That

TGIF, oh my god, that's when it all

changed."

>> Yeah. It proved that large neural

networks could actually learn meaningful

patterns without supervision and without

labeled data. And not only that, it

could run on a distributed system that

Google built to actually make it work on

their infrastructure. And that is a huge

unlock of the whole thing. Google's got

this big infrastructure asset. Can we

take this theoretical computer science

idea that the researchers have come up

with and use DistBelief to actually run

it on our system? Yep, that is the

amazing technical achievement here. But that is almost secondary to the business impact of the cat paper. I think it's not that much of a leap to say that the cat paper led to probably hundreds of billions of dollars of revenue generated by Google and Facebook and ByteDance over the next decade.

>> Definitely, pattern recognizers in data. So YouTube had a big problem at this time, which was that people would upload these videos, and there's tons of videos being uploaded to YouTube, but people are really bad at describing what is in the videos that they're uploading. And

YouTube is trying to become more of a

destination site, trying to get people

to watch more videos, trying to build a

feed, increase dwell time, etc., etc.

And the problem is the recommender is

trying to figure out what to feed and

it's only just working off titles and

descriptions that people were writing

about their own videos,

>> right? And whether you're searching for

a video or they're trying to figure out

what video to recommend next, they need

to know what the video is about.

>> Yep. So the cat paper proves that you can use this technology, a deep neural network running on DistBelief, to go inside of the videos in the

YouTube library and understand what they

were about and use that data to then

figure out what videos to serve to

people.

>> If you can answer the question, cat or

not a cat, you can answer a whole lot

more questions, too.

>> Here's a quote from Jeff Dean about

this. We built a system that enabled us

to train pretty large neural nets

through both model and data parallelism.

We had a system for unsupervised

learning on 10 million randomly selected

YouTube frames. As you were saying, Ben,

it would build up unsupervised

representations based on trying to

reconstruct the frame from the high-level

representations. We got that working and

training on 2,000 computers using 16,000

cores. After a little while, that model

was actually able to build a

representation at the highest neural net

level where one neuron would get excited

by images of cats. It had never been

told what a cat was, but it had seen

enough examples of them in the training

data of head-on facial views of cats

that that neuron would then turn on for

cats and not much else. It's so crazy. I

mean, this is the craziest thing about

unlabeled data, unsupervised learning,

that a system can learn what a cat is

without ever being explicitly told what

a cat is and that there's a cat neuron.
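A toy way to see how a unit can come to fire for one kind of input without ever being told what it is: this is an invented example for illustration only (the cat paper actually used a large nine-layer network trained by reconstruction, not this method). Here, two competing "neurons" (prototype vectors) watch a stream of unlabeled points from two clusters, and each one specializes on a cluster it was never given a label for:

```python
# Toy unsupervised "neuron" specialization via competitive learning
# (illustrative only -- not the cat paper's actual architecture).
import random

random.seed(0)

# Unlabeled data: two blobs, around (0, 0) and (10, 10). No labels anywhere.
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)] + \
       [(random.gauss(10, 1), random.gauss(10, 1)) for _ in range(200)]
random.shuffle(data)

neurons = [[2.0, 2.0], [8.0, 8.0]]  # arbitrary starting prototypes

def nearest(p):
    # Index of the neuron that "responds most strongly" (is closest) to p.
    return min(range(2), key=lambda i: (neurons[i][0] - p[0]) ** 2 +
                                       (neurons[i][1] - p[1]) ** 2)

for _ in range(10):  # a few passes over the unlabeled stream
    for p in data:
        i = nearest(p)
        # The winning neuron moves a little toward the input it responded to.
        neurons[i][0] += 0.05 * (p[0] - neurons[i][0])
        neurons[i][1] += 0.05 * (p[1] - neurons[i][1])

print([[round(c, 1) for c in n] for n in neurons])  # ≈ [[0, 0], [10, 10]], in some order
```

After training, one neuron responds only to inputs near one cluster and the other to the second cluster, the same flavor of result as a neuron that "turns on for cats and not much else" without ever seeing a label.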

>> Yeah. And so then there's an iPhone neuron and a San Francisco Giants neuron

and all the things that YouTube

recommends,

>> not to mention porn filtering, explicit

content filtering,

>> not to mention copyright identification

and enabling revenue share with

copyright holders. Yeah, this leads to

everything in YouTube. Basically puts

YouTube on the path to today becoming

the single biggest property on the

internet and the single biggest media company on the planet. This kicks off a 10-year period from 2012, when this happens, until ChatGPT on November 30th,

2022

when AI is already shaping the human

existence for all of us and driving

hundreds of billions of dollars of

revenue. It's just in the YouTube feed

and then Facebook borrows it, and they hire Yann LeCun and they start Facebook AI Research, and then they bring it into Instagram, and then TikTok and ByteDance take it, and then it goes back to Facebook and YouTube with Reels and Shorts. This is the primary way that

humans on the planet spend their leisure

time for the next 10 years.

>> This is my favorite David Rosenthalism.

Everyone talks about 2022 onward as the

AI era. And I love this point from you

that actually for anyone that could make

good use of a recommender system and a

classifier system, basically any company

with a social feed, the AI era started

in 2012.

>> Yes, the AI era started in 2012 and part

of it was the cat paper. The other part

of it was what Jensen at NVIDIA always

calls the big bang moment for AI, which

was AlexNet.

>> Yes. So, we talked about Geoff Hinton back at the University of Toronto. He's got two grad students who he's working with in this era: Alex Krizhevsky and Ilya Sutskever,

>> of course,

>> future co-founder and chief scientist of

OpenAI. And the three of them are

working with Geoff's deep neural network ideas and algorithms to create an entry for the famous ImageNet competition in

computer science.

>> This is Fei-Fei Li's thing from Stanford.

>> It is an annual machine vision algorithm competition. And what it was was: Fei-Fei had assembled a database of 14 million images that were hand-labeled. Famously, she used Mechanical Turk on Amazon, I think, to get them all hand-labeled.

>> Yes. And I think that's right. And so then the competition was: what team can write the algorithm that, without looking at the labels, so just seeing the images, could correctly identify the largest percentage? The best algorithms, the ones that would win the competitions year-over-year, were still getting more than a quarter of the images wrong. So like a 75% success rate at best. Way worse than a human.

>> Can't use it for much in a production setting when a quarter of the time you're wrong. So then in the 2012 competition, along comes AlexNet: its error rate was 15%. Still high, but a 10-percentage-point drop from the previous best, a 25% error rate, all the way down to 15% in one year. A leap like that had never happened before.

>> It's 40% better than the next best.

>> Yes.

>> On a relative basis.

>> Yes.
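For concreteness, the absolute-versus-relative arithmetic, using the round numbers quoted here (25% for the previous best, 15% for AlexNet):

```python
# "40% better on a relative basis": the share of the previous errors eliminated.
prev_error = 0.25       # previous best ImageNet entries: ~25% error
alexnet_error = 0.15    # AlexNet's 2012 result: ~15% error

absolute_drop = prev_error - alexnet_error   # 0.10 -> ten percentage points
relative_drop = absolute_drop / prev_error   # fraction of prior errors eliminated
print(f"{relative_drop:.0%}")                # 40%
```

So a 10-point absolute drop, measured against a 25% baseline, is a 40% relative improvement.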

>> And why is it so much better, David?

What did they figure out that would

create a $4 trillion company in the

future?

>> So, what Geoff and Alex and Ilya did is: they knew, like we've been talking about all episode, that deep neural networks had all this potential, and Moore's law had advanced enough that you could use CPUs to create a few layers. They had the aha

moment of what if we rearchitected

this stuff not to run on CPUs but to run

on a whole different class of computer

chips that were, by their very nature, highly, highly, highly parallelizable: video game graphics cards, made by the leading company in the space at the time, Nvidia.

not obvious at the time and especially

not obvious that this highly advanced, cutting-edge academic computer science

research

>> that was being done on supercomputers

usually

>> that was being done on supercomputers

with incredible CPUs would use these toy

video game cards

>> that retail for $1,000.

>> Yeah. Less at that point in time. A

couple hundred bucks. So the team in

Toronto, they go out to like the local

Best Buy or something. They buy two

Nvidia GeForce GTX 580s, which were

Nvidia's top-of-the-line gaming cards at

the time. The Toronto team rewrites

their neural network algorithms in CUDA,

Nvidia's programming language. They

train it on these two off-the-shelf GTX

580s and this is how they achieve their

deep neural network and do 40% better

than any other entry in the ImageNet competition. So when Jensen says that

this was the big bang moment of

artificial intelligence, a he's right.

This shows everybody that holy crap, if

you can do this with two off-the-shelf

GTX 580s, imagine what you could do with

more of them or with specialized chips.

And B, this event is what sets Nvidia on

the path from a somewhat struggling PC

gaming accessory maker to the leader of

the AI wave and the most valuable

company in the world today. And this is

how AI research tends to work: there's some breakthrough that gets you this big step-change function, and then there's actually a multi-year process of optimizing from there, where you get these kind of diminishing-returns curves on breakthroughs, where the first half of the advancement happens all at once and then the second half takes many years after that to figure out. It's rare and

amazing and it must be so cool when you

have an idea, you do it, and then you

realize, "Oh my god, I just found the

next giant leap in the field."

>> It's like I unlocked the next level to

use the video game analogy.

>> Yes,

>> I leveled up. So after AlexNet, the whole computer science world is abuzz.

>> People are starting to stop doubting

neural networks at this point.

>> Yes. So after AlexNet, the three of them from Toronto, Geoff Hinton, Alex Krizhevsky, and Ilya Sutskever, do the natural thing: they start a company called DNN Research, deep neural network research.

This company does not have any products.

This company has AI researchers

>> who just won a big competition.

>> And predictably, as you might imagine,

it gets acquired by Google almost

immediately.

>> Oh, are you intentionally shortening

this?

>> That's what I thought the story was. Oh,

it is not immediately.

>> Oh, okay.

>> There's a whole crazy thing that happens

where the first bid is actually from Baidu.

Oh,

>> I did not know that.

>> So, Baidu offers $12 million.

Geoff Hinton doesn't really know how to

value the company and doesn't know if

that's fair. And so, he does what any

academic would do to best determine the

market value of the company. He says,

"Thank you so much. I'm gonna run an

auction now and I'm going to run it in a

highly structured manner where every

time anybody wants to bid the clock

resets and there's another hour where anybody else can submit another bid."

>> No way.

>> So,

>> I didn't know this. This is crazy.

>> He gets in touch with everyone that he

knows from the research community who is

now working at a big company who he

thinks, hey, this would be a good place

for us to do our research. That includes

Baidu, that includes Google, that includes

Microsoft, and there's one other

>> Facebook. Of course,

>> it's a two-year-old startup.

>> Oh, wait. So, it does not include

Facebook.

>> It does not include Facebook. Think

about the year. This is 2012.

So, Facebook's not really in the AI game

yet. They're still trying to build their

own AI lab.

>> Yeah. Yeah. Because Yann LeCun and FAIR would start in 2013. Is it Instagram?

>> Nope. It is the most important part of

the end of this episode.

>> Wait. Well, it can't be Tesla because

Tesla is older than that.

>> Nope.

>> Well, OpenAI wouldn't get founded for

years.

>> Wow. Okay, you really got me here.

>> What company slightly predated OpenAI

doing effectively the same mission?

>> Oh,

of course. Of course. Hiding in plain

sight.

DeepMind. Wow. DeepMind, baby. They

are the fourth bidder in a four-way

auction for DNN Research. Now, of

course, right after the bidding starts,

DeepMind has to drop out. They're a

startup. They don't actually have the

cash to be able to buy.

>> Yeah. Didn't even cross my mind cuz my

first question was like, where the hell

would they get the money because they

had no money.

>> But Geoff Hinton already knows and respects Demis.

>> Ah,

>> even though he's just doing this at the

time startup called DeepMind.

>> That's amazing. Wait, how is DeepMind in the auction, but Facebook is not?

Isn't that wild?

>> That's wild.

>> So, the timing of this is concurrent with the, it was then called NIPS, now it's called NeurIPS, conference. So, Geoff Hinton actually runs the auction from his hotel room at Harrah's Casino in Lake Tahoe.

>> Oh my god, amazing.

>> So, the bids all come in and we got to

thank Cade Metz, the author of Genius

Makers, great book on the whole history

of AI that we're actually going to

reference a lot in this episode. The

bidding goes up and up and up. At some

point, Microsoft drops out. They come

back in. As I told you, DeepMind drops out. So, it's Baidu and Google really going at it at the end. And finally, at some point, the

researchers look at each other and they

say, "Where do we actually want to land?

We want to land at Google." And so, they

stop the bidding at $44 million and just

say, "Google, this is more than enough

money. We're going with you."

>> Wow. I knew it was about $40 million. I

did not know that whole story. It's

almost like Google itself and, you know, the Dutch auction IPO process, right?

How fitting.

>> That's kind of a perfect DNA. Yes.

>> Wow.

>> And the three of them were supposed to

split it a third each, and Alex and Ilya go to Geoff and say, "I really think you should

have a bigger percent. I think you

should have 40% and we should each have

30." And that's how it ends up breaking

down.

>> Ah, wow. What a team. Well, that leads

to the three of them joining Google

Brain directly. And turbocharging

everything going on there. Spoiler

alert, a couple years later, Astro

Teller, who would take over running

Google X after Sebastian Thrun left,

he would get quoted in the New York

Times in a profile of Google X, that the

gains to Google's core businesses and

search and ads and YouTube from Google

Brain have way more than funded all of

the other bets that they have made

within Google X and throughout the

company over the years. It's one of

these things that if you make something

a few percent better that happens to do tens of

billions of dollars or hundreds of

billions of dollars in revenue, you find

quite a bit of loose change in those

couch cushions.

>> Yes, quite quite a bit of loose change.

But that's not where the AI history ends

within Google. There is another very

important piece of the Google AI story

that is an acquisition from outside of

Google. The AI equivalent of Google's

acquisition of YouTube. It's what we

talked about a minute ago, DeepMind. But before we tell the DeepMind story,

now is a great time to thank a new

partner of ours, Sentry.

>> Yes, listeners, that is S-E-N-T-R-Y, like someone's standing guard.

>> Yes, Sentry helps developers debug

everything from errors to latency and

performance issues, pretty much any

software problem, and fix them before

users get mad. As their homepage puts

it, they are considered quote unquote

not bad by over four million software

developers.

>> And today we're talking about the way

that Sentry works with another company

in the acquired universe, Anthropic.

Anthropic used to have some older

monitoring systems in place, but as they

scaled and became more complex, they

adopted Sentry to find and fix issues

faster.

>> So when you're building AI models, like

we're talking about all episode here,

small issues can ripple out into big

ones fast. Let's say you're running a

huge compute job like training a model.

If one node fails, it can have massive

downstream impacts, costing huge amounts

of time and money. Sentry helped

Anthropic detect bad hardware early so

they could reject it before causing a

cascading problem and taking debugging

down to hours instead of days for them.

And one other fun update from Sentry,

they now have an AI debugging agent

called Seer. Seer uses all the context

that Sentry has about your app usage to

run root cause analysis as issues are

detected. It uses errors, span data,

logs, and tracing and your code to

understand the root cause, fix it, and

get you back to shipping. It even

creates pull requests to merge code

fixes in. And on top of that, they also

recently launched agent and MCP server

monitoring. AI tooling tends to offer limited visibility into what's going on under the hood, shall we say. Sentry's new tools make it easy to understand

exactly what's going on. This is

everything from actual AI tool calls to

performance across different models and

interactions between AI and the

downstream services. We're pumped to be

working with Sentry. We're big fans of

the company and of all the great folks

we're working with there. They have an

incredible customer list including not

only Anthropic, but Cursor, Vercel, Linear, and more. And actually, if you're in San Francisco or the Bay Area, Sentry is hosting a small invite-only event with David and me in San Francisco

for product builders on October 23rd.

You can register your interest at

sentry.io/acquired. That's s-e-n-t-r-y.io/acquired.

And just tell them that Ben and David

sent you. All right, David. DeepMind. I

kind of like your framing. The YouTube

of AI.

>> The YouTube of AI for Google. They

bought this thing for, we'll talk about

the purchase price, but it's worth what

$500 billion today. I mean, this is as

good as Instagram or YouTube in terms of

greatest acquisitions of all time.

>> 100%. So, I remember when this deal

happened, just like I remember when the

Instagram deal happened cuz the number

was big at the time.

>> It was big, but I remember it for a

different reason. It was like when

Facebook bought Instagram, like, "Oh my

god, this is wow, what a tectonic shift

in the landscape of tech." In January

2014, I remember reading on TechCrunch

this random news,

>> right? You're like, "Deep what?"

>> That Google is spending a lot of money

to buy something in London that I've

never heard of that's working on

artificial intelligence.

>> Right. This really illustrates how

outside of mainstream tech AI was at the

time.

>> Yeah. And then you dig in a little

further and you're like, this company

doesn't seem to have any products. And

it also doesn't even really say anything

on its website about what Deep Mind is.

It says it is a quote unquote

cuttingedge artificial intelligence

company.

>> Wait, did you look this up on the way

back machine?

>> Yeah, I did. I did.

>> Oh, nice. "To build general-purpose learning algorithms for simulations, e-commerce, and games." This is 2014.

This does not compute, does not

register.

>> Simulations, e-commerce, and games. It's kind of a random smattering of

>> Exactly. It turns out though, not only

was that description of what DeepMind

was fairly accurate, this company and

this purchase of it by Google was the

butterfly flapping its wings equivalent

moment that directly leads to OpenAI,

ChatGPT, Anthropic, and basically

everything.

>> Certainly Gemini

>> that we know. Yeah, Gemini directly in

the world of AI today

>> and probably XAI given Elon's

involvement.

>> Yeah, of course XAI.

>> In a weird way, it sort of leads to

Tesla self-driving too. Karpathy.

>> Yeah, definitely. Okay, so what is the

story here? DeepMind was founded in

2010 by a neuroscience PhD named Demis

Hassabis who previously started a video

game company.

>> Oh yeah. And a postdoc named Shane Legg at University College London, and a third co-founder who was one of Demis' friends from growing up, Mustafa Suleyman. This was unlikely, to say the least.

>> This would go on to produce a knight and a Nobel Prize winner.

>> Yes. So Demis, the CEO, was a childhood chess prodigy turned video game developer. When he was aged 17, in 1994, he had gotten accepted to the University of Cambridge, but he was too young, and the university told him, "Hey, take a, you know, gap year, come back." He

decided that he was going to go work at

a video game developer at a video game

studio called Bullfrog Productions for

the year. And while he's there, he

created the game Theme Park, if you

remember that. It was like a theme park

version of Sim City. This was a big

game. This was very commercially

successful. Roller Coaster Tycoon would

be sort of a clone of this that would

have many, many sequels over the years.

>> Oh, I played a ton of that. Yeah, it

sells 15 million copies in the mid 90s.

Wow, wild. Then after this, he goes to

Cambridge, studies computer science

there. After Cambridge, he gets back

into gaming, founds another game studio

called Elixir that would ultimately

fail. And then he decides, you know

what, I'm going to go get my PhD in

neuroscience. And that is how Demis ends

up at University College London. There

he meets Shane Legg, who's there as a

postdoc. Shane is a self-described (at the

time) member of the lunatic fringe in the

AI community, in that he believes

(this is 2008, '09, '10) that AI is

going to get more and more and more

powerful every year and that it will

become so powerful that it will become

more intelligent than humans and Shane

is one of the people who actually

popularizes the term artificial general

intelligence, AGI.

>> Oh, interesting. Which of course lots of

people talk about now and approximately

zero people were afraid of that. I mean,

you had like the Nick Bostrom type

folks, but very few people were thinking

about super intelligence or the

singularity or anything like that. For

what it's worth, not Elon Musk. He's not

included in that list because Demis

would be the one who tells Elon about

this.

Yes, we'll get to it. So, Demis and

Shane hit it off. They pull in Mustafa,

Demis' childhood friend, who is himself

extremely intelligent. He had gone to

the University of Oxford and then

dropped out, I think, at age 19 to do

other startupy type stuff. So, the three

of them decided to start a company,

DeepMind. The name of course being a

reference to deep learning, Geoff

Hinton's work and everything coming out

of the University of Toronto. And the

goal that the three of these guys have

of actually creating an intelligent mind

with deep learning. Like Geoff and Ilya

and Alex aren't really thinking about

this yet. As we said, this is lunatic

fringe type stuff.

>> Yes, AlexNet, the cat paper, that whole

world is about better classifying data.

Can we better sort into patterns? It's a

giant leap from there to say, "Oh, we're

going to create intelligence."

>> Yes. I think probably some people

almost certainly at Google were

thinking, "Oh, we can create narrow

intelligence that'll be better than

humans at certain tasks."

>> I mean, a calculator is better than

humans at certain tasks,

>> right? But I don't think too many people

were thinking, "Oh, this is going to be

general intelligence, smarter than

humans,

>> right?"

>> So, they decide on the tagline for the

company is going to be solve

intelligence and use it to solve

everything else.

>> Ooh, I like it. I like it. Yeah. Yeah. I

mean, they're they're they're good

marketers, too, these guys.

>> So, there's just one problem to do what

they want to do.

>> Money. Just saying. Money is the

problem.

>> Right. Right. Right. Money is the

problem for lots of reasons. But even

more so than any other given startup in

the 2010 era, it's not like they can

just go spin up an AWS instance and like

build an app and deploy it to the app

store. They want to build really,

really, really, really, really big deep

learning neural networks, and that requires

Google-sized levels of compute. Well, it's

interesting. Actually, they don't

require that much funding yet. The AI of

the time was go grab a few GPUs. We're

not training giant LLMs. That's the

ambition eventually, but right now, what

they just need to do is raise a few

million bucks. But who's going to give

you a few million bucks when there's no

business plan? When you're just trying

to solve intelligence, you need to find

some lunatics.

>> It's a tough sell to VCs,

>> except for the exact right,

>> as you say, they need to find some

lunatics.

>> Oh, you chose your words carefully, didn't

you?

>> Yeah, we use the term lunatic in, uh...

>> It's endearing.

>> ...the most endearing possible way here, given

that they were all basically right. So,

in June 2010, Demis and Shane managed to

get invited to the Singularity Summit in

San Francisco, California,

>> cuz they're not raising money for this

in London.

>> Yeah, definitely not. I think they tried

for a couple months and learned that

that was not going to be a viable path.

>> Yes. This summit, the Singularity

Summit, organized by Ray Kurzweil,

uh, future Google employee, I think,

chief futurist, noted futurist,

>> Eliezer Yudkowsky,

and

Peter Thiel.

>> Yes. So, Demis and Shane are uh excited

about getting this invite. like this is

probably our one chance to get funded.

>> But we probably shouldn't just walk in

guns blazing and say, "Peter, can we

pitch you?"

>> Yeah. So, they finagle their way into

Demis getting to give a talk on stage at

the summit.

>> Always the hack.

>> They're like, "This is great. This is

going to be the hack. The talk is going

to be our pitch to Peter and Founders

Fund." Peter has just started Founders

Fund at this point. you know, obviously

member of the PayPal mafia, very

wealthy.

>> I think he had a big Roth IRA at this

point is the right way to frame it.

>> Big Roth IRA that he had invested in

Facebook, first investor in Facebook. He

is the perfect target. They architect

the presentation at the summit to be a

pitch directly to Peter essentially a

thinly veiled pitch. Shane has a quote

in Parmy Olson's great book Supremacy

that we used as a source for a lot of

this deep mind story. And Shane says,

"We needed someone crazy enough to fund

an AGI company. Somebody who had the

resources not to sweat a few million and

liked super ambitious stuff." They also

had to be massively contrarian because

every professor that he would go talk to

would certainly tell him absolutely do

not even think about funding this.

That Venn diagram sure sounds a lot like

Peter Thiel. So they show up at the

conference. Demis is going to give the

talk. Goes out on stage. He looks out

into the audience. Peter is not there.

Turns out Peter wasn't actually that

involved in the conference.

>> He's a busy guy. He's a co-founder or

co-organizer, but is a busy guy.

>> Yes. The guys are like, shoot. Oh, we missed

our chance.

What are we gonna do? And then fortune

turns in their favor. They find out that

Peter is hosting an afterparty that

night at his house in San Francisco.

They get into the party. Demis seeks out

Peter. And Demis is very, very, very

smart, as anybody who's ever listened to

him talk would immediately know. He's

like rather than just pitching Peter

headon. I'm going to come about this

obliquely. He starts talking to Peter

about chess because he knows as

everybody does that Peter Thiel loves

chess. And Demis had been the second

highest ranked player in the world as a

teenager in the under 14 category.

>> Good strategy.

>> Great strategy. The man knows his chess

moves. So Peter's like, "hm, I like you.

You seem smart. What do you do?" And

Demis explains, he's got this AGI

startup. They were actually here. He

gave a talk on stage as part of the

conference. People are excited about

this. And Peter says, "Okay, all right.

Come back to Founders Fund tomorrow and

give me the pitch." So they do. They

make the pitch. It goes well. Founders

Fund leads DeepMind's seed round of

about $2 million. My how times have

changed for AI company seed rounds these

days.

>> Oh yes.

>> Imagine leading DeepMind's seed round

with a less-than-$2-million check. And

through Peter and Founders Fund, they

get introduced,

>> hey Elon, you should meet this guy.

>> To another member of the PayPal mafia,

Elon Musk.

>> Yes.

So, it's teed up in a pretty low-key

way. Hey, Elon, you should meet this

guy. He's smart. He's thinking about

artificial intelligence. So, Elon says,

"Great. Come over to SpaceX. I'll give

you the tour of the place." So, Demis

comes over for lunch and a tour of the

factory. Of course, Demis thinks it's

very cool, but really, he's trying to

reorient the conversation over to

artificial intelligence. And I'll read

this great excerpt from an article in

the Guardian. Musk told Hassabis his

priority was getting to Mars as a backup

planet in case something went wrong

here. I don't think he'd thought much

about AI at this point. Hassabis pointed

out a flaw in his plan. I said, "What if

AI was the thing that went wrong here?

Then being on Mars wouldn't help you

because if we got there, then it would

obviously be easy for an AI to get there

through our communication systems or

whatever it was." He hadn't thought

about that. So he sat there for a minute

without saying anything, just sort of

thinking, hm, that's probably true.

Shortly after, Musk too became an

investor in DeepMind.

>> Yes.

>> Yes. Yes.

>> I think it's crazy that Demis is sort of

the one that woke Elon up to this idea

of we might not be safe from the AI on

Mars either.

>> Right. Right. I hadn't considered that.

So, uh, this is the first time the bit

flips for Elon of we really need to

figure out a safe, secure AI for the

good of the people. That sort of seed

being planted in his head.

>> Yep.

>> Which of course is what DeepMind's

ambition is. We are here doing research

for the good of humanity like scientists

in a peer-reviewed way.

>> Yep. I think all that is true. Also

in the intervening months to year after

this meeting between Demis and Elon and

Elon investing in DeepMind, Elon also

starts to get really really excited and

convinced about the capabilities of AI

in the near term and specifically the

capabilities of AI for Tesla.

>> Yes. Like with everything else in Elon's

world, once the bit flips and he becomes

interested, he completely changes the

way he views the world. Completely sheds

all the old ways and actions that he was

taking. And it's all about what do I

most do to embrace this new worldview

that I have?

>> And other people have been working on

for a while already by this point. AI

driving cars.

>> Yep.

>> That sounds like it would be a pretty

good idea for Tesla. does.

>> So Elon

starts trying to recruit as many AI

researchers as he possibly can and

machine vision and machine learning

experts into Tesla. And then AlexNet

happens, and man, AlexNet's really,

really, really good at identifying and

classifying images and cat videos on

YouTube and the YouTube recommender feed.

Well, is that really that different from

a live feed of video from a car that's

being driven and understanding what's

going on there?

>> Can we process it in real time and look

at differences between frames?

>> Perhaps controlling the car not all that

different. So Elon's excitement

channeled initially through DeepMind

and Demis about AI and AI for Tesla

starts ratcheting up big time.

>> Yep. Meanwhile, back in London, DeepMind

is getting to work. They're hiring

researchers. They're getting to work on

models. They're making some vague noises

about products to their investors. Maybe

we could do something in shopping. Maybe

something in gaming like the description

on the website at the time of

acquisition said. But mostly what they

really really want to do is just build

these models and work on intelligence.

And then one day in late 2013,

they get a call from Mark Zuckerberg. He

wants to buy the company. Mark has woken

up to everything that's going on at

Google after AlexNet and what AI is doing

for social media feed recommendations at

YouTube, the possibility of what it can

do at Facebook and for Instagram. He's

gone out and recruited Yann LeCun, Geoff

Hinton's old postdoc, who's, together with

Geoff, one of the sort of godfathers of AI

and deep learning

>> and really popularized the idea of

convolutional neural networks the next

hot thing in the field of AI at this

point in time

>> and so with Yann they have created FAIR,

Facebook AI Research, which is a Google

Brain rival within Facebook. And remember

who the first investor in Facebook was

who's still on the board

>> Peter and is also the lead investor in

DeepMind. Where do you think Mark

learned about DeepMind? Peter Thiel,

>> was it? Do you know for sure that it was

from Peter?

>> No, I don't know for sure, but like how

else could Mark have learned about this

startup in London?

>> I've got a great story of how Larry

Page found out about it.

>> Oh, okay. Well, we'll get to that in one

sec.

>> So, Mark calls and offers to buy the

company. And there are various rumors of

how much Mark offered, but according to

Parmy Olson in her book, Supremacy,

the reports are that it was up to $800

million. Company with no products and a

long way from AGI.

>> That squares with what Cade Metz has in

his book that the founders would have

made about twice as much money from

taking Facebook's offer versus taking

Google's offer.

>> Yep. So Demis of course takes this news

to the investor group which by the way

is kind of against everything the

company was founded on. The whole aim of

the company and what he's promised the

team is that DeepMind is going to stay

independent, do research, publish in the

scientific community. We're not going to

be sort of captured and told what to do

by the whims of a capitalist

institution.

>> Yep. So definitely some deal point

negotiating that has to happen with Mark

and Facebook if this offer is going to

come through.

>> But Mark is so desperate at this point.

He is open to these very large deal-point

negotiations, such as: Yann LeCun gets to

stay in New York. Yann LeCun gets to keep

operating his lab at NYU. Yann LeCun is a

professor. He's flexible on some things.

Turns out Mark is not flexible on

letting Demis keep control of DeepMind

if he buys it. Demis sort of argued for

we need to stay separate and carved out

and we need this independent oversight

board with the ability to intervene if

the mission of DeepMind is no longer

being followed. And Mark's like, "No,

you'll be a part of Facebook."

>> Yeah. And you'll make a lot of money.

So, as this negotiation is going on, of

course, the investors in DeepMind get

wind of this. Elon

finds out about what's going on. He

immediately calls up Demis and says, "I

will buy the company right now with

Tesla stock."

This is late 2013, like early 2014.

Tesla's market cap is about $20 billion.

So Tesla stock from then to today is

about a 70x runup.

Deis and Shane and Mustafa are like,

"Wow." Okay, there's a lot going on

right now. But to your point, they have

the same issues with Elon and Tesla that

they had with Mark. Elon wants them to

come in and work on autonomous driving

for Tesla. They don't want to work on

autonomous driving,

>> right? Or at least exclusively.

>> At least exclusively. Yep. So then

Demis gets a third call from Larry

Page.

>> Do you want my story of how Larry knows

about the company? I absolutely want

your story of how Larry knows about the

company.

>> All right, so this is still early in

DeepMind's life. We haven't progressed

all the way to this acquisition point

yet. Apparently, Elon Musk is on a

private jet with Luke Nosek, who's

another member of the PayPal mafia and

an angel investor in DeepMind, and

they're reading an email from Demis with

an update about a breakthrough that they

had where DeepMind AI figured out a

clever way to win at the Atari game

breakout.

>> Yes. And the strategy it figured out

with no human training was that you

could bounce the ball up around the

edges of the bricks and then without

needing to intervene, it could bounce

around along the top and win the game

faster without you needing to have a

whole bunch of interactions with the

paddle down at the bottom. They're

watching this video of how clever it is.

And flying with them on the same private

plane is Larry Page. Of course, because

Elon and Larry used to be very good

friends.

>> Yes. And Larry is like, "Wait, what are

you watching? What company is this?" And

that's how he finds out.

>> Wow.

>> Yes.

>> Elon must have been so angry about all

this.

>> And the crazy thing is this kinship

between Larry and Demis is I think the

reason why the deal gets done at Google.

Once the two of them get together, they

are like peas in a pod. Larry has always

viewed Google as an AI company.

>> Yeah.

>> Demis of course views DeepMind so much

as an AI company that he doesn't even

want to make any products until they can

get to AGI.

>> And Demis, in fact, we should share with

listeners. Demis told us this when we

were talking to him to prep for this

episode, just felt like Larry got it.

Larry was completely on board with the

mission of everything that DeepMind was

doing. And there's something else very

convenient about Google. They already

have Brain. So Larry doesn't need Demis

and Shane and Mustafa and DeepMind to

come work on products within Google.

>> Right?

>> Brain is already working on products

within Google. Demis can really believe

Larry when Larry says, "Nah, stay in

London. Keep working on intelligence. Do

what you're doing. I don't need you to

come work on products within Google."

Brain is like actively going and

engaging with the product groups trying

to figure out, hey, how can we deploy

neural nets into your product to make it

better? That's like their reason for

being. So, they're happy to agree to

this

>> and it's working. Brain and neural nets

are getting integrated into search, into

ads, into Gmail, into everything. It is

the perfect home for DeepMind. Home

away from home, shall we say?

>> Yes. And there's a third reason why

Google's the perfect fit for DeepMind:

infrastructure. Google has all the

compute infrastructure you could ever

want right there on tap.

>> Yes. At least with CPUs so far.

>> Yes.

>> So, how's the deal actually happen?

Well, after buying DNN research, Alan

Eustace, who David you spoke with,

right?

>> Yep.

>> Was Google's head of engineering at the

time, he makes up his mind that he

wanted to hire all the best deep

learning research talent that he

possibly could and he had a clear path

to do so. A few months earlier, Larry

Page held a strategy meeting on an

island in the South Pacific; in Cade

Metz's book, it's an undisclosed island.

>> Of course, he did.

>> Larry thought that deep learning was

going to completely change the whole

industry. And so, he tells his team,

this is a quote, "Let's really go big."

Which effectively gave Alan a blank

check to go secure all the best

researchers that he possibly could. So,

in 2013, he decides, I'm going to get on

a plane in December before the holidays

and go meet DeepMind. Crazy story about

this. Geoff Hinton, who's at Google at

the time, had a thing with his back

where he couldn't sit down. He either

has to stand or lay. And so a long

flight across the ocean is not doable.

But he needs to be there as a part of

the diligence process. You have Geoff

Hinton. You need to use him to figure

out if you're going to buy a deep

learning company. And so Alan Eustace

decides he's going to charter a private

jet. And he's going to build this crazy

custom harness rig so that

Geoff Hinton won't be sliding around when

he's laying on the floor during takeoff

and landing.

>> Wow. I was thinking the first part of

this, I'm pretty sure Google has planes.

They could just get him on a Google plane.

>> For whatever reason, this was a separate

charter.

>> But it's not solvable just with a

private plane. You need also a harness,

>> right? And Alan is the guy who set the

record for jumping out of the world's

highest... was it a balloon? I actually

don't know. The highest free-fall jump

that anyone has ever done, even higher

than that Red Bull stunt a few years

before. So, he's like very used to

designing these custom rigs for

airplanes. He's like, "Oh, no problem.

You just need a bed and some straps. I

jumped out of the atmosphere in a scuba

suit. I think we'll be fine."

>> That is amazing.

>> So, they fly to London. They do the

diligence. They make the deal. Demis has

true kinship with Larry and it's done.

$550 million US. There's an independent

oversight board that is set up to make

sure that the mission and goals of

DeepMind are actually being followed and

this is an asset that Google owns today

that again I think is worth half a

trillion dollars if it's independent.

>> Do you know what other member of the

PayPal mafia gets put on the ethics

board after the acquisition?

>> Reid Hoffman.

>> Reid Hoffman

>> Has to be, given the OpenAI tie later.

We are gonna come back to Reid in just a

little bit here.

>> Yes. So after the acquisition, it goes

very well very quickly. Famously the

data center cooling thing happens where

DeepMind carved off some part of the

team to go and be an emissary to Google

and look for ways to use DeepMind. And

one of them is around data center

cooling. Very quickly, July of 2016,

Google announces a 40% reduction in the

energy required to cool data centers. I

mean, Google's got a lot of data

centers, a 40% energy reduction. I

actually talked with Jim Gao, who's a

friend of the show and actually led a

big part of this project. And I mean, it

was just the most obvious application of

neural networks inside of Google right

away. Pays for itself.

>> Yeah. Imagine that paid for the

acquisition pretty quickly there.

>> Yes. David, should we talk about AlphaGo

on this episode?

>> Yeah. Yeah. Yeah.

>> I watched the whole documentary that

Google produced about it. It's awesome.

This is actually something that you

would enjoy watching. Even if you're not

researching a podcast episode and you're

just looking to pull something up and

spend an hour or two, I highly recommend

it. It's on YouTube. It's the story of

how DeepMind, post-acquisition by

Google, trained a model to beat the world

Go champion at Go. And I mean, everyone

in the whole Go community coming in

thought there's no chance. This guy Lee

Sedol is so good that there's no way

that an AI could possibly beat him. It's

a five-game match. It just won the first

three games straight. I mean, completely

cleaned up and with inventive new

creative moves that no human has played

before. That's sort of the big crazy

takeaway.

>> There's a moment in one of the games,

right, where it makes a move of people

like, is that a mistake? Like that must

have just been an error. Yeah. Move 37.

Yeah. Yeah. And then a 100 moves later

it plays out and

>> that it was like completely genius. And

humans are now learning from DeepMind's

strategy of playing the game and

discovering new strategies. A fun thing

for acquired listeners who are like, why

is it go? Go is so complicated compared

to chess. Chess has 20 moves that you

can make at the beginning of the game in

any given turn, and then midgame there's

like 30 to 40 moves that you could make.

Go on any given turn has about 200. And

so if you think combinatorially, the

number of possible configurations of the

board is more than the number of atoms

in the universe.

>> That's a great Demis quote by the way.

>> Yeah.

>> And so he says, even if you took all the

computers in the world and ran them for

a million years as of 2017, that

wouldn't be enough compute power to

calculate all the possible variations.

So it's cool because it's a problem that

you can't brute force. You have to do

something like neural networks. And

there is this white space to be creative

and explore. And so it served as this

amazing breeding ground for watching a

neural network be creative against a

human.
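As a side note for the curious, the arithmetic behind that claim can be sketched in a few lines of Python. The numbers here are the commonly cited approximations (branching factors, the 3^361 upper bound, the ~10^80 atom estimate), not exact figures from the episode:

```python
import math

# Back-of-the-envelope math for the Go-vs-chess complexity claim.
chess_branching_midgame = 35     # ~20 opening moves, ~30-40 in the midgame
go_branching = 200               # roughly 200 legal moves per Go turn

# Upper bound on Go board configurations: each of the 361 points
# on a 19x19 board is empty, black, or white.
go_positions_upper_bound = 3 ** 361

atoms_in_observable_universe = 10 ** 80  # common order-of-magnitude estimate

# The Go state space exceeds the atom count by ~90 orders of magnitude,
# which is why brute-force search is hopeless.
print(f"Go positions <= ~10^{int(math.log10(go_positions_upper_bound))}")
print(go_positions_upper_bound > atoms_in_observable_universe)  # True
```

Even this loose upper bound comes out around 10^172, which makes the "more atoms than the universe" line easy to verify.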

>> Yeah. And of course it's totally in with

Demis' background and the DNA of the

company of playing games. You know, Demis

was a chess champion. And then after Go,

then they play StarCraft, right?

>> Oh really? I actually didn't know that.

>> Yeah. That was the next game that they

tackle was StarCraft, a real-time

strategy game against an opponent. And

that'll um come back up in a sec with

another opponent here in OpenAI.

>> Yes, David. But before we talk about the

creation of the other opponent, should

we thank another one of our friends here

at Acquired?

>> Yes, we should.

>> All right, listeners, we are here today

to tell you about a new friend of the

show we are very excited about: WorkOS.

>> Yes. If you're building software that is

used in enterprises, you've probably

felt the pain of integrating things like

SSO, SCIM, permissions, audit

logs, and all the other features that

are required by big customers. And if

you haven't felt this pain yet, just

wait until you get your first big

enterprise customer. And trust us, you

will.

>> Yes, WorkOS turns these potential deal

blockers into simple drop-in APIs. And

while WorkOS had great product-market

fit a few years ago with developers who

just want to save on some headache, they

really have become essential in the AI

era.

>> Yeah, I was shocked when they sent over

their latest customer list. Almost all

the big AI companies use WorkOS today

as the way that they've been able to

rapidly scale revenue so fast. Companies

like OpenAI, Anthropic, Cursor,

Perplexity, Sierra, Replit, Vercel, and

hundreds of other AI startups all rely

on WorkOS as their auth solution. So I

called the founder to ask why, and he

said it's basically two things. One, in

the AI era, these companies scale so

much faster that they need things like

authorization, authentication, and SSO

quickly to become enterprise ready and

keep up with customer demand even early

in life, unlike older SaaS companies of

yesteryear. And two, unlike that world

where you could bring your own little

SaaS product just for you and your little

team, these AI products reach deep into

your company's systems and data to

become the most effective. So, IT

departments are scrutinizing heavier

than ever to make sure that new products

are compliant before they can adopt

them.

>> Yeah, it's this kind of like second

order effect of the AI era that the days

of, oh, just swipe a credit card, bring

your own SaaS solution for your product

team. You actually need to be enterprise

ready a lot sooner than you did before.

>> Yeah, it's not just about picking up

that big potential customer for the

revenue itself either. It's about doing

it so your competitors don't. Enterprise

readiness has become so table stakes for

companies no matter their stage. And

WorkOS is basically the weapon of

choice for the best software companies

to shortcut this process and get back to

focusing on what makes their beer taste

better, building the product itself.

>> Amen. Amen. So if you're ready to get

started with just a few lines of code

for SAML, SCIM, RBAC, SSO,

authorization authentication and

everything else to please IT admins and

their checklists, check out work OS.

It's the modern software platform to

make all this happen. That's workos.com

and just tell them that Ben and David

sent you.

>> All right, David. So what are the second

order effects of Google buying DeepMind?

Well, there's one person who is really,

really, really upset about this and

maybe two people if you include Mark

Zuckerberg, but Mark tends to play his

cards a little closer to the vest. Of

course, Elon Musk is very upset about

this acquisition. When Google buys

DeepMind out from under him, Elon goes

ballistic. As we said, Elon and Larry

had always been very close. And now

here's Google who Elon has already

started to sour on a little bit as he's

now trying to hire AI researchers. And

you've got Alan Eustace flying around

the world sucking up all of the AI

researchers into Google

and Elon, who's invested in DeepMind and wanted

to bring DeepMind into his own AI team

at Tesla, has had it bought out from under him.

So, this leads to one of the most

fateful dinners in Silicon Valley's

history, organized in the summer of 2015

at the Rosewood Hotel on Sand Hill Road.

Of course, where else would you do a

dinner in Silicon Valley, but the

Rosewood, by two of the most leading

figures in the valley at the time, Elon

Musk and Sam Altman. Sam of course being

president of Y Combinator at the time.

So what is the purpose of this dinner?

They are there to make a pitch to all of

the AI researchers that Google and to a

certain extent Facebook have sucked up

and basically created this duopoly

status on.

>> Again, Google's business model and

Facebook's business model: feed

recommenders or these classifiers turn

out to be unbelievably valuable. So they

can, it's funny in hindsight saying

this, pay tons of money to these people,

>> tons of money, like millions of dollars,

>> take them out of academia and put them

into their dirty capitalist research

labs inside the companies

>> selling advertising.

>> Yes.

>> How dirty could you be? And the question

and the pitch that Elon and Sam have for

these researchers gathered at this

dinner is what would it take to get you

out of Google for you to leave? And the

answer they go around the table from

almost everybody is nothing. You can't.

Why would we leave? We're getting paid

way more money than we ever imagined.

Many of us get to keep our academic

positions and affiliations

and we get to hang out here at Google

>> with each other

>> with each other.

>> Iron sharpens iron. These are some of

the best minds in the world getting to

do cutting edge research with enormous

amount of resources and hardware at

their disposal. It's amazing.

>> It's the best infrastructure in the

world. We've got Jeff Dean here. There

is nothing you could tell us that would

cause us to leave Google.

Except there's one person who is

intrigued. And to quote from an amazing

Wired article at the time by Cade Metz,

who would later write Genius Makers,

right?

>> Yep. Exactly. The quote is: the trouble was

so many of the people most qualified to

solve these problems were already

working for Google. And no one at the

dinner was quite sure that these

thinkers could be lured into a new

startup even if Musk and Altman were

behind it. But one key player was at

least open to the idea of jumping ship.

And then there's a quote from that key

player. I felt like there were risks

involved, but I also felt like it would

be a very interesting thing to try.

>> It's the most Ilya quote of all time.

The most Ilya quote of all time, because

that person was Ilya Sutskever, of

course of AlexNet and DNN research and

Google and about to become founding

chief scientist of OpenAI. So the pitch

that Elon and Sam are making to these

researchers is let's start a new

nonprofit AI research lab where we can

do all this work out in the open. You

can publish free of the forces of

Facebook and Google and independent of

their control.

>> Yes, you don't have to work on products.

You can only work on research. You can

publish your work. It will be open. It

will be for the good of humanity. All of

these incredible advances, this

intelligence that we believe is to come

will be for the good of everyone, not

just for Google and Facebook.

>> And for one of the researchers, it

seemed too good to be true. So, they

basically weren't doing it cuz they

didn't think anyone else would do it.

It's sort of an activation energy

problem where once Ilya said, "Okay, I'm

in." And once he said, "I'm in, by the

way," Google came back with a big

counter, something like double the

offer. And I think it was delivered from

Jeff Dean personally, and Ilia said,

"Nope, I'm doing this." That was massive

for getting the rest of the top

researchers to go with him.

>> And it was nowhere near all of the top

researchers who left Google to do this,

but it was enough. It was a group of

seven or so researchers who left Google

and joined Elon and Sam and Greg

Brockman from Stripe, who came over to

create OpenAI, because that was the

pitch. We're all going to do this in the

open.

>> And that's totally what it was.

>> It totally is what it was. And the

stated mission of OpenAI was to quote

advance digital intelligence in the way

that is most likely to benefit humanity

as a whole unconstrained by a need to

generate financial return which is fine

as long as the thing that you need to

fulfill your mission doesn't take tens

of billions of dollars.

>> Yes.

>> So here's how they would fund it.

Originally there was a billion dollars

pledged.

>> Yes.

>> And that came from, famously, Elon Musk,

Sam Altman, Reid Hoffman, Jessica

Livingston, who I think most people

don't realize was part of that initial

tranche, and Peter Thiel.

>> Yep.

>> Founders Fund of course would go on to

put massive amounts of money into OpenAI

itself later as well. The funny thing is

it was later reported that a billion

dollars was not actually collected. Only

about $130 million of it was actually

collected to fund this nonprofit. And

for the first few years that was plenty

for the type of research they were

doing, the type of compute they needed.

>> Most of that money was going to paying

salaries to the researchers. Not as much

as they could make at Google and

Facebook, but still $1 million or $2 million

for these folks,

>> right? And Yeah. So that really worked

until it really didn't.

>> Yeah. So David, what were they doing in

the early days?

>> Well, in the first days, it was all

hands-on deck recruiting and hiring

researchers. And there was the initial

crew that came over and then pretty

quickly after that in early 2016, they

get a big, big win when Dario Amodei

leaves Google, comes over, joins Ilya

and crew at OpenAI

dream team, you know, assembling here.

And was he on Google Brain before this?

>> He was on Google Brain. Yep. And he

along with Ilia would run large parts of

OpenAI for the next couple years before

of course leaving to start Anthropic.

But we're still a couple years away from

Anthropic, Claude, ChatGPT, Gemini,

everything today. For at least the first

year or two, basically, the plan at

OpenAI is let's look at what's happening

at DeepMind and show the research

community that we, as a new lab, can do

the same incredible things that they're

doing, and maybe even do them better.

>> Is that why it looks so game like and

game focused?

>> Yes. Yes. So, they started building

models to play games. Famously, the big

one that they do is Dota 2, Defense of

the Ancients 2, the massively multiplayer

online battle arena video game. They're like,

"All right, well, DeepMind, you're

playing StarCraft. Well, we'll go play

Dota 2. That's even more complex, more

real time."

>> And similar to the emergent properties

of AlphaGo's play, the model would devise unique

strategies that you wouldn't see humans

trying. So, it clearly wasn't that humans

coded their favorite strategies and

rules in; it was emergent.

>> Yeah,

>> they did other things. They had a

product called Universe which was around

training computers to play thousands of

games from Atari games to open world

games like Grand Theft Auto. They had

something where they were teaching a

model how to do a Rubik's cube. And so

it was a diverse set of projects that

didn't seem to coalesce around "one of

these is going to be the big thing."

>> Yeah. It was research stuff. It was what

DeepMind was doing.

>> Yeah. It was like university research.

It was like DeepMind. And if you think

back to Elon being an investor in

DeepMind, being really upset about Google

acquiring it out from under him makes

sense.

>> And I think Elon deserves a lot of

credit for having his name and his time

attached to OpenAI at the beginning. A

lot of the big heavy hitter recruiting

was Elon throwing his weight behind

this: I'm willing to take a chance.

>> Absolutely.

>> Okay. So that's what's going on over at

OpenAI: doing a lot of DeepMind-like

stuff. Bunch of projects, not one single

obvious big thing they're coalescing

around. It's not ChatGPT time. Let's

put it that way. Let's go back to Google,

cuz last we sort of checked in on them.

Yeah, they bought DeepMind, but they

had their talent raided. And I don't want

you to get the wrong impression about

where Google is sitting just because

some people left to go to OpenAI. So

back in 2013, when Alex Krizhevsky arrives

at Google with Geoff Hinton and

Ilya Sutskever,

he was shocked to discover that all

their existing machine learning models

were running on CPUs. People had asked

in the past for GPUs since machine

learning workloads were well suited to

run in parallel, but Google's

infrastructure team had pushed back on

the added complexity of expanding

and diversifying the fleet: let's keep

things simple; that doesn't seem

important for us.

>> We're a CPU shop here.

>> Yes. And so to quote from Genius Makers,

in his first days at the company, he

went out and bought a GPU machine, this

is Alex, from a local electronics store,

stuck it in the closet down the hall

from his desk, plugged it into the

network, and started training his neural

networks on this lone piece of hardware

just like he did in academia, except

this time Google's paying for the

electricity. Obviously, one GPU was not

sufficient, especially as more Googlers

wanted to start using it, too. And Jeff

Dean and Alan Eustace had also come to

the conclusion that DistBelief, while

amazing, had to be rearchitected to run

on GPUs and not CPUs. So spring of 2014

rolls around. Jeff Dean and John Giannandrea

>> who we haven't talked about this

episode.

>> Yeah, JG.

>> Yes, you might be wondering, wait, isn't

that the Apple guy? Yes, he went on to

be Apple's head of AI who at this point

in time was at Google and oversaw Google

Brain. In 2014, they sit down to make a plan

for how to actually formally put GPUs

into the fleet of Google's data centers,

which is a big deal. It's a big change,

but they're seeing enough reactions to

neural networks that they know to do

this.

>> Yeah. After Alex, it's just a matter of

time.

>> Yeah. So, they settle on a plan to order

40,000 GPUs

from Nvidia.

>> Yeah, of course. Who else are you going

to order them from?

>> For a cost of $130 million.

That's a big enough price tag that the

request gets elevated to Larry Page who

personally approves it even though

finance wanted to kill it because he

goes look the future of Google is deep

learning. As an aside, let's look at

Nvidia at the time. This is a giant

giant order. Their total revenue was $4

billion. This is one order for 130

million.

>> I mean, Nvidia is primarily a consumer

graphics card company at this point.

>> Yes. and their market cap is $10

billion.

It's almost like Google gave Nvidia a

secret that, hey, not only does this work

in research, like the ImageNet

competition, but neural networks are

valuable enough to us as a business to

make a hundred plus million dollar

investment in right now, no questions

asked. We got to ask Jensen about this

at some point. This had to be a tell.

>> This had to really give Nvidia the

confidence: oh, we should forward-invest

in a big way on this being a giant thing in

the future. So, all of Google wakes up

to this idea. They start really putting

it into their products. Google Photos

happened. Gmail starts offering typing

suggestions. David, as you pointed out

earlier, Google's giant Adwords business

started finding more ways to make more

money with deep learning. In particular,

when they integrated it, they could

start predicting what ads people would

click in the future. And so Google

started spending hundreds of millions

more on GPUs on top of that 130 million,

but very quickly paying it back from

their ad system. So it became more and

more of a no-brainer to just buy as many

GPUs as they possibly could. But once

neural nets started to work, anyone

using them, especially at Google scale,

kind of had this problem. Well, now we

need to do giant amounts of matrix

multiplications anytime anybody wants to

use one. The matrix multiplications are

effectively how you do that propagation

through the layers of the neural

network. So you sort of have this

problem.
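
To make that concrete, here is a minimal toy sketch in Python of why running a neural network is just repeated matrix multiplication. All the weights and inputs are made-up illustrative numbers, not anything from Google's systems.

```python
# Toy forward pass: each layer is a matrix multiply plus a cheap nonlinearity.
# Weights and inputs are made-up illustrative values.

def matmul(A, B):
    """Multiply an m x n matrix by an n x p matrix (lists of lists)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def relu(M):
    """Elementwise max(0, x), a common nonlinearity between layers."""
    return [[max(0.0, x) for x in row] for row in M]

# One input vector (1 x 3) flowing through two layers of weights.
x = [[1.0, 2.0, 3.0]]
W1 = [[0.1, 0.2],
      [0.3, 0.4],
      [0.5, 0.6]]          # 3 x 2 weight matrix
W2 = [[1.0],
      [-1.0]]              # 2 x 1 weight matrix

h = relu(matmul(x, W1))    # hidden layer: one matrix multiply
y = matmul(h, W2)          # output layer: another matrix multiply
```

A production model does this with millions or billions of weights per layer, which is why hardware specialized for matrix multiplication matters so much.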

>> Yes, totally. There's the inefficiency

of it, but then there's also the

business problem of wait a minute, it

looks like we're just going to be

shipping hundreds of millions, soon to

be billions of dollars over to Nvidia

every year for the foreseeable future.

>> Right? So there's this amazing moment

right after Google rolls out speech

recognition, their latest use case for

neural nets just on Nexus phones because

again they don't have the infrastructure

to support it on all Android phones. It

becomes a super popular feature, and Jeff

Dean does the math and figures out if

people use this for I don't know call it

three minutes a day and we roll it out

to all billion Android phones we're

going to need twice the number of data

centers that we currently have across

all of Google just to handle it

>> just for this feature yeah

>> there's a great quote where Jeff goes to

Urs Hölzle and goes, "We need another Google,"

or David, as you were hinting at, the

other option is we build a new type of

chip customized for just our particular

use case.

>> Yep. Matrix multiplication, tensor

multiplication, a tensor processing

unit, you might say.

>> Ah, yes. Wouldn't that be nice? So,

conveniently, Jonathan Ross, who's an

engineer at Google, has been spending

his 20% time at this point in history

working on an effort involving FPGAs.

These are essentially expensive but

programmable chips that yield really

fantastic results. So they decide to

create a formal project to take that

work combine it with some other existing

work and build a custom ASIC or an

application specific integrated circuit.

So enter, David, as you said, the tensor

processing unit, made just for neural

networks, that is far more efficient than

GPUs at the time, with the trade-off that

you can't really use it for anything

else. It's not good for graphics

processing. It's not good for lots of

other GPU workloads, just matrix

multiplication and just neural networks,

but it would enable Google to scale

their data centers without having to

double their entire footprint. So the

big idea behind the TPU, if you're

trying to figure out like what was the

core insight, they use reduced

computational precision. So it would

take numbers like 4586.8272

and round it just to 4586.8

or maybe even just 4586 with nothing

after the decimal point. And this sounds

kind of counterintuitive at first. Why

would you want less precise rounded

numbers for this complicated math? The

answer is efficiency. If you can do the

heavy lifting in your software

architecture, what's called

quantization, to account for it, then you

can store information as less precise

numbers, and you can use the same

amount of power and the same amount of

memory and the same amount of

transistors on a chip to do far more

calculations per second. So you can

either spit out answers faster or use

bigger models. The whole thing behind

the TPU is quite clever.
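
Here is a toy sketch of that core insight in Python: generic linear quantization of floats down to 8-bit integers. This is only an illustration of the concept, not the TPU's actual quantization scheme.

```python
# Toy linear quantization: map floats onto 8-bit signed integers.
# Illustrative only -- not the TPU's real scheme.

def quantize(values, num_bits=8):
    """Scale floats into the signed-integer range and round them."""
    qmax = 2 ** (num_bits - 1) - 1           # 127 for 8 bits
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Recover approximate floats from the stored integers."""
    return [q * scale for q in quantized]

weights = [4586.8272, -1023.5, 0.25, 3000.0]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# Each value now fits in 8 bits instead of 32: 4x less memory, and
# integer multiply-accumulate units are far cheaper in silicon.
```

The recovered values come back slightly off (that is the lost precision), but for neural networks the error mostly washes out, which is the bet being made here.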

>> The other thing that has to happen with

the TPU is it needs to happen now, cuz

it's very clear speech-to-text is a

thing, and it's very clear some of these

other use cases at Google are, too.

>> Yeah. Demand for all of this stuff

that's coming out of Google Brain is

through the roof immediately.

>> Right. And we're not even to LLMs yet.

It's just like everyone sort of expects

some of this whether it's computer

vision in photos or speech recognition

like it's just becoming a thing that we

expect and it's going to flip Google's

economics upside down if they don't have

it. So the TPU was designed, verified,

built, and deployed into data centers in

15 months.

>> Wow.

>> It was not like a research project that

could just happen over several years.

This was like a hair on fire problem

that they launched immediately. One very

clever thing that they did: they

used the FPGAs as a stopgap. So even

though they were, like, too expensive on a

unit basis, they could get them out as a

test fleet and just make sure all the

math worked before they actually had the

ASICs printed at, I don't know if it was

TSMC, but, you know, fabbed and ready.

The other thing they did is they fit the

TPU into the form factor of a hard

drive, so it could actually slot into

the existing server racks. You just pop

out a hard drive and you pop in a TPU

without needing to do any physical

rearchitecture.

>> Wow, that's amazing. That's the most

googly infrastructure story

>> since the corkboards.

>> Exactly. Also, all of this didn't happen

in Mountain View. It was at a Google

satellite office in Madison, Wisconsin.

>> Whoa.

>> Yes.

>> Why Madison, Wisconsin?

>> There was a particular professor out of

the university, and there were a lot of

students that they could recruit from,

and

>> Wow.

>> Yeah. I mean, it was probably them or

Epic. Where are you going to go work?

>> Yeah.

>> Wow. They also then just kept this a

secret,

>> right? Why would you tell anybody about

this?

>> Because it's not like they're offering

these in Google Cloud, at least at

first, and why would you want to tell

the rest of the world what you're doing?

So, the whole thing was a complete

secret for at least a year before they

announced it at Google IO. So, really

crazy. The other thing to know about the

TPUs is they were done in time for the

AlphaGo match. So, that match ran on a

single machine with four TPUs in Google

Cloud. And once that worked, obviously

that gave Google a little bit of extra

confidence to really, really rip into

production. So that's the TPU. V1 by all

accounts was not great. They're on V7 or

V8 now. It's gotten much better. TPUs

and GPUs look a lot more similar than

they used to; they've sort of

adopted features from each other. But

today, Google, it's estimated, has 2 to

3 million TPUs. For reference, Nvidia

shipped, people don't know for sure,

somewhere around 4 million GPUs last

year. So people talk about AI chips like

it's just a one-horse race with

Nvidia. Google has an almost Nvidia-scale

internal operation making their own

chips at this point, for their own use and

for Google Cloud customers. The TPU is a

giant deal in AI in a way that I think a

lot of people don't realize.

>> Yep. This is one of the great ironies

and maddening things to OpenAI and Elon

Musk is that OpenAI gets founded in 2015

with the goal of, hey, let's shake all

this talent out of Google and level the

playing field and Google just

accelerates,

>> right? They also build TensorFlow.

That's the framework that Google Brain

built to enable researchers to build and

train and deploy machine learning

models. And they built it in such a way

that it doesn't just have to run on

TPUs. It's super portable, without any

rewrites, to run on GPUs or even CPUs

too. So this would replace the old

DistBelief system and kind of be their

internal and external framework for

enabling ML researchers going forward.

So somewhat paradoxically during these

years after the founding of OpenAI,

yes, some amazing researchers are

getting siphoned off from Google and

Google Brain, but Google Brain is also

firing on all cylinders during this time

frame,

>> delivering on the business purposes for

Google left and right.

>> Yes. And pushing the state-of-the-art

forward in so many areas. And then in

2017, a paper gets published from eight

researchers on the Google Brain team

kind of quietly. These eight folks were

obviously very excited about the paper

and what it described and the

implications of it and they thought it

would be very big. Google itself was, uh,

cool, this is like the next iteration of

our language model work. Great.

>> Which is important to us. But are we

sure this is the next Google? No.

>> No. There are a whole bunch of other

things we're working on that seem more

likely to be the next Google. But this

paper and its publication would actually

be what gave OpenAI the opportunity

>> to build the next Google

>> to grab the ball and run with it and

build the next Google because this is

the transformer paper.

>> Okay. So where did the transformer come

from? like what was the latest thing

that language models had been doing at

Google? So coming out of the success of

Franz Och's work on Google Translate and

the improvements that happened there

>> in like the late 2000sish 2007

>> yeah, mid to late 2000s. They keep

iterating on Translate, and then once

Geoff Hinton comes on board and AlexNet

happens, they switch over to a neural

network-based language model for

Translate, which was dramatically better,

and like a big crazy cultural thing

because you've got these researchers

parachuting in again led by Jeff Dean

saying I'm pretty sure our neural

networks can do this way better than the

classic methods that we've been using

for the last 10 years. What if we take

the next several months and do a proof

of concept? They end up throwing away

the entire old codebase and just

completely wholesale switching to this

neural network. There's actually this

great New York Times magazine story that

ran in 2016 about it. And I remember

reading the whole thing with my jaw on

the floor. Like, wow. Neural networks

are a big effing deal. And this was the

year before the Transformer paper would

come out.

>> Before the Transformer paper. Yes. So,

they do the rewrite of Google Translate,

make it based on recurrent neural

networks, which were state-of-the-art at

that point in time. And it's a big

improvement. But as teams within Google

Brain and Google Translate keep working

on it, there's some limitations. And in

particular, a big problem was that they

quote unquote forgot things too quickly.

I don't know if it's exactly the right

analogy, but in sort of

like today's transformer-world speak,

you might say that their context window

was pretty short. As these language

models progressed through text, they

needed to sort of remember everything

they had read so that when they need to

change a word later or come up with the

next word, they could have a whole

memory of the body of text to do that.

>> So, one of the ways that Google tries to

improve this is to use something called

long short-term memory networks or LSTMs

as the acronym that people use for this.

And basically what LSTMs do is they

create a persistent, or "long,"

short-term memory

(you've got to use your brain a little

bit here) for the model, so that it can keep

context as it's going through a whole

bunch of steps.
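
A heavily simplified sketch of a single LSTM step, in Python with scalar (one-dimensional) states and made-up weights, just to show the gating idea being described:

```python
# One LSTM timestep, scalar version (illustrative only).
# Gates decide what to forget, what to write, and what to output,
# which is what lets the network carry context across many steps.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    f = sigmoid(w["f"] * x + w["uf"] * h_prev)          # forget gate
    i = sigmoid(w["i"] * x + w["ui"] * h_prev)          # input gate
    o = sigmoid(w["o"] * x + w["uo"] * h_prev)          # output gate
    c_tilde = math.tanh(w["c"] * x + w["uc"] * h_prev)  # candidate memory
    c = f * c_prev + i * c_tilde    # cell state: the long-lived memory
    h = o * math.tanh(c)            # hidden state: the short-term output
    return h, c

w = {k: 0.5 for k in ["f", "uf", "i", "ui", "o", "uo", "c", "uc"]}
h, c = 0.0, 0.0
for x in [1.0, 2.0, 3.0]:          # each step depends on the previous one,
    h, c = lstm_step(x, h, c, w)   # which is why LSTMs parallelize poorly
```

Note the sequential loop: step t cannot start until step t-1 finishes, so you cannot throw a fleet of parallel hardware at it.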

>> And people were pretty excited about

LSTMs at first.

>> People are thinking like, oh, LSTMs are

what are going to take language models

and large language models mainstream,

>> right? And indeed, in 2016, they

incorporated these LSTMs into Google

Translate. It reduced the error rate by 60%.

Huge jump. Yep.

>> The problem with LSTMs, though: they were

effective, but they were very

computationally intensive, and they

didn't parallelize that well. All the

efforts coming out of AlexNet

and then the TPU project were about

parallelization. That's the future;

that's how we're going to make AI

really work. LSTMs are a bit of a

roadblock here. Yes. So, a team within

Google Brain starts searching for a

better architecture that also has the

attractive properties of LSTMs that it

doesn't forget context too quickly, but

can parallelize and scale better

>> to take advantage of all these new

architectures.

>> Yes. And a researcher named Jakob

Uszkoreit had been toying around with the

idea of broadening the scope of quote

unquote attention in language

processing. What if, rather than focusing

on the immediate words,

you told the model: hey, pay attention

to the entire corpus of text, not just

the next few words? Look at the whole

thing. And then based on that entire

context and giving your attention to the

entire context, give me a prediction of

what the next translated word should be.
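
That "look at everything" idea can be sketched as minimal scaled dot-product attention. This toy Python version uses made-up two-dimensional token vectors and skips the learned projections and multi-head machinery of the actual paper:

```python
# Toy scaled dot-product attention over a whole sequence at once.
# Every position scores every other position, so nothing gets "forgotten,"
# and each query's scores are independent, so it all parallelizes.
import math

def softmax(xs):
    m = max(xs)                    # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """queries/keys/values: lists of equal-length vectors, one per token."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Score this query against EVERY key: the entire context at once.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # The output is a weighted blend of ALL the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three made-up 2-dimensional token vectors used as Q, K, and V.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = attention(tokens, tokens, tokens)
```

Because every query-key score can be computed at the same time, the whole thing reduces to big matrix multiplications, which is exactly what GPUs and TPUs are good at.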

Now, by the way, this is actually how

professional human translators translate

text. You don't just go word by word. I

actually took a translation class in

college, which was really fun. You read

the whole thing of the original in the

original language, you get and

understand the context of what the

original work is and then you go back

and you start to translate it with the

entire context of the passage in mind.

>> So it would take a lot of computing

power for the model to do this but it is

extremely parallelizable. So Jakob

starts collaborating with a few other

people on the brain team. They get

excited about this. They decide that

they're going to call this new technique

the transformer, because, one, that is

literally what it's doing. It's taking

in a whole chunk of information,

processing it, understanding it, and then

transforming it. And, two, they also loved

Transformers as kids. That's not not why

they named it the transformer.

>> And it's taking in the giant corpus of

text and storing it in a compressed

format. Right.

>> Yeah. I bring this up because that is

exactly how you pitched the microkitchen

conversation with Noam Shazeer in 2000,

2001, 17 years earlier, and he is a

co-author on this paper.

>> Yes. Well, so speaking of Noam Shazeer,

he learns about this project and he

decides, hey, I've got some experience

with this. This sounds pretty cool.

LSTMs definitely have problems. This

could be promising. I'm going to jump in

and work on it with these guys.

And it's a good thing he did, because

before Noam joined the project, they

had a working implementation of the

transformer, but it wasn't actually

producing any better results than LSTMs.

Noam joins the team, basically pulls a

Jeff Dean, rewrites the entire codebase

from scratch, and when he's done, the

transformer now crushes

the LSTM-based

Google Translate solution. And it turns

out that the bigger they make the model,

the better the results get. It seems to

scale really, really, really well.

Steven Levy wrote a piece in Wired about

the history of this. And there are all

sorts of quotes from the other members

of the team just littered all over this

piece, with things like: Noam is a

magician. Noam is a wizard. Noam took

the idea and came back and said, "It

works now."

Yeah. And you wonder why Noam and Jeff

Dean are the ones together working on

the next version of Gemini now.

>> Yes. Noam and Jeff Dean are definitely

two peas in a pod here.

>> Yes. So we talked to Greg Corrado from

Google Brain, one of the founders of

Google Brain, and it was a really

interesting conversation because he

underscored how elegant the transformer

was. And he said it was so elegant that

people's response was often, "This can't

work. It's too simple. Transformers are

barely a neural network architecture."

>> right? It was another big change from

the AlexNet, Geoff Hinton lineage of

neural networks.

>> Yeah, it actually has changed the way

that I look at the world cuz he pointed

out that in nature, this is Greg, the

way things usually work is the most

energy-efficient way they could work,

almost from an evolutionary perspective:

the most simple, elegant solutions

are the ones that survive because they

are the most efficient with their

resources. And you can kind of port this

idea over to computer science, too. He

said he's developed a pattern

recognition inside of the research lab,

realizing that you're probably on to

the right solution when it's really

simple and really efficient versus a

complex idea.

>> Mhm.

>> It's very clever. It's I think it's very

true. You know how when you sit around

and you have a thorny problem and you

debate and you whiteboard and you come

up with all these ideas, and then you're like, "Oh my

god, oh my god, it's so simple." And

that ends up being the right answer.

>> Yeah. There's an elegance to the

transformer.

>> Yes. And that other thing that you

touched on there, this is the beginning

of the modern AI, just feed it more

data. The famous piece, the bitter

lesson by Rich Sutton, wouldn't be

published until 2019. For anyone who

hasn't read it: it's basically, we always

think, as AI researchers, that we're so

smart and our job is to come up with

another great algorithm, but effectively,

in every field from language to computer

vision to chess, you just figure out a

scalable architecture, and then more

data wins. Just these infinitely

scaling,

>> more data, more compute, better results.

>> Yes. And this is really the start of

that realization: oh, we have

found the scalable architecture, and it

will carry us for, I don't know, close

to a decade of just more data in, more

energy, more compute, better results.

>> So the team and Noam are like, yo, this

thing has a lot, a lot of potential.

>> This is more than better Translate. We

can really apply this.

>> Yeah, this is going to be more than

better Google Translate. The rest of

Google, though, was definitely slower to

wake up to the potential.

>> They build some stuff within a year.

They build BERT, the large language

model.

>> Yes, absolutely true. It is a false

narrative out there that Google did

nothing with the transformer after the

paper was published. They actually did a

lot.

>> In fact, BERT was one of the first LLMs.

>> Yes, they did a lot with

transformer-based large language models

after the paper came out. What they

didn't do was treat it as a wholesale

technology platform change,

>> right? They were doing things like BERT

and MUM, this other model, you know,

they could work it into search results

quality. And I think that did

meaningfully move the needle even though

Google wasn't bragging about it and

talking about it. They got better at

query comprehension. They were working

it into the core business just like

every other time Google Brain came up

with something great.

>> Yep. So, in perhaps one of the greatest

decisions ever for value to humanity and

maybe one of the worst corporate

decisions ever for Google, Google allows

this group of eight researchers to

publish the paper under the title

"Attention Is All You Need." Obviously, a

nod to the classic Beatles song "All You

Need Is Love." As of today, in 2025,

this paper has been cited over

173,000 times in other academic papers,

making it currently the seventh most

cited paper of the 21st century. And I

think all of the other papers above it

on the list have been out much longer.

Wow. And also of course within a couple

years all eight authors of the

transformer paper had left Google to

either start or join AI startups,

including OpenAI. Brutal. And of

course, Noam starting Character.AI, which,

what are we calling it? An acquisition? He

would end up back at Google via some

strange licensing and IP and hiring

agreement on the order of a few billion

dollars. Very, very expensive mistake on

Google's part. It

>> is fair to say that 2017 begins the

5-year period of Google not sufficiently

seizing the opportunity that they had

created

>> with the transformer. Yes. So speaking

of seizing opportunities, what is going

on at OpenAI during this time?

>> And does anyone think the transformer is

a big deal over there?

>> Yes. Yes, they did. But here's where

history gets really, really crazy. Right

after Google publishes the Transformer

paper in September of 2017,

Elon gets really, really fed up with

what's going on at OpenAI.

>> There's like seven different strategies,

are we doing video games? Are we doing

competitions? What's the plan?

>> What is happening here? As best as I can

tell, all you're doing is just trying to

copy DeepMind. Meanwhile, I'm here

building SpaceX and Tesla. Self-driving

is becoming more and more clear as

critical to the future of Tesla. I need

AI researchers here, and I need great AI

advancements to come out to help what

we're doing at Tesla. OpenAI isn't

cutting it. So, he makes an ultimatum to

Sam and the rest of the OpenAI board. He

says, "I'm happy to take full control of

OpenAI and we can merge this into Tesla."

I don't even know how that would be

possible, to merge a nonprofit into

Tesla.

>> But in Elon Land, if he takes over as

CEO of OpenAI, it almost doesn't

matter. We're just treating it as if

it's the same company anyway, just like

we do with the deals with all of my

companies,

>> right? or he's out completely along with

all of his funding. And Sam and the rest

of the board are like, "No."

>> And as we know now, they're sort of

calling capital into the business. It's

not like they actually got all the cash

up front,

right? So they're only $130 million-ish

into the billion dollars of commitment.

They don't reach a resolution and by

early 2018, Elon is out along with him

the main source of OpenAI's funding. So

either this is just a really really

really bad misjudgment by Elon

or the sort of panic that this throws

OpenAI into is the catalyst that makes

them reach for the transformer and say,

"All right, we got to figure things out.

Necessity is the mother of invention.

Let's go for it."

>> It's true. I don't know if during this

personal tension between Elon and Sam if

they had already decided to go all in on

Transformers or not because the thing

you very quickly get to if you decide

transformers language models were going

all in on that. You do quickly realize

you need a bunch of data, you need a

bunch of compute, you need a bunch of

energy, and you need a bunch of capital.

And so if your biggest backer is walking

away, the 3D chess move is, "Oh, we got

to keep him because we're about to pivot

the company and we need his capital for

this big pivot we're doing." The 4D

chess is if he walks away, maybe I can

turn it into a for-profit company and

then raise money into it and eventually

generate enough profits to fund this

extremely expensive new direction we're

going in. I don't know which of those it

was.

>> Yeah, I don't know either. I suspect the

truth is it's sum of both.

>> Yes. But either way, how nuts is it that,

(a), these things happened at the same time,

and, (b), the company wasn't burning that

much cash, and then they decided to go

all-in on: we need to do something so

expensive that we need to be a

for-profit company in order to actually

achieve this mission, cuz it's just going

to require hundreds of billions of

dollars for the foreseeable future.

>> Yep. So in June of 2018, OpenAI releases

a paper describing how they have taken

the transformer and developed a new

approach of pre-training them on very

large amounts of general text on the

internet and then fine-tuning that

general pre-training to specific use

cases. And they also announced that they

have trained and run the first

proof-of-concept model of this approach,

which they are calling GPT-1:

generative pre-trained transformer,

version one,

>> which we should say is right around the

same time as BERT and right around the

same time as another large language

model based on the transformer out of

here in Seattle, the Allen Institute.

>> Yes indeed. So it's not as if this is

heretical and a secret. Other AI labs,

including Google's own, are doing it. But

from the very beginning, OpenAI seemed to

be taking this more seriously, given that

the cost of it would require betting the

company if they continued down this

path.

>> Yeah. Or betting the nonprofit, betting

the entity.

>> Yes.

>> We're going to need some new terminology

here.

>> Yes.

>> So Elon's just walked out the door. Where are they going to get the money for this? Sam turns to one of the other board members of OpenAI, Reid Hoffman. Reid just a year or so earlier had sold LinkedIn to Microsoft, and Reid is now on the board of Microsoft. So Reid says, "Hey, why don't you come talk to Satya about this?"

>> Do you know where he actually talks to Satya?
>> Oh, I do. Oh, I do. In July of 2018, they set a meeting for Sam Altman and Satya Nadella to sit down while they're both at the Allen & Company Sun Valley Conference in Sun Valley, Idaho.

>> It's perfect.

>> And while they're there, they hash out a deal for Microsoft to invest $1 billion into OpenAI in a combination of both cash and Azure cloud credits. And in return, Microsoft will get an exclusive license to OpenAI's technology for use in Microsoft's products. And the way that they will do this is OpenAI the nonprofit will create a captive for-profit entity called OpenAI LP, controlled by the nonprofit OpenAI Inc., and Microsoft will invest into the captive for-profit entity. Reid Hoffman joins the board of this new structure along with Sam, Ilya, Greg Brockman, Adam D'Angelo, and Tasha McCauley. And thus, the modern OpenAI for-profit/nonprofit question mark is created.

>> The thing that's still being figured out

even today here in 2025 is created. This

is like the complete history of AI. This

is not just the Google AI episode.

>> Well, these things are totally

inextricable. And I was just going to

say this is the Google part three

episode. Microsoft, they're back.

Microsoft is Google's mortal enemy. Yes.

That in our first episode on the

founding of Google and search and then

in the second episode on Alphabet and

all the products that they made, the

whole strategy at Google was always

about Microsoft. They finally beat them

on every single front and here they are

>> showing up again saying, "What was Satya's line? We just want to see them dance." I think the line that would come a couple years later is we want the world to know that we made Google dance.

Oh man. But this is all still pre-ChatGPT. This is just Sam lining up the financing he needs for what appears to be a very expensive scaling exercise they're about to embark on with GPT-2 and onward.

>> Yep. And this is the right time to talk

about why from OpenAI's perspective

Microsoft is the absolute perfect

partner. It's not just that they have a

lot of money,

>> although that helps.

>> I mean, that helps. That helps a lot.

But more important than money, they have

a really, really great public cloud.

Azure.

>> Yes. OpenAI is not going to go buy a

bunch of NVIDIA GPUs and then build

their own data center here at this point

in 2018. That's not the scale of company

that they are. They need a cloud

provider in order to actually do all the

compute that they want to do. If they

were back at Google and these

researchers are doing it, great. Then

they have all the infrastructure. But

OpenAI needs to tie themselves to

someone with the infrastructure.

>> And there's basically only two non-Google options. They're both in Seattle.

And hey, one of them in Microsoft is

really interested, also has a lot of

cash. It seems like a great partnership.

>> That's true. I wonder if they did talk to AWS at all about it, cuz I think this is a crazy Easter egg. I hesitate to say it out loud, but I think AWS was actually in the very first investment with Elon in OpenAI.

>> Oh wow. And I don't know if it was in

the form of credits or what the deal

was, but I'd seen it reported a couple

places that AWS actually was in that

nonprofit round.

>> Yeah, in the uh nonprofit funding, the

donations to

>> Yes.

>> the early OpenAI.

>> Anyway, Microsoft and OpenAI, they end up tying up,

>> a match made in heaven. Satya and Sam

are on stage together talking about how

this amazing partnership and marriage

has come together and they're off to

model training.

>> Yeah. And this paves the way for the GPT

era of OpenAI. But before we tell that

story,

>> yes, now is a great time to thank one of

our favorite companies, Shopify.

>> Yes. And this is really fun because we

have been friends and fans of Shopify

for years. We just had Toby on ACQ2 to

talk about everything going on in AI and

everything that has happened at Shopify

in the six years now since we covered

the company on acquired.

>> It's been a pretty insane transformation

for them.

>> Yeah. So, back at their IPO, Shopify was

the go-to platform for entrepreneurs and

small businesses to get online. What's

happened since is that is still true.

And Shopify has also become the world's

leading commerce platform for

enterprises of any size, period.

>> Yeah. So, what's so cool about the

company is how they've managed to scale

without losing their soul. Even though

companies like Everlane and Vori and

even older established companies like

Mattel are doing billions of revenue on

Shopify, the company's mission is still

the same as the day Toby founded it to

create a world where more entrepreneurs

exist.

>> Oh, yeah. Ben, you got to tell everyone

your favorite enterprise brand that is

on Shopify.

>> Oh, I'm saving that for next episode. I

have a whole thing planned for episode

two of this season.

>> Okay. Okay, great. Anyway, the reason

enterprises are now also using Shopify

is simple. Because businesses of all

sizes just sell more with Shopify. They

built this incredible ecosystem where

you can sell everywhere. Obviously, your

own site. That's always been true. But

now with Shopify, you can easily sell on Instagram, YouTube, TikTok, Roblox, Roku, ChatGPT, Perplexity, anywhere.

Plus, with Shop Pay, their accelerated

checkout, you get amazing conversion,

and it has a built-in user base of 200

million people who have their payment

information already stored with it.

Shopify is the ultimate example of not

doing what doesn't make your beer taste

better. Even if you're a huge brand,

you're not going to build a better

e-commerce platform for your product.

But that is what Toby and Shopify's

entire purpose is. So, you should use

them.

>> Yes. So, whether you're just getting

started or already at huge scale, head

on over to shopify.com/acquired.

That's shopify.com/acquired.

And just tell them that Ben and David

sent you.

>> All right. So, where are we, GPT-2? Is that what's being trained right here?
>> Yes, GPT-2. This was the first time I heard about it. Data scientists around Seattle were talking about this cool,
>> right? So, after the first Microsoft partnership, the first billion-dollar investment in 2019, OpenAI releases GPT-2, which is still early but very promising, and can do a lot of things,

>> a lot of things, but it required an enormous amount of creativity on your part. You kind of had to be a developer to use it. And if you were a consumer, there was a very heavy load put on you. You had to go write a few paragraphs and then paste those few paragraphs into the language model, and then it would suggest a way to finish what you were writing based on the source paragraphs. But it wasn't interactive.
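The workflow they're describing, paste text in and get a continuation back, is completion-style usage rather than chat. A hedged sketch of that shape (the `complete` function here is a toy stand-in so it runs offline, not the real GPT-2 API):

```python
def complete(prompt: str) -> str:
    """Toy stand-in for a completion endpoint. A real model returns a
    likely continuation of the prompt; this stub just repeats the
    prompt's most frequent word so the sketch stays runnable."""
    words = prompt.split()
    guess = max(set(words), key=words.count)
    return prompt + " " + " ".join([guess] * 3)

# The user supplies source paragraphs; the "model" finishes them.
draft = "the fox ran and the fox hid and the"
continuation = complete(draft)
```

The burden sits entirely on the user: there is one stateless call per request, and no conversation state anywhere.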

>> Yes, it was not a chat interface.

>> Yes,

>> there was no interface essentially for

it.

>> It was an API, but it could do things like obviously translate text. I mean, Google's been doing that for a long time. But with GPT-2, you could do stuff like make up a fake news headline, give it to GPT-2, and it would write a whole article. You would read it and you'd be like, "Uh, sounds like it was written by a bot."

>> Yeah.

>> But again, there was no front door to it for normal people. You had to really be willing to wade in the muck to use this thing. So then the next year, in June of 2020, GPT-3 comes out. Still no front door, you know, user interface to the model, but it's very good. GPT-2 showed the promise of what was possible. GPT-3, it's starting to be in the conversation of, can this thing pass the Turing test?

>> Oh, yeah.

>> You have a hard time distinguishing

between articles that GPT wrote and

articles that humans wrote. It's very

good. And there starts to be a lot of

hype around this thing. And so even

though consumers aren't really using it,

the broader awareness is that there's

something interesting on the horizon. I

think the number of AI pitch decks that

VCs are seeing is starting to tick up

around this time as is the Nvidia stock

price.

>> Yes.

>> So then the next year, in the summer of 2021, Microsoft releases GitHub Copilot using GPT-3. This is the first not just Microsoft product that comes out with GPT baked into it, but the first
>> productization
>> product anywhere. Yeah. First productization of GPT.
>> Yes. Of any OpenAI technology.

>> Yeah. It's big. This starts a massive

change in how software gets written in

the world.

>> Slowly, then all at once. It's one of these things where at first it was just a few software engineers, and there were a lot of whispers of, how cool is this? It makes me a little bit more efficient. And now you get all these comments like 75% of all companies' code is written with AI.

>> Yep. So after that, Microsoft invests another $2 billion in OpenAI, which seemed like a lot of money at the time.

So that takes us to the end of 2021.

There's an interesting kind of context

shift that happens around here.

>> Yeah. The bottom falls out on tech

stocks, crypto, the broader markets

really, everyone suddenly goes from risk

on to risk off. And part of it was war

in Ukraine, but a lot of it was interest

rates going up. And Google gets hit

really hard. The high water mark was

November 19th of 2021. Google was right

at $2 trillion of market cap. About a

year after that slide began, they were

worth a trillion dollars. Nearly a 50% drawdown.

>> Wow. So towards the end of 2022, leading up to the launch of ChatGPT,
>> people I think are starting to realize Google's slow. They're slow to react to things. It feels like they're an old, crusty company. Are they like Microsoft in the 2000s, where they haven't had a breakthrough product in a while? People are not bright on the future of Google. And then ChatGPT comes out.

>> Yeah. Wow. Which means if you were

bullish on Google back then and

contrarian, you could have invested at a

trillion dollar market cap.

>> Which is interesting. Like in October of '21, the market was saying that the forthcoming AI wave will not be a strength for Google. Or maybe what it

was saying is we don't even know

anything about a forthcoming AI wave cuz

people are talking about AI, but they've

been talking about VR and they've been

talking about crypto and they've been

talking about all this frontier tech and

like that's not the future at all. This

company just feels slow and unadaptive.

and slow and unadaptive at that point in

history I think would have been a fair

characterization. They had an internal

chatbot right?

>> Yes, they did. All right. So, before we talk about ChatGPT: Google had a chatbot. Noam Shazeer, incredible engineer, rearchitected the transformer, made it work, one of the lead authors of the paper, storied career within Google, has all of this sway, should have all of this sway within the company. After the transformer paper comes out, he and the rest of the team are like, "Guys, we can use this for a lot more than Google Translate." And in fact, the last paragraph of the paper.

>> Are you about to read the transformer

paper?

>> Yes, I am. We are excited about the

future of attention-based models and

plan to apply them to other tasks. We

plan to extend the transformer to

problems involving input and output

modalities other than text and to

investigate large inputs and outputs

such as images, audio, and video. This

is in the paper.

>> Wow.

>> Google obviously does not do any of that for quite a while. Noam, though, immediately starts advocating to Google leadership: hey, I think this is going to be so big, the transformer, that we should actually consider just throwing out the search index and the 10 blue links model, and going all in on transforming all of Google into one giant transformer model. And then Noam actually goes ahead and builds a chatbot interface to a large transformer model.

>> Is this LaMDA?
>> This is before LaMDA. Meena is what he calls it.
>> And there is a chatbot in the, like, late-teens-to-2020 time frame that Noam has built within Google that arguably is pretty close to ChatGPT. Now, it doesn't have any of the post-training safety that ChatGPT does. So, it would go off the rails.

>> Yeah. Someone told us that you could

just ask it who should die and it would

come up with names for you of people

that should die. It was not a shippable

product. It was a very raw, not safe,

not post-trained chatbot and model,

>> right? But it existed within Google and

they didn't ship it.

>> And technically, not only did it not have post-training, it didn't have RLHF either. This very core component of the models today, the reinforcement learning with human feedback, that ChatGPT... I don't know if it had it in 3, but it did in 3.5, and it did for the launch of ChatGPT. Realistically, it wasn't launchable even if it were an OpenAI thing, cuz it was so bad. But a company of Google's stature certainly could not take the risk.

aside from the strategy thing, there's

two business model problems here. One,

if you're proposing drop the 10 blue

links and just turn google.com into a

giant AI chatbot, revenue drops when you provide direct answers to questions versus showing ads and letting people click through to websites. That upsets the whole apple cart. Obviously,

they're thinking about it now, but until

2021, that was an absolute non-starter

to suggest something like that. Two,

there were legal risks of sitting in

between publishers and users. I mean,

Google at this point had spent decades

fighting the public perception and court

rulings that they were disintermediating

publishers from readers. So, there was

like a very high bar internally,

culturally to clear if you were going to

do something like this. Even those info boxes that popped up, it took until the 2010s to make those happen, and those really were mostly on non-monetizable queries

anyway. So anytime that you were going

to say, "Hey, Google's going to provide

you an answer instead of 10 blue links,"

you had to have a bulletproof case for

it.

>> Yeah. And there was also a brand promise and trust issue, too. Consumers trusted Google so much. For us, even today, you know, when I'm doing research for Acquired, when we need to make sure we get something right, I'm going to Google.

>> I look something up in Claude. Yeah.

>> It gives me an answer. I'm like, that's

a really good answer. And then I verify

by searching Google that I can find

those facts too if I can't click through

the sources on Claude. That's my

workflow.

>> Which sort of sounds funny today, but

it's important. If you're going to

propose replacing the 10 blue links with

a chatbot, you need to be really damn

sure that it's going to be accurate.

>> Yes.

>> And in 2020 2021, that was definitely

not the case. Arguably still isn't the

case today. And there also wasn't a

compelling reason to do it because

nobody was really asking for this

product,

>> right?

>> Noam knew, and people in Google knew, that you could make a chatbot interface to a transformer-based LLM and that it was a really compelling product. The general public didn't know. OpenAI didn't even really know. I mean, GPT was out there.

>> Do you know the story of the launch of ChatGPT? Well, I think I do. I have it in my notes here.
>> All right. So, they've got GPT-3.5. It's becoming very, very useful.
>> Yeah, this is late 2022. They've got 3.5,

>> but there's still this problem of how am

I supposed to actually use it? How is it

productized? And Sam just kind of says,

"We should make a chatbot. That seems

like a natural interface for this. Can

someone just make a chat?" And within

like a week internally,

>> someone makes a chat. They just turn calls to the GPT-3.5 API into a product where you're just chatting with it. And every time you kick off a chat message, it just calls GPT-3.5 on the API, and that turns out to be this magic product. I don't think they expected it.
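That "thin wrapper" design can be sketched in a few lines. This is an assumption-laden toy: `call_model` stands in for the real GPT-3.5 API call, but the loop shows the key point that each turn just re-sends the accumulated message list.

```python
def call_model(messages):
    """Stand-in for the LLM API call; the actual product sent the
    running message list to the hosted model on every turn."""
    last = messages[-1]["content"]
    return f"You said: {last}"

def chat_turn(history, user_text):
    # The chat "product" is mostly bookkeeping: append the user
    # message, send the whole history to the model, append the reply.
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
chat_turn(history, "Hello")
chat_turn(history, "Again")
```

The model itself stays stateless; the "conversation" is rebuilt from the history list on every single call.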

I mean, servers are tipping over.

They're working with Microsoft to try to

get more compute. They're cutting deals

with Microsoft in real time to try to

get more investment to get more Azure

credits or get advances on their Azure

credits in order to handle the

incredible load in November of 2022

that's coming in of people wanting to

use this thing. They also just throw up a paywall randomly because they

be an API business. They thought that

the projections were all about how much

revenue they were going to do through

B2B licensing deals and then they just

realized, oh, there's all these

consumers trying to use this. Put up a paywall to at least dampen the most expensive use of this thing so we can

kind of offset the cost or slow the roll

out,

>> right? This isn't uh Google search, you

know, 89% gross margin stuff here,

>> right? So they end up having incredibly fast revenue takeoff just from the quick Stripe paywall that they threw up over a weekend to handle all the

demand. So to say that OpenAI had any

idea what was coming would also be

completely false. They did not get that

this would be the next big consumer

product when they launched it.

>> Ben Thompson loves to call OpenAI the accidental consumer tech company, right?

>> Yes,

>> it was definitely accidental. Now there

is actually another slightly different

version of the motivation for launching

the chat.

>> Is this the Dario
>> interface? Yeah, the Dario and

Anthropic version. So Anthropic was

working on what would become Claude and

rumors were out there and people at

OpenAI got wind of like, oh hey,

Anthropic and Daario are working on a

chat interface.

We should probably do one, too. and if

we're going to do one, we should

probably launch it before they launch

theirs. So, I think that had something

to do with the timing, but again, I

don't think anybody including OpenAI

realized what was going to happen, which

is Ben, you alluded to it, but to give

the actual numbers, on November 30th,

2022,

>> basically Thanksgiving,

>> OpenAI launches a research preview of an interface to the new GPT-3.5 called ChatGPT. That morning on the 30th, Sam Altman tweets, "Today we launched ChatGPT. Try talking with it here." And then a link to chat.

Within a week, less than a week

actually, it gets 1 million users. By

the end of the year, so you know, one

month later, December 31st, 2022, it has

30 million users. By the end of the next

month, by the end of January 23, so two

months after launch, it crosses 100

million registered users. The fastest

product in history to hit that

milestone. Completely insane. Completely

insane. Before we talk about what that

unleashes within Google, which is the

famous code red: to rewind a little bit back to Noam and the chatbot within Google, Meena. Google does keep working on Meena. They develop it into something called LaMDA, which is also a chatbot, also internal.

>> I think it was a language model. At this

point in time, they still differentiated

between the underlying model brand name

and the application name.

>> Yes, LaMDA was the model, and then there also was a chat interface to LaMDA that was internal, for Google use only. Noam is still advocating to leadership: we

got to release this thing. He leaves in

2021 and founds a chatbot company,

Character AI, that still exists to this

day. And they raise a lot of money, as

you would expect. And then Google

ultimately in 2024, after ChatGPT launches, pays $2.7 billion, I think, to do a licensing deal with Character AI, the net of which is Noam comes back to Google. Yeah, I think Larry and Sergey were like, if we're going to compete seriously, we kind of need Noam back, and blank check to go get him.

>> Yeah. So, throughout 2021 and 2022, Google's working on the LaMDA model and then the chat interface to it. In May of 2022, they do release something that is available to the public called AI Test Kitchen, which is an AI product test area where people can play around with Google's internal AI products, including the LaMDA chat interface.
>> Yep. And in all fairness, that predates ChatGPT.

>> Do you know what they do to nerf chat so

that it doesn't go too far off the

rails? This is amazing.

>> No. For the version of LaMDA chat that is in AI Test Kitchen, they stop all conversations after five turns. So you can only have five turns of conversation with the chatbot, and then it's just, "And we're done for today. Thank you. Goodbye."
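Mechanically, that kind of guardrail is trivial. A sketch of a five-turn cutoff (the exact wording of the goodbye message here is just illustrative):

```python
MAX_TURNS = 5  # the reported AI Test Kitchen limit

def guarded_chat(model_fn, max_turns=MAX_TURNS):
    """Wrap a chat function so the conversation hard-stops after a
    fixed number of turns, regardless of what the user asks next."""
    turns = 0
    def respond(user_text):
        nonlocal turns
        if turns >= max_turns:
            return "And we're done for today. Thank you. Goodbye."
        turns += 1
        return model_fn(user_text)
    return respond

bot = guarded_chat(lambda text: f"echo: {text}")
```

The safety intuition behind it, as they say, was that the longer the conversation ran, the more likely the model was to drift off the rails, so you simply never let it run long.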

>> Oh wow.

>> And the reason they did that was for

safety of like, you know, if the more

turns you had with it, the more likely

it would start to go off the rails.

>> And honestly, it was a fair concern. I

mean, this thing was not for public

consumption. And if you remember back a

few years before, Microsoft released

Tay, which was this crazy racist

chatbot.

>> Yeah. They launched it as a Twitter bot,

right? And it was going off the rails on

Twitter. This was in 2016, I think.

>> Right. Maximal impact of badness.

>> Yeah. And so despite Sundar declaring all the way back in 2017 that we are an AI-first company, Google is being understandably very cautious with real public AI launches, especially on consumer-facing things.

>> Yep. And as far as anyone else is concerned before ChatGPT, they are an AI-first company and they're launching all this amazing AI stuff. It's just within the vector of their existing products. Right? So ChatGPT comes out, becomes the fastest product in history to 100 million users. It is immediately obvious to Sundar, Larry, Sergey, all of Google leadership, that this is an existential threat to Google. ChatGPT is a better user experience to do the same job function that Google search does. And to

underscore this, so if you didn't know

it in November of 22, you sure knew it

by February of '23, because good old Microsoft, our biggest, scariest enemy.
Oh yeah.
>> announces a new Bing powered by OpenAI. And Satya has a quote: "It's a new day for search. The race starts today."

There's an announcement of a new AI-powered search page. He says, "We want

to rethink what search was meant to be

in the first place. In fact, Google's

success in the initial days came by

reimagining what could be done in

search. And I think the AI era we're

entering gets us to think about it." This is the worst possible thing that could happen to Google: that now Microsoft can actually challenge Google on their own turf, intent on the internet, with a legitimately different, better, differentiated product vector. Not what Bing was trying to do, copycat. This is the full leapfrog, and they have the technology partnership to do it.

>> Or so everybody thinks at the moment.

>> Oh my god, terrifying. This is when Satya says the quote in an interview around this launch with Bing: "I want people to know that we made Google dance."

Oh boy. Well, hey, if you come at the

king, you'd best not miss,

>> right?

>> And this big launch kind of misses.

>> Yes. So what happens in Google: December 2022, even before the big Bing launch but after the ChatGPT moment, Sundar issues a code red within the company.

>> and what does that mean?

>> Up until this point, Google and Sundar and Larry and everyone had been thinking about AI as a sustaining innovation, in Clay Christensen's terms. This is great for Google. This is great for our products. Look at all these amazing things that we're doing. It further entrenches incumbents.

>> It further is entrenching our lead in

all of our already leading products.

>> We can deploy more capital in a

predictable way to either drive down

costs or make our product experiences

that much better than any startup could

make.

>> Get them monetized that much better. All the things. Once ChatGPT comes out, on a dime, overnight, AI shifts from being a sustaining innovation to a disruptive

innovation. It is now an existential

threat. And many of Google's strengths

from the last 10, 15, 20 years of all

the AI work that's happened in the

company are now liabilities. They have a

lot of existing castles to protect.

>> That's right. They have to run everything through a lot of filters before they can decide if it's a good idea to go try to out-OpenAI OpenAI.

>> Yep. So this code red that Sundar issues

to the company is actually a huge moment

because what it means and what he says

is we need to build and ship real native

AI products ASAP. This is actually what

you need to do in the textbook response

to a disruptive innovation as the

incumbent. You need to not bury your

head in the sand and you need to say,

"Okay, we need to like actually go build

and ship products that are comparable to

these disruptive innovators." And you need to be laser-focused operationally in all the details to try and figure out

where is it that the new product is

actually cannibalizing our old product

and where is it that the new product can be complementary, and just lean into all the ways in which you can be complementary in all the different

little scenarios. And really what

they've been trying to do, this ballet

from 2022 onward, is protect the growth

of search while also creating the best

AI experiences they can. And so it's

very clever the way that they do AI

overviews for some but not all queries.

And they have AI mode for some but not

all users. And then they have Gemini,

the full AI app, but they're not

redirecting Google.com to Gemini. It's

this like very delicate dance of

protecting the existing franchise while

also building a hopefully

non-cannibalizing as much as we can new

franchise.

>> Yep. And you see them really going hard, and I think building leading products, in non-cannibalizing categories like video,
>> right? Veo 3 or Nano Banana. These are

things that don't in any way cannibalize

the existing franchise. They in fact use

some of Google's strength, all the

YouTube training data and stuff like

that.

>> Yeah. So, what happens next? As you

might expect, it gets worse before it

gets better.

Code Red goes out December 2022.

>> Bard, baby. Launch Bard.

>> Oh boy. Well, even before that, January '23, when OpenAI hits 100 million registered users for ChatGPT, Microsoft announces they are investing another $10 billion in OpenAI and says that they now own 49% of the for-profit entity.

Incredible in and of itself. But then

now think about this from the Google

lens of Microsoft, our enemy. They now arguably own it. Obviously, in retrospect, they don't own OpenAI, but it seems at the time like, oh my god, Microsoft might now own OpenAI, which is our first true existential threat in our history as a company.
>> Not great, Bob.

>> So then February 2023, the Bing integration launches. Satya has the quote about wanting to make Google dance. Meanwhile, Google is scrambling

internally to launch AI products as fast

as possible. So the first thing they do is they take the LaMDA model and the chatbot interface to it. They rebrand it as Bard.

>> They ship that publicly

>> and they release it immediately.

February 2023, ship it publicly.

Available GA to anyone,

>> which maybe was the right move, but god

it was a bad product.

>> It was really bad.

>> I didn't know the term at the time, RLHF, but it was clear it was missing a component of some magic that ChatGPT had: this reinforcement learning with human feedback where you could really tune the appropriateness, the tone, the voice, the sort of correctness of the responses. It just wasn't there.
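For intuition, here's a deliberately tiny sketch of the preference-learning half of RLHF. The assumptions are loud: real RLHF trains a neural reward model on human preference pairs and then fine-tunes the LLM against it with reinforcement learning; this toy uses one hand-picked feature, a Bradley-Terry-style update, and simple reranking in place of all of that.

```python
import math

def feature(text):
    # One hand-picked "tone" feature standing in for a learned
    # reward model's representation (illustrative assumption).
    return 1.0 if text.startswith("I think") else 0.0

def train_reward(prefs, lr=0.5, steps=100):
    """Fit a single weight w so that P(preferred beats rejected) is
    high, Bradley-Terry style: P = sigmoid(w * (f(good) - f(bad)))."""
    w = 0.0
    for _ in range(steps):
        for good, bad in prefs:
            d = feature(good) - feature(bad)
            p = 1.0 / (1.0 + math.exp(-w * d))
            w += lr * (1.0 - p) * d  # gradient ascent on log P
    return w

# Human raters preferred the hedged answer over the rude one.
prefs = [("I think the answer is 4.", "Obviously 4, idiot.")]
w = train_reward(prefs)

def pick(candidates, w):
    # "Tuning the responses": prefer the higher-reward candidate.
    return max(candidates, key=lambda t: w * feature(t))
```

The human feedback lives entirely in the preference pairs; the model never sees an explicit rule about tone, it just learns to score responses the way the raters did.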

>> Yep. So, to make matters worse, in the launch video for Bard (a choreographed, pre-recorded video where they're showing conversations with Bard), Bard gives an inaccurate factual response to one of the queries that they include in the video.

>> This is one of the worst keynotes in

history.

>> After the Bard launch and this keynote,

Google's stock drops 8% on that day. And

then like we were saying, once the

actual product comes out, it becomes

clear it's just not good.

>> Yep.

>> And it pretty quickly becomes clear,

it's not just that the chatbot isn't

good; it's that the model isn't good. So in May they replace LaMDA with a new model from the Brain team called PaLM. It's a little bit better, but it's still clearly behind not only GPT-3.5, but in March of 2023, OpenAI comes out with GPT-4, which is even better.
>> You can access that now through ChatGPT.

And here is where Sundar makes two

really, really big decisions. Number

one, he says, "We cannot have two AI

teams within Google anymore. We're merging Brain and DeepMind into one entity called Google DeepMind."
>> which is a giant deal. This is in full violation of the original deal terms of bringing DeepMind in.

>> Yep. And the way he makes it work is he says, "Demis, you are now CEO of the AI division of Google, Google DeepMind. This is all hands on deck, and you and DeepMind are going to lead the charge. You're going to integrate with Google Brain, and we need to change all of the past 10 years of culture around building and shipping AI products within Google."

To further illustrate this, when

Alphabet became Alphabet, they had all

these separate companies, but things

that were really core to Google, like

YouTube actually stayed a part of

Google. DeepMind was its own company.

That's how separate this was. They're

working on their own models. In fact,

those models are predicated on

reinforcement learning. That was the big

thing that DeepMind had been working on

the whole time. And so reading in

between the lines, it's Sundar looking

at his two AI labs and going, "Look, I

know you two don't actually get along

that well, but look, I don't care that

you had different charters before. I am

taking the responsibility of Google

Brain and giving it to DeepMind and

DeepMind is absorbing the Google Brain

team." I think that's what you should

sort of read into it because as you look

at where the models went from here, they

kind of came from DeepMind.

>> Yep. There's a little bit of interesting

backstory to this too. So Mustafa Suleyman, the third co-founder of DeepMind, at some point before this,

>> he became like the head of Google AI

policy or something.

>> He had already shifted over to Brain and

to Google.

>> He stayed there for a little while, and then he ended up getting close with who else? Reid Hoffman. Remember, Reid is on the ethics board for DeepMind. And Mustafa and Reid leave and go found Inflection AI. Which, fast-forward now into 2024, after the absolute insanity that goes down at OpenAI at Thanksgiving 2023, when Sam Altman gets fired over the weekend during Thanksgiving and then brought back by Monday when all the team threatened to quit and go to Microsoft. OpenAI loves Thanksgiving. Can't wait for this year.

>> They love Thanksgiving. Yeah. Gosh.

After all that, which certainly strains the Microsoft relationship (remember, again, Reid is on the board of Microsoft), Microsoft does one of these acquisition-type deals with Inflection AI and brings Mustafa in as the head of AI for Microsoft.

>> Crazy.

>> Wild, right? Just wild.

>> Crazy turn of events. Okay, so that

first big decision that Sundar makes is

unifying DeepMind and Brain. That was

huge. Equally big, he says, I want you

guys to go make a new model and we're

just going to have one model that is

going to be the model for all of Google

internally for all of our AI products

externally. It's going to be called

Gemini. No more different models, no

more different teams. just one model for

everything. This is also a huge deal.

>> It's a giant deal and it's twofold. It's

push and it's pull. It's saying, "Hey,

if anyone's got a need for an AI model,

you got to start using Gemini." But two,

it's actually kind of the plus thing

where they go to every team and they

start saying, "Gemini is our future. You

need to start looking for ways to

integrate Gemini into your product."

>> Yes, I'm so glad you brought up Plus.

This came up with a few folks I spoke to

in the research. Obviously, this is all

playing out real time, but the point a

lot of people at Google made is the

Gemini situation is very different than

the Google+ situation. This is a

technical thing, A, which has always

been Google's wheelhouse, but B, even

more importantly, this is the rational

business thing to do in the age of these

huge models. Even for a company like

Google, there are massive scaling laws

to models.

>> The more data you put in, the better

it's going to get, the better all the

outputs are going to be.

>> And because of scaling laws, you need

your models to be as big as possible in

order to have the best performance

possible. If you're trying to maintain

multiple models within a company, you're

repeating multiple huge costs to

maintain huge models. You definitely

don't want to do that. You need to

centralize on just one model.

>> Yeah, it's interesting. There's also

something to read into where at first it

was the Gemini model underneath the Bard

product. Bard was still the consumer

name. Then at some point they said, "No,

we're just calling it all Gemini and

Gemini became the user-facing name."

Also, this pulls in my quintessence from

the Alphabet episode. I know it's a

little bit woo-woo, but with Google

saying, "We're actually going to name

the consumer service the name of the AI

model." They're sort of admitting to

themselves, this product is nothing but

technology. There isn't productiness to

do on top of it. It's just like Gmail.

Gmail was technology. It was fast

search. It was lots of storage. It was

use it on the web. The productiness

wasn't the point the way that, like,

Instagram was all about the product.

Gemini the model, Gemini, the chatbot

says, "We're just exposing our amazing

breakthrough technology to you all and

you get to interface directly with it."

Anthropologically looking from afar, it

kind of feels like it's that principle

at work. I totally agree. I think it's

actually a really important branding

point and sort of rallying point to

Google and Google culture to do this,

>> right? All right, so this is all the

stuff going on in Google 2023ish

in AI. Before we catch up to the

present, I have a whole other branch of

Alphabet that has been a real bright

spot for AI. Can I go there? Can I take

this offramp, if you will?

>> Can you uh take the wheel, so to speak?

>> May I take the wheel? May I investigate

another bet?

>> Yeah, please tell us the Waymo story.

>> Awesome. So, we got to rewind back all

the way to 2004, the DARPA Grand

Challenge, which was created as a way to

spur research into autonomous ground

robots for military use. And actually,

what it did for our purposes here today

is create the seed talent for the entire

self-driving car revolution 20 years

later. So, the competition itself is

really cool. There is a 132-mile racecourse.

Now, mind you, this is 2004 in the

Mojave Desert that the cars have to race

on. It is a dirt road. No humans are

allowed to be in or interact with the

cars. They are monitored 100% remotely.

And the winner gets $1 million.

>> $1 million,

>> which was a break from policy. Normally,

these are grants, not prize money. So,

this needs to be authorized by an act of

Congress. The $1 million eventually felt

comical. So the second year they raised

the pot to $2 million. It's crazy

thinking about what these researchers

are worth today. That that was the prize

for the whole thing. So the first year

in 2004 went fine. There were some

amazing tech demonstrations on these

really tight budgets, but ultimately

zero of the 100 registered teams

finished the race. But the next year in

2005 was the real special year. The

progress that the entire industry made

in those first 12 months from what they

learned is totally insane. Of the 23

finalists that were entering the

competition, 22 of them made it past the

spot where the furthest team the year

before had made it. The amount that the

field advanced in that one year is

insane. Not only that, five of those

teams actually finished all 132 miles.

Two of them were from Carnegie Mellon and

one was from Stanford, led by a name

that all of you will now recognize,

Sebastian Thrun.

>> Indeed,

>> this is Sebastian's origin story before

Google. Now, as we said, Sebastian was

kind enough to help us with prep for

this episode, but I actually learned

most of this from watching a 20-year-old

NOVA documentary that is available on

Amazon Prime Video. Thanks to Bret

Taylor for giving us the tip on where to

find this documentary. Yes, the hot

research tip.

>> So, what was special about this Stanford

team? Well, one, there's a huge problem

with noisy data that comes out of all of

these sensors. You know, it's in a car

in the desert getting rocked around.

It's in the heat. It's in the sun. So,

common wisdom and what Carnegie Mellon

did was to do as much as you possibly

can on the hardware to mitigate that. So

things like custom rigging and gimbals

and giant springs to stabilize the

sensors. Carnegie Mellon would

essentially buy a Hummer and rip it

apart and rebuild it from the wheels up.

We're talking like welding and real

construction on a car. The Stanford team

did the exact opposite. They viewed any

new piece of hardware as something that

could fail. And so in order to mitigate

risks on race day, they used all

commodity cameras and sensors that they

just mounted on a nearly unmodified

Volkswagen. So they only innovated in

software and they figured they would

just kind of come up with clever

algorithms to help them clean up the

messy data later. Very googly, right?

>> Very googly.

>> The second thing they did was an early

use of machine learning to combine

multiple sensors. They mounted laser

hardware on the roof just like what

other teams were doing. And this is the

way that you can measure texture and

depth of what is right in front of you.

And the data, it's super precise, but

you can't drive very fast because you

don't really know much about what's far

away since it's this fixed field of

view. It's very narrow. Essentially, you

can't answer that question of how fast

can I drive or is there a turn coming

up. So, on top of that, the way they

solved it was they also mounted a

regular video camera. That camera can

see a pretty wide field of view just

like the human eye, and it can see all

the way to the horizon just like the

human eye. And crucially, it could see

color. So what it would do, this is like

really clever. They would use a machine

learning algorithm in real time in 2005.

This computer is like sitting in the

middle of the car. They would overlay

the data from the lasers on top onto the

camera feed. And from the lasers, you

would know if the area right in front of

the car was okay to drive or not. Then

the algorithm would look up in the

frames coming off the camera overlaid

what color that safe area was and then

extrapolate by looking further ahead at

other parts of the video frame to see

where that safe area extended to

>> so you could figure out your safe path

through the desert.
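The fusion trick described here can be sketched in a few lines. This is a hypothetical illustration of the general idea only, not the Stanford team's actual code; the function name, the mean-color heuristic, and the tolerance value are all assumptions for the sake of the sketch:

```python
import numpy as np

def extend_drivable_area(frame, lidar_safe_mask, color_tolerance=30.0):
    """frame: HxWx3 RGB image from the forward camera.
    lidar_safe_mask: HxW bool mask of the near-field patch the
    lasers confirmed as safe to drive."""
    # Learn the color signature of the lidar-confirmed road patch.
    safe_pixels = frame[lidar_safe_mask]          # N x 3 array of road pixels
    mean_color = safe_pixels.mean(axis=0)         # average "road" color
    # Mark every pixel whose color is close to that signature as
    # probably drivable, extending the safe region toward the horizon.
    distance = np.linalg.norm(frame - mean_color, axis=2)
    return distance < color_tolerance             # HxW bool drivable mask

# Toy usage: a uniform tan "road" with a differently colored far region.
frame = np.full((100, 100, 3), 180.0)             # tan road everywhere...
frame[:40, :] = 90.0                              # ...except the top rows
near_field = np.zeros((100, 100), dtype=bool)
near_field[90:, 40:60] = True                     # lidar-verified patch
drivable = extend_drivable_area(frame, near_field)
```

The real system had to cope with shadows, glare, and changing surfaces, so whatever they actually ran was far more robust than this mean-color comparison, but the shape of the idea is the same: precise-but-narrow lidar labels a patch, and the wide-but-fuzzy camera extrapolates it.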

>> That's awesome.

>> It's so awesome.

>> I'm imagining like a Dell PC sitting in

the middle of this car in 2005.

>> It's not far off. In the email that we

send out, we'll share some photos of it.

It could then drive faster with more

confidence and it knew when turns were

coming up. Again, this is real time on

board the camera. 2005 is wild on that

tech. So ultimately, both of these bets

worked and the Stanford team won in

super dramatic fashion. They actually

passed one of the Carnegie Mellon teams

autonomously through the desert. It's

like this big dramatic moment in the

documentary. So you would kind of think,

so then Sebastian goes to Google and

builds Waymo. No. As we talked about

earlier, he does join Google through

that crazy, please don't raise money

from Benchmark and Sequoia and we'll

just hire you instead. But he goes and

works on Street View and Project Ground

Truth and co-founds Google X. David, as

you were alluding to earlier, this

Project Chauffeur that would become

Waymo is the first project inside Google

X. And I think the story, right, is that

Larry came to Sebastian and was like,

"Yes, yo, that self-driving car stuff,

like, do it." And Sebastian was like,

"No, come on. That was a DARPA

challenge." And Larry's like, "No, no,

you should do it." He's like, "No, no,

that won't be safe. There's people

running around cities. I'm not just

going to put multi-ton killer robots

on roads and go and potentially harm

people." And Larry finally comes to him

and says, "Why? What is the technical

reason that this is impossible?" And

Sebastian goes home, sleeps on it,

and he comes in the next morning and he

goes, "I realized what it was. I'm just

afraid."

>> Such a good moment.

>> So they start, he's like, "There's not a

technical reason. As long as we can take

all the right precautions and hold a

very high bar on safety, let's get to

work." So Larry then goes, "Great. I'll

give you a benchmark so that way you

know if you're succeeding." He comes up

with these 10 stretches of road in

California that he thinks will be very

difficult to drive. It's about a

thousand miles and the team starts

calling it the Larry 1000 and it

includes driving to Tahoe, Lombard

Street in San Francisco, Highway 1 to

Los Angeles, the Bay Bridge. This is the

bogey.

>> Yep. If you can autonomously drive these

stretches of road, pretty good

indication that you can probably do

anything.

>> Yep. So they start the project in 2009.

Within 18 months, this tiny team, I

think they hired, I don't know, it's

like a dozen people or something,

they've driven thousands of miles

autonomously, and they managed to

succeed in the full Larry 1000 within 18

months.

>> Totally unreal how fast they did it. And

then also totally unreal how long it

takes after that to productize and

create the Waymo that we know today.

>> Right. It's like the first 99% and then

the second 99% that takes 10 years.

>> Yeah. Self-driving is one of these

really tricky types of problems where

it's surprisingly easy to get started

even though it seems like it would be an

impossible thing. But then there's edge

cases everywhere. Weather, road

conditions, other drivers, novel road

layouts, night driving. So it takes this

massive amount of work for a production

system to actually happen. So then the

question is what business do we build?

What is the product here? And there was

what Sebastian wanted which was highway

assist. Sort of the lowest stakes, most

realistic. Let's make a better cruise

control. There's what Eric Schmidt

wanted, which is crazy. He proposed, oh,

let's just go buy Tesla and that'll be

our starting place and then we'll just

put all of our self-driving equipment on

all the cars. David, do you know what it

would have cost to buy Tesla at the

time?

>> I think at the time that negotiations

were taking place between Elon and Larry

and Google, this was in the depths of

the Model S production scaling woes. I

think Google could have bought the

company for $5 billion. That's what I

remember.

>> It was three billion.

>> $3 billion. Oh my goodness.

>> Obviously, that didn't happen, but what

a crazy alternative history that could

have been,

>> right? I mean, I think if that had

happened, DeepMind would not have gone

down in the same way and probably OpenAI

would not have gotten founded.

>> That's probably right.

>> I think that is obviously unprovable,

>> right? The counterfactuals that we

always come up with on this show, you

can't know.

>> Yeah. Seems more likely than not to me

that at a minimum, OpenAI would not

exist,

>> right? So, then there was what Larry

wanted to do. Option three, build robo

taxis. Yeah.

>> And ultimately that is at least right

now what they would end up doing. So we

could do a whole episode about this

journey, but we will just hit some of

the major points for the sake of time.

The big thing to keep in mind here,

neither Google nor the public really

knew if self-driving was something that

could happen in the next 2 years from

any given point or take another 10. And

just to illustrate it, for the first 5

years of Project Chauffeur, it did not

use deep learning at all. They did the

Larry 1000 without any deep learning and

then went another three and a half

years.

>> Wow, that's crazy.

>> Yeah. And yet totally illustrates you

never know how far away the end goal is.

>> And this is a field where the

only way progress happens is through

a series of breakthroughs. And you

don't know, A, how far away the next

breakthrough is, because at any given

time there's lots of promising things in

the field, most of which don't work out,

and B, when there is a breakthrough,

how much lift that will actually give

you over existing methods. So anytime

people are forecasting, "Oh, in AI, we're

going to be able to do XYZ in X years,"

it's a complete fool's errand. Even the

experts don't know. Here are the big

milestones. 2013, they started using

convolutional neural nets. They could

identify objects. They got much better

perception capabilities. This 2013-2014

period is when Google found religion

around deep learning. So this is like

right after the 40,000 GPUs rolled out.

So they've actually got some hardware to

start doing this on now. 2016 they've

seen enough technology proof that they

think let's commercialize this. We can

actually spin this out into a company.

So Waymo becomes its own subsidiary

inside of Alphabet. It's no longer a

part of Google X anymore. 2017 obviously

the transformer comes out. They

incorporate some learnings from the

transformer especially around prediction

and planning. March of 2020, they raised

$3.2 billion from folks like Silver Lake,

Canada Pension Plan Investment Board,

Mubadala, Andreessen Horowitz, and of

course, the biggest check, I think,

Alphabet. And I think they're always the

biggest check because Alphabet is still

the majority owner, even after a bunch

more fundraises. In October of 2020,

they launched the first public

commercial, no human behind the driver's

seat thing in Phoenix. It's the first in

the world. This is 11 years after

succeeding in the Larry 1000. And this

is nuts. I had given up at this point. I

was like, that's cute that Waymo and all

these other companies are trying to do

self-driving. Seems like it's never

going to happen. And then they actually

were doing a large volume of rides

safely with consumers and charging money

for it in Phoenix.

>> Then they bring it to San Francisco

where for me and lots of people in San

Francisco, it is a huge part of life in

the city here now. It's amazing. Yeah,

every time I'm down, I love taking them.

They're launching in Seattle soon. I'm

pumped. Interestingly, they don't make

the hardware. So, they use a Jaguar

vehicle. Yep. That from what I can tell

is only in Waymos. Like, I don't know if

anybody else drives that Jaguar or if

you can buy it, but they're working on a

sort of van next. They have some next

generation hardware. For anyone who

hasn't taken it, it's an Uber, but with

no driver. And that launched in June of

24. Along the way there, they raised

their quote unquote series B, another

2.5 billion. Then after the San

Francisco roll out, they raised their

quote unquote series C, 5.6 billion.

This year in January, they were

reportedly doing more in gross bookings

than Lyft in San Francisco. Wow. I

totally believe it. I mean, it is the

number one option in San Francisco that

I and everybody I know always go to

for ride hailing. It's like try to get a

Waymo. If there's not a Waymo available

anytime soon, you know, then go down the

stack.

>> Like we're living in the future and how

quickly we fail to appreciate it.

>> Yeah. And what's cool, I think, for

people who it hasn't come to their city

and is not part of their lives yet, it's

not just that it's a cool experience to

not have a driver behind the like pretty

quickly that just fades. It's actually a

different experience.

>> so if I need to go somewhere with my

older daughter, I don't mind hailing a

Waymo, bringing the car seat, installing

the car seat in the Waymo and driving

with my daughter and she loves it. We

call it a robot car and she's like, "A

robot car? I'm so excited."

>> Huh.

>> I would never do that with an Uber.

>> That's interesting.

>> To my dog, whenever I need to go with my

dog, like it's super awkward to hail an

Uber and be like, "Hey, I got my dog.

You know, can the dog come in it?" Not a

big deal with a Waymo. And then when

you're in town,

>> Yeah. we can actually have sensitive

conversations in the car.

>> You can have phone calls. It really is a

different experience.

>> Yeah, that's so true. Yeah. So, may as

well catch up to today. They're

operating in five cities, Phoenix, San

Francisco, LA, Austin, and Atlanta. They

have hundreds of thousands of paid rides

every week. They've now driven over

100 million miles with no human behind

the wheel, growing at 2 million every

week. There's over 10 million paid rides

across 2,000 vehicles in the fleet.

They're going to be opening a bunch more

cities in the US next year. They're

launching in Tokyo, their first

international city, slowly and then all

at once. I mean, that's kind of the

lesson here. The technology, they really

continued with that multi-sensor approach

all the way from the DARPA Grand

Challenge. Camera, lidar, they added radar

and actually they use audio sensing as

well. And their approach is basically

any data that we can gather is better

because that makes it safer. So they

have 13 cameras, four lidar, six radar,

and the array of external microphones.

This is obviously way more expensive of

a solution than what Tesla is just doing

with cameras. But Waymo's party line is

they believe it is the only path to full

autonomy to hit the safety bar and

regulatory bar that they're aiming for.

>> Yeah.

>> It seems like a really big line in the

sand for them anytime you talk to

somebody in that organization.

>> Yeah. And look, as a regular user of

both products, you know, happy owner and

driver of a Model Y in addition to

regular Waymo user, at least with the

current instantiation of full

self-driving on my Tesla, vastly

different products. Full self-driving on

my Model Y is great. I use it all the

time on the freeway, but I would never

not pay attention. Whereas, every time I

get in a Whimo, it's almost like Google

search, right? It's like I just trust

that, oh, this is going to be completely

and totally safe and I'm sitting in the

back seat and I can totally tune out.

>> I think I trust my Model Y FSD more than

you do. But I get what you're saying and

frankly regulatory you are required to

still pay attention in a Tesla and not in

the Waymo. The safety thing is super

real though. I mean, if you look at the

numbers, motor vehicle crashes cause

over a million fatalities every year

globally, and in the US alone, over

40,000 deaths occur per

year. So if you break that down, that's

about 120 every day. That's like a giant cause

of death.

>> Yes.

>> The study that Waymo just released last

month showed that they have 91% fewer

crashes with serious injuries or worse

compared to the average human driver,

even controlling for the fact that Waymos

right now are only driving on city

surface streets. So they controlled it

apples to apples with human driving

data. And it's a 91% reduction in those

serious crashes, either fatality or serious

injury. Why aren't we all talking

about this all the time every day? This

is going to completely change the world

and a giant cause of death.

>> Yeah.

>> So, while we're in Waymo land, what do

you think about doing some quick

analysis?

>> Great.

>> Cuz I've been scratching my head here of

what is this business? Then I promise

we'll go back to the rest of Google AI

and catch up to today. It is super

expensive to operate, especially at early

scale. The training cost is high, the

inference cost is high, the hardware cost is high,

etc., etc., etc.

>> Also the operations are expensive.

>> Yes. And in fact they're experimenting.

In some cities, they actually outsource the

operations. So the fleet is managed by

a rental car company in Texas

that manages it, or they've partnered, I

believe, with Lyft and with Uber in

different cities. So they're trying all sorts

of owned-and-operated versus partnership models

to operate it.

>> Yeah. And the operations are like these

are electric cars. They need to be

charged. They need to be cleaned. They

need to be returned to depots. They need

to be checked out. They need to have

sensors replaced.

>> So the question is what is the potential

market opportunity? How big could this

business be? And there's a few different

ways you could try to quantify it. One

total market size thing you could do is

try to sum the entire automaker

market cap today and that would be 2.5

trillion globally if you include Tesla

or 1.3 trillion without. But Waymo is not

really making cars, so that's probably

the wrong way to slice it. You could

look at all the ride sharing companies

today which might be a better comp

because that's the business that Waymo

is actually in today. That's on the

order of 300 billion most of which is

Uber.

>> Yep. So that's addressable market cap

today with ride sharing. Waymo's

ambitions though are bigger than that.

They want to be in the cars that you

own. They want to be in long haul

trucking. So they believe they can grow

the share of transportation because

there's blind people that could own a

car. There's elderly people who could

get where they need to go on their own

without having a driver. That sort of

thing. So the most squishy but I think

the most interesting way to look at it

is what is the value from all of the

reduction in accidents because that's

really what they're doing. It's a

product to replace accidents with

non-accidents.

>> I think that's viable but again I would

say as a regular user of the product it

is a different product from, and an expansion

of, human ride share. So your argument is

whatever number I come up with for

reducing accidents, it's still a bigger

market than that because there's

additional value created in the product

experience itself.

>> Yeah. Scoping just to ride share now

that we have Waymo in San Francisco. I

use Waymo in scenarios where I would

never use an Uber or a Lyft.

>> Yeah, makes sense. So here's the data we

have. The CDC released a report saying

deaths from crashes in 2022 in the US

resulted in $470 billion in total costs,

including medical costs and the cost

estimates for lives lost, which is crazy

that the CDC has some way of putting the

costs on human life, but they do. So, if

you reduce crashes 10x, which is what

Waymo seems to be saying in their data,

at least for the serious crashes, that's

over $420 billion a year in total costs

that we would save as a nation. Now,

it's not totally apples to apples. I

recognize this, but that cost savings is

more than Google does today in revenue

in their entire business. You could see

a path to a Google-sized opportunity for

Waymo as a standalone company just

through this analysis as long as they

figure out a way to get cost down to the

point where they can run this as a large

and profitable business. Yeah, it is an

incredible

20 plus year success story within

Google.
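The back-of-the-envelope math in this analysis checks out; a quick sketch, using only the figures quoted in the conversation (the $470 billion CDC cost estimate and Waymo's reported 91% reduction), not independently sourced numbers:

```python
# Rough check on the savings figure discussed above. Both inputs are the
# numbers as quoted in this conversation, taken at face value.
total_crash_cost = 470e9   # CDC-estimated total US cost of 2022 crash deaths
reduction = 0.91           # Waymo-reported drop in serious crashes
savings = total_crash_cost * reduction
print(f"~${savings / 1e9:.0f}B per year saved")
```

That lands at roughly $428 billion a year, consistent with the "over $420 billion" figure, with the usual caveat that applying a serious-crash reduction rate to a total-cost estimate is directional, not precise.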

>> The way I want to close it is the

investment so far actually hasn't been

that large. When you consider this

opportunity, they have burned somewhere

in the neighborhood of 10 to 15 billion.

That's sort of why I was listing all the

investments to get to this point.

>> Chump change compared to foundational

models.

>> Dude, also let's just keep it scoped in

this sector. That's one year of Uber's

profits.

>> Wow. Seems like a good bet.

>> I used to think this was like some wild

goose chase. It now looks really, really

smart.

>> Yep. Totally agree.

>> Also, that cost 10 to 15 billion is the

profits that Google made last month.

>> Google. Well, speaking of Google, should

we catch us up to today with Google AI?

>> Yes. So, I think where you were is the

Gemini launch.

>> So, Sundar makes these two decrees mid

2023. One, we're merging Brain and

DeepMind into one team for AI within Google.

And two, we're gonna standardize on one

model, the future Gemini. And DeepMind-Brain

team, you go build it, and then everybody

in Google, you're going to use it.

>> Not to mention, apparently Sergey Brin

is like now back as an employee working

on Gemini.

>> Yes.

Employee number

>> got his badge back.

>> Yeah. Got his badge back.

So once Sundar makes these decisions,

Jeff Dean and Oriol Vinyals from Brain

go over and team up with the DeepMind

team and they start working on Gemini.

>> I'm a believer now. By the way, you got

Jeff Dean working on it, I'm in.

>> If you got Jeff Dean on it, it's

probably going to work. If you weren't a

believer yet, wait till what I'm going to

tell you next. Once they get Noam back

when they do the deal with Character.AI,

bring him back into the fold.

Noam joins the Gemini team, and Jeff and

Noam are the two co-technical leads for

Gemini now. So,

>> let's go.

>> Let's go. So, they actually announced

this very quickly at the Google IO

keynote in May 2023. They announced

Gemini. They announced the plans. They

also launch AI overviews in search first

as a labs product and then later that

becomes just standard for everybody

using Google search. Which is crazy, by

the way. The number of Google searches

that happen is unfathomably large. I'm

sure there's a number for it, but just

think about it: that's about the highest

level of computing scale that exists,

other than high-bandwidth things

like streaming. But just think about the

instances of Google searches that happen.

They are running an LLM inference

on all of those, or at least as many as

they're willing to show AI overviews on,

which I'm sure is not every query, but

many.

>> a subset.

>> Yeah.

>> But still a large large number of Google

I mean I see them all the time.

>> Yep.

>> This is really Google immediately

deciding to operate at AI speed. I mean

ChatGPT launched on November 30th, 2022.

We're now in May 2023.

All of these decisions have been made,

all of these changes have happened, and

they're announcing things at IO

>> and they're really flexing the

infrastructure that they've got. I mean

the fact that they can go like oh yeah

sure let's do inference on every query

we're Google we can handle it.

>> So a key part of this new Gemini model

that they announced in May 2023 is it's

going to be multimodal. Again this is

one model for everything text images

video audio one model. They release it

for early public access in December

2023. So also crazy 6 months. They build

it, they train it, they release it.

>> That is amazing.

>> Then in February 2024, they launched

Gemini 1.5 with a 1 million token

context window. Much much larger context

window than any other model on the

market,

>> which enables all sorts of new use

cases. There's all these people who were

like, "Oh, I tried to use AI before, but

it couldn't handle my XYZ use case." Now

they can.

>> Yep. The next year, February 2025, they

release Gemini 2.0. March of 2025, one

month later, they launch Gemini 2.5 Pro

in experimental mode. And then that goes

GA in June.

>> This is like Nvidia pace, how often

they're shipping.

>> Yeah, seriously. And also in March of

2025, they launch AI mode. So you can

now switch over on google.com to chatbot

mode.

>> And they're split testing auto opting

some people into AI mode to see what the

response is. This is the golden goose.

>> Yeah, the elephant is tap dancing here.

>> Yep.

>> Then there's all the other AI products

that they launch. So Notebook LM comes

out during this period. AI generated

podcasts

>> which, does that sound like us to you? It

feels a little trained.

>> The number of texts that we got when

that came out saying, "This must be trained on

Acquired."

>> I do know that a bunch of folks on the

notebook LM team are acquired fans. So I

don't know if they trained on us. And

then there's the video the image stuff

VO3 Nano Banana Genie 3 that just came

out recently. Genie, this is insane. And

this is a world builder based on prompts

and videos.

>> Yeah. You haven't actually used it yet,

right? You watch that hype video.

>> Yeah, I watched the video. I haven't

actually used it.

>> Yeah. I mean, if it does that, that's

unbelievable. It's a real time

generative

>> world builder.

>> World builder. Yeah. You look right and

it invents stuff to your right. I mean,

you combine that with like a vision pro

hardware, you're just living in a

fantasy land. So, they announced there

are now 450 million monthly users of

Gemini. Now, that includes everybody

who's accessing Nano Banana.

>> Yeah, I can't believe this stat. This is

insane. Even with recently being number

one in the app store, it still feels

hard to believe. Google's saying it, so

it must be true. But I just wonder what

are they counting as use cases of the

Gemini app,

>> right? Certainly everybody who's using

Nano Banana is using Gemini.

>> But is it counting AI overviews or is it

counting AI mode or is it counting

something where I'm like accidentally

like Meta said that crazy high number of

people using Meta AI and

>> Right. Right. Right.

>> That was complete garbage. That was

people searching Instagram who

accidentally hit a Llama model that made

some things happen and they were like,

"Uh, go away. I actually am just looking

for a user." Is it really 450 million, or

is it "450 million"?

>> Yeah, good question. Either way, going

from zero is crazy impressive in the

amount of time that they have done it in,

>> especially given revenue is at an

all-time high. They seem to so far be at

least in this squishy early phase able

to figure out how to keep the core

business going while doing well as a

competitor in the cutting edge of AI.

>> Yeah. And to foreshadow a little bit to

we're going to do a bull and bear here

in a minute. As we talked about in our

Alphabet episode, Google does have a

history of navigating platform shifts

incredibly well in the transition to

mobile.

>> It's true.

>> Definitely a rockier start here in the

AI platform shift.

Much rockier. But hey, look, I mean, if

you were to lay out a recipe for how to

respond given the rocky start, be hard

to come up with a much better slate of

things than what they've done over the

last two years.

>> Yeah.

All right. Should I give us the snapshot

of the business today?

>> Give us the snapshot of the business

today. Oh, yeah. Also, by the way, the

federal government decided they were a

monopoly and then decided not to do

anything about it because of AI.

>> Yeah. So, between the time when we

shipped our Alphabet episode and here

with our Google AI episode, or our

part two and part three for those who

prefer simpler naming schemes. Yeah,

there was a US versus Google antitrust

case. The judge first ruled that Google

was a monopoly in internet search and

then did not come up with any material

remedies. I mean there are some, but I

would call them immaterial. They did not

need to spin off Chrome and they did not

need to stop sending tens of billions of

dollars to Apple and others. In other

words, yes, Google's a monopoly and the

cost of doing anything about that would

have too many downstream consequences on

the ecosystem. So, we're just going to

let them keep doing what they're doing.

And one of the reasons that the judge

cited for why they weren't going to

really take these actions is because of

the race in AI. That because tens of

billions of dollars of funding have gone

into companies like OpenAI and Anthropic

and Perplexity, Google essentially has

this new war to fight and we're going to

leave it to the free market to do its

thing where it creates viable

competition on its own and we're not

going to hamstring Google. Personally, I

think this argument is a little bit

silly. I mean, none of these AI

companies are generating net income, and

just because they've raised a huge

amount of money, it doesn't mean that

will last forever. They'll all burn

through their existing cash in a pretty

short period of time. And if the

spigots ever dry up, Google doesn't

have any self-sustaining competition

right now, whether in their old search

business or in AI. It is all dependent

on people believing that the opportunity

is so large that they keep pouring tens

of billions of dollars into these

competitors. Yeah, plenty of other folks

have made the sort of glib comment, but

there's merit to it of, hey, as

flat-footed as Google was when ChatGPT

happened, if the outcome of this is that they

avoid a Microsoft-level distraction and

damage to their business from a US

federal court monopoly judgment, it's worth

it.

>> Well, there's a funny meme here that you

could draw. You know that meme of

someone pushing the domino and it

knocking over some big wall later.

>> Yeah.

>> There's the domino of Ilya leaving

Google to start OpenAI and the

downstream effect is Google is not

broken up.

>> Yeah. Right. Exactly.

>> It actually saves Google.

>> It actually saves Google.

>> It's totally wild.

>> Totally wild.

>> All right. So, here's the business

today. Okay, over the last 12 months,

Google has generated $370 billion

in revenue. On the earnings side,

they've generated

$140 billion over the last 12 months,

which is more profit than any other tech

company. And the only company in the

world with more earnings is Saudi

Aramco. Let's not forget Google is the

best business ever. And we also made the

point at the end of the Alphabet

episode, even in the midst of all of

this AI era and everything that's

happened over the last 10 years, the

last 5 years, Google's core business has

continued to grow 5x since the end of

our Alphabet episode in 2015-2016.

>> Yeah. Market cap. Google surged past

their old peak of two trillion and just

hit that three trillion mark earlier

this month. They're the fourth most

valuable company in the world behind

Nvidia, Microsoft, and Apple. It's just

crazy. On their balance sheet, I

actually think this is pretty

interesting. I normally don't look at

balance sheet as a part of this

exercise, but it's useful. And here's

why. In this case, they have 95 billion

in cash and marketable securities. And I

was about to stop there and make the

point, wow, look how much cash and

resources they have.

>> I'm actually surprised it's not more. So

it used to be 140 billion in 2021 and

over the last four years they've

massively shifted from this mode of

accumulating cash to deploying cash and

a huge part of that has been the capex

of the AI data center buildout. So

they're very much playing offense in the

way that Meta, Microsoft and Amazon are

in deploying that capex. But the thing

that I can't quite figure out is the

largest part of that was actually

buybacks and they started paying a

dividend. So if you're not a finance

person, the way to read into that is

yes, we still need a lot of cash for

investing in the future of AI and data

centers, but we still actually had way

more cash than we needed and we decided

to distribute that to shareholders.

>> Yeah,

>> that's crazy.

>> Best business of all time, right? That

illustrates what a crazy business their

core search ads business is. If they're

saying, "The most capital intense race

in business history is happening right

now. We intend to win it."

>> Yeah.

>> And we have tons of extra cash lying

around on top of what we think we need, plus a

safety cushion, for investing in that

capex race.

>> Yeah.

>> Yes.

>> Wow. So there are two businesses that

are worth looking at here. One is Gemini

to try to figure out what's happening

there and two is a brief history of

Google cloud. I want to tell you the

cloud numbers today but it's probably

worth actually understanding how did we

get here on cloud.

>> Yep.

>> First on Gemini because this is Google

and they have I think the most

obfuscated financials of any of the

companies we've studied. They anger me

the most in being able to hide the ball

in their financial statements. Of

course, we don't know Gemini specific

revenue. What we do know is there are

over 150 million paying subscribers to

the Google One bundle. Most of that is on

a very low tier. It's on like the $5 a

month, $10 a month. The AI stuff kicks

in on the $20 a month tier where you get

the premium AI features, but I think

that's a very small fraction of the 150

million today.

>> Yeah, I think that's what I'm on.

>> But two things to note. One, it's

growing quickly. That 150 million is

growing almost 50% year-over-year. But

two is Google has a subscription bundle

that 150 million people are subscribed

to. And so I've kind of had it in my

head that AI doesn't have a future as a

business model that people pay money for,

that it has to be ad-supported like

search.

>> But hey, that's not nothing. That's like

a

>> that's almost half of America.

>> I mean, how many subscribers does

Netflix have?

>> Netflix is in the hundreds of millions.

Yeah,

>> There are real, scaled

consumer subscription services. I owe

this insight to Shishir Mehrotra. We

chatted actually last night cuz I name

dropped him on the last episode and then

he heard it and so we reached out, we

talked and that's made me do a 180. I

used to think if you're going to charge

for something your total addressable

market shrank by 90 to 99%. But he kind

of has this point that if you build a

really compelling bundle and Google has

the digital assets to build a compelling

bundle.

>> Oh my goodness. YouTube Premium, NFL

Sunday Ticket.

>> Yes. Stuff in the Play Store, YouTube

Music, all the Google One storage stuff.

They could put AI in that bundle and

figure out through clever bundle

economics a way to make a paid AI

product that actually reaches a huge

number of paying subscribers. Totally.

>> So, we really can't figure out how much

money Gemini makes right now. Probably

not profitable anyway. So, what's the

point of even analyzing it?

>> Yeah. But, okay, tell us the cloud

story. So, we intentionally did not

include cloud in our Alphabet episode.

>> Google part two effectively.

>> Google part two. Yes. Because it is a

new product and now very successful one

within Google that was started during

the same time period as all the other

ones that we talked about during Google

part two. But it's so strategic for AI.

Yes, it is a lot more strategic now in

hindsight than it looked when they

launched it. So just quick background on

it, it started as Google App Engine. It

was a way in 2008 for people to quickly

spin up a backend for a web or soon

after a mobile app. It was a platform as

a service. So you had to do things in

this very narrow googly way. It was very

opinionated. You had to use this SDK.

You had to write it in Python or Java.

You had to deploy exactly the way they

wanted you to deploy. It was not a thing

where they would say, "Hey developer,

you can do anything you want. Just use

our infrastructure." It was opinionated,

super different than what AWS was doing

at the time and what they're still doing

today, which the whole world eventually

realized was right, which is cloud

should be infrastructure as a service.

Even Microsoft pivoted Azure to this

reasonably quickly where it was like,

you want some storage, we got storage

for you. You want a VM, we got a VM for

you. You want some compute, you want a

database,

>> we got you.

>> Fundamental building blocks. So

eventually, Google launches their own

infrastructure as a service in 2012.

Took four years. They launched Google

Compute Engine that they would later

rebrand Google Cloud Platform. That's

the name of the business today. The

knock on Google is that they could never

figure out how to possibly interface

with the enterprise. Their core

business, they made really great

products for people to use, that they

loved polishing, they made them all as

self-serve as possible, and then the

way they made money was from

advertisers. And let's be honest,

there's no other choice but to use

Google search,

>> right? It didn't necessarily need to

have a great enterprise experience for

their advertising customers because they

were going to come anyway,

>> right? And so they've got this

self-serve experience. Meanwhile, the cloud

is a knife fight. These are commodities

>> all about the enterprise.

>> It's the lowest possible price and it's

all about enterprise relationships and

clever ways to bundle and being able to

deliver a full solution.

>> You say solution, I hear gross margin.

>> Yes. But yes, so Google is out of their

natural habitat in this domain

>> and early on they didn't want to give

away any crown jewels. They viewed their

infrastructure as this is our secret

thing. We don't want to let anybody else

use it. And the best software tools that

we have on it that we've written for

ourselves, like Bigtable, or Borg, which is how we

run Google, or DistBelief. These are not

services that we're making available on

Google cloud.

>> Yeah. These are competitive advantages.

>> Yes. And then they hired the former

president of Oracle, Thomas Kurian.

>> Yes. And everything kind of changed. So

2017, 2 years before he comes in, they

had $4 billion in revenue 10 years into

running this business. Their

first very clever strategic decision

actually came earlier: they open-sourced Kubernetes in 2014. The big

insight here is if we make it more

portable for developers to move their

applications to other clouds, the world

is kind of wanting multicloud here,

>> right? We're the third place player. We

don't have anything to lose.

>> Yes.

>> So we can offer this tool as a kind of

counterposition against AWS and Azure.

>> We shift the developer paradigm to use

these containers. They orchestrate on

our platform and then you know we have a

great service to manage it for you. It

was very smart. So this kind of becomes

one of the pillars of their strategy is

you want multicloud, we're going to make

that easy and you can sure choose AWS or

Azure too. It's going to be great. So

David, as you said, the former president

of Oracle, Thomas Kurian, is hired in

late 2018. You couldn't ask for a better

person who understands the needs of the

enterprise than the former president of

Oracle. This shows up in revenue growth

right away. In 2020, they crossed 13

billion in revenue, more than

tripling in three years. They hired like

10,000 people into the go to market

organization. I'm not exaggerating that.

And that's on a base of 150 people when

he came in, most of which were seated in

California, not regionally distributed

throughout the world. The funniest thing

is Google kind of was a cloud company

all along. They had the best engineers

building this amazing infrastructure,

>> right? They had the products, they had

the infrastructure, they just didn't

have the go to market organization,

>> right? And the productization was all

like googly. It was like for us, for

engineers. They didn't really build

things that let enterprises build the

way that they wanted to build. This all

changes. 2022, they hit 26 billion in

revenue. 2023, they're like a real

viable third cloud. They also flipped to

profitability in 2023. And today,

they're over $50 billion in annual

revenue run rate. It's growing 30%

year-over-year. They're the fastest

growing of the major cloud providers, 5x

in five years. And it's really three

things. It's finding religion on how to

actually serve the enterprise. It's

leaning into this multi cloud strategy

and actually giving enterprise

developers what they want. And three, AI

has been such a good tailwind for all

hyperscalers because these workloads all

need to run in the cloud because it's

giant amounts of data and giant amounts

of compute and energy. But in Google

Cloud, you can use TPUs, which they make

a ton of, and everyone else is

desperately begging Nvidia for

allocations to GPUs. So, if you're

willing to not use CUDA and build on

Google's stack, they have an abundant

amount of TPUs for you.

>> This is why we saved cloud for this

episode. There are two aspects of Google

cloud that I don't think they foresaw

back when they started the business with

App Engine but are hugely strategically

important to Google today. One is just

simply that cloud is the distribution

mechanism for AI. So if you want to play

in AI today, you need to have a

great application, a great model, a

great chip, or a great cloud. Google is

trying to have all four of those.

>> Yes,

>> there is no other company that has I

think more than one.

>> I think that's the right call. Think

about the big AI players. Nvidia

>> chips

>> kind of has a cloud but not really. They

just have chips, and they're the best chips

and the chips everyone wants but chips.

And then you just look around the rest

of the big tech companies. Meta right

now only an application. They're

completely out of the race for the

frontier models at the moment. We'll see

what their hiring spree yields. You

look at Amazon: infrastructure. They have

an application, maybe. I don't actually know

about Amazon.com; I'm sure it benefits from

LLMs in a bunch of ways.

>> Mainly it's cloud.

>> Yes, cloud and cloud leader. Microsoft

>> cloud.

>> It's just cloud, right? They make some

models but

>> I mean they've got applications, but

yeah cloud

>> cloud. Apple

>> nothing. Nothing.

>> AMD just chips.

>> Yep. OpenAI, model.

>> Anthropic model.

>> Yep.

>> Yep.

>> These companies don't have their own

data centers. They are like making noise

about making their own chips, but not

really and certainly not at scale.

Google has scale data centers, scale

chips, scale usage of models. I mean,

even just from google.com queries now on

AI overviews

>> and scale applications.

>> Yes. Yeah, they have all of the pillars

of AI and I don't think any other

company has more than one

>> and they have the very most net income

dollars to lose.

>> Right? So then there's the chip side

specifically of this. If Google didn't

have a cloud, it wouldn't have a chip

business. It would only have an internal

chip business. The only way that

external companies, users, developers,

model researchers could use TPUs would

be if Google had a cloud to deliver them

because there's no way in hell that

Amazon or Microsoft are going to put

TPUs from Google in their clouds.

>> We'll see.

>> We'll see. I guess

>> I think within a year it might happen.

There are rumors already that some

NeoClouds in the coming months are going

to have TPUs.

>> Hm, interesting. Nothing announced, but

TPUs are likely going to be available in

Neocloud soon, which is an interesting

thing. Why would Google do that? Are

they trying to build an Nvidia-type

business where they make money selling

chips? I don't think so. I think it's

more that they're trying to build an

ecosystem around their chips the way

that CUDA does. And you're only going to

credibly be able to do that if your

chips are accessible anywhere that

someone's running their existing

workloads.

>> Yep. It'd be very interesting if it happens.

And you know, look, you may be right.

Maybe there will be TPUs in AWS or Azure

someday,

but I don't think they would have been

able to start there. If Google didn't

have a cloud and there weren't any way

for developers to use TPUs and start

wanting TPUs,

would Amazon or Microsoft be like, "Ah,

you know, all right, Google, we'll take

some of your TPUs even though no

developer out there uses them." Right.

>> All right. Well, with that, let's move

into analysis. I think we need to do

Bull and Bear on this one.

>> You have to this time.

>> Got to bring that back.

>> For these episodes in the present, it

seems like we need to paint the possible

futures.

>> Yes. Bringing back bull and bear. I love

it. Then we'll do playbook, powers,

quintessence. Bring it home.

>> Perfect. All right. So, here's my set of

bull cases. Google has distribution to

basically all humans as the front door

to the internet. They can funnel that

however they want. You've seen it with

AI overviews. You've seen it with AI

mode. Even though lots of people use

ChatGPT for lots of things, Google's

traffic, I assume, is still essentially

at an all-time high and it's a default

behavior.

>> Yep. Powerful. So that is a bet on

implementation that Google figures out

how to execute and build a great

business out of AI, but it is still

theirs to lose.

>> Yeah. And they've got a viable product.

It's not clear to me that Gemini is any

worse than OpenAI's or Anthropic's

products.

>> No, I completely agree. This is a value

creation, value capture thing. The value

creation is there in spades. The value

capture mechanism is still TBD.

>> Yeah. Google's old value capture

mechanism is one of the best in history.

So that's the issue at hand. Let's not

get confused: it's not just a good

experience, it's a great experience.

>> Yeah. Yeah. Yeah. Okay. So we've talked

about the fact that Google has all the

capabilities to win in AI and it's not

even close. Foundational models, chips, a

hyperscaler, all of this with

self-sustaining funding. I mean that's the

other crazy thing is you look at the

clouds have self-sustaining funding.

Nvidia has self-sustaining funding. None

of the model makers have self-sustaining

funding, so they're all dependent on

external capital.

>> Yeah. Google is the only model maker who

has self-sustaining funding.

>> Yes. Isn't that crazy?

>> Yeah.

>> Basically, all the other large scale

usage foundational model companies are

effectively startups.

>> Yes.

>> And Google's is funded by a money funnel

so large that they're giving extra

dollars back to shareholders for fun.

>> Yeah.

>> Again, we're in the bull case.

>> Well, when you put it that way. Yeah, a

thing we didn't mention, Google has

incredibly fat pipes connecting all of

their data centers. After the dot-com crash

in 2000, Google bought all that dark

fiber for pennies on the dollar, and

they've been activating it over the last

decade. They now have their own private

backbone network between data centers.

No one has infrastructure like this.

>> Yep.

>> Not to mention that serves YouTube.

Their fat pipes,

>> which in and of itself is its own

bull case for Google in the future.

>> That's a great point.

>> Yeah, Ben Thompson had a big article

about this yesterday at the time of

recording.

>> Yeah, that was like a mega bull case

that Ben Thompson published this week

that made an interesting point. A

text-based internet is kind of the old

internet. It's the first instantiation

of the internet because we didn't have

much bandwidth. The user experience that

is actually compelling is

>> video,

>> high resolution video everywhere all the

time.

>> We already live in the YouTube internet,

>> right? And not only can they train

models on really the only scale source

of UGC media across long form and short

form, but they also have that as the

number two search engine, this massive

destination site. So they previewed

things like you'll be able to buy

AI-labeled or AI-determined things that

show up in videos. And if they wanted

to, they could just go label every

single product in every single video and

make it all instantly shoppable. Doesn't

require any human work to do it. They

could just do it and then run their

standard ads model on it. That was a

mind expanding piece that Ben published

yesterday or I guess if you're listening

to this a few weeks ago about that. And

then there's also all the video AI

applications that they've been building

like Flow and Veo. What is that going to

do for generating videos for YouTube

that will increase engagement and add

dollars for YouTube?

>> Yep.

>> Going to work real well.

>> Yep. They still have an insane talent

bench. Even though, you know, they've

bled talent here and there and lost

people. They have also shown they're

willing to spend billions for the right

people and retain them. Unit economics.

Let's talk about unit economics of

chips. Everyone is paying Nvidia 75 to 80%

gross margins, implying something like a

4 to 5x markup on what it costs to

make the chips. A lot of people refer to

this as the Jensen tax or the Nvidia

tax. Uh you can call it that, you can

call it good business, you can call it

pricing power, you could call it

scarcity of supply, whatever you want.

But that is true. Anyone who doesn't

make their own chips is paying a giant

giant premium to Nvidia. Google has to

still pay some margin to their chip

hardware partner Broadcom that handles a

lot of the work to actually make the

chip interface with TSMC. I have heard

that Broadcom has something like a 50%

margin when working with Google on the

TPU versus Nvidia's 80%. But that's

still a huge difference to play with. A

50% gross margin from your supplier or

an 80% gross margin from your supplier

is the difference between a 2x markup

and a 5x markup.
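
To make that margin-to-markup arithmetic concrete, here's a minimal sketch (the 50% and 80% figures are the rough margins discussed in the episode, not disclosed numbers; a gross margin m on price implies a markup of 1/(1-m) on cost):

```python
# A supplier's gross margin implies a markup on cost:
#   margin = (price - cost) / price  =>  price / cost = 1 / (1 - margin)

def markup_from_margin(margin: float) -> float:
    """Convert a supplier's gross margin (0..1) into a price/cost multiple."""
    return 1.0 / (1.0 - margin)

print(round(markup_from_margin(0.50), 2))  # 2.0 -> a 50% margin is a 2x markup
print(round(markup_from_margin(0.80), 2))  # 5.0 -> an 80% margin is a 5x markup
```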

>> Yeah, I guess that's right.

>> When you frame it that way, it's

actually a giant difference in the

impact to your cost. So you might wonder,

appropriately, well, are chips actually

the big part of the cost of like the

total cost of ownership of running one

of these data centers or training one of

these models? Chips are the main driver

of the cost. They depreciate very

quickly. I mean, this is at best a

five-year depreciation because of how

fast we are pushing the limits of what

we can do with chips, the needs of next

generation models, how fast TSMC is able

to produce.

>> Yeah. I mean, even that is ambitious,

right? If you think you're going to get

5 years of depreciation on AI chips,

five years ago, we were still two years

away from ChatGPT,

>> right? Or think about what Jensen said

when we were at GTC this year. He was

talking about Blackwell and he said

something about Hopper and he was like,

"Eh, you don't want Hopper." My sales

guys are going to hate me, but like you

really don't want Hopper at this point.

I mean, these were the H100s. This was

the hot chip just when we were doing our

most recent NVIDIA episode.

>> Yes. Things move quickly.

>> Yes. So I've seen estimates that over

half the cost of running an AI data

center is the chips and the associated

depreciation. The human cost, the R&D, is

actually a pretty high share because

hiring these AI researchers and all the

software engineering is meaningful. Call

it 25 to 33%.

The power is actually a very small part.

It's like 2 to 6%. So when you're

thinking about the economics of doing

what Google's doing, it's actually

incredibly sensitive to how much margin

are you paying your supplier in the

chips because it's the biggest cost

driver of the whole thing.
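
A back-of-the-envelope sketch of that sensitivity (the ~55% chip share and the 2x/5x markups are the rough figures from this discussion; everything here is illustrative, not audited numbers): if chips bought at a 5x markup are over half of total cost of ownership, re-pricing the same silicon at a 2x markup shrinks the whole bill by roughly a third.

```python
# Illustrative TCO sensitivity to supplier markup.
tco = 1.00                   # normalize total cost of ownership to 1
chip_share_at_5x = 0.55      # chips + depreciation, "over half the cost"
silicon_at_cost = chip_share_at_5x / 5.0  # strip the 5x markup back out
chips_at_2x = silicon_at_cost * 2.0       # re-price at a 2x markup

new_tco = tco - chip_share_at_5x + chips_at_2x
print(round(new_tco, 2))     # 0.67 -> the same data center, roughly a third cheaper
```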

>> Mhm.

>> So I was sanity checking some of this

with Gavin Baker, who's a partner at

Atreides Management, to prep for this

episode. He's like a great public

equities investor who's studied the

space for a long time. We actually

interviewed him at the Nvidia GTC

pregame show and he pointed out normally

like in historical technology eras it

hasn't been that important to be the

low-cost producer. Google didn't win

because they were the lowest cost search

engine. Apple didn't win because they

were the lowest cost. You know, that's

not what makes people win. But this era

might actually be different because

these AI companies don't have 80%

margins the way that we're used to in

the technology business, or at least in

the software business. At best, these AI

companies look like 50% gross margins.

So Google being definitively the low-cost

provider of tokens because they operate

all their own infrastructure and because

they have access to low markup hardware.

It actually makes a giant difference and

might mean that they are the winner in

producing tokens for the world.

>> Very compelling bull case there.

>> That's a weirdly winding analytical

bull case, but it's kind of the core of it: if you

want to really get down to it, they

produce tokens.

>> Yep. I've got one more bullet point to

add to the bull case for Google here.

Everything that we talked about in part

two, the Alphabet episode, all of the

other products within Google, Gmail,

Maps, Docs, Chrome, Android, that is all

personalized data about you that Google

owns that they can use to create

personalized AI products for you that

nobody else has.

>> Another great point. So really the

question to close out the bull case is:

is AI a good business to be in compared

to search. Search is a great business to

be in. So far AI is not. But in the

abstract, again, we're in the bull case. So

I'll give you this. It should be. With

traditional web search you type in two

to three words. That's the average query

length. And I was talking to Bill Gross

and he pointed out that in AI chat

you're often typing 20 plus words. So

there should be an ad model that emerges

and ad rates should actually be

dramatically higher cuz you have perfect

precision,

>> right? You have even more intent.

>> Yes, you know the crap out of what that

user wants. So you can really decide to

target them with the ad or not. And AI

should be very good at targeting with

the ad. So it's all about figuring out

the user interface, the mix of paid

versus not, exactly what this ad model

is. But in theory, even though we don't

really know what the product looks like

now, it should actually lend itself very

well to monetization.

>> Yep.

>> And since AI is such an amazing,

transformative experience, all these

interactions that were happening in the

real world or weren't happening at all,

like answers to questions, and all this

time spent, are now happening in these AI

chats. So, it seems like the pie is

actually bigger for digital interactions

than it was in the search era. So again,

monetization should kind of increase

because the pie increases there.

>> Yep.

>> And then you've got the bull case that

Waymo could be its own Google-sized

business.

>> I was just thinking that yeah, that's

scoping all of this to a replacement to

the search market. Waymo and potentially

other applications of AI beyond the

traditional search market could add to

that,

>> right? And then there's the like galaxy

brain bull case, which is if Google

actually creates AGI, none of this even

matters anymore. And like of course it's

the most valuable thing.

>> That feels out of the scope for an

acquired episode.

>> It's disconnected.

Yes, agree. Bear case. So far, this is

all fun to talk about, but then the

product shape of AI has not lent itself

well to ads. So despite more value

creation, there's way less value

capture. Google makes something like

$400ish per user per year just

based on some napkin math in the US.

That's a free service that everyone uses

and they make $400ish a year.

Who's going to pay $400 a year for

access to AI? It's a very thin slice of

the population.

>> Some people certainly will, but not

every person in America.

>> Some people will pay 10 million, but

right. So if you're only looking at the

game on the field today, I don't see the

immediate path to value capture. And

think about when Google launched in

1998, it was only 2 years before they

had AdWords. They figured out an amazing

value capture mechanism very

quickly. Yep. Another bear case. Think

back to Google's launch in 1998. It was

immediately obviously the superior

product. Yes,

>> definitely not the case today.

>> No, there's four, five great products.

>> Google's dedicated AI offering in

chatbots was initially the immediately,

obviously inferior product, and now it's

arguably on par with several others,

right? They own 90% of the search

market. I don't know what they own of

the AI market, but it ain't 90%. Is it

25%? I don't know. But at steady state,

it probably will be something like 25%,

maybe up to 50%. But this is going to be

a market with several big players in it.

So even if they monetized each user, as

great as they monetize it in search,

they're just going to own way less of

them.

>> Yep. Or at least it certainly seems that

way right now.

>> Yes. AI might take away the majority of

the use cases of search. And even if it

doesn't, I bet it takes away a lot of

the highest value ones.

>> Mhm.

>> If I'm planning a trip, I'm planning

that in AI. I'm no longer searching on

Google for things that are going to land

Expedia ads in my face.

>> Or health, another huge vertical.

>> Hey, I think I might have something that

reminds me of mesothelioma. Is it that or

not,

>> right?

>> Oh, where are you going to put the

lawyer ads? Maybe you put them there.

Maybe it's just an ad product thing, but

these are very high value

>> queries,

>> former searches. Those feel like

some of the first things that are

getting siphoned off to AI.

>> Yep.

>> Any other bear cases? I think the only

other bear case I would add is that they

have the added challenge now of being

the incumbent this time around and

people and the ecosystem aren't

necessarily rooting for them in the way

that people were rooting for Google when

they were a startup and in the way that

people were still rooting for Google in

the mobile transition. I think the

startups have more of the hearts and

minds these days,

>> right? So, I don't think that's

quantifiable, but it's just going to make

it all a little harder of a path to row this

time around.

>> Yep. You're right. They had this

incredible PR and public love tailwind

the first time around.

>> Yep. And part of that's systemic, too.

Like all of tech and all of big tech is

just generally more out of favor with

the country and the world now than it

was 10 or 15 years ago.

>> They're more important. It's just big

infrastructure. They're not underdogs

anymore.

>> Yep. And that affects the OpenAIs and

the Anthropics and the startups too, but

I think to a lesser degree.

>> Yeah, they had to start behaving like

big tech companies really early in their

life compared to Google. I mean, Google

gave a Playboy interview during their

quiet period of their IPO. Times have

changed.

>> Well, I mean, given all the drama at

OpenAI, I don't know that I'd

characterize them as acting like a

mature company.

>> Fair. Fair

>> company, entity, whatever they are.

>> Yes.

>> Yeah. But point taken.

>> Well, I worked most of my playbook into

the story itself. So, you want to do

power?

>> Yeah. Great. Let's move on and do power.

Hamilton Helmer's seven powers analysis

of Google here in the AI era. And the

seven powers are scale economies,

network economies, counterpositioning,

switching costs, branding, cornered resource, and process power. And the

question is which of these enables a

business to achieve persistent

differential returns? What entitles them

to make greater profits than their

nearest competitor sustainably? Normally

we would do this on the business all up.

I think for this episode we should try

to scope it to AI products.

>> Yes, agreed. Usage of Gemini, AI Mode, and AI Overviews versus the competitive set of Anthropic, OpenAI, Perplexity, Grok, Meta AI,

>> etc. Scale economies for sure. Even more

so in AI than traditionally in tech.

>> Yeah, they're just way better. I mean,

look, they're amortizing the cost of

model training across every Google

search. I'm sure it's some super

distilled down model that's actually

happening for AI overviews, but think

about how many inference tokens are

generated for the other model companies

and how many inference tokens are

generated by Gemini. They just are

amortizing that fixed training cost over a giant, giant amount of inference. I saw some crazy chart; we'll send it out to email subscribers. In April of '24, Google was processing 10 trillion tokens across all their surfaces. In April of '25, that was almost 500 trillion. Wow.

>> That's a 50x increase in one year of the

number of tokens that they're vending

out across Google services through

inference. And between April of 25 and

June 25, it went from a little under 500

trillion to a little under one

quadrillion tokens. Technically 980

trillion, but they are now, cuz it's

later in the summer, definitely sending

out maybe even multiple quadrillion

tokens.
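The growth figures quoted here are easy to sanity-check with a bit of arithmetic. A minimal sketch, using the numbers as stated in the episode (treated as rough approximations, not official figures):

```python
# Sanity-check the token-volume growth figures quoted above.
# All inputs are approximations taken from the conversation.

apr_2024 = 10e12    # ~10 trillion tokens, April 2024
apr_2025 = 480e12   # "almost 500 trillion", April 2025
jun_2025 = 980e12   # "technically 980 trillion", June 2025

# Year-over-year multiple: roughly the "50x" cited (about 48x with these inputs)
yoy_multiple = apr_2025 / apr_2024

# April-to-June 2025: roughly a doubling in two months
two_month_multiple = jun_2025 / apr_2025

# Implied compound monthly growth over those two months
monthly_growth = two_month_multiple ** 0.5 - 1

print(f"Year-over-year multiple: {yoy_multiple:.0f}x")
print(f"April-to-June 2025 multiple: {two_month_multiple:.2f}x")
print(f"Implied monthly growth: {monthly_growth:.0%}")
```

At that pace (roughly 40% compound growth per month), crossing from just under one quadrillion tokens to "multiple quadrillion" later in the summer is exactly what you'd expect.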

>> Wow.

>> Wow. So among all the other obvious

scale economies things of amortizing all

the costs of their hardware, they are

amortizing the cost of training runs

over a massive amount of value creation.

>> Yeah, scale economies must be the

biggest one.

>> I find switching costs to be relatively

low. If I use Gemini for some stuff, it's really easy to switch away. That

probably stops being the case when it's

personal AI to the point that you're

talking about integrating with your

calendar and your mail and all that

stuff. Yeah, the switching costs have

not really come out yet in AI products,

although I expect they will.

>> Yes, they have within the enterprise for

sure.

>> Yep.

>> Network economies. I don't think anyone else being a Gemini user makes it better for me, because they are

sucking up the whole internet whether

anyone's participating or not.

>> Yep, agree. I'm sure AI companies will

develop network economies over time. I

can think of ways it could work, but

yeah, right now, no. And arguably for the foundational model companies, I can't think of obvious reasons right now.

Where does Hamilton put distribution?

Because that's a thing that they have

right now that no one else has despite

ChatGPT having the Kleenex brand. Google

distribution is still unbelievable. I

don't Is that a cornered resource?

>> Cornered resource, I guess. Yeah,

>> definitely have that.

>> Yeah, Google search is a cornered

resource for sure.

>> Certainly don't have counterpositioning.

They're getting counterpositioned.

>> Yeah.

>> I don't think they have process power

unless they were like coming up with the

next transformer reliably, but I don't

think we're necessarily seeing that.

There's great research being done at a

bunch of different labs. Branding they

have

>> Yeah, branding is a funny one, right?

Well, I was going to say it's a little bit to my bear case point about their being the incumbent.

>> It cuts both ways, but I think it's net

positive.

>> Yeah, probably. For most people, they

trust Google. Yeah, they probably don't

trust these who-knows AI companies, but

I trust Google. I bet that's actually

stronger than any downsides as long as

they're willing to still release stuff

on the cutting edge.

>> Yep.

>> So, to sum it up: scale economies is the biggest one, plus branding and a cornered resource

>> and potential for switching costs in the

future. Yep. Sounds right to me.

>> But it's telling that it's not all of

them. You know, in search it was like

very obviously all of them or most of

them.

>> Yep. Quite telling. Well, I'll tell you,

after hours and hours spent over multiple months learning about this company, my

quintessence when I boil it all down is

just that this is the most fascinating example of the innovator's dilemma ever.

I mean, Larry and Sergey control the

company. They have been quoted

repeatedly saying that they would rather

go bankrupt than lose at AI. Will they

really? If AI isn't as good a business as search (and it kind of feels like of course it will be, of course it has to be, just because of the sheer amount of value creation), then they're choosing between two outcomes. One is fulfilling their mission of organizing the world's information and making it universally accessible and useful; the other is having the most profitable tech company in the world. Which one wins?

Cuz if it's just the mission, they

should be way more aggressive on AI mode

than they are right now. And full flip

over to Gemini. It's a really hard

needle to thread. I'm actually very

impressed at how they're managing to

currently protect the core franchise,

but it might be one of these things

where it's being eroded away at the

foundation in a way that just somehow

isn't showing up in the financials yet.

I don't know.

>> Yep. I totally agree. And in fact,

perhaps influenced by you, I think my

quintessence is a version of that, too.

I think if you look at all the big tech

companies, Google, as unlikely as it

seems, given how things started, is

probably doing the best job of trying to

thread the needle with AI right now. And

that is incredibly commendable to Sundar

and their leadership. They are making hard decisions, like: we're unifying DeepMind and Brain, we're consolidating and standardizing on one model, and we're going to ship this stuff real fast, while at the same time not making rash decisions.

>> It's hard. Rapid but not rash, you know.

>> Yes. And obviously we're still in early

innings of all this going on and we'll

see in 10 years where it all ends up.

Yeah. Being tasked with being the

steward of a mission and the steward of

a franchise with public company

shareholders is a hard dual mission and

Sundar and the company are handling it

remarkably well especially given where

they were 5 years ago.

>> Yep. And I think this will be one of the

most fascinating examples in history to

watch it play out.

>> Totally agree. Well, thus concludes our

Google series for now.

>> Yes. All right, let's do some carveouts.

>> All right, let's do some carveouts.

Well, first off, we have a very, very fun announcement to share with you all.

The NFL called us.

>> We're going to the Super Bowl, baby.

>> Acquired is going to the Super Bowl.

This is so cool.

>> It's the craziest thing ever.

>> The NFL is hosting an innovation summit the week of the Super Bowl, the Friday

before Super Bowl Sunday. The Super Bowl

is going to be in San Francisco this

year in February. And so it's only

natural coming back to San Francisco

with the Super Bowl that the NFL should

do an innovation summit.

>> Yep.

>> And we're going to host it.

>> That's right. So, the Friday before

there's going to be some great onstage

interviews and programming. Most of you,

you know, we can't fit millions of

people in a tiny auditorium in San

Francisco the week of the Super Bowl

when every other venue has tons of

stuff, too. So, there will be an

opportunity to watch that streaming

online. And as we get closer to that

date in February, we will make sure that

you all know a way that you can tune in and watch the MCing, interviewing, and festivities at hand during Super Bowl week.

>> It's going to be an incredible,

incredible day leading up to an

incredible Sunday.

>> Yes. Well, speaking of sport, my carve

out is I finally went and saw F1. It is

great. I highly recommend anyone go see

it, whether you're an F1 fan or not. It

is just beautiful cinema.

>> Amazing. Did you see it in the theater

or

>> I did see it in the theater. Yeah.

>> Wow.

>> I unfortunately missed the IMAX window,

but it was great. It was my first time

being in a movie theater in a while. And

whether you watch it at home or whether

you watch it in the theater, I recommend

the theater. But it's going to be a

great surround sound experience wherever

you are.

>> I haven't been to the movie theater since the Eras Tour.

>> Ah,

>> which I think is just more about the

current state of my family life with two

young children.

>> Yes. My second one, some of you are

going to laugh, is the Travel Pro

suitcase.

>> Ah, this is the brand that pilots and

flight attendants use, right?

>> Maybe. I think I've seen some of them

use it. Usually they use something

higher-end like a Briggs & Riley or a

Tumi or, you know, Travel Pro is not the

most high-end suitcase, but I bought two

really big ones for some international

travel that we were doing with my

2-year-old toddler. And I must say,

they're robust. The wheels glide really

well. They're really smooth. They have

all the features you would want. They're

soft shell, so you can like really jam

it full of stuff, but it's also a thick

amount of protection. So, even if you do

jam it full of stuff, it's probably not

going to break. This is approximately

the most budget suitcase you could buy.

I mean, I'm looking at the big honkin' international checked bag version. It's

$416 on Amazon right now. I've seen it

cheaper. They have great sales pretty

often. Everything about this suitcase

checked lots of boxes for me and I

completely thought I would be the person

buying the Rimowa suitcase or something very high-end, and this is just

perfect. So, I think I may be investing

in more Travel Pro suitcases.

>> More Travel Pro. Nice. Nice. Well, I

mean, hey, look, for family travel, you

don't want nice stuff.

>> Yeah. I mean, I bought it thinking like

I'll just get something crappy for this

trip, but it's been great. I don't

understand why I wouldn't have a full

lineup of Travel Pro gear. So

>> amazing.

>> This is my like budget pick gone right

that I highly recommend for all of you.

>> I love how Acquired is turning into Wirecutter here.

>> That's it for me today.

>> Great. All right. I have two carveouts.

I have one carve out and then I have a

update in my ongoing Google carveout

saga. But first, my actual carveout.

It is the Glue Guys podcast.

>> Oh, it's great. Those guys are awesome.

So great. Our buddy Ravi Gupta,

partner at Sequoia, and his buddies

Shane Battier, the former basketball

player, and Alex Smith, the former

quarterback for the 49ers and the Kansas

City Chiefs and the Redskins. Their

dynamic is so great. They have so much

fun. Half of their episodes, like us,

are just them, and then half of their

episodes are with guests. Ben and I, we

went on it a couple weeks ago. That was

really fun. When we were on it, we were

talking about this dynamic of some

episodes do better than others and

pressure for episodes and whatnot. And

the guys brought up this interview they

did with a guy named Wright Thompson.

And they said like, "Look, this is an

episode. It's got like 5,000 listens.

Nobody's listened to it. It's so good."

And the mentality that we have about it

is not that we're embarrassed that

nobody listened to it. It's that we feel

sorry for the people who have not yet

listened to it because it's so good. I

was like that is the way to think about

>> that's great

>> your episodes.

>> So here you are. You're giving everyone

the gift of

>> I'm giving everyone the gift because I

then I was like all right well I got to

go listen to this episode. Wright Thompson, I didn't know anything about him before; I probably read his work in magazines over the years without realizing it.

>> He's the coolest dude.

>> He has the same accent as Bill Gurley.

So listening to him sounds like listening to Bill Gurley if, instead of being a VC, he only wrote about sports and basically dedicated his whole life to understanding the mentality and psychology of athletes and coaches. It's

so cool. It's so cool. It's a great

episode. Highly, highly, highly

recommend.

>> All right. Legitimately, I'm queuing

that up right now.

>> Great. That's my carve out. And then my

ongoing family video gaming saga in

Google part one. I said I was debating

between the Switch 2 and the Steam Deck.

>> That's right. First, you got the Steam

Deck because you decided your daughter

actually wasn't old enough to play video

games with you, so you just got the

thing for you.

>> The update was I went with the Steam

Deck for that reason. I thought if it's

just for me, it would be more ideal. I

have an update.

>> You also got a Switch.

>> Uh, no, not yet.

>> Okay.

>> But the most incredible thing happened.

My daughter noticed this device that

appeared in our house that dad plays

every now and then. And we were on

vacation and I was playing the Steam

Deck and she was like, "What's that?"

Well, let me tell you.

>> And I was playing I've been playing this

really cool indie old school style RPG

called Sea of Stars. It's like a Chrono Trigger-style, Super Nintendo-era RPG.

I'm playing it and my daughter comes up.

She's like, "Can I watch you play?" And

I'm like, "Hell yeah, you can watch me play. I get to play video games and you sit here and snuggle with me and, like, you know... amazing."

>> I get to play video games and call it

parenting."

>> Then it gets even better. Probably like

two weeks ago, we're playing. And she's

like, "Hey, Dad, can I try?" I'm like,

"Absolutely, you can try." I hand her

the Steam Deck and it was the most

incredible experience, one of the most

incredible experiences I've had as a

parent because she doesn't know how to

play video games and I'm watching her

learn how to like use a joystick and hit

the button.

>> Supervised learning. Yeah. Yeah. Yeah.

Supervised learning. I'm telling her

what to do and then within two or three

nights she got it. She doesn't even know

how to read yet, but she figured it out

and like I'm watching in real time. And

so now the last week it's turned to

mostly she's playing and I'm like

helping her asking questions of like

well what do you think you should do

here? Like you know should you go here?

I think this is the goal. I think this

is where it's so so fun. So I think I

might actually pretty soon her

birthday's coming up end up getting a

Switch so that we can play, you know,

together on the Switch, right?

>> But unintentionally the Steam Deck was

the gateway drug for my soon-to-be

four-year-old daughter. That's awesome.

There you go. Parent of the year right

there. Getting to play video games and

Oh, honey. I got it. I'll take it.

>> Oh, yeah. I got it. I got it.

>> All right. Well, listeners, we have lots

of thank yous to make for this episode.

We talked to so many folks who were

instrumental in helping put it together.

First, a thank you to our partners this

season. JP Morgan Payments, trusted,

reliable payments infrastructure for

your business, no matter the scale.

That's jpmorgan.com/acquired.

Sentry, the best way to monitor for

issues in your software and fix them

before users get mad. That's sentry.io/acquired.

WorkOS, the best way to make your app enterprise-ready, starting with single sign-on in just a few lines of code. workos.com. And Shopify, the best place

to sell online, whether you're a large

enterprise or just a founder with a big

idea. Shopify.com/acquired.

The links are all in the show notes. As

always, all of our sources for this

episode are linked in the show notes.

Yes. First, Steven Levy at Wired and his great classic book on Google, In the Plex, which has been an amazing source

for all three of our Google episodes.

Definitely go buy the book and read

that. Also to Parmy Olson at Bloomberg for her book Supremacy about DeepMind and OpenAI, which was a main source for this episode. And I guess also to Cade Metz, right,

>> for Genius Makers. Yeah.

>> Yeah.

>> Great book. Our research thank yous: Max Ross, Liz Reid, Josh Woodward, Greg Corrado, Sebastian Thrun, Anna Patterson, Bret Taylor, Clay Bavor, Demis Hassabis, Thomas Kurian, Sundar Pichai. A special

thank you to Nick Fox, who is the only

person we spoke to for all three Google

episodes for research. We got the hat

trick.

>> Yeah. To Arvind Navaratnam at Worldly Partners for his great write-up on Alphabet, linked in the show notes. To Jonathan Ross, original team member on the TPU and today the founder and CEO of Groq (that's Groq with a Q), making chips for inference. To the Waymo folks, Dmitri Dolgov and Suzanne Fyion. To Gavin Baker from Atreides Management. To M.G. Siegler, writer at Spyglass. MG is just one of my favorite technology writers and pundits.

>> OG TechCrunch writer.
>> That's right. To Ben Eidelson for being a great thought partner on this episode, and for his excellent recent episode on the Step Change podcast on the history of data centers. I highly recommend it if you haven't listened already. It's only episode three for them of the entire podcast and they're already getting, I don't know, 30-40,000 listens on it. I mean, this thing is taking off.

>> Amazing, dude. That's way better than we

were doing on episode three.

>> It's way better than we were doing. And

if you like Acquired, you will love the

Step Change podcast. And Ben is a dear

friend. So, highly recommend checking it

out. To Koray Kavukcuoglu from the DeepMind team building the core Gemini models. To Shishir Mehrotra, the CEO of Grammarly, who formerly ran product at YouTube. To Jim Gao, the CEO of Phaidra and former DeepMind team member. To Chetan Puttagunta, partner at Benchmark. To Dwarkesh Patel for helping me think through some of my conclusions. And to Bryan Lawrence from Oakcliff Capital for helping me think about the economics of AI data centers. If you like this

episode, go check out our episode on the

early history of Google and the 2010s

with our Alphabet episode and of course

our series on Microsoft and Nvidia.

After this episode, go check out ACQ2 with Tobi Lütke, the founder and CEO of Shopify. And come talk about it with us in the Slack at acquired.fm/slack.

And don't forget our 10th anniversary celebration of Acquired. We are going to do an open Zoom call, an LP call just like the days of yore, with anyone. Listeners, come join us on Zoom. It's going to be on October 20th at 4:00 p.m. Pacific time. Details are in the show notes.

>> And with that, listeners, we'll see you

next time.

>> We'll see you next time.

>> Who got the truth?

Is it you? Is it you? Is it you? Who got

the truth now? Huh?
