How Was Gemini "Forced" into Being? Google Co-founder Sergey Brin in His Own Words: Why Did I Return to Google?
By 学用复利投资Value Insights
Summary
Topics Covered
- Hard Tech Lucked Out as AI Differentiator
- Google Underinvested Early in Transformers
- Study CS Despite AI Coding Prowess
- Universities Obsolete in AI Era
- Algorithms Trump Scaling Data Compute
Full Transcript
I think that, well, first of all, we've definitely flopped on a bunch of things. We don't need to get into all of them right now, but we've had a long list of failures at the same time. So, you know, part of it is just trying. I think that because of our kind of academic roots, maybe we were more inclined to try hard things, and coming into this last decade or so especially, the hard things have become more and more valuable.
If you look at AI, which is obviously a huge trend, the amount of computing that has to go into it, the amount of deep math that has to go into it, those are all technically deep and challenging problems. And I guess it's just a twist of fate that that turns out to be important at this stage in the world. I mean, there was a while where, you remember pets.com, you could put anything-dot-com. It wasn't really that technically deep. You needed a marginal understanding of the web and you could do whatever-dot-com. Fortunately we were doing search, which did require some deeper technical skills.
But the technical sophistication level has only gone up, and in fact the people we hire now are much more qualified than I am, or than I was at the time. I was kind of a mathy computer science major; during college I did both math and computer science, which was somewhat unusual in my class. But nowadays, as we hire people out of Stanford and all the other top programs, these people are pretty sharp mathematically and in computer science, and a bunch of them are physicists, because physicists kind of have to do the hard math, and a lot of what they do is very computationally limited, so they need some degree of computation skills. So I just think it has somehow happened to be the case that some of the deep, hard tech has become increasingly important, and I think we just lucked out on having set the bit early in that direction.
>> That's an interesting observation, that technical problems have come to the fore again as a competitive advantage for companies. So let's talk about AI for a minute. I mean, everyone's thinking about it. You're back at Google working on it. You guys are at the forefront in a whole bunch of ways, and it's incredibly competitive. The amount of capital that's going into AI infrastructure is hundreds of billions of dollars, even at the level of individual companies; it's really extraordinary. How are you seeing the landscape right now for what's going on in AI?
>> Okay, let me think how to answer that without just pounding my own chest. I mean, yes, it's a huge amount of investment, for sure.
I guess I would say that in some ways we for sure messed up, in that we underinvested and sort of didn't take it as seriously as we should have, say, eight years ago when we published the transformer paper. We actually didn't take it all that seriously, didn't necessarily invest in scaling the compute, and we were also too scared to bring it to people, because chatbots say dumb things. And, you know, OpenAI ran with it, which, good for them, it was a super smart insight, and there are also our people, like Ilya, who went there to do that. But I do think we have still benefited from that long history, so we had a lot of the research and development of neural networks, kind of going back to Google Brain. That was also kind of lucky. Well, it wasn't luck that we hired Jeff Dean; I mean, we were lucky to get him, but we were in this mindset that deep technical things mattered, and so we hired him, and we hired a lot of people from DEC, honestly, because they had the top research labs at the time. And he was passionate about neural networks. It stemmed, I think, from a college experiment of his. I don't know, he was, like, curing third-world disease and figuring out neural networks when he was 16. He's done crazy things. But he was passionate about it. He built up a whole effort. And actually, at the time, he was in my division in Google X, but I was like, okay, Jeff, you do whatever you want. He's like, oh, we can tell cats from dogs. I'm like, "Oh, okay, cool."
But, you know, you also trust your technical people. And soon enough they were developing all these algorithms, these neural nets that were doing some of our search, and then Noam came up with the transformer, and we were able to do more and more. So we had the underpinnings, we had the R&D; we did underinvest for a number of years and didn't take it as seriously as we should have. But at the time we had also developed the chips for it, the TPUs, going back, I don't know, 12 years or something like that. Initially we were using GPUs, and we were probably also among the earliest to use GPUs; then we used FPGAs, and then we tried to develop our own chips, which have now evolved through a bazillion generations. So I guess it was that trust in going after the deep tech, getting more computation out, developing the algorithms. And in parallel, we were big investors in compute for a long time. So we've had the data centers for a long, long time, on a scale that, well, Amazon AWS also has very sizable data centers, but very few have that scale of data center, have their own semiconductors, have the learning algorithms, and so forth, all the components of the stack to be able to perform at the forefront of modern AI.
>> How are you thinking about where the technology is going? I mean, it keeps getting better every year, and there are a lot of different visions for what artificial intelligence is going to look like. Is AI really going to be able to do everything humans can do, at least in front of a computer, and maybe more broadly? What will that world look like? Do you have a view on where the technology is going?
I mean, it is absolutely amazing, just the rate of innovation, and it's hugely competitive now, obviously, as all of you see, between the top US companies and the top Chinese companies. And yeah, if you skip the news on AI for a month, you're way behind.
So where is it going to go? I think we just don't know. Is there a ceiling to intelligence? I guess in addition to the question that you raised, can it do anything a person can do, there's the question of what things it can do that a person cannot do.
>> Yeah.
>> That's sort of the superintelligence question.
>> And I think that's just not known. Like, how smart can a thing be? You know, we've had however many hundreds of thousands of years of human evolution, and, I don't know, whatever millions of years of primate evolution, but that's a pretty slow process compared to what's going on with AI.
>> Do you think we're ready for the speed at which the technology is advancing?
>> Are we ready for the speed at which the technology is advancing? I mean, look, so far I think people are definitely getting great use out of the technology. Even though there are doom-and-gloom forecasts here and there, everybody's pretty well empowered, and the AIs, truth be told, are periodically dumb enough that you're always supervising them anyway. But occasionally they're brilliant and give you a great idea. And occasionally, especially as a non-expert, well, like, whatever, if I want to figure out how to create a new AI chip, I guess I could talk to our expert designers and such, but as a base case I can at least whip out my phone and talk to an AI about it. It'll probably give me an 80-90% decent overview, and I'll understand it. Or whatever my health questions are, or whatnot.
I mean, I do think it makes individuals very empowered, because generally speaking you don't have experts in XYZ around you all the time, and I think that empowerment can create a lot of potential, whether it's career or enterprise or health or living well. So look, I don't think I have all the answers. I just do think it has huge potential to improve individual capability.
>> Yeah, that's certainly the positive vision, that it could be an incredible augment of human capability. It's great that you're thinking about it that way. Let me ask a question which is always asked in the entrepreneurial thought leaders class, but is maybe particularly salient with the discussion of AI, because one of the things I think every student at Stanford, and probably every college-age student in the country, is thinking about is how this technology will affect their careers and their job opportunities and what they might go on to do. I'm curious if you have any advice for the students about what they ought to be studying, or what they ought to be thinking about as they look out at the job market in the future.
>> I mean, I think it's super hard to predict exactly what will happen. If we look at the period from the advent of the web to cell phones and so forth, those have transformed our society profoundly; they have transformed the kinds of jobs and careers and studies people do, for sure. And AI will 100% change that. But I think it's very hard right now, in a rapidly shifting landscape, to say exactly what. And also, the AI we have today is very different from the AI we had five years ago, or the AI we are going to have in five years. So, yeah, I don't know; I think it's tough to really forecast. I mean, I would for sure use AI to your benefit. There are just so many things that you can do.
Yeah, I mean, just myself as an individual, whether it's choosing a gift for my friends or family, or brainstorming new ideas for products, or what have you, or for art or something like that, I just turn to AI all the time now. And it doesn't do it for me, because I'll typically ask, give me five ideas, blah blah blah, and probably three of them are going to be junk in some way that I'll just be able to tell, but two will have some grain of brilliance, or will possibly put things in perspective for me, or something like that, that I'll be able to refine and think through my ideas with.
>> Let me jump in with a really concrete question. So we have about 250 students out there. A lot of them are undergraduates, and a great number of them have not selected their major yet, because we give our undergraduates a lot of flexibility here at Stanford. A few years ago we could predict that a large number would choose computer science as their major. Are you recommending they continue to pick computer science as their major? They're listening closely.
>> I mean, I chose computer science because I had a passion for it, so it was kind of a no-brainer for me. I guess you could say I was also lucky, because it was such a transformative field. I wouldn't not choose computer science just because AI can be decent at coding nowadays. AI is pretty decent at a lot of things. Coding just happens to have a lot of market value, which is why a lot of people pursue it. And furthermore, better coding makes for better AI, so a lot of the companies that work on it, like our own, care a lot about it; we use it a lot for our own coding, and even for our algorithmic ideas and so forth. But that's because it's such an important thing. So I guess I wouldn't go off and switch to comparative literature because you think the AI is good at coding. The AI is probably even better at comparative literature, to be perfectly honest. Anyway,
>> yeah,
>> like, I don't mean to disrespect comparative literature majors, but
>> just, you know, when the AI writes code, to be honest, sometimes it doesn't work; it'll make a mistake that's pretty significant. Getting a sentence wrong in your essay about comparative literature isn't going to have that kind of consequence. So it's honestly easier for AI to do some of the, you know, creative things.
>> I think it's a very interesting observation about the technology, because one inclination is to say that AI is going to be really good at solving technical problems, but it won't necessarily do the things we associate with humans, like being empathetic in a conversation. And if you ask one of these AI engines to, say, simulate a conversation, it's actually pretty good at giving you the structure for a complicated conversation. So I like that you're pointing to that uncertainty.
One more question before we open it up to the audience, so that we give people a chance to ask questions. This is the 100th anniversary of the school of engineering. If you were Jennifer and had to launch the school's second century, what would you be thinking about for the second century of the school of engineering?
>> Wow. Okay. That's a big responsibility, a plan like that. Maybe the dean job will come next.
>> It is a big responsibility.
I mean, I guess I would just rethink what it means to have a university. I know that sounds kind of annoying. That's the kind of thing Larry would say, and I would be really annoyed with him. But, you know, we have this geographically concentrated thing, and there are the buildings and the fancy lecture halls and that really annoying blinking light. Sorry, I can't help it. You guys need to fix that. But realistically, now information spreads very quickly, and many of the universities have obviously gone online, including Stanford, and MIT with OpenCourseWare early on, and all these startups that have done this, whatever, Coursera, Udacity, you name it. So the teaching is sort of getting spread, and anybody can go online now and learn. You can talk to an AI, or take one of these classes and watch the YouTube videos.
So I guess, yeah, what does it mean to have a university? Are you trying to maximize the impact? In that case, just limiting it geographically is probably not going to be so effective. To be fair, the Bay Area is kind of a special place. But, yeah, I know I'm kind of rambling here and thinking this through, but I just don't know that for the coming century the idea of a school of engineering, or a university, is going to mean the same thing as it used to. I mean, people move around, work remotely, collaborate across distances. It's a little bit at odds, because we're trying to get people actually into the office, and I think they do work better in person together, but that's at a certain scale. At some level, if you have a hundred people together over there, it's kind of fine; they don't have to be in the same place as these other hundred people.
And increasingly I do see individuals who create new things regardless of degree. As much as we've hired a lot of academic stars, we've hired tons of people who don't have a bachelor's degree or anything like that, and they just figure things out on their own in some weird corner.
I don't know. I think it's a really hard question. I guess I don't feel like I'm going to magically deliver you the new recipe, but I just don't think this format is likely to be the one for the next hundred years.
>> You took that in a deeper direction than I expected.
>> No, it was great. Actually, it was a bit deeper.
>> Sounded more presidential than dean-like. I think he's talking to you.
I agree, it applies to the whole university.
>> You actually surfaced the most fundamental questions about the university, which is that part of the university is about the creation and transmission of knowledge. That's the fundamental mission. Those can be done in different ways as technology advances. And then there's a question about the model of having a density of talent all in one place, bumping into each other, which of course was what led to you creating Google and has led to a lot of great things. Will there be substitutes for that kind of ecosystem that gets created on a university campus? How fundamental is that, and will it continue to be? So I appreciate that you surfaced such a deep question in this session. All right, I want to make sure we give some questions to other folks out in the audience. So, Jennifer, I'm going to turn to you to take some questions from the folks out here.
>> Yes. So the students in the entrepreneurial thought leaders class submitted questions in advance, and a number of those were selected, so with the time we have left we're going to have a few questions from our students. I think the first one is over here.
>> Dean Widom, President Levin, and Sergey Brin, thank you for your time. My name is Rashad Barv, from Kansas City, studying MS&E and IR. My first question goes out to Sergey. It actually touches on what we were just discussing. Google largely grew out of the academic work you authored on PageRank. With industry now driving so much of today's innovation, do you still feel that the academia-to-industry pipeline is crucial? And if so, how might you strengthen it?
>> Wow, it's a great question. Is the academia-to-industry pipeline crucial?
Yeah, I'm going to give you an "I don't know" on that, because, I guess, when I was a grad student, the time from some new idea to it being commercially valuable was maybe many decades. If that compresses, then that no longer makes as much sense. I mean, in academia you have freedom to think about something for a while. You apply for grants, you do this and that, you can spend a couple of decades thinking about it, and then it percolates, and eventually maybe some big company, or your startup, pursues it. The question is, does that make sense if that timeline shrinks a lot?
I think there are certain things that for sure make sense, and I definitely, even with an AI, periodically keep up with the research at Stanford and other universities; occasionally we hire those folks and collaborate with them and whatnot. But I guess I don't know that they needed that sort of period of time, that they were trying, whatever, some new attention thing, say, and spent a couple of years experimenting, and then took it to industry in one form or another. I mean, obviously industry is also doing all those things, so that's probably not a huge argument for it. Radical new architectures and things, maybe; but the time in which industry will scale it will be much faster.
I guess quantum computing comes to mind. It was first brainstormed, I don't know, when did Feynman postulate this idea of quantum computing, in the '80s or something? And now there are a bunch of companies, ours included, that are sort of doing it. There are also university labs that try new ways to do it. That's kind of on the fence, maybe. I guess I would say, if you have some completely new idea, you're not doing superconducting qubits like we are, or trapped ions like a bunch of startups are, but you have some new way, maybe you need to let it marinate in a university for some number of years. Those things are kind of hard. It could make sense, but then at some point, if you decide it's really compelling, you're probably going to go ahead and take it commercial in one way or another.
Yeah, I don't know; I can't give you a clear-cut answer, because the top companies now do invest in much more fundamental research, and I think, with AI starting to pay off, those investments are paying off. So I guess it shifts the proportion of endeavors that you would do, but I do think there are still some things that take, you know, a decade of more pure research, that maybe companies are going to be more reluctant to pursue, because that's just too long a time to market.
>> All right, next question I think is over here.
Hi everyone, my name is Arnov, and I'm a freshman studying computer science and math. My question is for Sergey Brin. As AI accelerates at this unprecedented rate, what mindset should young aspiring entrepreneurs like myself adopt to avoid repeating earlier mistakes?
>> Oh, what mindset should you adopt to avoid repeating earlier mistakes? Yeah, when you have your cool new wearable device idea, really fully bake it before you do a cool stunt involving skydiving from airships.
That one's for both of you. No, actually, I like what we were doing back in the day with Google Glass; it's an example of prior mistakes. I think I tried to commercialize it too quickly, before we could make it as cost-effective as we needed to, and as polished as we needed to from a consumer standpoint, and so forth. I sort of jumped the gun. I thought, oh, I'm the next Steve Jobs, I can make this thing, ta-da. That's probably one, I guess. If I can encapsulate it: everybody thinks they're the next Steve Jobs. I've definitely made that mistake, but, you know, he was a pretty unique kind of guy.
So yeah, I guess I would say, make sure you've baked your idea long enough, developed it to a far enough point, before you get on a treadmill where outside expectations increase, the expenses increase, and then you kind of have to deliver by a certain time, and you might not be able to do everything you need to do in that amount of time. You get this snowball of expectations, and you don't give yourself all the time that you need to process them. That's the mistake I would have tried to avoid.
>> All right, I think we're going to go over to this side.
>> Hi, thank you for the talk. My name is Ian Pragataki; I'm an undergraduate freshman at Stanford University. This question is for Sergey Brin and Jennifer. We see a lot of AI companies improving large language models via scaling data and scaling compute. My question is, once we do run out of data, and once we do run out of compute, what do you think will be the next direction? Would it be a newer architecture, something like an alternative to transformers? Would it be a better learning method, something better than the supervised learning or RL that we use to train these large language models? Or is it a completely different direction that you have thought of before? Thank you.
Yeah, I'll take that from my point of view. I mean, all the things that you listed I would say have already been bigger factors than scaling compute and scaling data. I think people notice the scaling because you're building data centers and buying chips, and there were all the publications from OpenAI and Anthropic about different kinds of scaling laws, so that attracts a lot of attention. But I think if you carefully line things up, you'll see that the algorithmic progress has actually outpaced even the scaling over the last decade or so. A while ago, while I was in grad school, I think I saw this kind of plot for the n-body problem, you know, gravitation, bodies all flying around. There has been a huge Moore's-law increase in compute since people started working on that in the '50s up to, I don't know, by the time I read about it in the '90s, but the algorithmic improvements to the n-body problem far outpaced that compute scale-up. So I think you're going to find that companies like ours are never going to turn down being at the frontier of compute, but that's just, that's the dessert after your main course and the veggies of actually having done your algorithmic work.
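Brin's n-body example can be sketched as a back-of-the-envelope comparison. The numbers below (a 40-year Moore's-law window, a 10-million-body simulation, and the O(n²) to O(n) progression from direct summation to the fast multipole method) are illustrative assumptions for this sketch, not figures from the talk:

```python
import math

def direct_sum_ops(n: int) -> float:
    """Naive pairwise gravity: every body interacts with every other, O(n^2)."""
    return float(n * n)

def barnes_hut_ops(n: int) -> float:
    """Tree code (Barnes-Hut): cluster distant bodies, roughly O(n log n)."""
    return n * math.log2(n)

def fmm_ops(n: int) -> float:
    """Fast multipole method (Greengard-Rokhlin): roughly O(n)."""
    return float(n)

# Hypothetical Moore's-law hardware gain: 2x every 2 years over 40 years.
hardware_gain = 2 ** (40 / 2)                      # about 1.0e6x

n = 10_000_000                                     # bodies in a large modern run
algorithmic_gain = direct_sum_ops(n) / fmm_ops(n)  # n^2 / n = n = 1.0e7x

# At this problem size, the algorithmic improvement alone beats four decades
# of hypothetical hardware scaling by an order of magnitude.
print(f"hardware: {hardware_gain:.1e}x  algorithm: {algorithmic_gain:.1e}x")
```

The crossover depends on n: at small problem sizes the constant factors of tree and multipole codes dominate, which is consistent with Brin's point that the algorithmic win only becomes dramatic as problems scale.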
>> I guess I'll jump in and say that, in terms of running out of compute or running out of data, specifically running out of compute, we're very familiar with that here already. It's actually an issue that it's difficult for a university to have the type of compute that the companies have; we don't even come close. But that does lead us to do quite a bit of innovative work on what happens when you have less compute, and how to make more of less. So we do a lot of that work here already.
>> Next question I think also on this side.
>> Hi everyone, my name is Andy Seavozi. I'm a second-year graduate student in chemical engineering. My question is for all the speakers. Which emerging technology do you think is being seriously underestimated in terms of its long-term impact? Thank you.
Okay. What emerging technology is being seriously underestimated? Wow. Okay, I obviously can't say AI, because it's hard to argue, but it could be underestimated.
>> It could be underestimated, but it's probably not emerging at this point. We can't use that one.
I mean, a lot of people do wonder about quantum computing and what it will bring. It's probably not what I would hang my hat on to answer that question, although I definitely support our efforts in quantum computing and so forth. But there are many unknowns. I mean, technically speaking, we don't even know whether P equals NP; on the computation front there are just so many unanswered questions, and the quantum algorithms we have are specific to very particular structured problems. That said, I'm a big proponent, but it's hard to put my finger on it. Perhaps the applications of both AI, and for that matter quantum computing, to materials science, because what could we do with different kinds of materials that are better in a whole host of ways? The sky's the limit.
>> I was thinking of materials as well, actually, partly because the underestimation part is sort of interesting. There's so much attention right now on the opportunities for technological innovation. For so many technologies that aren't there yet, like fusion energy or quantum, it would be hard to say that people are missing them and not paying any attention to them right now, and the same for AI. But materials would, in my mind, be one of them, and probably some of the opportunities in biology and health, of which there are many in molecular science. That's probably getting less attention than AI right now, but there's also a huge revolution in molecular science.
>> Yeah, I was going to say exactly the same thing. I kind of watch the spotlight move around, and the spotlight is very large on AI right now, but it was shining on biology, and it shouldn't stop shining on it. There are all kinds of things going on in synthetic biology, very exciting things. So I think we need to broaden that spotlight a little bit.
>> Okay, over here.
Hi, my name is Jomi, and I'm a student coming from Singapore. My question today is for Sergey, and it's a bit more personal. We all grew up having limiting beliefs, and I was curious what limiting beliefs, or deeply held beliefs, you had while building Google that you had to change, and how did that affect your decision-making? Thank you.
>> Huh. Limiting beliefs. Um, yeah, I guess my life expanded pretty dramatically at a bunch of stages. I was born in Moscow, in the Soviet Union, and it's very different, you know, very poor. Well, everybody was very poor. I lived in a little 400 ft² apartment with my parents and my grandmother and had to walk up five flights of stairs. I don't know, I didn't really think about the world outside. I guess I was lucky that my father kind of got a hint of the world outside. He went to some conference in Poland, where they told him what the Western world was like, and he decided to move us, which was very controversial at the time in the family. But we eventually got to the US, and we were still, you know, very poor and had to make our way out of having nothing. And at the time I had to learn a new language and make all new friends. So it was
sort of a challenging transition, but an awakening. And I think when I came to grad school at Stanford, it was sort of similar: now I had all this freedom in the way the professors entrusted me, and just something about California that was very freeing and liberating in thought, given the tradition of the state. One that we're getting away from a little bit in California, if I'm being honest, but I'm not going to complain about that. But I guess I'm answering the question backwards; it's not really a limiting belief. I guess I had had the experience of expanding my world in ways that seemed very painful at the time but later paid off, just because of my personal history. And I guess, you know, those challenging transitions can pay off.
>> Right. Next question.
>> Hello. Thank you to all of you for being here. My name is Lubaba. I'm a second-year master's student in management science and engineering, originally from Casablanca, Morocco. My question is also for you, Sergey, and it's also more on the personal side. So, you've achieved success at a scale most people never experience. Looking at your life now, what is your definition of a good life? What does it mean to you, beyond all these accomplishments?
>> Thank you.
>> Okay, thanks. What is the definition of a good life? Um, well, I guess it's, you know, being able to enjoy your life, whatever you build. I like to have family; I have one of my kiddos here, my girlfriend is here. You know, I feel grateful to be able to spend quality time with them. I do feel quite grateful to be intellectually challenged at this stage. I actually retired like a month before COVID hit, and it was like the worst decision. I had this vision that I was going to sit in cafes and study physics, which was my passion at the time. And, uh, yeah, that didn't work because there were no more cafes. I was just kind of stewing, and I kind of felt myself spiraling, not being sharp. And then I was like, ah, I've got to get back to the office, which at the time was closed. But, you know, after a number of months, we started to have some folks going to the office, and I started to do that occasionally, and then started spending more and more time on what later became called Gemini, which is super exciting. And to be able to have that technical, creative outlet, I think that's very rewarding, as opposed to if I had stayed retired; I think that would have been a big mistake.
>> All right, I think we have time for one or two more. I believe, over on this side.
>> Hello. Thank you guys all so much for being here. My name is Stanley Leo. I'm a freshman planning on studying management science and engineering, and I have a question for all three of you. So, for some context: before arriving here, I was absolutely terrified because everyone here is, like, super talented. I'm like, what is going on? I have no clue why I'm here, and everyone just seems way too smart for me. But after getting to know people, I've realized they're all just really relatable and normal people. So, for all three of you: you're viewed as some of the best leaders and innovators in the world, but if there's one thing you'd like to share that is reassuringly relatable and human about yourself, what would that be?
>> You want to start, guys?
>> Okay, I'm going to share it, and then I'm going to try to undo it, but okay. I realize that sometimes I'm embarrassed to ask things I don't know, but I will go ahead. Wait, what is management science and engineering, exactly? Is it like a Dilbert kind of thing, like, I'm going to manage? How does that work?
>> It's a class.
>> It's a major.
>> Wait, the department is management science?
>> Is that the SP? I guess I should have read the details.
>> It's called management science and engineering.
>> Like, the department?
>> It's a department. Yes.
>> But what do you study?
>> Like what are the classes?
>> So: management, science, and engineering. I'm just going to say they just had their 25th anniversary, but they were the merger of three departments: industrial engineering, operations research, and engineering-economic systems. So I think that sort of gives you a little triangle there of what they do. Some universities will have an industrial engineering or an operations research department; we have it all bundled together here in management science and engineering, which is the department that sponsors the Entrepreneurial Thought Leaders seminar, which is what we are conducting right now.
>> Okay.
>> All right. Well, I guess I didn't really know that. So that's my embarrassing truth. But I'm glad I asked.
>> What makes me relatable is that I can explain things to Sergey Brin. Pay
attention to them.
[Laughter] I'll let you off the hook, John, and we'll go to our last question. Do we
have one more question? I think we do.
>> Yeah, we can ask one more.
>> Hi. Um, my name is Zena. I'm actually the course assistant for the class. So, thank you for being here, and it's a great thing that we can have this last class. I'm going to ask you something that we usually ask a lot of our speakers, which is to give a recommendation to the students: what do you do with your time to stay on top of things? You just said you really like staying sharp and being on top of what's happening in AI and whatnot. So, what books do you read? What podcasts do you listen to in your car?
>> Okay, I'm going to try to do this without advertising or so. Well, okay. So, the thing I like to do, though you shouldn't do it now because we have a way better version coming: I do talk to Gemini Live in the car often. But the publicly available version right now is not our good version, so you shouldn't do it today; give me a few weeks to actually ship what I have access to, because we have an ancient model behind the publicly released version right now.
It's a little embarrassing. But I do ask it things, like, you know, whatever: say I want to develop a data center, how many hundreds of megawatts of this kind of power or that kind of power I need, how much it's going to cost. And I just talk to it about stuff on my drive. Okay.
That does seem kind of self-advertising with Gemini. I mean, I do periodically listen to a whole bunch of podcasts. I like the All-In guys; they're actually one of my favorites, and they're great hosts. We just visited Ben Shapiro, another broadcaster; we were in Florida and got to see his studio. I mean, a bunch of these podcasters are actually pretty fun to meet in person. But, um, yeah, I guess, okay, that's not how you're going to learn about it. I do just listen to them, see what's up. But I do prefer to have an interactive discussion on my drive. So that's why I talk to the AI, as embarrassing as that sounds.
>> Okay.
>> Sort of a glimpse of the future, I think. Actually, that's a good way to end. We'll probably all be doing it.
>> Might be. Yes. So, thank you, John. Thank you, Sergey. I also wanted to thank Emily Ma. Emily is a Stanford adjunct lecturer and a co-instructor of the course. She's also a Google employee, and she saw the potential for this event and partnered with us. So thank you very much. Thank you all for being here, for celebrating the School of Engineering's 100th year. This was a perfect way to close out our first century, and let's see what happens next. So thank you.
>> Thank you.
>> Thank you.
>> Congratulations.