Why Vlad Tenev and Tudor Achim of Harmonic Think AI Is About to Change Math—and Why It Matters
By Sequoia Capital
Summary
## Key takeaways

- **Math is the foundation of all reasoning**: The core belief is that math is reasoning, and by training AI directly on math, it can achieve better performance across various quantitative fields like science and engineering. [01:45], [02:25]
- **AI math could solve Millennium Prize problems**: There's a 43% chance the next Millennium Prize problem will be solved by AI or human-AI collaboration, with future problems likely to be significantly solved by AI. [14:45], [15:18]
- **Lean enables rapid AI self-improvement**: Using Lean for formal verification creates an objective reward function for reinforcement learning and self-play, enabling extremely fast AI progress without human intervention. [15:55], [16:24]
- **Mathlib is the fuel for AI models**: The open-source repository Mathlib, containing human-written Lean code, serves as a primary data source for AI models, though it's a data-poor regime compared to other AI fields. [23:27], [23:47]
- **Why now? AI and Lean's advancements**: The current timing is ideal due to advancements in autoregressive language models and the increasing sophistication and adoption of the Lean programming language in mathematics. [21:21], [22:53]
- **Math can accelerate scientific discovery**: By improving AI's mathematical reasoning, Harmonic aims to accelerate fundamental physics research, potentially leading to a deeper understanding of the universe's laws. [32:34], [33:10]
Topics Covered
- Math as the Foundation for Reasoning: Why Direct Instruction is Key
- The Skepticism Around Learning Math and Its Transferable Skills
- Transferring Math Skills to AI Reasoning: The Power of Logic and Critical Thinking
- AI's Impact on Software Engineering: From Code Review to Spec Collaboration and Verification
- Leibniz's 'Universal Characteristic': A 17th-Century Vision of AI and Automated Deduction
Full Transcript
Last I checked, there's a 43% chance that the next Millennium Prize problem will be solved by AI, or by a human with significant AI assistance. I think that's an underestimate. We could be lucky and Larry Guth might be on the path to the Riemann hypothesis, which would be amazing, but if the next one is solved by a human, it would probably have to be in the very near future, and the one after that will almost surely be significantly solved by AI.

We're excited to welcome Vlad and
Tudor to the show. We've had the pleasure of knowing Vlad for many years at Sequoia, but what many of you may not know about Vlad is that in addition to being founder and CEO of Robinhood, he's also an enormously talented mathematician. Vlad and Tudor have teamed up to create Harmonic, an AI research lab with the goal of pushing the frontiers of human knowledge. Specifically, they hope to create mathematical superintelligence, with the thesis that understanding math allows you to better understand and reason about much of the broader world. We're excited to talk to Tudor and Vlad about Harmonic, about the ingredients that go into creating mathematical superintelligence, including synthetic data, reinforcement learning, and self-play, and about when AI will win the IMO, claim a Millennium Prize, or even solve the Riemann hypothesis.

All right, Vlad, Tudor, welcome
to the show.

Oh, thanks for having us.

All right. So you have this core belief that math is reasoning, and what might be a contrarian belief that the best way to train a model to perform well in math is to teach it math directly, versus allowing math to emerge as a property of scale, which is what a lot of the other foundation model companies are doing. Can you talk a bit about that core belief? Why do you need to teach the model math directly, and what does it mean that math is reasoning?

When we started the company, we had a really big focus on math, and maybe we can get to that later. But if you look around at all fields of science and engineering, well, almost all fields, math is really at their foundation, and math has essentially become the way people understand the universe. It's the way you model phenomena from black holes to atoms; it's the way you design things in engineering. There are a couple of reasons for that. First, it just happens to be the case that the universe is explainable by math: you can write down a fairly compact set of symbols that explain things. But the other really important thing is that math is a way to build shared cognitive theories that are objective, clear, and transparent. If you and I are discussing something rigorous, we can write down a set of deductive rules, agree on the ground rules of whatever we're modeling, and from there derive conclusions. With humans, what you see is that when people become very good at math, they tend to be good at other quantitative areas in science and engineering. So our bet is that if you make a system that's really good at math, you're probably going to see the same phenomenon. It's true it might not immediately write the world's best history essays, but when you ask it to do something scientific or something in engineering, it's just going to be really, really good at that.
That's why we started with math.

And where is that boundary between "help me with my math homework" and "write a history essay"? There is some boundary that's hard for math to cross. What do you think are the outer edges of what's possible with a model with math at its core?

I'll give you the non-AI perspective. I studied math, and I was really good at math from when I was a little kid. I remember there were always the seventh graders in math class who would raise their hands whenever the teacher came up with something abstract, like side-angle-side congruence of triangles. There would be the annoying kid asking, "When are we ever going to use this?" The teacher would mumble a little and say, well, you're probably not going to use it soon, but it'll make you really good at other things. The other kids were always skeptical of that, but I bought into it. So I just kept taking more and more advanced math. I went to Stanford and majored in it, then I went to grad school to do a math PhD. My belief was that if I just focused on math, I would get really good at problem solving, and business problems and other problems would be easy compared to solving the really tough abstract algebra problem sets I was banging my head against the wall over for ten hours every week. I think it basically ended up being correct. I didn't really pay attention to anything else; I took maybe one computer science class, intro to computer programming, at Stanford. Five years later, when I became an entrepreneur, I found it really easy to pick up code. I found it really easy to pick up contracts, and of course I'm no lawyer, but you could understand that stuff; the logical underpinnings are relatively simple compared to abstract algebra and analysis. So for humans at least, I consider myself an example of math transferring to other very monetizable things, and for AI, my intuition suggests it should be the same.

Yeah, and you already
see a little bit of evidence of this. At this point it's an open secret in the industry that training on a lot of code data leads to much better performance on reasoning benchmarks, so you can imagine what that will look like when you have incredible math datasets that encompass far more general types of reasoning.

Yeah, that resonates: the idea that math teaches a human how to think critically and logically, and that that skill can be ported to a bunch of other domains. It stands to reason that the same would work in AI.

Vlad, you casually
mentioned that you studied a little bit of math, and for anybody who's not familiar with your background, I believe you studied briefly under Terry Tao, who is perhaps the world's greatest living mathematician. One of the things you mentioned to us was that you still catch up with him every now and then; you have lunch when you're in LA, that sort of thing. So I'm curious: when you have lunch with Terry Tao, what do you talk about? Do you give any stock tips?

No, no, I'm not allowed to do that. One of the unfortunate things about being a public company CEO in the financial space is that I have lots of stock tips, but I can't share any of them. I have to keep them internal, and I can't even use them; I basically can't trade anymore, which is unfortunate, because I love trading.

But to backtrack on how I got to UCLA: Terry Tao is a professor at UCLA, and I think what's really amazing about him is the breadth of his work. Most mathematicians get very deep into a pretty narrow domain, and Terry can get very deep across dozens of domains. He's made contributions to number theory, combinatorics, harmonic analysis, applied math. He's one of the leading Lean contributors at this point; I'm sure he's formalizing his papers in Lean, hopping on the community Zulip, and engaging with students. And then he has a very popular blog. I think the way he's been able to do this is that he's just smarter than 99.999% of people, probably even more than that. From a very early age it was clear that he was on another plane.

I did my math honors thesis in undergrad under Professor Larry Guth, who's also really amazing. He actually had a recent result, a groundbreaking one, I want to say in number theory, or something about the Riemann hypothesis. This result in non-AI math really was quite something. He suggested I look at UCLA; I was really interested in his field, and I ended up going there and being fortunate to study under Professor Tao. But I should be clear: I am a dropout. It's amazing that I can claim that after grad school, but I will claim dropout status; I only did one year at UCLA. Terry Tao taught my first-year intro to graduate-level analysis, which was pretty amazing. One thing I remember: I was doing some reading with Professor Tao, and he gave me this book, and he signed it. I think he signed it because he wanted to make sure I would give it back when I was done reading it. Little did he know that by signing it, he guaranteed he would never get that book back. I bring it up every time I see him: hey, you're not getting that book back. It's on my shelf next to all my other autographed first editions.
What does the math community think of AI math? Are people split, or do people think it's the path to the promised land and the way we're going to solve Riemann and everything else?

I think it's split, or it seems to be. The younger mathematicians are very pro-AI, pro-verification, and pro tools like Lean, and the older folks are a little more skeptical, which is not surprising; you see that in pretty much every field. My guess is that this will evolve. My mental model is something like chess, where at first there will perhaps be a lengthy period of humans plus AI assistance, and that will lead to a lot of really good results. But over time the AI will get better and better. Look at chess right now: if there's a human assisting the AI, the AI would be annoyed by it and would just want to delete all of the input, because it would only make the results worse. I'm not sure we're going to get to that point in math; humans will at some point still need to guide the algorithms, but I think the definition of what a mathematician does will fundamentally change.

When we were first starting this, I was talking to one of my friends, a mathematician at MIT, a young professor who's very excited about the field, and I asked him: are you worried that you're in a field that is going to fundamentally change? He said the field of math has always changed. Back in the 1800s, mathematicians used to be in the royal court, and they would be glorified calculators: they would solve quadratic equations by hand. Of course they were worried, when computers and calculators came out, that the job would no longer exist. But mathematicians get to define what math is, so I'm sure at some point it'll be prompting and guiding these AIs to solve problems, and I think that's going to be huge.

So even if an AI solves the Riemann hypothesis, a human will always be in the loop, because humans posed the Riemann hypothesis to begin with. Yeah, just on that: in the future you're going to have a lot of compute resources dedicated to math, and the question will be a very human one, which is by what procedure humans decide where to direct all that reasoning firepower. I think that's going to be the job of mathematicians: they have to choose what we work on, how we interpret the results, how we interpret failures to find answers, that kind of thing.

Do you think an AI
math system can solve Riemann, or where is the ceiling, do you think?

I think it should be able to solve it, or prove that it's undecidable, which would also be an interesting result. Think about what a great mathematician like a Terry Tao is capable of doing: synthesizing lots of papers and frontier results, learning from them in a way that the other top human mathematicians can, finding connections between these things, and using them to create new and more complex theories. That's exactly how the system we're engineering works, and that's what computers are great at. AI models are great at synthesizing large amounts of information, finding patterns, recursively self-improving. On Metaculus, last I checked, there's a 43% chance that the next Millennium Prize problem will be solved by AI, or by a human with significant AI assistance, and I think that's an underestimate. We could be lucky and Larry Guth might be on the path to the Riemann hypothesis, which would be amazing, but if the next one is solved by a human, it would probably have to be in the very near future, and the one after that will almost surely be significantly solved by AI.

One of the
things you said that I want to hit on is this idea of recursive self-improvement. In the world of AI, if you were to draw a spectrum from human-only to AI-only, with human-in-the-loop in the middle, running from lots of human and a little AI to lots of AI and a little human, one of the things that's interesting about Harmonic, at least as I understand it, is that because of Lean you can encapsulate math in code, and because of formal verification you can objectively determine whether things are right or wrong. That means you have an objective reward function you can use with self-play to get very fast cycle times with reinforcement learning, which means the progress of your model has a chance to be extremely fast, because there are no humans in the loop. With that recursive self-improvement, the objective function is clearly defined, and you can do self-play to just make the model better and better, which is not something we see in most domains of AI, where it's a lot messier to shorten the cycle time on improvement. Can you talk through the system a bit? What feeds into your model, and what governs the rate at which it can get better? Because it seems like something that will be able to improve at a pretty quick rate.

Yeah, happy to cover that. One point before getting to it: I think the most interesting part about this is that there are other areas where recursive self-improvement can work, for example in board games like AlphaGo. But what a lot of people don't realize is that in these, let's say, perfectly observed zero-sum games, you improve recursively just by playing against yourself, but you eventually hit an optimal strategy. At that point it doesn't matter what system you have; it won't do better. The most exciting part about math is that there is no upper bound: you just keep putting compute in, it keeps getting better, and there's no end to it. So when we talk about whether AI can solve the Riemann hypothesis or win a Millennium Prize, those are very human milestones, and I think the real question is whether it will ever stop, because it clearly will get there. I think we're going to end up solving problems that are much, much harder than the Riemann hypothesis, problems we haven't even conceived of yet, because it's almost beyond us to write down such a hard problem. Have you ever seen that video of an AI beating Minecraft in 20 seconds?

No.

I think it's a good analogy. You know what Minecraft is and how a human would play it, and the AI beating it in 20 seconds is just incomprehensible. You can't even grok what's going on in the video
feed.

But if we just talk about how Harmonic works, you can think of it as a collection of agents that are essentially trying to solve problems, and because we use Lean, we're able to check whether our answers are correct and thereby derive a variety of training signals that we use to improve the system. But just to be clear: the use of Lean only lets you verify things. Lean doesn't itself tell you whether you're getting closer to the answer or whether you're getting smarter; it just tells you whether a proof is correct. So there are a lot of open scientific challenges in making it get better quickly.
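The loop described above, agents propose proofs, a checker accepts or rejects them, and accepted attempts become training data, can be sketched in miniature. This is a toy illustration, not Harmonic's system: `verify` is a stand-in for the Lean compiler, `propose` is a stand-in for the model's proposal distribution, and the "theorems" are trivial arithmetic statements.

```python
import random

def verify(statement: str, proof: str) -> bool:
    """Stand-in for the Lean compiler: accepts a 'proof' iff it checks.
    Here the 'theorem' is an addition and the 'proof' is its value."""
    a, b = map(int, statement.split("+"))
    return proof == str(a + b)

def propose(statement: str, n: int = 10) -> list[str]:
    """Stand-in for the model's proposal distribution over attempts."""
    return [str(random.randint(0, 20)) for _ in range(n)]

def self_play_round(problems: list[str]) -> list[tuple[str, str]]:
    """One round of search: attempt each problem, keep only attempts the
    checker accepts. The verified pairs are the training signal; no
    human labeling is involved, because the checker is objective."""
    data = []
    for statement in problems:
        for attempt in propose(statement):
            if verify(statement, attempt):
                data.append((statement, attempt))
                break
    return data

random.seed(0)
training_data = self_play_round(["2+3", "4+4", "10+1"])
# every kept pair is verified correct by construction
```

The key property is that the reward signal comes from the checker rather than from a human, which is what makes fast, unattended iteration possible.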
Can you say a word about what Lean is, just in case people aren't familiar?

Yeah, totally. Lean is just another programming language, a really great one, created by Leonardo de Moura. Maybe the best programming language; we might all be writing Lean, or the AI might just be writing Lean, in the future. The idea is that mathematical statements are encoded in the type system of the language. Very simply: you have functions in Lean, the input types correspond to the assumptions of the mathematical theorem, and the output type is the conclusion. The point of Lean is that if you write a program implementing that function and it compiles, you can derive the output type from the input type, which in turn implies that you can derive the conclusion from the assumptions. That's the fundamental idea; that's how you use Lean for math.
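That correspondence can be made concrete with a minimal Lean 4 sketch (no Mathlib needed); the theorem name and statement here are invented for illustration:

```lean
-- The hypothesis `h : n = m` is an input type; the statement
-- `n + 1 = m + 1` is the output type. If this definition compiles,
-- the implication is proved, and the compiler is the referee.
theorem succ_eq (n m : Nat) (h : n = m) : n + 1 = m + 1 := by
  rw [h]
```

The `rw [h]` tactic rewrites `n` to `m`, leaving a goal that holds by reflexivity, which `rw` closes automatically.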
One thing that's super interesting about Lean is that Leonardo de Moura, the creator, who's at Amazon (AWS) now, is not a mathematician: he wrote it as a software verification tool. He has the belief that in the future software will be verified, and the existing tools, things like Coq and Isabelle, which are multi-decade-old software verification tools, are just not good; frankly, they're unusable, and the developer experience is poor. So he wanted to create a better software verification tool, in the hope that if you build something better, more people will use it and we'll have better software, which is a super noble goal in its own right. But what he didn't anticipate was that software verification, at bottom, is just proving that software has certain properties, and this thing became very popular in the math community. Thousands of mathematicians and math students built up an organic library called Mathlib, which you can think of as the largest open-source math textbook; it's on GitHub, and it's growing at a pretty fast clip. The usage of Lean for math has, to some degree, surpassed anyone's expectations, and might exceed the usage for verified software at this point, though that might change as time goes forward.

And with AI, one of the questions we always ask is: why now? Reinforcement learning has existed for a long time; math has existed for even longer. It seems like AI math has really hit an inflection point, and you've chosen to start Harmonic at this moment. Why
now?

Oh, there are two really good reasons for "why now" that we're excited about. First, the AI systems have gotten better in an interesting way. I was talking with a close friend about RL for theorem proving back in 2015 and 2016, and one issue back then was that there wasn't even a great notion of an AI system that could predict something in an infinite action space. In Go, you place a piece somewhere, either a black or a white piece, but in math you can really do anything: you can generate any next step, and we didn't have great systems for that. Autoregressive language models have gotten pretty good, so that's one thing that makes it possible; I'm talking on the timescale of a decade here, but that's really important. The other thing that's kind of crazy is that Lean has gotten really good. If you had told a mathematician 20 years ago that a large fraction of the field would be excited about formal methods in math, they might have thought you were crazy, because back then formal methods were really isolated to formal logic or certain corners of, say, graph theory; if you've heard of the four color theorem, that was one big success for formal math. What's changed is that Lean is so flexible and so exciting for people that they've contributed this thing called Mathlib, so now there's a large body of knowledge you can build on to prove things. The combination of AI starting to even be a possible fit for this problem, plus Lean working really well (Lean 4 was only officially released in September 2023), those two things happening together really made it the right time to attack this.

Can you guys say a word
about data, specifically synthetic data, and what it is that fuels the model you're building?

Yeah. Synthetic data is the fuel for the model. There's an amazing resource called Mathlib, that open-source repository. It's a lot of human-written Lean, and it's written in a way that's very general and compact; they're really proving advanced theorems. It's not necessarily the best fit for problem solving, so as a result almost the only data you can use for this is synthetic data you generate yourself, because that original data is not very applicable. It's a data-poor regime compared to most areas of AI. So the process I described, where the agents themselves try to solve problems and thereby generate training signals, is the primary way you can get data. The other issue is that you have to progress through levels of intelligence: you're not proving the Riemann hypothesis up front, you're proving really simple things, and then you amplify yourself recursively throughout the process.

Turns out there's not as much math data on the internet as cat videos.

Unfortunately not. Unfortunately not.

Well, it's interesting, though, because there's the data wall that the general-purpose foundation model companies are running into; at this point they've exhausted what's available on the internet. If you can generate most of the data required for training, that's another advantage of having math at the core versus hoping for math as an emergent property of scale.

Yeah, and I think the data wall
manifests itself in two ways. One is, just as you said, that we're out of internet data. The other is the quality of the internet data that's out there: you can think of that as a ceiling on how smart these models can get, because if you train on the cat videos, all the nice Wikipedia content, and the rest of the internet, it's an open problem how to get something significantly smarter than that. So you do need to get into some kind of self-reinforcing, self-play regime, in our opinion, to reach a point where you can surpass the ability of human mathematicians and researchers at multiple tasks. In many ways it's inevitable that the path takes the AlphaGo-to-AlphaZero approach, where we learn how to make these models create the vast majority of their own data, and have that data actually increase in complexity as the models continue to iterate. The great thing about math is that there's a simple path to doing this: you can basically measure the complexity of a math problem, how difficult it is, by how many lines of Lean it takes to solve, so you can actually look at the complexity of a system. A lot of problems are solved by breaking them into smaller chunks and solving those chunks, and if you think about how that works, the smaller chunks are more manageable because there are fewer lines to solve than in the big one. If you get really good at that, and then get good at solving the chunks, you can train your model to do better, and as you keep turning the gears, the model gets better at solving incrementally harder and more complex problems.
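That chunking strategy is visible even in a toy Lean 4 proof (the theorem and names here are invented for illustration): each `have` introduces a smaller sub-goal that is proved, and checked, on its own before the pieces are assembled.

```lean
-- Each `have` is a short, independently checkable chunk;
-- the final step assembles the chunks into the full proof.
theorem chain (a b c : Nat) (h₁ : a = b) (h₂ : b = c) :
    a + 1 = c + 1 := by
  have hac : a = c := by rw [h₁, h₂]  -- small chunk, verified alone
  rw [hac]                            -- assemble into the goal
```

Because each chunk is shorter than the whole proof, the "lines of Lean" measure of difficulty drops for every sub-problem, which is what makes the recursive decomposition tractable.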
I think that works very well in math because it mirrors how we solve math on pen and paper. Yeah, and we've been able to start with simple axioms and build up giant, complex structures. The Riemann hypothesis might take hundreds of pages, if not more, to solve; Fermat's Last Theorem was, I think, 200 pages, very complex. So I do think eventually you'll get to a level where you'll be able to solve these things. Math is, to some extent, the original synthetic data.

What determines the rate
at which your model can get better?

The rate at which it can get better... I think the highest-level constraint is energy: the more energy you can put into it, the more attempts can happen in parallel, which means you generate data faster. There's no single rate-limiting step; rather, there are a bunch of rate-limiting steps, but no fundamental constraint on how fast it improves, so it's really about how much compute you put in.

There are also still a lot of unsolved problems in this field. We benefit a lot from core theorems that have been proved in the past; in the competition math context, there are theorems that every student learns and uses, like the AM-GM inequality, things of that nature. To some degree Mathlib is incomplete; there's very little content about geometry, for instance, and it's very theoretical and abstract. So one limiting step is what's in Mathlib, and of course at some point the models have to solve the problem of creating new theories and new structures, expanding to other domains, and getting really good at formalizing things that haven't been formalized by humans. I think that will be a big unlock, and it will certainly happen within the next several years: you'll be able to describe a situation, it could be as simple as a baseball team throwing balls back and forth, and the system will be able to autoformalize it and turn it into Lean code on the fly. I don't think we're quite at the point where that's reliable, but when it does get reliable, I think that will be a really big unlock.

If everything goes right, what do
you think Harmonic becomes?

Well, our mission statement is to explore the frontiers of human knowledge, so it's very important to us that the things we produce are correct and useful to humans. In the best case, we build a tool that a lot of mathematicians use to close all the Millennium Prize problems and go far beyond that; I think that would be a great service to humanity. There are also other areas of commercial application for the software. The dream for software engineering is to be able to simply check that code is correct. To do that, you need a very good model of how code works; you need to understand how the libraries work, what they promise, that kind of thing. So you can imagine a future where safety-critical software is proven correct, general software is proven correct, and the way software engineering is done changes as well. There are a lot of applications if you can make a system that's very good at mathematical reasoning and very good at checking its reasoning.

Yeah, we really think there are a lot of applications. Math and software are two fairly obvious ones. Software engineering as a discipline is changing really quickly; I'm sure you're seeing all the reports of people doing crazy things with Cursor and Claude 3.5. In the future, I think software engineering will be less about reviewing and collaborating on code as an artifact and more about collaborating on specs: what do we want the code to do, and can we be more rigorous about that? That's where verification will become a bigger thing, because as the cost of software goes to zero, the cost of verified software will also go to zero, and suddenly this thing that was very impractical and expensive, because you needed specialist humans to do it, will just accelerate dramatically with AI. Look out five or ten years from now: if we progress along the capability curve as we have been, the vast majority of software written will be verified and provably correct.
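In miniature, "verified software" means a program shipped together with a machine-checked theorem about its behavior. A hedged Lean 4 sketch (the function and theorem are invented for illustration):

```lean
-- A tiny program...
def swap (p : Nat × Nat) : Nat × Nat := (p.2, p.1)

-- ...and a machine-checked guarantee about it: swapping twice is the
-- identity. The compiler, not a human reviewer, certifies the claim.
theorem swap_swap (p : Nat × Nat) : swap (swap p) = p := rfl
```

The proof is just `rfl` because `swap (swap p)` reduces definitionally back to `p`; for richer properties the proof would be longer, but the contract is the same: if it compiles, the property holds.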
really exciting future I also think like
on the more theoretical side it's not
just math but like physics is um
essentially math theoretical physics is
uh you know one one of the main ways the
frontier of math gets implemented and um
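The "provably correct" software described above is exactly what a proof assistant checks mechanically. Here is a minimal Lean 4 sketch of the idea (a toy example for illustration, not Harmonic's system): an implementation plus a theorem stating its spec, which Lean's kernel verifies for every possible input.

```lean
-- Toy "spec + implementation + proof" in Lean 4.

-- The implementation:
def double (n : Nat) : Nat := n + n

-- The spec, checked mechanically by Lean's kernel:
-- `double` returns twice its input, for every `n`.
theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double   -- reduce the goal to `n + n = 2 * n`
  omega           -- linear arithmetic over the naturals closes it
```

If this compiles, the property holds for all inputs; there is no test suite to sample. That is why verified software can get cheap once an AI can write the proofs.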
I think it would be amazing to me personally to accelerate some of the fundamental physics research at the frontier and really develop an understanding of why the universe is the way it is, why the laws of physics exist, and also help figure out experiments to test those. So I would be very proud if we contributed to that effort.

And do you
think you'll mostly be contributing to math and math-adjacent areas like physics and software engineering, or do you think anything that involves reasoning is in scope for Harmonic?

Yeah, I mean, we try to be focused on things; we're still a small company. Over the long term, though, if you believe math is reasoning, and we do, then if we get really, really good at math, and computer science is a very natural analog, anything is in scope for those models. Even the history essay, I think. We'll see.
Tudor doesn't want to touch history essays.

Yeah, I really enjoyed writing history essays, even though my parents were like, humanities, language arts, just ignore all that stuff. But I think my math skills led me to write great history essays too, so hopefully Aristotle will be no different one day. Aristotle wrote some great historical commentary.

You are truly a polymath.

Yeah, I mean, Poetics... if you've read Poetics, it's, uh...
uh should we wrap it up with some rapid
fire let's do it lightning round okay
what in what year will you win the
IMO what do you think tutor soon all
right
2025 soon maybe 2024 all right we'll
sign you up for
2024 all right how about the Millennium
prize ooh
uh that's a tough one I would guess uh
2029 2029 yeah okay I heard
2028 is that is that what they're uh
yeah I guess it's uh fully AI unassisted
Millennium Prize or AI human hybrid well
what do you how about what do you think
for hybrid hybrid I could see 2028 are
we talking on easy Millennium Prize or
hard yeah is it like na easy millenum PR
Stokes 2026 remon hypothesis I'll give
you 2029 all right all right there we go
Good. Given we can't even do arithmetic today with LLMs, that's pretty amazing. When do you think we'll have human or superhuman level reasoning, more broadly defined?

I think, to some degree, if you define it as something that can reason and solve math problems in excess of any human, something that would give Terence Tao a run for his money, we're a couple of years away. But I think the models within the next year will probably get to the 99.99th percentile.

Would Terry agree with that?

I think so, yeah. I don't know, you'd have to ask him, but I think he would agree with that.

One of our favorite questions is
who in the world of AI you admire most, and we'll modify it slightly for you guys: who in the world of AI or math do you admire most?

I like von Neumann. We were just talking before about von Neumann's biography.
I think what I find really interesting about him was that he started as a mathematician and he was discouraged: his father, who was a Hungarian businessman, was trying to discourage him from doing math because it wasn't very monetizable. So he got his friend, who was a great mathematician, to try to talk him out of it, but the friend came back and said, I can't do it, this guy's too good; it would be a disservice to society if he didn't use his talents for math. And then he pioneered computer science, and the von Neumann machine was the blueprint for all modern computers. He contributed to the Manhattan Project, which is a little controversial but very practical and impactful, and he created probably the canonical text in game theory as well. So yeah, I think he's pretty amazing. Also a fellow Eastern European.

Well, some people debate whether Hungary is in Eastern Europe.

Yeah, it's an interesting question. I definitely admire, like, almost all scientists and mathematicians.
But if you've heard of the mathematician Leibniz: what was kind of shocking to learn during the course of working on this company is this. Leibniz was competing with Newton to create calculus, and Newton's formulation won out, but Leibniz was basically there. One thing people don't know is that Leibniz also had a lot of other work, and one piece of it is just incredibly prescient; keep in mind this is the late 1600s. He created an idea called the universal characteristic, which is essentially the notion of having a deductive language, an automated procedure to deduce things using that language, and a body of work, an encyclopedia, in that language that you can build on to derive things. The amazing thing to me is that this thinker, hundreds of years ago, essentially predicted what would be happening in 2024, and it seems the only things that were required were having AI get a little better and having computers that can do something like Lean. I think it's just incredible for a human being to predict that with no concept whatsoever of what was going to come later, but to understand that it's such a fundamental thing that we would end up working on it hundreds of years later.
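Leibniz's three ingredients map neatly onto today's stack: Lean as the deductive language, tactics as the automated deduction procedure, and Mathlib as the encyclopedia you build on. A small illustrative Lean sketch (assuming a project with Mathlib available):

```lean
import Mathlib

-- The "encyclopedia": Mathlib's number theory (`Nat.Prime` and friends).
-- The "deductive language": Lean itself.
-- The "automated procedure": the `norm_num` tactic, which discharges
-- concrete arithmetic goals such as the primality of a specific number.
example : Nat.Prime 101 := by norm_num
```

The point of the example is the division of labor: the human (or AI) states the goal, the tactic searches for a derivation, and the kernel checks it against the accumulated library.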
Awesome, thank you guys.

Thank you. Thanks for having us. Thanks for having us.
[Music]