When Will We Give AI True Memory? A conversation with Edo Liberty, CEO and founder @ Pinecone
By Turing Post
Topics Covered
- AI Memory Lags Five Years Behind
- RAG Still Pre-ChatGPT Quality
- Vector DBs Scale Like GPUs Did
- Dynamic Indexing Wins at Scale
- Knowledge Essential for True Intelligence
Full Transcript
When will we give AI true memory? To make all of this work, you have to do all of these leaps. What does knowledge mean? I think that the world has already changed. It is very good for the industry that there's competition.

Hi Edo, thank you for joining the Inference podcast by Turing Post. Let me start with the big question right away: when will we give AI true memory, what does that phrase mean to you, and what does it not mean?
It's a fantastic question. I like that you open strong, so let me go straight ahead.
I think the state of memory and knowledge is where foundational models were maybe five years ago. But before I say what the state is, I want to slightly zoom out and say what I think memory is at all, because people use the word in different ways, in the same way they use intelligence in different ways. So I want to first spend a minute explaining where I think intelligence, knowledge, and memory differ and what different functions they play in true intelligence. Then I can say where we are as a company, what our mission is, what we do, where we are today, where the market is today, and so on.
Yeah, I have a lot of questions about that. So let's start: when will we give AI true memory?

When you look at foundational models today, the large language models and so on, they really specialize in cognitive skills: reading, writing, summarizing, reasoning, problem solving, math, proof generation, coding. Those are cognitive skills. By the way, I'm not even talking about things like computer vision; those are computational problems, problems that have to do with skills, with compute, with abilities that you train over time. It is a completely different kind of skill, a completely different kind of machinery, that you need, for example, to read all the Boeing technical manuals and then be able to go and replace some part in an engine. Something has to be able to read, consume, understand, organize, and index that information in some way to make it available in real time for decision-making. We as humans, if you're an airplane mechanic, when you're presented with a problem, there is a wealth of information that you have at your fingertips to make those decisions. That's knowledge. That's memory. That's information. And to make all of this work, you have to do all of these steps; you have to improve all of them coherently. You have to digest the data correctly, understand it correctly, organize it correctly in real time, access it correctly, post-process it, and so on. It's a very complex system.
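The steps listed above (digest, understand, organize, access, post-process) can be sketched as a toy retrieval pipeline. Every name here is illustrative; none of it is Pinecone's actual API, and a real system would use a trained embedding model rather than the caller-supplied one assumed here.

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    doc_id: str
    text: str
    embedding: list[float]  # produced by an embedding model in a real system


class KnowledgePipeline:
    """Toy sketch of the ingest -> organize -> retrieve loop."""

    def __init__(self, embed):
        self.embed = embed            # callable: str -> list[float]
        self.index: list[Chunk] = []  # a real system uses a vector index, not a list

    def ingest(self, doc_id: str, text: str) -> None:
        # "Digest the data correctly": split into units the retriever can rank.
        for piece in text.split(". "):
            if piece.strip():
                self.index.append(Chunk(doc_id, piece, self.embed(piece)))

    def retrieve(self, query: str, k: int = 3) -> list[Chunk]:
        # "Access it correctly": rank stored chunks against the query.
        q = self.embed(query)
        dot = lambda a, b: sum(x * y for x, y in zip(a, b))
        return sorted(self.index, key=lambda c: dot(q, c.embedding), reverse=True)[:k]
```

Even this skeleton makes the point of the interview visible: every stage (the chunking rule, the embedding, the ranking) is a place where quality can be won or lost.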
Today, retrieval-augmented generation, RAG, has become the standard, or at least a very crude version of this has. People have many different variations of RAG and search combined with models, now with MCPs and other tools. Based on what I can see, in terms of quality, reasoning, and capabilities, we are now on the knowledge front roughly where models were pre-ChatGPT. There are initial good results. We have some really good ideas that seem to be performing really well. There are good heuristics in the industry that already start to do something qualitatively better than we could before, but we're still very far away from being truly good at this.

We have a product called Assistant that does everything end to end, and it does incredibly well. It's an end-to-end RAG: you drop in documents on one side and connect your MCP server or context engine on the other side, and everything works. We had to improve all the components together; we had to invest in each part of them. And still, I'm telling you, we're very far away. We work with thousands of customers who use our vector database as a basis for RAG, and almost everybody does something very basic. Everybody says, "I'm going to start with the basic thing, and then there are 7,000 things I want to do on top of it."
So that's where we are as an industry. I think we've broken the first barriers. We have amazing ideas. We have infrastructure in place. We have a vector database. We have models. We have all the components. In terms of truly unlocking this, we're on our way there. I think we will get there fully, as an end-to-end automated system that truly understands, remembers, and has all the information available to it, probably in a few years.

How many years?

It's hard to say. These things tend to move faster than I predict. Already today, with Assistant and with our own context capabilities, we're getting very good results end to end. I think within two years a lot of people will think about this problem as semi-solved. By semi-solved, I mean they would rely on somebody like Pinecone to ingest a million documents, and they would rely on their agents that have access to their context from Pinecone to bring in the right information at the right time to make good decisions, in the same way that they rely on language models to behave reasonably.

So when you say we're far away, let me address your scientist-founder side: what breakthroughs do we need to achieve that?

There are multiple components that have to be improved independently, and they all have to be brought together to work really well.
Pinecone is very well known for leading the vector database space, and the vector database is really just a search engine: the infrastructure that keeps the data and does the raw search. If you want to search over hundreds of millions or billions of documents, with filters, with complex embeddings, with text, all of that, you need strong infrastructure, in the same way, by the way, that we needed GPUs to train large models. It's funny that in both cases the unlock ended up being hardware. We had to make TensorFlow and PyTorch a lot better, with GPU acceleration and distribution and all of that; we had to do a lot of software investment to be able to train large models. Pretty much the whole vector database space, and our journey into it, is about being able to throw an infinite amount of data into the same knowledge machine, because that's how you unlock it. The scaling just looks different: it's not a compute thing, it's a storage thing.

But that's not nearly enough. We have our own research team developing embeddings that are trained specifically for retrieval, so that you organize your data such that you can fetch it correctly based on the context. We already have this in production, and we're going to release a new version of our contextual token-based model. Basically, different words in documents mean slightly different things, or are more or less important, depending on the context, and that becomes incredibly important when you search. The only way to do that is with models, so we do that as well. We combine sparse and dense search, which correspond roughly to searching by words versus searching by meaning, context, and general relevance. We as humans have to do both to be accurate.

But even that is not enough. In Assistant itself (and we're now breaking this off into an independent component, even though it's not out yet), the query part also has to be trained. When you send a query, it is not a search query for a traditional search engine, where you just find words that appear in documents.
The search query is a task that you're trying to complete, so an LLM or some other model needs to take that text, or task, or situation you're in and figure out: what is all the information I need to go get to be able to make a good decision? That in and of itself becomes a knowledge agent, or a search agent, or I'm not even sure what to call this thing: an iterative process of fetching all the information you need, based on the information you get back, until you feel you have all the context required to make a good decision. And I'm not even going into hooking into data sources and PDF parsing; there's a whole universe of software around that which needs to be built. So I think we have to improve all of those, and we do. We host existing models because they're good and people want to use them, and we ship our own models as well: we shipped our re-ranker, and we host Cohere's re-ranker and open-source re-rankers for results that come back. We have our Assistant product. We have our own context agent that is coming out, which I told you about. And there's the core DB itself: our ability to work at massive scale and make it extremely cheap, so that you can operate at those scales and not lose sleep over burning through your cloud budget, makes a big difference for companies.

When you first pitched the idea of a vector database, back when you were just out of Amazon, you said almost no one understood the concept. After the ChatGPT boom it became a hot category with a lot of competition, but now some people say it's becoming commoditized, just a layer of infrastructure, and you mentioned the parallel with GPUs as a different layer. Do you agree with that, and where do you think the future of vector databases is headed?

I am incredibly lucky and fortunate to be one of the very few founders who actually started a category. It's a huge privilege. Some of it is insight, and a lot of it is just dumb luck and serendipity and good timing, which again is a version of luck. I didn't know ChatGPT was going to happen; I think almost everybody was surprised by its timing. Almost the definition of a category is that you have competition; otherwise it's not a category. Obviously, when I'm in a sales cycle, competing on a customer, it's not fun. You don't like being undermined. But it is very good for the industry that there's competition. It's very good that we are being pushed to do better, faster, cheaper, with more features. We're being pushed by our customers to be more efficient. We're being pushed to innovate on our technology and our architecture. We shipped our newest release just a few weeks ago, and the performance we're seeing and the cost reduction are massive. At very large scales we are by far the cheapest solution. When you say "cheap," people can take it the wrong way, and I don't mean to cheapen the technology; I mean it's the most cost-effective way to do this. The only way you can do that is if you have the right architecture and you optimize it over many years. We are on a journey.
We're going to keep on that journey, and we're going to keep making our vector database faster, bigger, more cost-effective, more performant, more feature-rich, more secure, more stable, and so on. At the same time, there are also small open-source solutions, and incumbent databases like OpenSearch and others are adding vector search into their offerings; Postgres and others have offerings too. The truth is that these work fine at small scale. You have a lot of developers with a few thousand documents, maybe 10,000 documents, sending a couple of queries a second, maybe even one. Let's say you operate at that scale. That's not nothing, but you're definitely not operating at scale. And numbers-wise there's a lot of that: there are a lot of people with small workloads and far fewer people with very large workloads. So the prevailing thought, if you just open Twitter or LinkedIn, is "oh, I just used this and it worked great." What we see at Pinecone is that people come to us because they say, "Yep, I did the POC. It worked great, and now I'm ready to go to production. Now I'm ready to scale." And they see performance degrade, cost go up, stability starting to look shaky. Managing the thing becomes a pain in the butt: as an engineer, I'm suddenly on the hook to maintain it in production, and I didn't sign up to be a database admin. When people need a large vector database, when they need production grade, when they need scale, and they need it to be cheap and fast and reliable and managed for them, they come to us. So I have no doubt that vector databases are a category, and that to do this well, the system has to be built for it from the ground up. Like I told you, we've evolved our architecture multiple times already, every time to unlock 10x in scale. Even though we designed it as a vector database only, we already had to evolve our architecture multiple times, and we're now very far away from what pretty much any node-based solution is offering.

I think you told me at HumanX that you had to rewrite the whole architecture. Is that right? And why did you need to do that?

To explain that, I need to take you on a little historical trip through the kinds of workloads that vector databases have seen. Circa 2020, people didn't have a lot of data, but they were very aspirational in terms of their throughput. So there were very computationally intense workloads: you'd have one, two, or ten million vectors, but you'd need to query them a thousand times a second. Very heavy compute, super optimized for that; it requires super advanced algorithms, high-performance computing, and data structures. That's the core internals of the DB. We spent several years just optimizing that and becoming extremely good at it.

Then indices started becoming larger. People got used to the idea of vector databases. They got used to the idea that search becomes better. They got more comfortable with vector embeddings. And they said, okay, now we want to run search at large scale. So people started vector searching over a billion vectors, multiple billions of vectors. Now the distribution becomes incredibly difficult. And by the way, it's not only that the amount of data became huge; the ratio between compute and storage changed meaningfully, because those systems are memory-bound, storage-bound, network-bound. They're very rarely CPU-bound. If you just take the high-performance-computing design and replicate it, it will be nauseatingly expensive. So we built our own serverless solution to fan out and co-habit thousands of users, so that when you search, everybody can use the same CPUs and share the same storage. Then you can actually deliver high performance on a massive amount of data, even though specific use cases don't saturate a CPU, and that's the only way you can actually do this cost-effectively.
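The economics of these storage-bound workloads can be illustrated with a toy utilization model; the numbers and function names below are invented for illustration and are not Pinecone's.

```python
import math


def dedicated_cost(tenants: int, cpu_util_per_tenant: float, cost_per_cpu: float) -> float:
    """One CPU per tenant: you pay for idle cycles, since queries rarely saturate CPU."""
    return tenants * cost_per_cpu


def pooled_cost(tenants: int, cpu_util_per_tenant: float, cost_per_cpu: float) -> float:
    """Serverless pooling: pay only for the CPUs the aggregate load actually needs."""
    needed = max(1, math.ceil(tenants * cpu_util_per_tenant))
    return needed * cost_per_cpu


# 1,000 tenants each using 2% of a CPU: dedicated provisioning pays for 1,000 CPUs,
# pooling pays for ~20 -- a ~50x difference, before even counting shared storage.
```

This is the sense in which "replicating the HPC design" becomes nauseatingly expensive: the dedicated model bills for compute that storage-bound queries never use.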
Interestingly enough, the reason we had to redo our architecture, or at least change it meaningfully in the last six months, is that a third pattern is now becoming more common. That third pattern is vector databases, or at least data sets, that are massive, even bigger than the ones I told you about before. They could be tens of billions or hundreds of billions of vectors; sometimes we have customers that talk with us about trillions. So we're like, okay, fine, this is starting to push the boundary. But they're not searching everything at the same time. Those, say, 100 billion vectors are actually siloed into maybe a few tens of thousands of shards. Think of an email provider: when you search inboxes, every inbox is its own little search index. Or, we work very closely with Rubrik, and they provide agents on top of folders. So now every folder, or every set of folders, or every user is a different index, but they have millions of those. So now you have millions of small indices, where by small I mean anything from 100,000 to a few million vectors. This is a big departure from the original setting, because before we said every index is massive, so the object that you call an index can be very heavy.
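One common mitigation for this per-index overhead, in the spirit of the adaptive approach described in this answer, is to let the amount of index structure scale with size and traffic. The thresholds and strategy names below are purely illustrative, not Pinecone's actual tiers.

```python
def choose_index_strategy(num_vectors: int, queries_per_day: float) -> str:
    """Pick how much structure an index deserves, given its size and traffic.

    Tiny or cold indices get a brute-force scan (near-zero bookkeeping);
    only large, hot indices justify the memory and build cost of a full
    approximate-nearest-neighbor structure.
    """
    if num_vectors < 10_000 or queries_per_day < 1:
        return "flat-scan"    # no structure: scanning 10k vectors is cheap
    if num_vectors < 1_000_000:
        return "clustered"    # coarse partitioning, modest bookkeeping
    return "ann-graph"        # full index: expensive to build, fast to query
```

With millions of mostly small, mostly cold indices, a fixed 100 MB of structure per index is ruinous, while a policy like this keeps overhead proportional to what each index actually earns.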
You can have all sorts of bookkeeping, indices, substructures, and folder structures. If you spend, say, 100 megabytes of disk and memory on each index, and of course much more than that on disk, just on optimizing the index, that's fine; it's going to be great. When you have 10 million of those, it's not going to work. It just doesn't work at all. So you have to rewrite everything, and you have to make sure you organize data a lot more effectively. The last thing I would say is that because we are a fully serverless system, we have no idea, to begin with, whether an index is going to have 100 vectors, or 10,000, or 10 million. We have no idea whether it's going to be queried once a month or 100 times a second. We just don't know. So we also had to rebuild our own LSM structure so that every level has its own indexing, based on how much usage it gets and how deep you are in the hierarchy, because we do have a lot of very small indices. For those, you really don't want to index anything; you want to be super scrappy and have almost no structure. The bigger an index is, the longer you have the data, and the more it is queried, the more you can justify spending the energy to index it better. A lot of the discussion in the database industry is "what algorithm is best, what should I use, how should I configure things," and our experience is that there is no single answer: you have to use all of them. Take the email provider example again. If you have a million users, maybe 20% never search at all, maybe 60% in the torso of the distribution have some normal behavior, and then you have the heavy users who do something altogether different and might have 10 times as much data. For that range of behavior, you need your database to choose the right indexing at the right level, dynamically; otherwise you're not going to be able to manage it. That's another big differentiator and a completely new requirement: we have to be very dynamic with our own data structures so that we're able to operate at all those operating points.

That's super interesting: as part of the infrastructure for, let's say, agentic workflows, with so many dynamic parts, you're constantly rethinking and rebuilding the architecture for a basically limitless amount of data and queries. What are the questions about memory or infrastructure that we are not asking enough right now, on a bigger scale, when we talk about agents?

I have my own bias because I occupy one part of the stack, so take that as a disclaimer for my answer. I am both horrified and delighted that people assume that LLMs, knowledge agents, search, RAG, and things like Pinecone and Assistant just work and are perfect. It's horrifying because they're not perfect yet, and just assuming that is not always the best assumption. But it's also amazing, because it means that the people working on it, our scientists, our engineers, and everybody at our level of the stack, have a lot of work and a lot of figuring out to do. The questions that we as a technology community don't ask enough are really: what does knowledge mean? What do we expect from these systems? How accurate do they need to be, and in what setting? What does accuracy even mean? In our context API, for example, we always provide references. We have to make sure that all the information you get back is grounded in information you gave us, because you never want to hallucinate. You really want to make sure that anything you say can be verified.
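A grounding requirement like this can be enforced mechanically by refusing any answer whose citations don't resolve to chunks that were actually retrieved. This is a toy sketch of the idea, not Pinecone's actual API; the citation format is invented.

```python
def grounded_answer(answer: str, citations: list[str], retrieved_ids: set[str]) -> str:
    """Return the answer only if every citation points at a chunk we actually retrieved."""
    if not citations:
        raise ValueError("refusing: answer cites no sources")
    missing = [c for c in citations if c not in retrieved_ids]
    if missing:
        raise ValueError(f"refusing: citations not grounded in retrieval: {missing}")
    return answer
```

Whether refusing is the right behavior, rather than answering with a caveat, is exactly the "is that too high a bar?" question raised next.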
But is that too high a bar? I don't know; maybe it is for some applications. How do you break out of that? Do you need to break out of that? There are also really deep questions about what you do with contested information. Sometimes you have one point of view and the opposite point of view, both in the data. What do you do with that? If you search for one thing, you'll find it. There are biases; we know these things from social networks, for humans as well: if you seek some information, you'll find it, and it will strengthen your conviction. Maybe unintentionally, you can encode the same behavior into models. Again, the same questions about truth that we struggle with as a society come up with data as well. The fact that something is much more common doesn't make it more true, yet models are by nature Bayesian: if you see something more often, you learn it more strongly, and you tend to give that answer more. That direct correlation in AI between how common or frequent something is and how truthful the model thinks it is: that's a bug, that's a problem we have to fix. So there are deep questions about how we process data, understand it, make it coherent, and make it accessible in a way that we as a society can start to trust. I love it, because there are twenty years of research ahead of us; we're not going to be done anytime soon. If I had 50 PhDs working on this, they would all be very busy.
Can you give me a short description of what you mean by knowledgeable AI?

Yeah, we touched on this a little, but I can try to give you a more concise answer. You need a system that can do the following: take large amounts of unstructured and unorganized information, ingest it, and organize it in such a way that, in real time, it can create insights and provide the insights and the data needed to either complete a task, answer a question, or solve a problem. That entire pipe is what I call knowledge. At the end of the day, what you want is for your agent, which again completes a task, to have all the historical context and to bring it to bear in real time, in the right context, to complete the task. That's it. Like I said, you have to build multiple different components of this stack, and they all have to be very well orchestrated. On some of them, as a society, we're much closer to where we need to end up; on some we're still very far away.

Do you see memory and knowledgeable AI as stepping stones toward AGI? And more broadly, what's your take on AGI?

One hundred percent. I don't think you can be intelligent without being knowledgeable; I just don't think that's possible. When you speak to your primary care doctor, it's not enough that they have a very high IQ. You really want them to have gone to medical school, and you hope that they actually understood what they were reading, that they remember it roughly, and that they can talk about it intelligently. Otherwise their IQ means nothing. They're probably less quick on their feet, just IQ-wise, than they were at 20 years old, but you trust them a lot more when they're 50 because of their knowledge. If we look at the kinds of jobs, the part of the economy that benefits most from AI, it's all knowledge work: lawyers and accountants and patent editors and musicians and other kinds of artists, you name it. Those are all knowledge workers. If they're going to be enabled by AI, those systems had better learn how to retain information and be knowledgeable. Otherwise we will have pegged AI as a spellchecker on steroids: edit this, tweak that, write something for me, and so on, which I think would be a miss.

Thank you. I have two last questions, and one is about books. I think books form people, and my question is: what book formed you? Is there a particular book or idea you keep returning to as you build Pinecone's future?

Wow, that's a great question, because as a CEO I don't get to read as much as I would want; it's not something I have a ton of time for.
That's a problem. There's a book I read many years ago that I still come back to often. I reread it recently, and I actually bought it for some people on the team. It's called Endurance, about the Antarctic expedition led by Ernest Shackleton and his team, which starts with absolute disaster: their ship stranded in the southern sea. It's a well-known tale, an unbelievable journey. For me as a CEO, there are so many lessons there on leadership, on hope, on resourcefulness, on heart, on the limits of the human spirit. Just imagining people going through what they went through is awe-inspiring. There's something about the human spirit there that is incredibly inspiring.

That's a great suggestion. I'm making a list of such books. It's always good to have different perspectives on what forms you, what keeps you going as a leader, as a scientist. So thank you.

I try to stay away from the traditional CEO books, all the "how to be efficient" ones. There's a lot of that, and a lot of really good ones too, but I think fables will often give you more than self-help books.

Interesting.

Yeah, some of it is very practical, but I think a lot of what it means to be successful, how you succeed both in business and in life, has a lot to do with your morality, your character, and your ethics. At the end of the day, you often see good companies with good technologies fail for purely human reasons.

Well, if we're getting back to the AI field: when you think ahead, let's say five years, what excites you or concerns you the most about the future, the world you're helping build?
I will say two things. First, I'll reiterate what I said before about truth and knowledge. Those are to some extent technology challenges, but a lot of them are just human challenges. In our world, agreeing on what's true is not easy, even if you have access to all the information. So it's not even a question of sources or of intelligence; it's a political problem, a people problem, a human problem. It's something we have to deal with as a society; we're all grappling with these issues all day long. For Pinecone specifically, it's significantly easier, because we work with companies and their data, so they define the desired outcome of the product they're building. We get a bit of a free pass, but we still have to grapple with some of this, and I think in AI in general it will become a big issue. In the same way that search engines had to deal with these issues in the early 2000s, and in the same way that social networks had to deal with social bias and algorithms creating information silos, all of these things will become AI problems too. Those are human problems, not technology problems.

The second thing doesn't worry me as much, but it is an issue we'll have to deal with. There's a bit of a race between how solid, understood, and trustworthy the technology is versus what it is used for. You want to see those go hand in hand. I want to see that we become good at curbing unethical or irresponsible uses of AI when we see them as a society, rather than just letting them take off because they make money for somebody who isn't willing to shut them down. We'll need to get ahead of that at some point. I'm not a politician or law enforcement; I can only fight that by making the technology move faster, be more trustworthy, more understood, more manageable.

Well, those are two concerns. Is there anything that excites you?

A ton excites me. This whole thing is incredibly exciting. We're seeing an absolute sea change in how pretty much every profession is practiced. I've never seen that in my lifetime, and I don't know when it last happened; maybe the internet had such an effect on people. I have young kids in primary school, and they already view AI the way 30-year-olds think about the internet. They don't remember a time when you couldn't talk to a machine and have it respond; the idea that a machine couldn't talk back sounds insane to them. So of course I think the world has already changed and will keep changing, and the value we'll unlock from this is massive. Companies are going to be smaller; people are going to be doing more. A lot of menial cognitive tasks, all the summarizing, putting your notes into Salesforce after the meeting, all that nonsense people hate doing, is just going to go away and be done better, faster, and cheaper. That's the societal and economic impact. For me as a scientist and engineer, the fact that I get the privilege to actually build this, to influence how it works, and to have the delight of discovering something new: "hey, did you know this actually works?" Those eureka moments are just unbelievable. And the fact that we have a platform serving tens of thousands of developers means that after a eureka moment, a few weeks or a month or two later, you can actually give it to thousands and thousands of people, and they actually use it. That's just the icing on the cake.

Well, thank you very much for the conversation today.