From Software Engineer to AI Engineer – with Janvi Kalra
By The Pragmatic Engineer
Summary
Topics Covered
- AI Company Categories: Product, Infrastructure, Model
- Startup vs. Big Tech: Weighing the Trade-offs
- Startup Due Diligence: Beyond the Hype
- Proactive Learning: Turning Rejection into Opportunity
- AI Engineering: Embracing Iteration and Imperfection
Full Transcript
You interviewed at 46 companies. What did you learn about what the market is like, what interviewing is like, what the whole scene is in terms of the space?

There are the product companies, the infrastructure companies, and the model companies. I found it helpful to put companies in each category and figure out which segment you're most excited about, to help narrow down the options, given that there are so many AI companies right now. Product companies are the companies building on top of the model. Here I think of Cursor, Codeium, Hebbia. Infrastructure companies are the companies building the tools to help AI product companies effectively use LLMs. There's a whole suite of these: the inference providers like Modal, Fireworks, Together; the vector database companies like Pinecone, ChromaDB, Weaviate; the eval and observability tools like Braintrust, Arize, Galileo; and a whole other suite of products. And then there are the model companies, which are the base of the ecosystem, building the intelligence. You have the big tech companies like Google and Meta building models, and then you also have other, smaller companies like OpenAI and Anthropic building models as well. So that's how I think about it.

For me, in trying to be more focused in my search, I decided to focus on model and infrastructure companies, because I wanted to keep getting breadth in what I was doing, and I felt the product companies were too similar to my experience at Coda, which was phenomenal, but I wanted to keep growing. The trade-off was that it was a bit more of an uphill battle, because the work I had done was not as relevant to model or infrastructure companies.
What is AI engineering and what does it take to get hired as an AI engineer?
Janvi Kalra is a software engineer turned AI engineer with only four years of experience, but already a very impressive background. During college, she joined a university incubator where she shipped four mobile apps into production for paying customers. She then interned at Google and Microsoft, joined Coda as a software engineer, became one of the first AI engineers at the company, interviewed with 46 different AI startups, and now works at OpenAI.

In today's conversation, we cover Janvi's decision-making when she chose to join a startup after graduating, when she already had return offers from big tech companies like Google and Microsoft; how Janvi became one of the first AI engineers at Coda, despite being told no when she first volunteered to join Coda's in-house AI team; what Janvi works on at OpenAI and why she thinks OpenAI moves so fast; and many more topics. If you're interested in AI engineering, or in how to make the transition from software engineer to AI engineer, this conversation has lots of relevant tips and observations from Janvi. If you enjoy the show, please subscribe to the podcast on any podcast platform and on YouTube.
So Janvi, welcome to the podcast.

Thanks for having me, Gergely.

You were at a good college, Dartmouth, but you then got an internship at Google. Not everyone at Dartmouth can get into a place like Google; it's very competitive. How did you get that internship, and then the Microsoft internship? What was the interview process like, and what do you think helped you get your foot in the door?

Back then I didn't know anyone at Google or Microsoft, so I applied through their portals. I remember that for university students they asked you to write essays on why you want to work there, so in those essays I talked about the things I had built outside of classes, as well as why I wanted to work there in particular. I was lucky to get picked out of the stack, to be honest, and then I LeetCoded to prepare for their interviews.

So tell me about preparing for LeetCode. These days it's somewhat commonly known, but there are two sets of people. Some engineers or college students roll their eyes and say this is pointless, it's not the job, and so on. And then some people, like you by the sounds of it, just go through it, study it, prepare. How did that go?
For Google, that was in my sophomore year, and I remember being surprised that I even got an interview in the first place. So I wasn't actively studying before. When I got the interview, I asked a couple of friends, what do I study? I think they sent us a pamphlet of things to look at, and in that pamphlet there was that green book, because back then NeetCode wasn't a thing, there wasn't Blind 75. I'm forgetting what that green book is called, Cracking the Coding Interview, probably. I remember buying that book, locking myself in my room, and just reading as many questions as I could back then.

Yeah, you have it: Cracking the Coding Interview. I even have a version of it, the white book, from about ten years ago. The author, Gayle Laakmann McDowell, was actually a Google interviewer, I think 15 or so years ago, and she, I'm not sure if she still does it, but she used to run training programs at companies, including at Uber. She came and ran our training program on how to do coding interviews: what kind of signals to look for, how to calibrate. It's actually really nice, because she teaches companies how to do it, so she can keep the book up to date with how it works at different companies.

Wow. She definitely changed the game, in that I'm not sure how much of this was written down before that. I think she definitely paved the way for NeetCode and other people to build on top of this.
Yeah. If you want to build a great product, you have to ship quickly. But how do you know what works? More importantly, how do you avoid shipping things that don't work? The answer: Statsig. Statsig is a unified platform for flags, analytics, experiments, and more, combining five-plus products into a single platform with a unified set of data. Here's how it works. First, Statsig helps you ship a feature with a feature flag or config. Then it measures how it's working, from alerts and errors, to replays of people using that feature, to measurement of top-line impact. Then you get your analytics, user account metrics, and dashboards to track your progress over time, all linked to the stuff you ship. Even better, Statsig is incredibly affordable, with a super generous free tier, a starter program with $50,000 of free credits, and custom plans to help you consolidate your existing spend on flags, analytics, or A/B testing tools. To get started, go to statsig.com/pragmatic. That is statsig.com/pragmatic. Happy building.
This episode is brought to you by Sinch, the customer communications cloud trusted by thousands of engineering teams around the world. If you've ever added messaging, voice, or email into a product, you know the pain: flaky delivery and platform stacks with middlemen. Sinch is different. They run their own network, with direct carrier connections in over 60 countries. That means faster delivery, higher reliability, and scale that just works. Developers love Sinch for its single API that covers 12 channels, including SMS, WhatsApp, and RCS. Now is the time to pay attention to RCS, rich communication services. It's like SMS, but smarter: your brand name, logo, and verified checkmark, all inside the native messaging app. Built by Google and now rolling out with Apple and major carriers, RCS is becoming the messaging standard, and Sinch is helping teams go live globally. Learn more at sinch.com/pragmatic. That is sinch.com/pragmatic.

And so, how did your internships go at both Google and Microsoft? It must have been really exciting to get them. Google was the first one, right?
It was a phenomenal experience, and it was exciting for a couple of reasons. First, it was just a privilege to get access to the codebases of places that I admired. I remember when I was at Google, I was on the search team, and I would use Moma, their internal tool, to find documents. I remember so many weekends where I was just trying to find company documentation on how the search algorithm really works, or combing through code beyond the code I was touching, to get a sense of what makes Google tick. So from an intellectual perspective, it was very exciting back then. Second, you also learn a lot of technical things that you don't get exposure to in college, like how to effectively operate in a large codebase, and the importance of writing unit tests. When I look back now it seems trivial, but for a college student back then, those were important learnings that I really value. And my favorite part was having access to people who were five or ten years ahead of me in their careers. I remember, over coffee chats, asking many of them: what career advice do you have? What are things that you loved in college that I should do more of? Some of the advice I got really shaped decisions that I made. So that was my favorite part of the internships. I would say, in hindsight, that given big tech and startups are such different experiences and you learn so much at each, it would be more educational to do one startup internship and one big tech internship, to get a very robust overview of what both experiences are like very early.
So looking back, now that you've done both Google and Microsoft, they were somewhat similar-ish, is it safe to say? I mean at a high level, right? We know every company and every team is different.

Yes, at a high level. What was different is that I wanted my junior year to be about operating systems, because at that point I had just taken a computer architecture class and I loved it, and I wanted to go deeper in the stack. So from a technical perspective they were very different, but in terms of what companies look like and how they work, which is a huge part of an internship, that part was similar.

So what did you work on at Microsoft? Was that OS?

Yeah, I was working on OS. Specifically, I was on the Azure OS team, on a product that lets you interact with Azure blobs locally from your file system. It hydrates and dehydrates those blobs; you can think of it like Dropbox for Azure blobs.

Yeah, nice.
That is so cool, both that you decided you wanted to do something a lot less conventional, not the usual SaaS apps or web apps, and that you were able to make it happen. Did you express this preference when you got the internship?

Yes. I remember talking about my computer architecture class, where we built up a computer from transistors, conveying how mind-blown I was by that experience and how much I wanted to work on operating systems, and then I was lucky that they put me on that team.

That's awesome. But I think there's a learning here: you don't ask, you don't get. I remember when I set up our first internship program at Uber in Amsterdam, for that site, once we made an offer to interns, after they went through the interview process, I would also ask people if they had a preference, and most people just did not have a preference. So there's this interesting thing: if you do express your preference, worst case you get whatever you would have gotten anyway, but a lot of people don't speak up, and the people at these companies really want to make it a win-win, especially for internships. The goal of an internship is for you to have a great experience, and companies would like you to return. It goes both ways, right? They evaluate you, but you also evaluate them. So they will actually act on it. It's just a really nice learning: express what you're hoping for, and it might just happen.
Yeah. And these companies have so much IP, and so much that we take for granted today that was actually a really hard technical problem they solved. So it's just a treat to go work on something that you admire and get to actually see how that code works.

Absolutely. Once you're in there, these companies are so amazing with how big they are, and especially as an intern, a lot of doors are open. You can just ask, and they'll be super happy to help. So then you made a very interesting decision, because you interned at Google, you interned at Microsoft, and a lot of students or new grads would be super happy with just having one of those. As I understand it, you could have returned to either, and then you made the decision not to do that. Why? You know Google and Microsoft, you loved the teams. Tell me about how you thought about the next step, about what you would like to do after you graduate.
So I told you how I was having coffee chats during my junior internship at Microsoft, and a bunch of mentors mentioned that startups are a great experience as well. So I started to evaluate the big tech versus startup option, and I don't think it's black and white; I think there are really good reasons to go to both. The way I saw it, the upside of going to big tech was, first, that you learn how to build reliable software at scale. It's very different to build something that works versus something that keeps working when it's swarmed with millions of requests from around the world and Redis happens to be down at the same time. Very different skills. So that was one upside. A different upside of big tech in general is that you get to work on more moonshot projects that aren't making money today. They don't have the same existential crisis that startups do, so they can work on things like the great AR/VR research that was happening back in the day; I think Google was one of the best places to be if you wanted to do AI research. There are also practical, good reasons to go to big tech: I'd get my green card faster, I'd get paid more on average, and the unfortunate reality, I think, is that the role does hold more weight. People are more excited about hiring an L5 Google engineer versus an L5 from a startup, especially if that startup doesn't become very successful.

With all that said, though, I think there are great reasons to go to a startup. Back then this was hearsay based on what I heard from mentors, but now, having worked at a startup for three years, I can confirm it's indeed true. First, you just ship so much code, right? There are more problems than people, so you get access to these zero-to-one, greenfield problems that you wouldn't necessarily get at big tech, where maybe there are more people than problems. Second is the breadth of skills, and this is not just in the software engineering space. From a software engineering perspective, maybe one quarter you're working on a growth-hacking front-end feature and the next quarter you're writing Terraform. But even in terms of non-technical skills, you get insight into how the business works, and you're expected to PM your own work. So there's so much breadth there, and you just get more agency in what you work on. You get the opportunity to propose ideas that you think would be impactful for the business and go execute on them. So that breadth and learning opportunity was a huge upside that got me very excited about startups.

It's just so nice to hear you summarize this, because in reality, a lot of people go to one type of company or the other, either big tech or a startup, and then they're there for a long time, and one day they might switch, but there's a lot of sunk-cost fallacy: you're used to it, and some people, after a few years, go back to the same type of company. So there are relatively few people who see this with such a short and focused time difference, who see the different upsides like you have. And as you said, it sounds like the upsides did happen. So you went to Coda, right?

Yes, I did go to Coda.
And then how did things go? You mentioned some of the upsides; I assume all of that happened there, but what else? It sounds like things sped up there, actually, in terms of professional learning and also career experience.

Definitely. I went there for growth and breadth, and I definitely got that in terms of the opportunities I got to work on. It was a phenomenal experience, and I'm happy to dive into the specific work I did, but overall, just a phenomenal experience.

Before we do: before the podcast we talked a little about how you thought about selecting a startup, because you did go to Coda, but as I understand it, this was not just "oh, this looks like a good startup." You actually thought about how to select a potentially great startup, one with that kind of growth potential. What is your mental model, how did you evaluate and rank the startups, and what was your application process?

So back then I didn't have startup experience, and I also went to a school on the East Coast where not many peers around me were going to startups. So I very much looked for places where I loved the people, in terms of them being smart people I could learn from, and where I was very passionate about the product, because I think you do your best work when you're passionate about what you're building. So I looked at it through those two lenses. Today, after having been in Silicon Valley for four years, I have a more robust rubric for what I look for. It's definitely evolved since then, because one thing that's become super clear after living here is that your career growth at a startup is very contingent on the startup growing. So then how do you choose which startup is going to grow? That's a hard question; venture capitalists spend all their time thinking about this.
Yeah. And today, what is your mental model? For someone who has a few years of experience, a bit like yourself, what would you advise on how to think about different categories of startups, the kinds of risk, the upsides, and so on?

There are startups of all different sizes, and the smaller you go, the more risk there is. I think that's part of the game, and that's what makes it exciting, because you also have more upside when there's more risk. That being said, I feel very strongly that every engineer who takes a pay cut to go to a startup should have an informed thesis on why they think that company is going to grow during their tenure. And how to actually assess growth is a hard question with no right answer, but my current rubric looks for four things. First, high revenue and a steep revenue growth rate. Second, a large market where there's room to expand. Third, loyal, obsessed customers. And fourth, competition: why this company will win in that space. I'm happy to go deeper into any of those, but that's at least how I think about assessing different startups today. And it's all relative, because a startup that is pre-product-market fit will have less revenue than a startup that is Series D with 400 people.
And when you're thinking about these four different things, and we'll get to your actual job search later, do you try to find these things out yourself? For example, you mentioned customer obsession: how much customers love it. Let's say there's a startup you're interested in. How do you evaluate that? Do you look it up yourself? Do you put in the work? Do you try to somehow outsource it? What worked for you?

Because there's no right answer here, I think it's really important to do the due diligence yourself, because you're going to be the one responsible for your decision, good or bad. For something like customer obsession, I look on Reddit and on YouTube to try to find real users. For more SaaS-style companies, where you may not have customers writing about the product online, I'd actually find companies that use the product and go try to talk to them, to understand from the ground up what people think about it, especially if it's a product I can't use myself because it's built not for consumers but for businesses.
businesses instead. I I love it and again I I don't think more enough people do this kind of due diligence and and and they should you know one I guess now but in famous example is fast the
one-click checkout startup where they recruited actually there were some ex Uber folks there who I I like knew to to some extent but a lot of people were recruited with with this shiny diagram
that showed headcount growth and people most a lot of people did not ask about revenue or or when they did they were they were okay not hearing about it and even the people who worked there for a while they ignored it and there were
some people who actually asked about it and they actually realized that something was off but just following your framework for example some people who are a bit more diligent could have avoided it same thing with customers for
example there there were not many and like one learning that I I had back then and talking with engineers who worked there and got burnt they all told me I wish I would have done a bit more due
diligence and not taken the CEO's word for it but also asked for proof say same thing revenue runway those kind of things. Yeah. I feel like you know at
of things. Yeah. I feel like you know at startups we're paid in equity a large chunk and so you're investors so you have the right to all this information and to me if a startup's not giving you that information that is a red flag in
and of itself. Yeah. I I feel maybe people should think about that if you join a startup a bit like if you put in like a bunch of your money like a significant amount of your savings. And
when I did angel investing, if if I didn't get information, I mean, you can still put it in and you can hope for the best, but I think that's called gambling to be fair. It is. And and so that's okay. But then just be honest with
okay. But then just be honest with yourself like if I'm not getting this information, I am gambling my my most valuable time and very valuable years of of my life. And that's okay, right? It
could work, but it's maybe not the smart way. Exactly. And as engineers, we have
And as engineers, when we're recruiting, we're LeetCoding, we're doing system design. It's hard to carve out the time to do diligence, and so it's something I think we don't talk about enough.

I will say that, as a hiring manager, or just as a manager, when you join a company and you have done your due diligence beforehand, you will have a better start. People will remember you saying, "Oh, this is the person who actually cares about the business, cares about where it's going, cares about how they can contribute." So on day one you're already not just seen as new starter XYZ; this person has drive. I think that comes across. And honestly, if a company is not happy with you just trying to understand the business and seeing how you can fit in, that's probably a red flag in itself. Let's be real.

Yeah, that's true. That's fair.
So at Coda you joined as a software engineer, and then, I looked at your LinkedIn, you transitioned to AI engineer. How did that happen? And how did you make that happen? Because it sounds like you actually had a lot to do with it.

So, if we rewind to the end of 2022, that was when ChatGPT came out, in November.

Oh yeah. Big milestone.

And Coda saw the amount of love this product was getting, and Coda was starting an AI team with two engineers to build an AI assistant to help you build your Coda documents. At that time I asked, hey, I'd love to be a part of it, and got a very polite no. So I thought, no problem, I'm just going to start building in the space anyway in my nights and weekends, because this technology is very cool.

The first thing I did while learning was to try to answer for myself how ChatGPT even works, and through that I went down a rabbit hole of self-studying the foundations of deep learning: starting with the very basics of what a token is, what a weight is, what an embedding is, to then understanding that, okay, LLMs are just next-token prediction; going through the history of different architectures, how we went from RNNs to LSTMs; and then building my way up to the transformer and understanding that it's positional encoding and attention that have allowed us to scale up so well. What this did for me was give me some intuition for how this technology works, which gave me a bit more confidence.
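As a rough illustration of the "LLMs are just next-token prediction" intuition she describes: generation is a loop that repeatedly asks a model for a probability distribution over the next token and samples from it. The sketch below hard-codes a toy distribution in place of a real transformer; nothing here is from the conversation itself.

```python
# Toy sketch of next-token prediction: a real LLM would compute these
# probabilities with attention over the context; here they are hard-coded.
import random

VOCAB = ["the", "doc", "formula", "<end>"]

def toy_next_token_probs(context: list[str]) -> list[float]:
    # Stand-in for a trained model's output distribution over VOCAB.
    return [0.4, 0.3, 0.25, 0.05]

def generate(context: list[str], max_new_tokens: int = 5) -> list[str]:
    for _ in range(max_new_tokens):
        probs = toy_next_token_probs(context)
        token = random.choices(VOCAB, weights=probs, k=1)[0]
        if token == "<end>":
            break
        context.append(token)
    return context

print(generate(["Coda", "lets", "you", "write"]))
```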
After having built that foundation, I wrote about it on my blog, so my team was aware that I was working on this as well. Then I started to build on top of these models. So I went to a lot of hackathons. My favorite one was a way to learn languages while watching TV, because that's the way I learned Hindi, and I wanted a way to practice my Mandarin like that. When I was building and doing hackathons, I got a sense of how to actually use these tools. So after five months had passed, when I asked again to join the AI team, I got very much a "heck yes, come join us; we see that you truly care about this because you've been working on it in your free time." And that's when the real learning started, because hacking on top of models in your free time is very different from building for production, especially because as engineers our role is to build reliable systems, but you're dealing with stochastic models, so they're very much at odds with each other.
they're very much at odds at with each other. And when you say hackathons is
other. And when you say hackathons is this was these like weekend hackathons you know the ones that anyone can attend you register and and like especially they were popping up because of the AI uh you know like hype basically
starting. Yes weekend hackathons. I also
starting. Yes weekend hackathons. I also
did so the project I was telling you about that was a language learning tool that was with an online hackathon for six weeks with this company called Buildspace. Anyone could go join and the
Buildspace. Anyone could go join and the way you win in this hackathon is not by what you build but how many users you get or how much revenue you're generating. So it's such a fun way as an
generating. So it's such a fun way as an engineer to not just build something but actually go and try to get people to use it. So it was a very fun learning
it. So it was a very fun learning experience and because it was all online they really encouraged us to build in public and that in and of itself was a great learning. I I I love it because I
great learning. I I I love it because I think uh a lot of times when a new technology comes out and you know a lot of engineers especially you had a day job and the people who have a day job the the biggest thing is like a I can
build something on the side but what should I even build? I I mean, you know like it it feels it's kind of pointless.
Like you can do tutorials, but especially in this case, there's not many and tutorials are kind of not there. So, I love how you found a way to
there. So, I love how you found a way to have a goal to enjoy it to do, you know scratch your own itch as well and and combine it. So, like maybe these online
combine it. So, like maybe these online hackathons or like hackathons happening around you, it could be a great way to do it. And and it sounds like it
do it. And and it sounds like it actually helped your professional like you help helped even your company and and and your job because now knowing how to use these tools was very much in
demand. It still is but there were not
demand. It still is but there were not many people who who were like as enthusiastic and as as self-saw. One
thing that I learned from that experience was don't wait for people to give you the opportunity to do something just start working on it. I I I I love this. This is such a good mindset. So
I love this. This is such a good mindset. So when you joined this team, did you technically become an AI engineer? And what do you think an AI engineer even is? I feel it's kind of an overloaded term, so I'd just love to hear how you think about it.

An AI product engineer builds products on top of models, and the work entails, first, a lot of experimentation: a new tool came out, so you experiment with what you can build to solve real customer problems, prototype it, and from there go and actually build it for production. So at its core it's very similar to software engineering. There are some domain-specific things, like learning how to fine-tune, learning how to write good prompts, learning how to host open-source models, but in and of itself the foundation is very much software engineering.
Yeah. And I guess evaluation is also a big one.

Yes, that's a great one: writing good evals.

One thing that was really surprising for me to learn, when I talked with a friend who works at a startup, is how their test suite costs money to run every time. The eval suite costs, I don't know, something like $50 per run. And when I run my unit tests, it costs time and effort, but it's free; it's just time. Now, especially if you're using an API, you have this cost, which I think is refreshing and just a good way to think about it, and it forces you to adapt.

Yeah, for sure. It's very interesting, because there's no good way to measure the accuracy of a nondeterministic model without using an LLM. At Coda we used to use Braintrust, and it was so interesting how a model is used to check whether or not the model is working correctly.
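A minimal sketch of the LLM-as-judge pattern being described, assuming the OpenAI Python SDK; tools like Braintrust wrap this kind of loop with scoring, history, and dashboards. Every case costs real API calls, which is why an eval suite has a dollar cost per run, unlike a unit-test suite. The eval case and model name below are illustrative, not from the conversation.

```python
# Minimal LLM-as-judge eval loop (sketch). Assumes the OpenAI Python SDK and
# an OPENAI_API_KEY in the environment; the eval case below is made up.
from openai import OpenAI

client = OpenAI()

def judge(question: str, answer: str, expected_topic: str) -> bool:
    # Ask a model to grade another model's answer; each call costs money.
    prompt = (
        f"Question: {question}\nAnswer: {answer}\n"
        f"Does the answer address the topic '{expected_topic}'? Reply YES or NO."
    )
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return verdict.choices[0].message.content.strip().upper().startswith("YES")

cases = [{"question": "How many unread emails do I have?", "expected_topic": "email"}]
for case in cases:
    answer = "You have 3 unread emails."  # in practice, call the system under test
    result = judge(case["question"], answer, case["expected_topic"])
    print(case["question"], "->", "pass" if result else "fail")
```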
Yeah. As you were going deeper and deeper into the AI field, what were the resources that helped you? Was it pure self-learning? Was it going to the source, to where the papers are? This is a really ongoing question, because the industry is not slowing down, and there aren't many books or other static resources out there.

Yeah, very fair, because things are changing quickly and there weren't static resources at that time; that's still true today. I found it most helpful to learn by just doing. So even when I was on this team, I'd go to a lot of hackathons, internal to Coda and external. I remember there was an internal hackathon at Coda that happened to line up with the day OpenAI released function calling for the first time. So our team played around with the power of function calling, which is a very important tool, by turning natural-language prompts into the appropriate choice of which third-party integration to use. So for example, a user types in "how many unread emails do I have?" and it should appropriately pull in the Gmail pack, the Gmail third-party integration that Coda had. At that hackathon we also played around with embeddings from Pinecone, to see whether I could more accurately pull out the right third-party integration.
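For readers who haven't used function calling: the model is given a list of tool definitions and picks which one fits the request. The sketch below assumes the OpenAI chat completions API; the tool names are hypothetical stand-ins for Coda's packs, not Coda's real integration API.

```python
# Sketch: route a natural-language request to a third-party integration via
# function calling. Assumes the OpenAI Python SDK; tool names are hypothetical.
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "gmail_unread_count",
            "description": "Count the user's unread Gmail messages.",
            "parameters": {"type": "object", "properties": {}},
        },
    },
    {
        "type": "function",
        "function": {
            "name": "salesforce_projected_arr",
            "description": "Look up projected ARR for an account in Salesforce.",
            "parameters": {
                "type": "object",
                "properties": {"account": {"type": "string"}},
                "required": ["account"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "How many unread emails do I have?"}],
    tools=tools,
)

# The model only selects the tool and its arguments; your code executes it.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```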
That was one way, through internal hackathons, but there were also external hackathons. I remember in SF, when Llama 3 first came out, there was a fine-tuning hackathon. So I went; at the beginning they tell you what fine-tuning is and how to use it, which is great. Then there are a bunch of startups there that are building fine-tuning platforms, and they give you free credits to go fine-tune. I remember building on top of Replicate and fine-tuning Llama to turn natural language into Coda formulas, which are our equivalent of Excel formulas.
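Much of a project like that is shaping training pairs; the sketch below shows only that data-preparation step, since the exact upload and training calls depend on the platform (Replicate, in her case). The example formulas are made up, not real Coda formulas.

```python
# Sketch: prepare JSONL training pairs for a "natural language -> formula"
# fine-tune. The exact schema and upload step depend on the provider's docs.
import json

examples = [
    {"prompt": "Sum the Amount column for rows where Status is Done",
     "completion": 'Sum(Filter(Table, Status = "Done").Amount)'},
    {"prompt": "Count tasks assigned to the current user",
     "completion": "Count(Filter(Tasks, Assignee = User()))"},
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```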
So learning by doing, to me, was the most effective way when things are changing so quickly. And even though hackathons were the most effective, reading blogs, some papers, and Twitter to see what other people are doing did help too. There are a lot of open-source companies; I remember back in the day LangChain had lovely documentation on how to do RAG when it was first getting popularized. So reading what other people are doing, even though it's informally written, not a textbook, not a course, has been very informative as well.

Nice. Well, I guess this is so new, you just need to figure out what works for you, try a bunch of stuff, and see what sticks. And it also changes, right? Whatever works now might not be as efficient later.

Totally. Yeah. And there are books coming out. I remember you interviewed Chip, and she has a lovely book on how to build as an AI engineer.

Yeah, she actually captured a lot of the things that are not really changing anymore. So that's also changing, and I think we'll now see courses come out. Andrej Karpathy is doing some really in-depth courses, if you have the time, which honestly doesn't sound like a bad time investment.

Yeah, exactly, with Zero to Hero.
to hero. Yeah. So at at KOD what was your favorite project uh that that you built uh using AI tools or your favorite AI product? A project that's very close
AI product? A project that's very close to my heart from KOD is Workspace Q&A.
So maybe to set some context at KOD a very common customer complaint was that I have so many documents with my internal knowhow of compment need documentation but it's hard to find that
document when I need it and about in November 2023 RAG was getting popularized retrieval augmented generation and it struck our team that we actually had all the tools in place
to build a chatbot that would solve this problem. First, we had a team that had
problem. First, we had a team that had just redone our search index and they put a lot of hard work into redoing that search index. Second, we had the
search index. Second, we had the infrastructure in place to call LM tools reliably. And third, we had a chatbot
reliably. And third, we had a chatbot that allowed you to in your KOD doc chat with an LLM. Yeah. With those three things, I was able to just glue them
together in a couple days and build a version one of a chatbot that lets users ask questions about the content of their workspace. Oh, nice. So I put that you
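The "glue" here is the standard RAG shape: retrieve relevant passages from an existing search index, then have the model answer using only those passages. A minimal sketch, assuming the OpenAI Python SDK; `search_index.query` is a hypothetical stand-in for Coda's internal search, not a real API.

```python
# Minimal RAG glue (sketch): reuse an existing search index for retrieval,
# then answer from the retrieved passages. `search_index` is hypothetical.
from openai import OpenAI

client = OpenAI()

def answer_from_workspace(question: str, search_index) -> str:
    passages = search_index.query(question, top_k=5)  # hypothetical retrieval call
    context = "\n\n".join(p.text for p in passages)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the workspace passages provided."},
            {"role": "user",
             "content": f"Passages:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```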
Oh, nice.

So I posted that on Slack with a Loom, and to my surprise our CEO, Shishir, started taking an interest and responding to the thread. He saw a grander vision, where Coda could create an enterprise search tool: not just searching documents but all the third-party integrations, of which Coda had a lot. So ideally a sales team should be able to come in and ask, what's my projected ARR for an account, and it pulls that in from your Salesforce integration and answers the question for you. So that was exciting. And he basically tasked a couple of us with experimentally proving out that Coda could build this, in four weeks.

Oh, nice. A good challenge.

It was, yeah, a good, daunting challenge. It was me, my manager, the CTO, a designer, and a PM. It was short deadlines and high stakes, because this was going to be demoed to important people. So it was very much all hands on deck. On one day's notice we flew to New York to hack together, and it was nights, weekends, whatever it took to make this work. It was a very exciting time, and there was a lot of blood, sweat, and tears behind it. But the TLDR is that it went very well, and it became the birth of a second product for Coda, called Coda Brain. From January to June of 2024 we had a much larger initiative, where 20 people were now working on it, taking the version that small team had built and making it into something more robust, which is a very hard challenge in and of itself. And the cherry on top was that Coda Brain was presented at Snowflake Dev Day in the keynote. So it was just a very exciting time, being part of it from day one and then the world getting to see it at a large scale.
Yeah. I'm just taking notes on how amazing this is. You joined Coda as a new grad with no experience in AI engineering, and frankly you had fewer years of experience than a lot of the experienced software engineers there. But from the first day you kept track of the industry, you saw this exciting thing coming out, ChatGPT, you tried it out, and you were convinced it was going to be interesting and fun. You asked your manager to join when Coda started a team, they said no, and you just went and learned, and in a matter of a few months you probably leapfrogged a lot of the people who were just waiting, or not being as active as you were. You got onto this team as an early engineer, and a year later, when 20 people were working on this at Coda, you were still one of the earlier ones. It just shows how what you were saying, not waiting for permission, really pays off. You can just do things, you can learn things, and especially for an innovative technology like AI, and whatever we see next, it's actually valuable. A company will value this kind of thing because it helps them; they desperately need people like you were in this case, or other folks doing similar things.

What is really cool is that it's so new. It definitely levels the playing field between all sorts of seniorities, because nobody knows what the right way is, and we're all just figuring it out together. And that's what makes it definitely more exciting.
Yeah. And I feel there are two things here. If you're someone who already has some experience, whether that's one year or ten years or twenty years, that experience will eventually be applicable: once you understand how this works, you can take that past experience and see how it applies. And if you don't have experience, that's actually not a bad thing, because you're coming in with a fresh mind and you won't have some of those biases. For example, a lot of software engineers with ten-plus years of experience who have built production systems will know that unit testing and automated testing are super efficient and a very good way to do things. With AI systems, that's not necessarily the case, because they're nondeterministic, and for large-scale systems things like monitoring or output checking might be a better way. I'm not sure which one it is, but not having that bias could actually speed you up. So either way, there doesn't seem to be any downside in just figuring it out and mastering this tool, because it is a tool at the end of the day.

Yeah. It's a new tool; it's honestly a magical superpower, because it unlocks so many things that you can build on top of it.

Yeah, but it feels a bit like the Harry Potter wand. When you watch the movies, at first it seems magical, and when you read the books, you can do all these spells; but if you're a hardcore Harry Potter fan, you will know that there are only certain spells you can do, and there's a certain thing you need to say. So there's a whole mechanic around it. And it's like that in every fantasy book: when there's a magical world, there are rules, and there are people who can master those rules. I feel it's a bit the same with this, right? At first it's magic, but actually it has rules, and once you learn them, you can be this sorcerer who can

Yeah, exactly.
This episode is brought to you by cortex.io. Still tracking your services and production readiness in a spreadsheet? Real microservices named after TV show characters? You aren't alone. Being woken up at 3 a.m. for an incident and trying to figure out who owns what service, that's no fun. Cortex is the internal developer portal that solves service ownership and accelerates the path to engineering excellence. Within minutes, determine who owns each service with Cortex's AI service ownership model, even across thousands of repositories. Clear ownership means faster migrations, quicker resolutions to critical issues like Log4j, and fewer ad hoc pings during incidents. Cortex is trusted by leading engineering organizations like Affirm, TripAdvisor, Grammarly, and SoFi. Solve service ownership and unlock your team's full potential with Cortex. Visit cortex.io/pragmatic to learn more. That is cortex.io/pragmatic.
So then you had a really great run at Coda, and then you decided to look around the market, and you blogged about this. You interviewed at 46 companies. Did I get that right?

Yes, but there is context behind that.

I'd love to understand how you went about interviewing, especially specifically for an AI position. What did you learn about what the market is like, what interviewing is like, what the whole scene is? And if you can, give a little context on where you did this, location-wise, and the types of companies, just to help us all understand it.
Sure. Maybe just to give a little bit of context: it was over a six-month period, and for the first half I wasn't closing them; I was getting nos while I got ramped up on my LeetCode and system design prep. After that, the interview process still took longer than I expected, because the AI space is especially noisy right now, and when I was trying to do my due diligence, like we were talking about earlier, there were often open questions that made me feel uneasy about the growth potential. The advice I got from a mentor was: if it's not a heck yes, and if you have savings, don't join; it's not fair to the company or to you. So that's how I thought about it. In terms of the space, it was clear that there are the product companies, the infrastructure companies, and the model companies. I found it helpful to put companies in each category and figure out which segment you're most excited about, to help narrow down the options, given that there are so many AI companies right now.
Could you give an example of each, especially the infrastructure and model ones? I'm interested in how you're thinking about that.

Yeah. Product companies are the companies building on top of the model; here I think of Cursor, Codeium, Hebbia. Infrastructure companies are the companies building the tools to help AI product companies effectively use LLMs. There's a whole suite of these: the inference providers like Modal, Fireworks, Together; vector database companies like Pinecone, ChromaDB, Weaviate; eval and observability tools like Braintrust, Arize, Galileo; and a whole other suite of products. And then there are the model companies, which are the base of the ecosystem, building the intelligence. You have the big tech companies like Google and Meta building models, and then you also have other, smaller companies like OpenAI and Anthropic building models as well.

I think it's a really good way to think about it, and I don't think many of us have verbalized it like this, which also goes back to not many people having gone through this.

I will say this is not something I came up with myself. Yash Kumar, a mentor, pointed out that you should look at the space like this, and that's how I think about it now.

Wonderful. And what did you learn about each of these categories, in terms of the interview process and what the vibe was like, generically, and also how you personally felt about it? Because, as I understand it, Coda, where you were, we can put in the product company category.

So for me, in trying to be more focused in my search, I decided to focus on model and infrastructure companies, because I wanted to keep getting breadth in what I was doing, and I felt the product companies were too similar to my experience at Coda, which was phenomenal, but I wanted to keep growing. The trade-off was definitely that it was a bit more of an uphill battle, because the work I had done was not as relevant to model or infrastructure companies.
In terms of the vibe, I think all of them are shipping really fast, have really lean teams, and are out to win. So it's a very exciting time to be looking at them. A question I would ask myself when trying to figure out whether a company is viable in the long run, on the infrastructure side, was: are their margins high enough, given that so many of these inference providers are also paying for expensive GPUs? What are the margins here, especially when a good software business should have about 70% gross margins? And how easy is it to build this infrastructure in-house? We know this, but engineers are a hard group of people to sell to, because if it's not complex enough, if it's too expensive, or if it doesn't work exactly how they want, engineers will just build it in-house. Google is a phenomenal example that has built so much in-house. So that's how I was thinking about the infrastructure companies.
In terms of the model companies, I was trying to get a sense of whether, if they're training frontier models, they can afford to keep training them, given how expensive it is. Are they staying ahead of the open-source competition? Because if open weights exist for a model, no one's going to want to pay a premium to get that model from a closed-source provider.

It's a sad reality.

It is. And I think it's interesting, because today product companies are still willing to pay a premium for the best model, even though an open-weights alternative exists, as long as the closed-source provider is ahead.

Yes.
And anyone who's not nodding along now will find themselves, when evaluating an offer or a company, trying to understand the margins, and that's a hard one to do, especially as an engineer. Where did you get data? Did companies answer some of your questions on the unit economics? These are things that companies like to keep under wraps. Even someone covering these companies, even publications, including financial publications interested in the space, will just kind of wave their hands, because it is hard. This is the big question, and these companies want to hide these things from the casual observer, for sure.

Exactly. I think it's totally fair for a company not to share this information until you get an offer, because it is sensitive information. I do think that once you have an offer, it would be irresponsible for them not to tell you, because you're an investor as well, and you sign an NDA, so you keep it to yourself. So I do think they should tell you. As for questions, for companies in the inference space, I would just ask: how much money do you spend on GPUs, and how much revenue do you have, to do rough back-of-the-envelope math on what those margins look like and get some sense of the business.
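The back-of-the-envelope math is as simple as it sounds; with made-up numbers (not figures from the conversation), it looks like this:

```python
# Rough gross-margin estimate for an inference provider, with made-up numbers.
# A healthy software business is often quoted at roughly 70%+ gross margin.
annual_revenue = 50_000_000               # hypothetical
annual_gpu_and_serving_cost = 35_000_000  # hypothetical

gross_margin = (annual_revenue - annual_gpu_and_serving_cost) / annual_revenue
print(f"Gross margin: {gross_margin:.0%}")  # 30% here, well below the ~70% benchmark
```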
I also found it helpful to read news providers like The Information, which does very good diligence on the economics behind different startups in the AI space. And if I could, I would also try to ask investors who have invested in these companies, or passed on investing in them, because they see the deck come to them, so they have a lot more insight into what the business really looks like.

You're talking like an investor, or like a senior executive would, which I love. I think more people should be doing this, by the way, and not enough people are, so it's very refreshing to hear. And the investor angle is interesting, because in my experience, when you're applying to a company an investor has backed, they actually want to help close great people.

Exactly.

They will happily connect you, and then you also have a connection where, a few years down the road, that investor might reach out to you saying, oh, I remember you, you're a business-minded engineer. It's hard to tell what the future holds, I think we were talking about this before, but there will only be more demand for software engineers who not only know how to code but are curious about the business, can communicate with users, and so on. So you'll now have a stronger network; there's only upside in doing your due diligence, and it can actually help your career.

That's true. And I 100% agree with investors being very generous with their time, wanting to chat with you and explain how the business works. That's been something that's been really fun to do, for sure.
And then, just going back: this is all great once you get an offer, but how did you get to an offer? What did you need to brush up on in terms of interviews? Was it the pretty typical tech interviews, even though these were AI engineering roles, the LeetCode, the system design, or were there some AI-specific things? What helped you go from initially stumbling and not getting much, to actually getting offers?

In terms of the interview process, I definitely thought it was all over the place, as the market is trying to move away from LeetCode but still asks LeetCode, so you end up having to study LeetCode as well, unless you know exactly where you're applying. So there were coding interviews, system design, and then projects. Coding was a lot of data structures and algorithms, where the best way to prepare is LeetCode. Luckily NeetCode, with an N, now exists, and he has phenomenal videos, so that was great. I believe in spaced repetition, doing those questions a lot of times. Then there were front-end questions, because I'm a full-stack engineer as well, and I found this resource, GreatFrontEnd, that had lovely interview questions for the obscure JavaScript questions they sometimes ask.
On the back end, I relied more on things that I had done at work for those interviews. That's the coding part. For the system design part, I thought Alex Xu's two system design books were phenomenal: just reading those, really understanding them, doing them again and again until you understand why things work a certain way. Honestly, I love system design interviews; they're really fun, because you learn things outside of the domain that you're in as well. And then there's the third type of interview, the project interview, where you go build something in a day. Those are probably my favorite of all of them, because you get to show how passionate you are about that specific product, and you can actually show what you can do. I do hope that as an industry we move away from LeetCode and instead move to project interviews, reading code, which has become way more important today, as well as debugging code. But I think we're in an interim period where, as an industry, we haven't fully formed an opinion here.
most of these interviews, was it the end of last year, so end of 2024 or so? And were they remote, or were some in person already? These were around June of last year, and a large chunk were remote, but there were definitely interviews in person as well, which I enjoyed, because I was very much optimizing for companies that are in person. Yeah, we'll see. But I
think we're sensing a trend, or I'm sensing a trend, that in-person interviews might be starting to come back, at least for the final rounds. Which, by the way, might not be a bad thing. It's interesting, because before COVID, when I spent most of my career, it was just in person, and there are so many upsides, right? You do meet the people, you do see the location, and oftentimes you meet your future teammates. For example, once in London I had two offers from two banks. In one case I met my future team, the whole team, and in the other I didn't; they just said, you will be assigned a team. And I actually chose the lower salary, because I really liked the people. We just hit it off; it felt like a good connection. And back then I went through a recruiter, so the recruiter negotiated the same salary for me, which was kind of a win, I guess. So I know we'll hear people mourning the end of, or at least fewer, remote interviews, but there are all these upsides, and when you're committing to a place for hopefully many years, you want to have all that information. 100%.
Definitely I think it's energizing on both ends for sure. It's a great point.
And so, in the end, you joined OpenAI, right? Yes, I did. Congratulations. Thank you. And then, can you share what kind of general work you do at OpenAI? Sure.
So I work on safety as an engineer at OpenAI, and OpenAI's goal and mission is to build AGI that benefits all of humanity. On safety, we focus on the suffix of that statement: benefiting all of humanity. Some things I work on are small, low-latency classifiers that detect when the model or users are doing things that are harmful, so that you can block live. That means the training, the data flywheel, and hosting these models at scale. The second thing I get to work on is measuring when the models are being harmful in the wild. There are a lot of dual-use cases here, but we're really trying to get a sense, as these models become more capable and people figure out different ways to jailbreak and exploit them, of what those unknown harms are with more powerful models, and then distilling that into small classifiers. There are also a lot of safety mitigation services my team owns, and part of our work is to integrate them with all the different product launches. As you know, there are a lot of different product launches, which definitely keeps our team busy. And that's just the tip of the iceberg.
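For a concrete flavor of what a small pre-response safety check can look like, here is a minimal sketch that uses OpenAI's public moderation endpoint as a stand-in. The internal, low-latency classifiers described above are not public, so the helper name and the flow here are purely illustrative:

```python
# Illustrative only: gate a drafted reply behind a moderation classifier
# before returning it, using OpenAI's public moderation endpoint.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def safe_to_return(text: str) -> bool:
    """Return False if the moderation model flags the text as harmful."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return not result.results[0].flagged

draft_reply = "..."  # whatever the main model produced
if safe_to_return(draft_reply):
    print(draft_reply)
else:
    print("Sorry, I can't help with that.")  # block before it reaches the user
```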
There are a lot more things that we work on in safety. This sounds very interesting, because when I worked on payments back at Uber, we had a team called fraud, and oh boy, they had so many stories. Just talking with them, you would think payments is pretty simple, you just need to pay, but the edge cases are always interesting in every area. And the same thing, I guess, with LLMs. They're not as simple, but once you realize how they work, next-token prediction, it sounds pretty simple, and then there are the edge cases and all the things that could go wrong. It sounds like you're in the middle of that, with a very good vantage point on the details. You've now worked at Coda.
You've interned at Google and Microsoft, and you talk with mentors about what other places are like. What are things that you feel are distinctly different about OpenAI compared to other companies? I think what makes OpenAI unique is the mix of speed and building for scale. At startups you get that speed of iteration, and it's so fun; at bigger places you get to build for scale. But OpenAI is in a very unique spot where you have both. At the moment, things move really fast and you have huge amounts of users. The service I work on gets 60k requests per second, and normally you get one or the other, so it's really fun to get both.
The second thing that I think is quite unique for a company of this size is the open culture. People are very open to answering questions about why and how things work a certain way. So it's a great place to learn, which is something I didn't realize from the outside. And then third, people are just very passionate about the mission and work really hard. I don't think this is unique to OpenAI in and of itself; at all companies where great work is happening, people are like this. But it's just never a boring day in the office, because people care so much and are constantly shipping. Yeah. And then,
talking about shipping: I'm assuming you've shipped some things to production already, but how can we imagine a project or an idea making it into production? There are very bureaucratic companies, I don't want to say old Microsoft, maybe not today, where there's a very strict planning process, then Jira tickets are created by the PM, the engineers have to pick them up, and then someone else might actually deploy it. That's the super old-school, slow way, and the reason some engineers don't like it. What is it like for you? You mentioned it's fast, but what was your experience in getting things from idea to production? Is it multiple teams? Can one person actually do it? Is it even allowed? I don't know. I think it's very much allowed and very much encouraged.
There have been publications on how Deep Research came to be, where it was an engineer hacking on something, presenting it to the larger C-suite, and it became a full, very impactful product. So it's definitely encouraged, which I love. I too have had a similar experience, and it's very encouraged to come with ideas and actually drive them forward.
Just strictly from your perspective, what do you think is the one thing that stands out, that OpenAI can actually still ship so fast? Because it defies a little bit the laws of the growing organization, which eventually slows down. At one point I'm sure it will, but there are no signs of this happening so far. My observation is that the systems are built to enable you to ship fast, and they give engineers a lot of trust, even though that comes with the downside of sometimes leading to outages. To put this very concretely: when I joined, you could make Statsig changes without an approval, so you had the trust to go in and flip a flag to turn something on. That's no longer the case; you need one reviewer. The service I get to work on has 60,000 requests per second, but you get to deploy with one review, immediately. So my observation is that there is truly trust put in engineers to work quickly, without a lot of red tape around shipping fast. Yeah. And I think
this just goes with the unstated expectation that expectations will be very high of the people who come in, because you cannot hire an engineer who is used to only doing a small part, not used to thinking about the product and the business impact and all those things. So I have a sense that what you're doing might be kind of a given for you, but across the industry it might become more common to expect this of engineers. We used to call it wearing more hats, but now it's just how it is: you're merging a little bit of a PM, a data scientist, and an engineer all in one. And these are the type of people who can actually make something like OpenAI, or similar companies, work so well with this many people.
Yeah. And I just think that with intelligence today, the roles between data science, engineering, back end, front end, and PM blur so much that each individual, whether you're at OpenAI or not, is expected to do more of that, because you can get help from a very capable model. And I think that makes it very exciting for us, because it means we can truly be full-stack engineers and go from an idea to launch very quickly. Absolutely. So what are some things that you've learned about AI engineering, the realities of it, given it's such a new field? What are some surprising things that you didn't quite expect?
One thing that I've learned, which I didn't realize coming in, was how much of AI engineering is about building solutions to known limitations of the model, and then, as the model gets better, you scrap that work and build new guardrails. Let me give an example from Coda. In the pre-function-calling days, we wanted a way to get our model to take action based on what the user said. Function calling didn't exist. So we prompted the model to return JSON, parsed that, and deterministically called an action based on that JSON blob. Then OpenAI released function calling. Okay, scrap that and integrate with function calling instead. But back in those days, function calling was not very reliable. And now, today, we've moved from function calling to the MCP paradigm. So things are changing very quickly, and the models are getting better, but they're still not perfect.
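As a rough sketch of the pre-function-calling pattern described here, prompting for JSON, parsing it, and dispatching deterministically, this is what it can look like with the OpenAI Python client. The action names and model choice are assumptions for illustration, not details from the conversation:

```python
# Pre-function-calling pattern: ask the model for a JSON blob, parse it,
# and deterministically call the matching action.
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical actions the product could take.
ACTIONS = {
    "create_task": lambda args: print(f"Creating task: {args['title']}"),
    "send_reminder": lambda args: print(f"Reminding: {args['who']}"),
}

SYSTEM_PROMPT = (
    "Decide which action to take for the user's request. "
    'Reply with JSON only, e.g. {"action": "create_task", "args": {"title": "..."}}. '
    f"Valid actions: {list(ACTIONS)}."
)

def handle(user_message: str) -> None:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    blob = json.loads(response.choices[0].message.content)  # fragile: hope it is valid JSON
    ACTIONS[blob["action"]](blob["args"])  # deterministic dispatch

handle("Remind Priya about the launch review tomorrow.")
```

With native function calling (the `tools` parameter) and now MCP, the model returns a structured tool call instead, so the fragile JSON-parsing step above is exactly the kind of guardrail that gets scrapped and rebuilt.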
The moment you get more capability, there are more engineering guardrails you need to build to make sure it works reliably at scale. Yeah, and I guess you need to become comfortable with throwing away your work when the model gets there. You just need to not be as attached to it, because I think there's a little bit of this, especially when you're used to things not changing as much in software engineering. It's not a waste, it's a learning. Yeah. And it's just become easier, or cheaper, to produce code, so you see this expansion-and-collapse phase happen a lot, where you build a lot of features, see what works, then collapse to what works and restart. There's a lot of scrapping your work as the creation of code becomes cheaper. It's also easier not to be attached when an LLM helped generate that code. Yeah, I think this will be a big change, a good change, once we get used to it. Yes, exactly. Now,
when it comes to AI and junior engineers, you're such an interesting example, in the sense that you started your career a little bit before AI took off, but you also transitioned without decades of experience just yet. What is your take on how GenAI will impact new grads, people who are still in college? Because there are two takes, and they're both very extreme. One is that engineers with 10-plus years of experience often feel, I feel so sorry for these people, they're not going to get jobs, and even if they get jobs they're going to be dependent on AI, they're not going to read the books, they won't know what it was like back in our day, right? So there's that. And then some people are generally worried that you can now outsource so many things to AI; they're thinking, okay, maybe these new grads can pick up things really quickly, but maybe they're never going to get to that depth. Now, I think both are extreme. I'd love to hear how you see it, because you're seeing this firsthand. Definitely. And you're
right, from my experience level I get insight into what both of those engineering lives are like. And currently, I'm not convinced that AI is going to be disproportionately worse for junior engineers. In fact, I think it allows everyone to move higher up the stack and be more creative in what you're building. You empower younger engineers to just do more, propose ideas, and actually ship. I do subscribe to the take that there will be people who use AI to learn and people who use AI to avoid learning. I would say there's actually room for both things to exist, and you should be doing both. I personally think that when you're working on a greenfield project, trying to prove a vision that something should exist, why not skip the learning, vibe-code it to actually get a real product you can validate, and then go build it for real as a new product line? But I don't think you should skip the learning when you're trying to build a robust system that you are the owner of, because when stuff hits the fan and you're in a sev, AI doesn't help that much: it doesn't work very well in between a high-level systems view and, you know, reading logs. So when you own the code, I think you should use AI to learn, to understand all the edge cases of why things work a certain way. So I think there's room for both. It's going to be an important skill for us to learn when we should outsource the doing versus when we should use AI to make ourselves better and stronger engineers. Yeah. And I guess there's probably not too much harm in, if you don't understand something, spending some time to understand it, and AI will typically help you do that faster.
So I'm not sure if this has to do with personality or curiosity, but we've seen this before, by the way. Say 10 years ago, when I was maybe a mid-level engineer, I saw new grads join the workforce, and we were using higher-level languages like JavaScript or Python or TypeScript. Or take the example of a few years ago: new grad engineers start with React, and when you start with React, you just pick up JavaScript and TypeScript along the way. A lot of people who haven't studied computer science and didn't do assembly or C++ or those kinds of things can just learn React, stay there, and figure out how to use it. But the better developers have always asked: why does it work like this? What happens underneath? What is a virtual DOM? How can I manipulate it? And they look at the source code. I feel there have always been people who do this, and they're just better engineers eventually: they can debug faster, they ask why, even if it's slower at first. So I think in this new world we'll just have the same thing, and I don't think this trait will die out. In fact, to me you're great proof of it: you go deep, you understand how things work, and then you decide, okay, I'm going to use it to my advantage, right now I just want to go fast because I know what I'm doing already. Yes, I do think that's spot on, and that we've had this in the past. It will become even easier to outsource that intelligence and, in some sense, be lazy. So I think we'll have to be more intentional as engineers to make sure we're actually going deep in the cases where it really matters.
And from what you're seeing, because you've seen the time before GenAI tools, you've been a product engineer working with AI, and you're now working at a model company: how do you think these tools will change the software engineering you've been doing? And how is it already changing your day-to-day work? In terms of what doesn't change, we still have code as the way innovation manifests. You go from idea to code and iterate; that's the same. As engineers, you still need to know how high-level systems work and design them very well. You have to debug code really well, and you have to be really good at reading code. So to me, that all stays the same.
What's changed, I think, is the division of responsibilities between PM, designer, and software engineer. I was talking to a friend at Decagon; I think they were telling me they're 100 people and still don't have a designer, because product is just expected to do the design as well. As a software engineer, this has always been true at startups, but now more than ever, you're expected to do product work as well. We talked about this earlier. What has also changed is that software engineers become more full stack. You don't outsource work to another adjacent role like data engineering; you're expected to build those data pipelines yourself.
I also think what's changed is that we need to be better at articulating our software engineering architectures and thoughts, because you're expected to prompt models to do this. The engineers who will be most efficient are the ones who can see the big picture, write a great prompt that also catches the edge cases, and then have the model implement it. It's like the best engineering managers, who are able to zoom in and zoom out really well: being able to zoom out to prompt what you need done, but then zoom in when actually reading that code and catch potential bugs, instead of just relying on the LLM to be 100% right in all cases. Because there will be unique edge cases in the system you're building that the LLM is not aware of, and you need to be able to catch those when you're reading the code.
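As a rough illustration of what such a prompt can look like in practice, here is a hypothetical example; the service, file paths, and constraints are all made up for the sketch, not taken from the conversation:

```python
# Hypothetical "zoom out" prompt: it pins down where the code lives, what to
# reuse, and the edge cases up front, so the "zoom in" review is about details
# rather than missing requirements.
PROMPT = """
Add a rate limiter to our FastAPI service.

Architecture constraints:
- Implement it as middleware in app/middleware/rate_limit.py.
- Reuse the existing Redis client from app/clients/redis.py; do not open a new connection pool.
- Limits are per API key and come from the existing Settings object.

Edge cases to handle explicitly:
- Requests with no API key header: return 401, not 429.
- Redis unavailable: fail open and log a warning; never block traffic.
- Clock skew between instances: use Redis server time, not local time.

Return only the new middleware module plus the one-line registration change in app/main.py.
"""
```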
Yeah, I feel like if you have a mental model, and I see this so much when I'm using these tools, when I'm either vibe coding or prompting, when I know what I want to do, when it's in my head, either because I know my code base or because I just sat down, thought it through, and drew it out, I'm so fast, I'm great. And I can switch between modes: I might use an agentic mode to generate it, maybe I like it, maybe I don't, then I just do it by hand. It doesn't matter; I get there, because I know where I'm going. But when I don't, I did this once, I tried to vibe-code a game and I failed, not because of the tool, but because I just didn't know what I wanted to do. When your prompt doesn't say, do this, then I'm not giving it guidance. Yeah, definitely, it wasn't the fault of the tool. I didn't know what I expected; how would this thing know? It's nondeterministic, but you need to give it some direction. Exactly. For sure. And I
also think, on that point, that it's great when you're doing zero-to-one, greenfield work, but today, and I think this will change, it's not the best at working in large code bases. And as engineers, you're always working in large code bases when you're building things for prod. So that part of our job hasn't changed: being able to find the right place the code should go, use the right modules that exist, and piece it together in a larger codebase when you're adding a feature. Yeah. Yeah. And
also just the simple stuff we take for granted, like setting up the tools to run the tests, knowing how to deploy, how to control the feature flags, how to safely put something out so it doesn't go to prod if you want to A/B test. When you onboard, this is pretty much a given, but if you work at a place with microservices or whatever, there are so many other things. But I love how you summarized what will not change, because I think that's really important. And I love how you brought up software architecture. I've been thinking about this recently.
In fact, I've started to read some really old software architecture books, because there are some ideas that I'm not sure will change. I want to test this theory, but some of it might not change as much. What software architecture books are you reading at the moment? I've been going through The Mythical Man-Month; I've almost finished it, and that's a really old one. And then I have one from the 90s called Software Architecture, by Mary Shaw and David Garlan. Grady Booch, who's a legend in software engineering and whom I interviewed, said he thinks it's the single best software book. It's very thin, and it's from 1996, I think, which is 30 years ago. I've just started to read it, and I'm interested in which things might not have changed. Clearly some things will be dated, right? They're talking about CORBA, this old distributed-object middleware that we don't use anymore. But for some of the other things, there's a lot of reflection comparing software with civil engineering. And this book was written when there was no real software architecture, so they tried to define it, which makes me think there might be some interesting ideas there. So I'm interested in what has not changed, for the most part.
Yes, that's a very nice approach: actually looking at history to see what hasn't changed from then to now, and then extending that line to what won't change in the future. I'd be very curious to see what you learn from that.
Well, and also reinventing, right? Because I feel we'll have to reinvent some parts of the stack, and I think it's important to understand that. Also, for the past 10 years or so we've not talked too much about software architecture, so maybe there's a little bit to learn from other people's ideas. So, to wrap up, how about we finish with some rapid questions? I'll just ask a question. Okay, let's go for it.
So first of all, what is your AI stack for coding? Cursor for hackathons; Deep Research to get a sense of what libraries already exist; ChatGPT is my default search and also my tutor; and some internal tools to quickly do RAG over company documentation when I'm trying to find something.
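As a loose sketch of what "RAG over company documentation" can look like in miniature (this is not their internal tool; the docs, helper names, and model choices are assumptions), using OpenAI's embeddings and chat endpoints:

```python
# Minimal retrieval-augmented lookup over a handful of (hypothetical) docs:
# embed the docs once, embed the question, pick the closest doc, and let the
# model answer with that doc as context.
from openai import OpenAI

client = OpenAI()

docs = {
    "deploys.md": "Deploys require one reviewer and go out via the deploy CLI...",
    "flags.md": "Feature flags are flipped in the flag console and default to off...",
}

def embed(text: str) -> list[float]:
    return client.embeddings.create(
        model="text-embedding-3-small", input=text
    ).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

doc_vectors = {name: embed(body) for name, body in docs.items()}

def ask(question: str) -> str:
    query_vector = embed(question)
    best = max(doc_vectors, key=lambda name: cosine(query_vector, doc_vectors[name]))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using this doc:\n{docs[best]}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How many reviewers does a deploy need?"))
```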
So what is a book that you would recommend, and why? A book I'd recommend is The Almanack of Naval Ravikant. I've read it a couple of times. I'm a big fan of the way he talks about building your life, both in a very pragmatic way, in terms of how you should approach your career, but also in terms of just how to be happy. And what is a piece of advice that made a big difference in your professional career? Don't wait for someone to give you the opportunity to work on something; go work on it. Love it. So, Janvi, this was so nice, to have you on the show. Thank you so much for
even having me in the first place. Thanks very much to Janvi for this conversation. To me, talking with her was a great reminder of how, in a new field like GenAI, years of experience might be less relevant than teaching yourself how to use these new technologies, like Janvi has done. It's also a good reminder that it's never too late to get started. Janvi thought she was late in 2022 because she was five years behind every AI researcher who had been using transformers since the architecture was released in 2017. And yet, Janvi is now working at OpenAI, the company that has arguably made the most of transformers and LLMs. For more in-depth deep dives on how OpenAI works, coming from the OpenAI
team, and practical guides on AI engineering, check out The Pragmatic Engineer deep dives, which are linked in the show notes below. If you enjoyed this podcast, please do subscribe on your favorite podcast platform and on YouTube. A special thank you if you leave a review, which greatly helps the podcast. Thanks, and see you in the next one.