Stop Applying to AI PM Jobs Until You Watch This
By Aakash Gupta
Summary
Topics Covered
- 80% of AI PM Jobs Are Just Traditional PMs Adding AI Features
- AI Products Require Thinking in Quality Distributions
- 19 Out of 20 Teams Pick the Wrong AI Problems
- RAG Before Fine-tuning: A Practical Hierarchy Most Skip
- Products Not Projects: The AIPM Career Roadmap
Full Transcript
Stop applying to AI PM jobs until you have a true understanding of the AI fundamentals.
AI product manager is BS. Is it actually real or is it hype?
When I look at the industry landscape for AI PMs, there's a few critical distinctions that many people miss.
This is Joti Nucla. She has been an AI PM at Meta, Amazon, and Netflix. She's one of the world's most experienced and senior AI PMs.
Director of AI PM at Netflix feels like a dream job. So why did you leave Netflix?
In a career-changing way, with a lot of opportunity out there and with AI jobs increasing, I wanted to take the time to go full-time into this.
What is the roadmap to becoming an AI PM?
The first is understanding [music] the difference between what an AI PM does versus what a regular PM does, and the second would be...
Today's episode is a master class in AI product management. If there is only one video you are going to watch, this is it.
Do I need to learn these technical concepts like RAG and fine-tuning to become an AI PM?
I'm going to teach you everything that you need to become an AI PM.
So, with all these rolling layoffs, is it really good to work at companies like Meta and Amazon?
Before we go any further, do me a favor and check that you are subscribed on YouTube and following on Apple and Spotify podcasts. And if you want to get access to amazing AI tools, check out my bundle, where if you become an annual subscriber to my newsletter, you get a full year free of the paid plans of Mobbin, Arize, Relay.app, Dovetail, Linear, Magic Patterns, Deep Sky, Reforge Build, Descript, and Speechify. So, be sure to check that out at bundle.ac.com. And now into today's episode.
Joti, welcome to the podcast.
Thank you. I'm so excited to be here.
So, I want to start with the hard questions. Okay. You know, you had this title AI product manager, but I keep hearing that AI product manager is BS. Is it actually real or is it hype?
Yeah. So, let me give you a data-driven answer, because I've been on both sides of this, hiring AI PMs at Meta, Netflix, and Etsy, and now talking to dozens of companies about their AI strategies. So, when I look at the industry landscape for AI PMs, there's a few critical distinctions that many people miss. And the way I would put it, the kinds of roles that exist are twofold.
One is a traditional PM with AI features added on. So this is probably the 80% of what's labeled as AI PM jobs out there right now. And these are where PMs are leveraging LLM capabilities, and probably they're adding these AI features to existing products. So think of it like a chatbot that you're adding to your customer service portal, or you're adding some AI summarization to your document. Now, the core product existed even before you added or bolted an LLM onto it. So that's the traditional PM with AI features.
The other type is the AI native PMs. And the way I think about that is, this is probably a new category of PM roles that are opening up. And I would say about 20% of the roles out there are your AI native PM roles now. And here the product is AI. It's not a feature. It's not something that you just bolt onto the product. So think of things like your ChatGPT or your GitHub Copilot or your Claude, your Cursor, your Perplexity.
Yep.
The key characteristic would be that the product is fundamentally probabilistic. And so the value proposition is literally impossible without AI. You can't build your ChatGPT without an LLM. So AI here is not just enhancing the product, it is the product.
Okay. So two different types, 80% in traditional products, 20% in AI native. So there's basically 4x more open roles for you in those traditional companies. And we heard some of the companies you worked for, those products existed before AI, but you were working on AI within them. So if somebody wants to become an AI PM, what is the roadmap to becoming an AI PM?
Yeah. And before I jump into the roadmap, I do want to talk about what types of AI PM roles exist along the stack. So at the top, I call these the application PMs. Now here the PMs own the end-to-end user experience. They're thinking about how the users interact with AI, how do you build trust, how do you make AI reliable enough for everyday use. They need to understand the AI-human interface and interaction patterns. This is probably the easiest path for someone who wants to convert from a traditional PM role to an AI PM role, because it encompasses a lot of the existing product management skills along with AI knowledge. So this is the easiest to get into.
The second is the platform PM. Now here is where the PMs are building tools that other teams, who are probably building application products, are using. So think of developer platforms or model orchestration systems or evaluation frameworks or observability tools. Here the PM would need to understand both the technical infrastructure and the developer experience. So here maybe you're not building straight up for end users, you're building for other builders. And the last is the infra PMs, where these PMs are building the foundational systems that power all of these AI products, like, say, vector databases or GPU orchestration or optimizing kernel-level compilation or optimizing model serving. So as you see, the lower you get in the stack, the deeper your expertise needs to be. So this is harder.
So the easiest would be here at the top, and the hardest at the bottom.
Okay, this makes sense. And what are roughly the percentages of roles in each of these three buckets?
I would say you'd see about 60% of roles with application PMs, about 30% with platform PMs, and maybe 10% with infra PMs.
Okay, makes sense. So the hardest roles are actually the smallest bucket, which is kind of the good news.
Yeah.
So can you walk us through, let's say somebody has their goal set on infra, what are the key concepts to know?
Yeah, whether it is application, platform, or infra, some of the key concepts are the same across. And that's what we are going to talk about today, where we're going to cover these five areas. The first is understanding the difference between what an AI PM does versus what a regular PM does. And the second would be determining when to use AI, because I think now there is this hype around using AI, where it seems to be the technique that everybody wants to reach for, but knowing when to say yes and when to say no is a very powerful skill that a PM should possess. The third is we'll look into what the AI techniques are, and what options of AI techniques we can choose from. So we'll look at a menu. And the fourth is, if we then decide that yes, we need to use GenAI for this product, then we'll learn about a few core concepts around AI agents, prompt engineering, context, RAG, and evaluations. And last but not least, we will learn about delivering AI products, so we'll learn all the way into deployment.
Let's do it.
Perfect. Let's get started.
So what is or who is a product manager?
So a product manager is essentially the CEO of the product. A PM owns the product and its associated decisions. What product managers do is balance all three of these domains, that is the UX, the tech, and the business. And remember, all of these functions PMs lead without authority. They don't report to them. So PMs need to be able to influence these teams and make hard calls. That is irrespective of whether it's an AI PM or a PM. This is the baseline of what a PM does.
Yep. Of course, it varies a lot between companies.
Absolutely. Absolutely. And again, here are some traits of a good PM, just to get us all on the same foundation of what we mean when we say a good PM, right? Defining a clear vision for your team. Being customer obsessed, which is understanding what the pain point really is. Understanding the market landscape. Aligning with your stakeholders around the vision and building the vision for the product. The fifth one, the bread and butter for a product manager, is prioritizing product features and capabilities. And last but not least is creating a shared brain for your product managers and your team, to enable independent decision-making.
So what is the core skill that differentiates a PM from an AI PM?
The core difference here is that traditional products are deterministic, whereas AI products are probabilistic. Where traditional products have predictable behaviors, AI products are inherently probabilistic. That is, the same input can result in different outputs. Now, if I have a button in a traditional product, every single time I click on it, it will open the next page, for example. But every time you ask an AI product, because it's probabilistic, it can produce different outputs. So now, as AI PMs, you must think in terms of quality distributions, in terms of what your acceptable error rates are. It's no longer a binary success versus failure. And so as an AI PM you tackle questions like: what is the error rate that our users can tolerate before trust breaks for that user? How do we handle these edge cases that occur, say, 5% of the time? Do we need a fallback deterministic system to begin with? Also, what is different here is that data is a first-class citizen now with AI PMs. Where traditional PMs can focus on features and user flows, as an AI PM you must treat data as part of the product experience, because poor data will create poor experiences. And so having a good data strategy is a prerequisite before you even start implementing your AI product.
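The shift from binary pass/fail to quality distributions can be sketched in a few lines of code. This is a hypothetical illustration, not a real evaluation harness: `flaky_model`, `grader`, and the 15% threshold are all stand-ins for an actual model, an actual quality check, and whatever error rate your users can really tolerate.

```python
import random

def quality_pass_rate(model, prompts, grader, n_runs=20, seed=7):
    """Run a probabilistic model repeatedly and measure the share of
    acceptable outputs, instead of a single pass/fail check."""
    rng = random.Random(seed)
    passes = total = 0
    for prompt in prompts:
        for _ in range(n_runs):
            output = model(prompt, rng)
            passes += grader(prompt, output)
            total += 1
    return passes / total

# Stand-in for a real model: answers acceptably ~90% of the time.
def flaky_model(prompt, rng):
    return "good answer" if rng.random() < 0.9 else "hallucination"

def grader(prompt, output):
    # Stand-in for a real quality check (human review, LLM judge, etc.).
    return output == "good answer"

rate = quality_pass_rate(flaky_model, ["q1", "q2", "q3"], grader)
ACCEPTABLE_ERROR_RATE = 0.15  # what users can tolerate before trust breaks
ship = (1 - rate) <= ACCEPTABLE_ERROR_RATE
```

The point is the shape of the question: you sample many runs and compare an observed error rate against a tolerance, rather than asserting that one output is correct.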
I hope you're enjoying today's episode. Are you interested in becoming an AI product manager, making hundreds of thousands of dollars more, joining OpenAI or Anthropic? Then you might want to do a course that I've taken myself, the AI PM certificate run by OpenAI product leader Miqdad Jaffer. If you use my code and my link, you get a special discount on this course. It is a course that I highly recommend. We have done a lot of collaborations together on things like AI product strategy, so check out our newsletter articles if you want to see the quality of the type of thinking you'll get. One of my frequent collaborators, Pawel Huryn, is the Build Labs leader. So you're going to live-build an AI product with Pawel's feedback if you take this AI PM certificate. So be sure to check that out. Be sure to use my code and my link in order to get a special discount. And now back into today's episode.
Today's episode is brought to you by Amplitude.
Replays of mobile user engagement are critical to building better products and experiences, but many session replay tools don't capture the full picture. Some tools take screenshots every second, leading to choppy replays and high storage costs from enormous capture sizes. Others use wireframes, but key moments go missing, creating gaps in your understanding. Neither approach gives you a truly mobile experience. Amplitude does things differently. Their mobile replays capture the full experience: every tap, every scroll, and every gesture, with no lag and no performance hit. It's the most accurate way to understand mobile behavior. See the full story with Amplitude.
And also, because... Go ahead. Sorry.
Yeah, the data piece is actually one of the underrated ones, because people often hear data and they kind of shrug their shoulders and say, "Oh, I understand basic statistics." But there's a lot more to it in terms of data pipelines, how the data is being cleaned, how it's being put in there, what's being used as training data. So there's a lot of nuance that people need to realize exists under each one of these topics.
Yeah. And like I always say, garbage in will lead to garbage out. So if your data is not of the quality you expect, then your model outputs will also not be as close to reality as you would expect.
Yeah. And similarly, your model behavior here with AI products is iterative, versus a fixed feature of a traditional product. In the earlier button example I was talking about, every time you click that purple button it will lead to something similar, versus here you're iterating with your model. With any new change, you need to retest your model; you need to understand what is changing in your model's behavior. So it's a very iterative process. And then your unit economics are also very different, where your traditional products have predictable cost structures. Now, because of this probabilistic nature of AI products, your unit economics are also variable. It depends on how long an answer your LLM gives you, or how short an answer it could be. And last but not least, you now need to put a lot more emphasis on responsible AI and guardrails, because with traditional products it's easier to focus on bugs and edge cases. But AI products need to be able to handle potential harms, bias, misuse, and emergent behaviors that weren't explicitly programmed into the model itself.
Yeah. So moving on, [clears throat] now that we have understood what an AI PM does, moving on to how you determine when to use AI and when not to use AI. Now, there's this report from MIT that many people are aware of. As for why AI pilots fail, there could be several reasons, but one of the key factors the paper called out was picking the right opportunities to apply AI to go solve a problem. It seems common sense, but it is not as common, where several teams are choosing the wrong problems to go and apply AI to. The reason being, choosing good problems to apply AI to is difficult.
And that's what we learn today: when do you use AI in a product, because apparently 19 out of 20 teams are choosing the wrong one.
Absolutely. Yeah. So here's when AI makes sense. You choose AI when it is well suited for some of these specific patterns. Like when you have pattern recognition in complex data, when patterns exist in your data but they're too complex for humans to manually define. So for example, in products like YouTube, machine learning is used to identify the patterns of users who are watching videos, which would be impossible to capture with simple rules. The relationships between your user behavior and their content preferences are too multi-dimensional for a rule-based system to capture. So pattern recognition is probably a very good use case where AI could be applied. The second place where AI really excels is when you have historical data over several years to predict future outcomes. So for example, at Amazon we used AI to forecast inventory needs, to predict based on a complex mix of seasonal trends, upcoming promotions, and even weather patterns, and these models could consider hundreds of different variables in ways that humans simply cannot process effectively. So in prediction use cases, AI is a great choice. And also, when you need to create personalized, individualized experiences for thousands or millions of users at scale, then AI becomes incredibly valuable. That ties back to pattern recognition, because there are probably several patterns and variables that could have an impact. So for example, content recommendation engines are classic examples of where AI thrives. So if your use case is about personalization, then that's a good place to look at applying AI.
And what are the bad places?
Yeah. And I wouldn't say it's bad, but here's where heuristics, which is rules-based, is probably sufficient, where you don't have to insist on applying AI. Now, before I dive into this, what are heuristics? I would say heuristics are nothing but a simple set of rules, like your if-else. If this happens, then do that. If this, then that. These are all probably based on your past experience, or something that works in that industry. Now, I would say heuristics, or your rules, are probably sufficient when explainability is non-negotiable in your industry, because it's really hard for AI models to have high explainability. There are interpretability tools, but explainability is still low. Or when there are clear rules in your domain, like, for example, tax calculation. We are at that time of the season where everyone's thinking about year-end taxes, and so tax calculation software is a very good example. Tax codes are complex, but they're explicit, making them perfect for a rules-based implementation. So if there are clear and comprehensive rules in your domain, it's probably sufficient to start with heuristics. Also when data is limited, because AI needs lots of data to be effective. So if you're launching a new feature or you're entering a new market where historical data doesn't exist, then starting with heuristics and a rules-based approach is probably better than force-fitting AI onto it. And the other place is where your development speed is critical, because AI systems generally take longer to build and implement. So for MVPs or time-sensitive features, starting with traditional methods could be the right business decision.
So you've determined your AI usage. It's not one of these cases where explainability matters or speed to market matters. How do you select the right AI techniques?
Yeah. So let's dive into it. What are some AI techniques that we could look at? When we look at AI these days, people jump straight to "let's use ChatGPT" or "let's build with an LLM." But honestly, a simple machine learning model would have solved the problem in a week and at a fraction of the cost. So let's break this down in a way that's useful for product decisions. When I think of AI techniques, I think of them in three buckets. The first is traditional ML: this is your regression models or your random forests or your XGBoost, the stuff that's been around for years. It's mature, it's reliable, and honestly it still powers most of the AI that you interact with daily. The second is deep learning. This is your neural networks, your computer vision, your speech recognition, and this thrives when you're dealing with, say, image, video, audio, any form of unstructured data that needs some sophisticated pattern recognition. And the third is where we get to GenAI: your LLMs, your diffusion models, your Stable Diffusion, your ChatGPTs, your Claudes. Now here's what's interesting. These aren't competitors. They are tools in your toolkit, and the best AI products usually combine multiple approaches. So I would say choose ML when you have structured data and you need to predict or classify something. So think in terms of spreadsheets. The sweet spot for ML is when you're predicting a number or a category, or you have historical data with clear patterns, or you need the model to explain its decisions, or speed and cost really matter. Some examples of where traditional ML techniques still thrive are fraud detection, or predicting customer churn for your websites. So as a PM, a question you should ask is: can I put this problem in a spreadsheet with clear input columns and an output I want to predict? If the answer is yes, then start with ML and don't overcomplicate it.
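The "spreadsheet test" can be illustrated with a deliberately tiny example. The rows, column names, and the one-threshold "model" below are all invented for illustration; a real team would reach for an actual ML library, but the shape of the problem is the same: input columns in, predicted label out.

```python
# Each row is like a spreadsheet row: input columns plus the label to predict.
rows = [
    {"logins_per_week": 1, "tickets": 4, "churned": 1},
    {"logins_per_week": 9, "tickets": 0, "churned": 0},
    {"logins_per_week": 2, "tickets": 3, "churned": 1},
    {"logins_per_week": 7, "tickets": 1, "churned": 0},
]

def train_stump(rows, feature, label="churned"):
    """Learn a single threshold on one column: the midpoint between the
    average feature value of each class. A toy stand-in for a real model."""
    pos = [r[feature] for r in rows if r[label] == 1]
    neg = [r[feature] for r in rows if r[label] == 0]
    threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    predict_low = 1 if sum(pos) / len(pos) < threshold else 0
    def predict(row):
        return predict_low if row[feature] < threshold else 1 - predict_low
    return predict

predict = train_stump(rows, "logins_per_week")
low_activity_user = {"logins_per_week": 2}   # predicted to churn
high_activity_user = {"logins_per_week": 8}  # predicted to stay
```

If your problem fits this template, historical rows with clear columns and a label, classical ML is usually the cheapest, most explainable starting point.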
Looking into deep learning: use deep learning when you're dealing with perception tasks like image, video, audio, or when the patterns are too complex for traditional machine learning to capture. And think about it this way: deep learning shines when humans can do the task easily, but it's really hard to write explicit rules for it. Like, for example, when I see your face, I know this is Aakash. It's easy for me, but if you ask me to write in code what those features are that make me think this is Aakash, it's impossible to translate this problem into if-then statements. And that's where deep learning comes in. So some examples are medical image diagnosis, or manufacturing defect detection, where computer vision can scan products on an assembly line and figure out: is this widget cracked, or is this label misaligned? Classic examples of where computer vision could be used. Voice assistants, which convert your speech to text. All of these are great with deep learning. So now, a question that as a PM you should ask is: is this a perception problem? Am I dealing with images, audio, video? If yes, and you need to understand what's in that media, probably deep learning is your friend. Now, here's the catch. Deep learning needs more data, more compute, and is less explainable than traditional machine learning, right? That's the trade-off that you as a PM need to be aware of. And then GenAI.
Yeah, the hot topic. Now, use GenAI when you want to understand, generate, or reason over natural language or images. The breakthrough with LLMs isn't just that they can write. It's that they can read, they can comprehend context, reason across information, and respond appropriately, which is fundamentally different from any traditional AI system. So, GenAI is the right choice when you are dealing with a natural language interface, where your users need to interact with your product using conversational language, not just clicking buttons or filling forms. GenAI is a good starting point there. Content generation is the other use case, where you want to write copy, product descriptions, email drafts. So if you're creating net-new text or images, GenAI is a good fit. And when you need reasoning and synthesis: LLMs can take information from multiple sources, understand context, and make judgments. So for unstructured reasoning and synthesis, GenAI is your friend. So as a PM, the questions you have to ask are: does this task require reading or writing in natural language? Do I need common-sense reasoning, not just pattern matching? Are my users going to interact conversationally with this product? If the answer is yes to any of these questions, GenAI is probably in your solution.
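The three PM questions above can be written down as a simple screen. This is just the spoken checklist encoded literally; the field names are invented, and a real decision would of course weigh cost and latency too.

```python
def genai_fits(task: dict) -> bool:
    """Encode the three screening questions for GenAI.
    'Yes to any of these' -> GenAI probably belongs in the solution."""
    questions = (
        task.get("reads_or_writes_natural_language", False),
        task.get("needs_common_sense_reasoning", False),
        task.get("conversational_interface", False),
    )
    return any(questions)

# A support chatbot passes the screen; a tax calculator does not.
chatbot = {"conversational_interface": True}
tax_calc = {}
```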
Yeah. And then there's the whole angle around AI agents as well, right? Where most people are building AI agents into their products, or they're building MCPs into their products so that agents can interact with their products. So you also probably need to be thinking about, you know, are there agents we can build that take all those skills you just talked about for GenAI, the generative creating, the generative planning, into a product?
Yeah, and that's a great segue for us to get into the core building blocks that you have to know, starting with AI agents.
Let's do it.
So the first concept that we want to touch upon is: what are agents, or what is agentic AI? Agentic AI is a system that can make decisions and take actions on your behalf, or on its own, to achieve some goal, and you're not explicitly telling it what order it needs to follow. It understands your goal and tries to reason and find the path to achieve that goal. So the true thing that differentiates AI agents is that they are goal-oriented. Now, looking at the core building blocks of an AI agent: the first is perception. Perception is how the agent perceives information, like your text input, image or sensor data, or API connections. This is basically how the agent will receive input. The second building block for an agent would be the reasoning. This is how the agent processes information and makes decisions. Here is where your models could live, like your LLMs or your classification models or planning algorithms. All of them live here. The third building block would be your execution or action systems. This is how the agent affects its environment, whether that is through generating text, making those API calls, or controlling hardware. This is how the agent actually takes action. And the fourth is learning. This is the feedback mechanism of how the agent evaluates outcomes and improves.
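The four building blocks can be sketched as a minimal agent loop. This is a toy, not a production framework: the "reasoning" step is a hard-coded keyword policy standing in for an LLM, and the tools are stubs standing in for real APIs.

```python
class MiniAgent:
    """Toy goal-oriented agent showing the four building blocks:
    perception, reasoning, execution, learning."""

    def __init__(self, tools):
        self.tools = tools   # execution: the action systems it can call
        self.memory = []     # learning: feedback stored for later improvement

    def perceive(self, user_input: str) -> str:
        return user_input.strip().lower()        # perception: normalize input

    def reason(self, observation: str) -> str:
        # reasoning: decide which tool serves the goal
        # (a real agent would ask an LLM; this keyword rule is a stand-in)
        return "weather" if "weather" in observation else "search"

    def act(self, tool_name: str, observation: str) -> str:
        return self.tools[tool_name](observation)  # execution: affect the world

    def run(self, user_input: str) -> str:
        obs = self.perceive(user_input)
        tool = self.reason(obs)
        result = self.act(tool, obs)
        self.memory.append((obs, tool, result))    # learning: record the outcome
        return result

tools = {
    "weather": lambda q: "sunny, 22C",             # stub for a weather API
    "search": lambda q: f"results for: {q}",       # stub for a search API
}
agent = MiniAgent(tools)
answer = agent.run("What's the weather in Paris?")
```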
So when do you use a workflow versus an agent?
Yeah. So there's a huge difference between workflows and agents. Both are, by the way, AI systems. Now, workflows are predetermined sequences of tasks where everything is defined in terms of how the process will execute. Think of these as automation pipelines where AI serves as a powerful component within the overall workflow. An example would be an invoice-processing workflow, where you have step one, extract data from the PDF; step two, validate against these rules; step three, have the AI system evaluate; and step four, go and update multiple systems. Whereas agents are goal-oriented systems that can independently decide how to accomplish those objectives. So the key characteristics of workflows are: there are predictable patterns and execution paths, there are human-defined decision trees of how things have to go, and there are probably deterministic outcomes, a standard expectation of what the output will look like from one node to the next. Whereas for an agent, the characteristics, or the architecture, are very different, so let me walk you through what that looks like.
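As an aside, the four-step invoice workflow just described can be sketched in code. Every function below is a stub with invented fields; what matters is that the steps and their order are fixed in advance by a human, with the AI component sitting inside one node, which is exactly what makes this a workflow rather than an agent.

```python
def extract_data(pdf_bytes):
    # Step 1: extract fields from the PDF (stubbed here).
    return {"vendor": "Acme", "amount": 120.0}

def validate(invoice):
    # Step 2: validate against fixed, human-defined rules.
    return invoice["amount"] > 0 and bool(invoice["vendor"])

def ai_evaluate(invoice):
    # Step 3: the AI component inside the workflow (stubbed judgment).
    return "approve" if invoice["amount"] < 1_000 else "review"

def update_systems(invoice, decision):
    # Step 4: write the outcome to downstream systems.
    return {"invoice": invoice, "decision": decision, "synced": True}

def invoice_workflow(pdf_bytes):
    """Predetermined sequence: every step and its order is fixed,
    unlike an agent, which chooses its own path to the goal."""
    invoice = extract_data(pdf_bytes)
    if not validate(invoice):
        return {"invoice": invoice, "decision": "reject", "synced": False}
    decision = ai_evaluate(invoice)
    return update_systems(invoice, decision)

result = invoice_workflow(b"%PDF-...")
```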
So here is an agent architecture. We have the agent, there is the model, there is memory, and then there are tools. The agent is the brain or the orchestrator. It controls the entire workflow, deciding what needs to be done and which tool needs to be called. The model here could be a language model, it could be a machine learning model. So you could have your GPT or Claude or any of the models here. Memory is where it stores context and historical information. This is what allows your agent to be stateful, to be able to remember past conversations or previous actions. And tools, these are the general utilities that your agent can use to extend its capabilities beyond what just the model could do. So, like a weather API or a booking system API; it could be a search API, it could be a code execution engine, any of these.
Today's podcast is brought to you by Pendo, the leading software experience management platform. McKinsey found that 78% of companies are using GenAI, but just as many have reported no bottom-line improvements. So, how do you know if your AI agents are actually working? Are they giving users the wrong answers, creating more work instead of less, improving retention, or hurting it? When your software data and AI data are disconnected, you can't answer these questions. But when you bring all your usage data together in one place, you can see what users do before, during, and after they use AI, showing you when agents work, how they help you grow, and when to prioritize on your roadmap. Pendo Agent Analytics is the only solution built to do this for product teams. Start measuring your AI's performance with agent analytics at pendo.io/aakash. That's pendo.io/aakash.
Today's episode is brought to you by NayaOne. In tech buying, speed is survival. How fast you can get a product in front of customers decides if you will win. If it takes you 9 months to buy one piece of tech, you're dead in the water. Right now, financial services are under pressure to get AI live. But in a regulated industry, the roadblocks are real. NayaOne changes that. Their air-gapped, cloud-agnostic sandbox lets you find, test, and validate new AI tools much faster: from months to weeks, from stuck to shipped. If you're ready to accelerate AI adoption, check out NayaOne at nayaone.com/aakash. That's nayaone.com/aakash.
Today's episode is brought to you by the experimentation platform Kameleoon. Nine out of 10 companies that see themselves as industry leaders and expect to grow this year say experimentation is critical to their business. But most companies still fail at it. Why? [music] Because most experiments require too much developer involvement. Kameleoon handles experimentation differently. It enables product and growth teams to create and test prototypes in [music] minutes. With prompt-based experimentation, you describe what you want. Kameleoon builds a variation of your web page, lets [music] you target a cohort of users, choose KPIs, and runs the experiment for you. Prompt-based experimentation [music] makes what used to take days of developer time turn into minutes. Try prompt-based experimentation on your own web apps. Visit kameleoon.com/prompt to join the waitlist. That's kameleoon.com/prompt.
And now, let's do a hands-on exercise to go build a workflow. And then we'll also build an agent.
Love this. Let's see it.
Yeah.
And for that I will use n8n.
And why?
It's low code, no code, so it's easy for anyone to go and build workflows or agents. It also has a very strong community, so you'll always find forums where, if you're stuck, you can ask questions and quickly get answers. That really allows anyone to go build agents and workflows.
Got it.
So today, first we're going to build a workflow, just so we all understand what a workflow looks like, and then when we do the agent we can see the difference. In n8n there are different types of nodes. There are trigger nodes, the nodes that start your automations: trigger manually, on a schedule, on app start, and so on. So first we'll start with a trigger event; we want to trigger manually. Next, on triggering, I want this to go and make an HTTP request. For weather, we're going to use something called Open-Meteo to get the information from that API. So here's this free weather API that we are going to use.
And how do you find good APIs?
Good old plain Google search. I start off with: all right, I want to build this, so what do I need? I need a weather API. Let's go and search. There's an API for almost everything these days, so it's easier to just search.
Got it.
So, from here I can search for what area I want weather details for. I live in Los Angeles, so I'm going to take the Los Angeles information and say Try API. I set the latitude and longitude, which I already have for Los Angeles, and choose what I need. It gives you a lot of options: temperature, rain, cloud cover, a lot of things. I'm okay with just temperature for now, and here's the API URL that I need to use. So I'll take this API URL and go back to n8n. Now I'll create a node called HTTP Request. This node is what will make an HTTP request to a URL. I'm going to use the GET method, which fetches the information from that URL, and paste in the API URL that we copied. And let's see if I run Execute Step here now.
Let's see what information we get. You can see how we are able to capture it; the right side is the output, so you can see the information that it captures from that API. I'm just going to pin it so we can use it later. Now, the information that we get is not in a shape that's easy for a workflow to execute on, so we're going to add a Code node to do some modifications. I'm running the code in JavaScript. And now you may say: all right, Joti, but I don't know how to code. Which is fine. What I pretty much did is go to ChatGPT and say: this is what I'm trying to do, I want to code it in JavaScript, and I'm going to paste it into n8n. ChatGPT generated this code for me that I'm going to use now.
Easy enough.
Very.
So that's your normal go-to workflow: pretty much use ChatGPT for all the coding that you need?
Yeah, it's very easy that way. So it gave me this whole block of code to use, and I pretty much just took that and hit Execute Step so we can see how the node executes. You can see how it captured that information and added this message, because I told it to say: in Los Angeles, the high today is this temperature and the low today is this temperature. By the way, I'm still a Celsius person, so I do everything in Celsius.
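The two steps Joti wires up in n8n, fetch the forecast from Open-Meteo and then reshape it into a human-readable message, can be sketched outside of n8n too. A minimal Python sketch, assuming the free Open-Meteo forecast endpoint; the Los Angeles coordinates and the message wording are illustrative, not the exact values from the demo:

```python
import json
import urllib.parse
import urllib.request

# Open-Meteo's free forecast endpoint (no API key required).
URL = "https://api.open-meteo.com/v1/forecast"

def fetch_hourly_temps(lat, lon):
    """Fetch today's hourly temperatures (Celsius) for a location."""
    params = urllib.parse.urlencode({
        "latitude": lat, "longitude": lon,
        "hourly": "temperature_2m", "forecast_days": 1,
    })
    with urllib.request.urlopen(f"{URL}?{params}", timeout=10) as resp:
        data = json.load(resp)
    return data["hourly"]["temperature_2m"]

def weather_message(city, temps):
    """Reshape raw temperatures into the kind of message the Code node builds."""
    return (f"In {city}, the high today is {max(temps):.1f} C "
            f"and the low today is {min(temps):.1f} C.")

# Example: weather_message("Los Angeles", fetch_hourly_temps(34.05, -118.24))
```

The reshaping step is the point: the API returns a raw array of numbers, and the Code node's only job is to turn that into something a downstream step (like the email node) can use directly.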
All right. So now we have this code, and what I want to do next is send me an email. You see, there's no intelligence here; it is basically step after step after step. Now, one of the steps here could be an agent, which does something and then hands it over to the rest of the workflow. n8n has great integrations, and there's an integration for Gmail. So I can click on the Gmail integration, and there are a lot of actions it could take. I'm going to choose the Send Message node. I already have a credential, but if you don't have one, you just say Create New Credential and do a Google OAuth. And that's about it; it'll create a credential for you. Easy peasy. Now, let's say I want to send it to joti@nextgenproductmanager.com. The subject should be "weather report for today." And for email type, we'll just keep it text, because HTML sometimes goes into spam, so we'll avoid all of that. And now, for the message, we'll just copy the message that we have from here; I just dragged that message and put it into this field. And now if I execute this step, it has sent the email.
That was easy. Wow.
Very easy.
So now we have a basic workflow.
So you see, I got this message saying: in Los Angeles, the high is this. So this is a basic workflow, but it has no intelligence. So let's add some intelligence to it. Let's use the same n8n workflows and now create an agentic workflow instead of a plain workflow.
All right, we're starting from scratch again. We're going to add a chat message as the trigger, and now we'll add an agent. By default, it gives me an agent that I can get started with. You can see it has a model node, a memory node, and a tool node that we could add. I'm going to add the model node: the OpenAI Chat Model. Now, this requires me to connect it to the OpenAI API. I have already done that, but if you have not, you could just create a new credential and add your OpenAI API key, and that will connect it. You can also choose from the list what model you want; I'm okay with GPT-4.1 mini. So I've connected the agent to the model. I'll also give it a simple memory, so that it remembers things and has a place to store them. And now let's add tools. One of the tools is a get-weather tool, which is an HTTP request. I'm going to create an HTTP request, the same thing we did before. The method is still GET, and I'm going to paste the same URL that we got from our weather API.
Then I'm going to add one more tool, which is the email. I'm going to say Gmail, and I'm going to add the same information: weather today. And unlike our workflow, where we had to define the message, here we can just let the model define it automatically; based on whatever message it's getting, the agent can decide what that message should be. I can also add a description and say "unique weather information." And that's it. You can see how we're not defining any code. We're not saying how to convert that into a particular phrase; we're not writing any of those steps. We are going straight to saying: here is the HTTP request, here is a tool to send an email. And now we'll let the agent do all of these tasks.
Okay.
So now let's run this. I can type a message and say: what is the weather today in Los Angeles? You can see it's running. It went and called this HTTP request tool, and it gave me the information: the weather today in Los Angeles has temperatures from 14.5 Celsius early in the day, rising to about 17 Celsius in the morning. All right. So it gave me information about the weather, and you can see it didn't execute Gmail. I never said don't execute this, don't execute that. It didn't execute because the agent determined that get weather is the only tool it needs to use. But now if I say "send the message," it uses my Gmail tool to send me a message. So let's look at that: I got this message from the agent. This is a classic example of how we are not telling the agent which tool to use; the agent determined it based on the question we asked and the task.
That's what makes it an actual AI workflow, not just a regular no-code workflow.
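The tool-selection behavior the agent shows here can be sketched in plain Python. In a real agent the LLM decides which tool to call; the keyword router below is a hypothetical stand-in for that decision, just to show the control flow of a tool registry, a routing step, and execution:

```python
# Toy agent loop: a registry of tools plus a routing step.
# In n8n the LLM decides which tool to call; pick_tools() is a
# hypothetical keyword-based stand-in for that decision.

def get_weather(_query):
    """Stand-in for the HTTP request tool (a real one calls Open-Meteo)."""
    return "High 17.0 C, low 14.5 C in Los Angeles."

def send_email(body):
    """Stand-in for the Gmail tool (a real one calls the Gmail API)."""
    return f"Email sent: {body}"

TOOLS = {"get_weather": get_weather, "send_email": send_email}

def pick_tools(query):
    """Decide which tools the request needs (an LLM does this for real)."""
    chosen = []
    if "weather" in query.lower():
        chosen.append("get_weather")
    if "send" in query.lower() or "email" in query.lower():
        chosen.append("send_email")
    return chosen

def run_agent(query):
    result = query
    for name in pick_tools(query):
        result = TOOLS[name](result)
    return result

# "What is the weather today?"   -> only get_weather runs, no email
# "Send me the weather by email" -> get_weather, then send_email
```

The point of the sketch is the same as the demo: asking for the weather triggers only the weather tool, while asking to send it also triggers the email tool, without the builder hard-coding which tool runs when.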
Yeah.
Awesome. So, how do we go further here?
What's next? Are we going to learn RAG systems?
Yeah. But before we get to RAG, I want to talk about a critical concept that every product manager working with AI agents needs to know, and that's prompt engineering and context engineering.
Yes. So let me start with prompt engineering, because this is where most of us begin our journey with AI agents. Think of prompts as the primary interface between you and the AI system. And when I say primary interface, it literally is: the prompt is how you tell the AI agent what to do, how to behave, and what outcomes to expect.
First, there are system prompts. These set the overall behavior and personality of your agent. For example, if you're building a customer service agent, your system prompt might establish an empathetic personality: the agent has to be professional and always verify customer identity. Second are user prompts. These are the prompts that an actual user inputs to an LLM, or however they interface with the chat product. Now, that's simple enough, but here's where it gets interesting: how you design your system to handle the unpredictable nature of user inputs is what determines how your agent responds, because users won't always ask questions the way you expect them to. And that's where the power of prompt engineering techniques like few-shot prompting comes into the picture. Few-shot examples are where you show the AI what good responses look like by providing some example responses.
This is a really underrated one, where you actually put in an example: this is a good response, this is a bad response. People think this is a lot of work, but when you're engineering the system prompt for an agent, it's actually worth it.
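Few-shot prompting is easy to show concretely. A minimal sketch of how the examples might be packed into an OpenAI-style chat message list; the support-bot scenario and the example texts are made up for illustration, not taken from the episode:

```python
# Pack few-shot examples into an OpenAI-style chat message list.
# The scenario and example texts are illustrative.

SYSTEM = ("You are an empathetic customer service agent. "
          "Be professional and always verify customer identity.")

# Each pair shows the model what a good response looks like.
FEW_SHOT = [
    ("I was double charged this month!",
     "I'm sorry about that. Could you confirm the email on your account "
     "so I can look into the charge?"),
    ("How do I cancel?",
     "I can help with that. First, could you verify the email on your "
     "account?"),
]

def build_messages(user_input):
    """System prompt first, then example pairs, then the real user input."""
    messages = [{"role": "system", "content": SYSTEM}]
    for question, good_answer in FEW_SHOT:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": good_answer})
    messages.append({"role": "user", "content": user_input})
    return messages
```

The model sees the example pairs as if they were earlier turns in the conversation, which is why it imitates their tone and structure when answering the real question at the end.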
Yeah. And I have found this to be incredibly powerful in production systems: provide your AI with examples of what good responses look like. Instead of trying to describe what you want in abstract terms, you can show the AI concrete examples of ideal interactions.
Now that you know prompt engineering, let's move on to context engineering. Context engineering is where the magic happens in production systems, because it is about managing everything the AI needs to know to do its job effectively, and doing it within the constraints of context windows. AI models have context windows, a limit on how much information they can process at once. Claude Sonnet, for example, has a 200k-token context window. That might sound like a lot until you start loading in your company's knowledge base, the conversation history, the real-time data, the user prompt. Suddenly you're making hard decisions about what stays and what goes. So I think about context engineering in three layers.
First, there's immediate context: the current conversation or task the user is having. Second is session context: the user's recent interactions in this session. And third is knowledge context: the broader information that your agent needs to reference. And here's something that I have learned the hard way: context window management directly impacts your cost, because every token you process costs money. If you're carelessly loading your entire knowledge base into every interaction, you're burning through your budget really fast. So context engineering is really the art of knowing what to load, and when.
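One common pattern behind the "what stays and what goes" decision is a simple token budget: always keep the system prompt, then keep the most recent turns, dropping the oldest history first. A rough sketch; the four-characters-per-token estimate is a crude approximation standing in for a real tokenizer:

```python
def rough_tokens(text):
    """Crude token estimate: ~4 characters per token (not a real tokenizer)."""
    return max(1, len(text) // 4)

def fit_to_budget(system_prompt, history, budget):
    """Keep the system prompt plus the newest turns that fit the budget,
    dropping the oldest history first."""
    kept = []
    used = rough_tokens(system_prompt)
    for turn in reversed(history):          # walk newest-first
        cost = rough_tokens(turn)
        if used + cost > budget:
            break                            # oldest turns get dropped
        kept.append(turn)
        used += cost
    return [system_prompt] + list(reversed(kept))
```

Every turn that survives costs money on every call, so a budget like this is as much a cost-control tool as a correctness one.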
And what's an example of that, that you learned the hard way? Did you guys overspend at one of these companies because you had engineered way too much context into it?
Yeah. And that's when we actually started understanding that there is probably a way to dynamically figure out, based on the request, what context is needed, what information from your knowledge base should be loaded in. And that's where prompt flow and orchestration patterns come in. So when people say prompt engineering is dead: it is not dead. It is part of this holistic context engineering that encompasses prompting strategies as well.
So how do we dynamically pull in this information quickly?
So that's where we're now going to learn about RAG, which, based on the prompt, helps you retrieve the right information and load it. So let's dive into RAG.
And for my money, this is the most important skill, guys. This is the point to lift your eyes up off your phone and think deeply about how you're going to learn this concept, because every enterprise implementing AI internally for their workflows, every product, is generally using a RAG system.
Absolutely. And like I say, RAG is nothing but retrieval-augmented generation. It's very simple, but it provides a tremendous amount of value. And so when people say, "Oh, should I go and fine-tune my model?" I'll be like, "No, let's start with RAG, because RAG might solve 80% of your problems." Now, like the name says: retrieval. It retrieves information from the knowledge base, and then it augments the user input with it before passing it to the LLM. That allows the LLM to have the context to generate an output that is rooted in the knowledge base of that company. So let's look at RAG systems. Let's say you have a document, or of course several documents in a company. You chunk them, and chunking is nothing but breaking them down into smaller pieces. Think of it like a storybook that you rip apart after every 20 pages; that could be one chunking strategy, a fixed chunking strategy. So you take the document, break it down into smaller chunks, pass them through an embedding model, and store them in a vector database. Now, when a user queries, the user query is also passed to this embedding model and converted into a vector, and this vector goes into the vector database to find the nearest neighbors, the chunks most similar to the user's question. It retrieves that information from the database, adds it to the user input, and passes this to the LLM. The LLM now has the relevant documents from the vector database and the user input, and generates a response that is fundamentally rooted in that information.
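The whole pipeline described here, chunk, embed, store, retrieve nearest neighbors, augment the prompt, fits in a few lines. A toy sketch: the bag-of-words "embedding" and the in-memory list stand in for a real embedding model and a real vector database, and the documents are made up:

```python
import math
from collections import Counter

def chunk(text, size=20):
    """Fixed-size chunking: split the document every `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy 'embedding': a word-count vector (a real system uses a model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=1):
    """Nearest-neighbor search over the 'vector database' of chunks."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def augment(query, chunks):
    """Augment the user input with retrieved context before the LLM call."""
    context = "\n".join(retrieve(query, chunks))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = chunk("Refunds are processed within five business days. "
             "Shipping is free on orders over fifty dollars.", size=8)
prompt = augment("How long do refunds take?", docs)
```

The final `prompt` is what actually goes to the LLM: the retrieved chunk plus the user's question, which is exactly the "augmented" part of retrieval-augmented generation.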
And something to talk about here is fine-tuning. Many people reach out to me and ask, "Can we take an LLM and fine-tune it to our use case?" Fine-tuning should never be your first option; maybe not even your second option. It's something you consider after you have exhausted prompt engineering, context engineering, and RAG approaches. The practical hierarchy that I recommend before you start fine-tuning: first, start with prompt optimizations to see if you can get better results. Then optimize your context engineering to figure out what context is being loaded into the LLM's context window. Then implement RAG. In most of the cases that I see, almost 80% of use cases get solved with RAG versus fine-tuning.
Especially if you've done really good prompt engineering at the top.
Absolutely. And only if these three don't suffice should you go for fine-tuning.
I think because fine-tuning is there in the API documentation, people immediately jump to it. But first follow the sequence.
So let's see how to build a RAG system.
I'm excited for this one.
So we are going to use something called Langflow. Langflow is a no-code tool that allows you to build RAG systems with just blocks and connections.
And why Langflow? Why not n8n or any other tools?
You could use n8n too. What I have seen is that Langflow takes more of an agentic-AI-first approach, and therefore it's easier to build RAG systems in Langflow, but you can build RAG systems in n8n as well. I would say this is just another tool that I'm introducing to our users, so anyone who is comfortable with n8n could try it there too. Langflow also sits very nicely within the LangChain community, so some of your tracing capabilities and all of that can come through easily as well.
Got it.
So, starting with a blank slate: first we're going to build the flow where we take a document and chunk it into pieces. For that we'll build the load data flow, starting with a File component: given a file, I can select a file and add it. In this case I'm adding the State of AI in Business 2025 report. Then I need Split Text; this is where I'm chunking my document. You can see it gives me different options, like chunk overlap and chunk size. I'm just going to keep them as is, and I'm going to connect this file to this input, so the file will go into Split Text and get chunked up. I'm also going to call for OpenAI Embeddings, because I'm going to use OpenAI's embedding model, text-embedding-3-small. Again, I've already given my API key, but if you're using it for the first time, you'll have to give your OpenAI API key here. Next, you need a vector database. There are lots of options in Langflow: you could use Pinecone, you could use Chroma DB. I'm going to use Astra DB, and Astra DB also has an API key, which I have already provided here. In terms of the database, I've created one called rag demo, but you can also create a database by clicking on "plus new database" and creating a fresh one. Once you've selected the database, you choose a collection where these chunks will get stored. I am choosing "langflow" as my collection, which I've created; you can go and create any new collection from here. With that, I'm ready to make my connections. The chunks that get passed from Split Text, I connect into Ingest Data, and my embedding model I connect to the embedding model input on the Astra DB component. This is our load data flow. Let's run it. The flow was built successfully. We don't have much to see here, because all it did was take the file, chunk it up, and save it into our database in Astra DB.
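The "chunk size" and "chunk overlap" knobs in Split Text map to a simple idea: fixed windows that share a margin, so a sentence straddling a boundary is not lost to both chunks. A sketch of what overlap does, counting in words for readability; real splitters usually count characters or tokens:

```python
def split_text(words, size, overlap):
    """Fixed-size chunks where consecutive chunks share `overlap` words."""
    step = size - overlap           # advance less than a full chunk
    return [words[i:i + size] for i in range(0, len(words), step)]
```

With `size=4, overlap=2`, each chunk repeats the last two words of the previous one, which is exactly the redundancy that keeps boundary-straddling sentences retrievable.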
Let's build a retriever flow, where a user can actually ask a question and retrieve answers from that document. We'll start with a Chat Input, because a user needs a way to ask a question. We're building a retriever flow, so we'll also have our embedding model; remember, even the input now gets vectorized, so we'll add the same embedding model we used in the load data flow. That vectorized input goes to your Astra DB to search over those documents. I'm choosing the same database and the same collection, where it has 136 records. I'm connecting my embedding model, and I'm connecting my chat input as the search query. Now, the data that comes back needs to be parsed, so we'll add a Parser and connect a DataFrame. If I convert this into tool mode, you see Search Results; you can click there and choose a DataFrame instead of Search Results. So I choose DataFrame and connect it to the DataFrame input of my Parser. From here I need to create a prompt template where I can give instructions. So I'm going to choose a Prompt Template and give system instructions. I take the context from before; you can see that I've created this prompt variable called context, and I say: given the context above, answer the question as best as possible. Then we'll add our language model. We'll do our connections in a second, and then we'll have an output from the language model.
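The Prompt Template component is essentially string interpolation over named variables. A sketch of what it produces from the {context} and {question} variables being wired up here; the exact instruction wording is approximate, not copied from the demo:

```python
# Approximate shape of the Prompt Template: named slots filled at run time.
TEMPLATE = ("Given the context below, answer the question as best as possible.\n\n"
            "Context:\n{context}\n\n"
            "Question: {question}")

def render(context, question):
    """Fill the template the way Langflow does before calling the model."""
    return TEMPLATE.format(context=context, question=question)
```

The retrieved chunks flow into `context` and the Chat Input flows into `question`; the rendered string is the final prompt the language model component receives.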
All right, we have built all of these frames. Now let's just start connecting them. The prompt from here goes into the input. We have the context here.
So we're making that a prompt variable.
Correct. That way I can add the question too. I take my chat input and connect it to the question variable, which receives that input. It also takes the context, which is connected to the input. And now we just have to connect the model response to the output.
Okay, that's about it. And now let's run it and see. The flow was built successfully. So now, if I go to the Playground, I can ask a question.
So this is where we'll go back to what we built before and see it. Yep.
And so if I say: what is this document about? Here's what I get: the document is a report titled The GenAI Divide: State of AI in Business 2025. So it gives me more information about what this document is.
Nice. We've built a RAG system. People have got to see the basics of RAG. So this about covers all of the building blocks for you. We went through: when do you even build with AI? Which AI techniques do you choose? What are the key building blocks: prompt engineering, context engineering, workflows versus agentic workflows, and finally RAG systems. This is the road map you want to go and start learning. Not just watching these podcasts, but going out and implementing on your own, so that you know them cold, and when you get your AIPM job you can help engineering teams actually build these. You're not going to be using a no-code tool to build an actual product, but by using a no-code tool you learn the ins and outs and build the intuition, some of the intuition we talked about, like: always do RAG before fine-tuning. If you practice that, if you try fine-tuning for a problem and then try RAG for a problem, you'll very quickly build that intuition on your own, and it'll allow you to be an effective PM in these situations. So,
you have cracked into AIPM. A lot of people want to crack in. What is the right road map? What are the best hands-on projects to build to become an AIPM?
I would say: don't think of them as projects. Think of it as building products. You could pick a use case, a pain point that you have, and build for that use case. Then, rather than being done with it after building it, actually take it forward. Think of it as a product. Launch it. Have your friends and family try to use it. Now all of a sudden you have real users. You'll have things that break. So you are now doing the things a real product manager would do, and taking it from a project to a product will give you the confidence to put those projects on your resume. You can give a lot more information and clarity to recruiters when you talk to them, rather than saying, "Oh, I attended this course" or "I did this project." Now you have much richer details: this breaks in these use cases; here were the challenges I had to overcome. That gives you much richer information and data to go with.
So: products, not projects. Should people be creating a portfolio? And if so, what does a good portfolio look like?
Yeah, your portfolio should definitely have some sort of an app that you have built to solve a real problem. There are a lot of no-code prototyping tools today that you could use. Build an agent; we just built a simple agent. Build an agent that solves a problem. Build a RAG system; we just saw how to build one. So look for problems within your area and try to build examples of those as your portfolio, and then don't just call it a project: get real users and convert it into a product. Now, all of a sudden, your resume has three products that you have orchestrated.
How important are certificates? What does an AWS ML certification get you?
Yeah. Certificates are a great way to signal to your recruiters and hiring managers that what you have learned is not just theoretical but also credible and accredited. For example, I offer an AI product management course at Next Gen Product Manager, where we teach everything you need to learn about AI product management. You could learn those concepts, do a lot of hands-on projects, and then go take the AWS AI Practitioner certificate. That gives you a credible certificate to show hiring managers that it's not just something you have learned; it's also accredited by AWS.
Speaking of AWS, I want to talk about you and your career a little bit. You worked at AWS, you worked at Meta, you worked at Netflix. How do the AIPM cultures differ at those three companies?
Yeah, very different. Let me start with Amazon, or AWS, which is where I started my career. It's a very customer-obsessed, document-writing kind of culture. I think Amazon invented the term PR FAQ, or the six-pager, where everything at Amazon starts with a press release and a frequently-asked-questions document, even before engineering writes a single line of code or a design mockup is created. The philosophy is: we work backwards from the customer. You start from the customer problem, you articulate why existing solutions don't work, and then you explain how your product solves it. That's the PR FAQ, or the six-pager, and it's used for strategic reviews. It's not just a document for the sake of a document; it's taken very seriously. This PR FAQ is reviewed all the way up by your VP, or sometimes even Andy Jassy. It's a very document-writing-heavy culture, where I think Amazon PMs spend probably 40 to 50% of their time writing documents.
Wow.
So you become an exceptional writer at Amazon; there's just no way around it. At Meta, I think it's the complete opposite in terms of process. If Amazon is about rigorous upfront planning, Meta is all about experimentation and iteration.
And the culture ethos reflects that, right? Like it says: move fast.
So at Meta, product managers are expected to be deeply technical. You're able to understand the code base; you're able to go through the insights of how something was implemented; you're able to talk about how to ship multiple variants and how you will test them against your control groups; and you let the data tell you what works. Of all the companies I've worked with, I've seen Meta having the most sophisticated experimentation infrastructure in the industry, and as a PM there, you live and breathe statistical significance.
At Netflix, the philosophy is context over control, which is perhaps the most unique approach to product management among big tech. Instead of rigid processes, documentation requirements, or approval hierarchies, Netflix invests heavily in making sure everyone understands the strategic context, and then they trust you to operate independently within that context. So you're expected to be an exceptional communicator. You don't always have to be as formal with documents as Amazon expects, but it's all about building alignment through conversation, presentation, and shared understanding. So you need to be very comfortable with ambiguity and be able to define your own scope.
All three of those companies, Meta, Amazon, and Netflix, are kind of notorious for having hard, performance-oriented cultures. Amazon just laid off 30,000 people. Meta has the rolling layoffs. Netflix is known for it too; even Reed Hastings has slowly stepped back from his own role. Different people will leave; there's pressure everywhere. How is it working at these companies? Would you recommend it to other people?
It's an absolutely phenomenal experience. I think I've learned a lot
experience. I think I've learned a lot from working at these companies. I have
built the documentation customer thinking rigor. Like working at Amazon,
thinking rigor. Like working at Amazon, the first thing you you learn and it gets ingrained in you is working backwards from a customer pain point.
With Meta, it's all about: once I know what I want to build, how do I test quickly, and what should that experimentation culture look like?
And with Netflix, I've truly learned what autonomy means, the power of autonomy, and the shared experience of building consensus and working together towards a shared vision. It shapes who you are as a person. The kind of insights you get as a product manager, and the scale, are phenomenal. Across Amazon, Meta, and Netflix, every feature you build is probably used by millions of users. So the scale you get to work with is amazing, and that's an experience I would encourage everyone to have at some point in their career.
So why did you leave Netflix? Director of AIPM at Netflix feels like a dream job. What was the story?
Yeah. So I've been an AI PM for the past 13 and a half years. Believe it or not, AI existed before LLMs. So I've been in the field of AI for that long, and I have about 12 patents in the field of AI.
And with so much AI growth happening, I thought to myself, hey, I really derive a lot of satisfaction from teaching product professionals how to transition into being an AIPM. I've been teaching for the past two and a half years. And the greatest satisfaction I derive is when someone says, your experience and your insights were so powerful that I was able to go crack that interview, and now my pay is 2x what I used to get. Immense satisfaction, and in a career-changing way. And so I said, you know what, with so much opportunity out there and with AI jobs increasing, I wanted to take the time to go full-time into this: spend my time teaching, and consulting with companies to draft their AI strategy, applying the learnings I have from leading scaled AI businesses and products to help their portfolios.
So I took the jump.
So I'll ask this question, and you can share as much or as little as you want. Obviously, Netflix is known to pay well. And if you've worked at places like Meta and Amazon, they're known to pay well. So people would assume you were raking in the dough. What can you share with us? How is the business of Joti doing now that she's no longer a full-time PM?
So I'm a newcomer to this field. Although I've been teaching for the past two and a half years, I did that part-time. I used to only teach AIPM because I just didn't have the bandwidth back then, but now that I'm going full-time, I've added two new courses. One is a deep dive on agentic AI. This is for someone who is already aware of AIPM fundamentals and is now looking to go lead and build agentic AI products. I also introduced a PM accelerator specifically helping professionals crack product interviews, be it product sense, product execution, or behavioral. And I'm seeing great interest across all three from different groups.
Most of the groups I work with are folks who are getting into AI for the first time. And while I don't advertise my agentic AI course or accelerator externally, those courses run full just because all my previous students who took AIPM continue down the funnel to agentic AI and the PM accelerator. So it's been a great experience going into this full-time. I'm just two months in, so it's probably too early to tell how things are going, but I'm really excited about it.
If I'm reading between the lines, you might not have hit director-of-AIPM-at-Netflix numbers yet, but you clearly see a path to getting to more. Is that fair to say?
Absolutely. Yes.
All right. That is the potential you can reach as a course instructor. Joti, thank you for sharing your knowledge so in-depth, so freely with all of us. Really appreciate having you on the podcast.
Thank you so much for having me. I'm thrilled to be here.
All right, everyone. We'll catch you later.
I hope you enjoyed that episode. If you could take a moment to double-check that you have followed on Apple and Spotify podcasts, subscribed on YouTube, left a rating or review on Apple or Spotify, and commented on YouTube, all of these things help the algorithm distribute the show to more people. As we distribute the show to more people, we can grow the show and improve the quality of the content and production to get you better insights to stay ahead in your career.
Finally, do check out my bundle at bundle.acg.com to get access to nine AI products for an entire year for free. This includes Dovetail, Mobin, Linear, Reforge Build, Descript, and many other amazing tools that will help you succeed as an AI product manager or builder. I'll see you in the next episode.