From Playtime to Production: How Every Builds AI That Teams Actually Use
By Just Curious: Applied AI for Value Creation
Summary
## Key takeaways
- **Practitioners, not consultants**: Every operates as AI practitioners, not just consultants, by deeply integrating AI into their own daily workflows and testing new models constantly. This hands-on experience informs their consulting approach, ensuring they provide practical, cutting-edge solutions to clients. [02:08]
- **Discovery before build**: Before building any AI solution, Every prioritizes a discovery phase to deeply understand a client's goals, workflows, and pain points. This ensures that AI adoption is focused on delivering direct value and solving existing problems rather than creating unnecessary tools. [04:44]
- **AI requires active management**: Adopting AI is a transformative change, making everyone an AI manager responsible for its outcomes. It's not a set-and-forget tool; continuous checking of work, iterative prompt refinement, and managing the AI agent are crucial for success. [10:49]
- **The 'hire an intern' test**: A good problem for AI to solve is one you could explain to a brilliant intern, including the necessary context and resources. This ensures the task is clearly definable, repeatable, and has a clear ROI based on the effort saved. [11:43]
- **Leadership drives AI adoption**: Organizational-level AI adoption requires coordinated effort and leadership buy-in. Leaders must understand the transformative potential of AI to foster an environment where its value can be realized across the entire company. [16:15]
- **Master your LLM first**: Before buying new AI products, focus on mastering your existing enterprise LLM, like ChatGPT or Claude. Most teams underutilize these tools, and significant value can be unlocked by deeply understanding and leveraging them first. [33:14]
Topics Covered
- Uncover Pain Points for True AI Value.
- AI Requires Skill Building, Not Just Rolling Out Tools.
- What Makes a Problem Ideal for AI?
- AI Success Demands Top-Down Coordinated Leadership.
- Maximize Your Existing LLM Before Buying More.
Full Transcript
Today we're talking with Natalia Cano, head of consulting at Every, where she helps companies like private equity firms, hedge funds, and media organizations move beyond surface-level AI efforts and embed the technology directly into daily workflows. Drawing on her background across finance, public innovation, and venture, Natalia brings a systems-level view to AI adoption, one grounded in how work actually gets done. Natalia, welcome to Just Curious.
>> Thanks, Sue. Great to be here.
>> What is Every, and what does it mean to be a multimodal media company that also consults on AI?
>> Yeah, so Every started out about five years ago, and we do three things: we write about AI, we build AI products, and then we share the best things we learn across those two dimensions with clients through our consulting business. The first part of Every is the media company; that's how we started out. We write about AI, and really about how people and companies are using it, and the most practical, valuable ways in which people are getting value out of it. From there we evolved to building AI-native companies. We wanted to explore where the rubber hits the road, and how you could use these really powerful tools to build companies, so we decided to build our own product studio. We now have five companies that we've incubated in house, one of which has fully spun out. And we take all of those learnings, as I mentioned, and share them with the variety of clients that come to us ready to go on their AI adoption journey. So that's what it means to be at Every: to be at the forefront of AI and to be really excited to help other companies and people explore how AI can be helpful to them.
>> I want to follow up on that last point. You said that you're head of consulting, but that Every aren't consultants, they're practitioners. What does that look like in practice when you're helping your clients adopt AI?
>> It means that AI is all we do, every day. Everything we do is testing new models, and all of the internal work and processes we have at Every operate in an AI-first way. For example, a few months ago we decided to hire an AI ops manager, someone who just ensures that we are using AI in the most compelling and valuable ways internally. So AI really is all we do. When we say we're not consultants, we're practitioners, it means that we are using these tools at the cutting edge in much the same way that we want to see our clients using them. Consulting is not a theoretical exercise for us. We rely on AI to build and run the companies that we operate via our product studio, and so consulting is just an extension of what we do every day in running the business.
>> Yeah. When it comes to adoption of AI, you've said that most companies don't lack awareness of the need; they lack, I guess, application. What does that mean?
>> Yeah, it really just means that we all could use a little more efficiency, and AI has this great promise of being helpful to that end, of helping us do more or do better work. Where both individuals and companies struggle is in figuring out what exactly that means for them and their workflows. So what we're able to do when we're working with teams or individuals is pinpoint all of the areas where they could be using AI, and then hone in on the areas where it would be most valuable to focus and to develop tools and workflows that they can get real leverage out of. It doesn't really matter if you have great prompting skills or if your projects are structured in a really beautiful way; if you're not alleviating pain points that enable you to do the work you want to do, it's not really valuable. We just want you to do more of what you care about, and to do it better and faster, and for AI to do that for you.
>> And what does that process look like when you're helping them identify those problems that can be solved? Where do you typically start?
>> We start in discovery. In order for AI to be helpful to anyone, we need a lot of context, and AI needs a lot of context, to understand what it is that you're trying to achieve, how you define excellence, and how you would go about achieving that particular thing. So we start by learning what your goals are as an organization: what is the north star that you're collectively working towards? We then partner with the teams that we're going to be training and ask them how they go about achieving their collective goal, the one that ladders up to that broader goal, and then really understand the core workflows they rely on to reach that goal, and where, within those workflows, there are opportunities for AI to be helpful. So the first phase of our work is just learning, and really deeply understanding, so that any work we do is, one, oriented around a goal and a priority that we know the team cares about and that is valuable to them, and, two, focused on AI skills, adoption, and tooling that will give them direct value in solving for something they're already spending time on and that is a pain point for them, because it's something they're doing regularly, every day, every week, every month, and it's a painful time expenditure.
So we start all projects by capturing that information and then building essentially an AI road map: here are all of the opportunities we surfaced for which AI could be meaningful and relevant. We then work with our clients to decide which of these they want to prioritize and pursue. And based on that conversation, we build out essentially a training plan, where we upskill each of the teams we're working with to solve the specific pain points we identified in those conversations, usually via the LLM that they're working with or via the workflows they're working in.
>> Do organizations usually come to you with a desire to train their business unit, their functional unit, or their organization? Or do they come with a problem they want to solve, which you help them solve while also training their organization to use the technology?
>> That's an interesting question.
I would say most companies come to us looking to build something, but maybe there are sort of two answers. There are companies that come to us and say, "We want our team to be using AI, and so we want to build something," or to have some sort of interface that solves a pain point for them. And then we also have teams that say, "Hey, we just rolled out Anthropic's Claude or ChatGPT to the entire organization, and we have a sense that people aren't actually using it really well." Those tend to be the two broader conversations we're having. For the first type of conversation, what we see is this big challenge with AI: there are so many things you can do. You really could build any product or solve any pain point now, faster than ever, but there's this huge risk of building something that you actually don't need, and of doing a lot of that. We really try to recommend to clients, or anyone we speak to, that before you go about trying to build a product that you're not sure addresses the pain point that would be most valuable to address, you first outline where there are both low-hanging-fruit and high-impact AI opportunities, see how your team uses the tools that are available to them and where they are actually getting value out of those tools, and then decide where to go next. So that's our response to the first type of conversation we have.
And then for the second, where there's the classic "we rolled out AI tooling to everyone at the company," I would say there's still a misunderstanding about the level of depth you have to go into to get value out of AI. We've all had a magical experience with AI, and I think we've all also had a moment where we've asked it to do something and it just totally sucked. The reality about getting value out of AI is that it takes quite a bit of effort. It actually takes a lot of effort to create a prompt and a project structure that solves a major pain point for you. So in order for us to do that effectively, we're going to need to work together to gather a lot of context and files and documents, structure that into a really big super prompt that solves a pain point, and it's going to be iterative: we're going to try it a few different ways, and when we find something that works, then we'll have something the team can really rely on. But there's this misconception with AI that you can just roll out ChatGPT to everyone on your team and all of a sudden there will be these massive efficiency gains, when the reality is that you actually have to build the skill of using AI: essentially, train the AI agents or the prompts that you want to use to have sufficient context, instructions, and information about what you want to get done and what excellence means for your team. Then there's an iterative process of testing those tools and making sure they are effective at accomplishing those goals. So we work with those clients to make sure we can go through that process together, and that they really understand that adopting AI is not like getting a new tech tool everyone has access to; it's a totally transformative change where now everyone at your organization is a manager of AI. There are great opportunities but also big responsibilities that come with that role, and we want to make sure people understand what that role means and that they're able to be good managers to the AI tools they have access to.
>> And I guess, relatedly, you've said that adoption only happens when people see it solve actual problems. I love that framing. Honestly, I hate the dialogue about AI adoption, because to your point, if you just drop AI in someone's lap and they don't know what to do with it, or it's not solving a problem, they're not going to use it, and they're certainly not going to use it effectively. What makes a good problem for AI to solve?
>> It's pretty simple. It needs to be repeatable. Well, first of all, it needs to be something that you can explain. A good problem for AI to solve is a problem that you can explain and solve yourself, as if you were explaining it to an intern. If you hired a really brilliant intern off the street and you needed a problem solved, this needs to be a problem you could explain very well to that intern, and that you would expect the intern, or a brilliant chief of staff, to be able to solve for you. So, one: can you explain what the problem is? And can you explain the solution to that problem, with context and information around what it means to solve it with excellence, and with the specific steps, procedurally, as you would think about solving it? And can you direct, say, this chief of staff to the set of resources they would need access to in order to do that task? So what's a great challenge or use case for AI? Is it something you can explain? Is it something you can give context and resources around? Is it repeatable? And can you check the output? There's an ROI component to checking the output. If it's going to be super tedious for you to check the output you're getting from AI, say it's a thousand rows that you're uploading and you're asking it to create another thousand rows, remember that it's up to you to always be a good manager to AI and to check the caliber of the work you're getting from it. So are you making a good tradeoff in asking it to accomplish a task that you could reasonably supervise and get quality outputs from? Is there a quality check? And then the last thing, I would say, is that any usage of AI that will really give you leverage or value is something you can improve over time, and that you can continue to test and build on. Is this a workflow you could really delegate and rely on, and that you have time to step in on and continue to manage? Because, again, as I said, there's this misconception that once you set up a prompt or an agent you can just let it run, but the reality is that you are now responsible for the prompt, the agent, and the outcomes. Are you prepared to be checking the work and the output of that? And is the ROI you're getting from that worth your time? Are you making a good decision in picking a problem to solve, where you've spent two hours setting up this prompt or this project or this structure, and it's going to save you at least two hours, if not hopefully significantly more, on a regular basis? You don't want to automate or use AI to solve something that actually takes you five or thirty minutes to do. So it's thinking about all of these different elements, but at the core it's repeatability, checking the quality of the outputs, having really good instructions, and then making sure that you really have context and essentially an SOP, a standard operating procedure, that you can hand off, train, and test to see whether you can really delegate a task to AI.
>> Yeah. And how do you find those problems? Are there questions you ask that are great at teasing out problems that are good for AI to solve?
>> Yeah, I would say one of the easiest ones, in the discovery conversations that we do, is: if you hired an intern tomorrow who could help you with the things on your plate, what would be the three highest-value tasks you would train that intern to accomplish for you? That's not a perfect way to go about it, but you start to get a sense for the tedious, repetitive, high-value tasks that you could get a really smart person to accomplish if you invested the time in training them. There's a very similar psychology to doing the same thing with AI.
>> Yeah. And who do you typically engage with within one of these client organizations? Who should be leading this effort on the client side?
>> Our perspective is that AI, unlike other technologies, should really come from the top. There are a lot of organizations where there are teams using it, or individuals scattered across the organization who are getting a lot of value out of AI, your typical bell-curve power users. But in order to get value out of AI at an organizational level, you need a coordinated approach and you need leadership that understands the kind of value you can be getting out of these tools. Which means you need leadership that is really engaged and understands just how transformative these tools are. There are a few examples of clients that we've seen do this really well. Notably, Will, CEO of Walleye, has been an exceptional leader in understanding that mental paradigm shift that we all experienced at some point in the last two years, and then really choosing to have the organization, in this case Walleye, take a coordinated approach to finding where there are opportunities to get value out of AI, and making sure that every single person at the organization has an opportunity to see that value, replicate it for themselves, and then get scale from that. When you see leadership have that light-bulb moment and then generate an environment where there is a coordinated approach to getting value out of AI, that's where you see magic happen.
>> Great. And there's a great interview with Walleye on Spotify and on YouTube, with Dan on AI & I, so anyone listening who wants to check that out, go check it out. What if the leader is interested in AI, and the organization's interested, but isn't actively using the technology and doesn't really know what it's capable of? What do you do then?
>> If an organization hasn't really been using AI, but they're interested in sort of...
>> Someone comes to you and says, "Hey, we really think there's an opportunity to use AI. I haven't really played around with it yet, but I think it could be transformative." Do you turn them away, or can you work with the leader to get them up the curve?
>> Yeah, of course. In some ways those are the most fun clients to work with, absolutely. There are so many magic light-bulb moments that happen early on, when you're just getting familiar with how powerful these tools are. If a team has not started using AI in a coordinated, or even playful, way, there are just so many magic moments early on where you can start to see the team recognize all of the different applications. And I would say it's especially cool when a company gets to experience that together, and early on. So 100%, so long as there is an appetite and an aperture for rethinking workflows, getting teams onto these platforms, and then working together to get leverage out of them, the magic is totally there.
>> Great. I'd love to walk through an example of how Every works. You shared a case study with me, a private equity firm that you worked with. Tell me a little bit about the problem they came to you with.
>> Yeah, so this was one of our very first clients early on. This is a firm that was interested in doing something much like what we just talked about. They did have access to a variety of AI tools, and they were seeing that while there were disparate approaches to using those tools, there wasn't a coordinated approach that made sure everyone on the team was getting consistent value. So the first thing we did, of course, was just talk to the different teams to understand what they were doing and what their goals were, and learn about the ways in which they were already using AI and how they wished they could be using it. Out of that set of conversations, we identified, in this case for the investment team in particular, that they had a pretty consistent way of using internal notes and internal research to create a V1 idea, or set of questions, around a new thesis or a new company they were exploring. That thesis would then go into a memo that would eventually go to the rest of the investment team. And so we built out a bunch of workflows for them. But one of my favorite ones that we delivered was basically getting all of the rich data they had internally to help inform a set of questions that could help an investor refine their thinking around a deal they were looking at in a particular sector, to refine their thinking and stress-test how they were thinking about a particular company or industry, and then using the rich internal private data they had to create a V1 memo that they could take to the rest of their partners and say: hey, this is a new opportunity I'm testing; here's all of the relevant data we have internally for how we've thought about this before, and why it could make sense for us to pursue or explore this.
It basically meant that the number of companies they could really think critically about, and the kinds of conversations they could have internally, became richer and more interesting. It went from being a very lonely experience, where you identify a company you're interested in, and you have so many incredible resources within the firm that you could be pulling from, or data you should be considering and sifting through, and you do all of that manually, to having a company you're looking at within a thesis and being able to do a lot of preliminary research with an AI workflow that understands who you are, what you do, how you think about the industry more broadly, and whether it fits the thesis that your firm has, and then adding to that the rich repository of information your firm has collected over the years it has existed, to really formulate whether there's a there there for the team to pursue that opportunity. That was one of the very early projects we worked on. It's been a foundational project for us in understanding specifically how private equity firms work, but also the fertile ground there is for more investment firms to get value out of AI. We're actually doing something similar now with another firm, though each firm has a slightly different process for how they go about thesis formation, for example, and for how their data lives and where they extract rich and valuable insights from their internal data. Helping people think more critically and find more interesting insights in the data they have access to is just a really fun project to work on, and great value to create for the investors on the team.
>> What did you build that on? What was the tech stack that enabled that AI workflow?
>> So for us, we're big believers that your horizontal LLM is the most underutilized and underleveraged tool at your organization. If your team has already rolled out ChatGPT or Claude, over and over again we find that most teams are severely underutilizing it. A lot of the workflows we build for teams basically live within ChatGPT, as a mix of GPTs, shared projects, or scheduled tasks. And every week we see new things coming out that make that experience richer and richer; today the browser launched, and we're excited to see how different that is. But our experience has been that the easier you make it for people to feel ownership, and to understand how they can update a prompt to create new tooling and resources for themselves in the context of ChatGPT, the more leverage you can actually get for individuals and for the whole team. So a lot of our solutions really live in the LLM that our clients' teams are using, oftentimes ChatGPT, and it's basically structured so that it's incredibly easy to use but also incredibly effective.
>> You mentioned earlier the iterative nature of prompt development, or GPT development. What does that look like in practice? You go in, you ask a bunch of questions, you identify some opportunities to build solutions that make workflows more efficient, and you come out with a V1 of your GPT. What does the process look like between that V1 of the GPT and what they're ultimately using at the end of the project?
>> It varies a lot. Early on, we are basically creating a prompt and then asking what we call our AI champions, our liaisons at the firms we're working with, to run it and test it, and see where there are sharp edges around that prompt: where it breaks, or stops being valuable, or is actually not solving the problem we thought we were solving. We get feedback from there, or we see what context might be missing, or what might be missing from the prompt, and we go and improve it to solve for those things or make it richer. Then we test it a few more times to make sure the end product delivers exactly what we were hoping it would. That's what we mean by it being iterative: we give it a go. Oftentimes the prompts, when we start out, are 85% of the way there, but it's really that last 15% that makes the difference between it being valuable to someone or not, because what matters is that when you reach for that tool, it solves the specific need you have. That's what we're trying to solve for when we're going back and forth and tweaking it.
>> And then what does the training look like? You're working with the AI champions to identify opportunities to refine that GPT, but there are many more people on the team who can get value from this, and you don't just hand it to them. What does the training look like?
>> Our training is functional, so it's team by team. Anytime we're training a team, we're training, say, the marketing team or the compliance team or the legal team. And depending on the company, it may be the case, as with a client recently, that they say, "We don't want everyone to see the prompts behind the scenes. It's just too much; no one has time for that. We just want the resource available to everyone on the team." In that case, we will work with a core set of AI champions. We will train and empower them to have all of the resources, skills, and information they need to manage those tools, and everyone else will basically just get trained on using them. But most of the time we're working with teams that really, deeply want to understand how they themselves could be getting value out of these tools. So when we're working with, say, a compliance team, we will come in giving them a prompt, or say a GPT, for example, that does something very simple but that, again, solves a pain point we know they have, and then we will explain to them how we built that prompt and that tool to solve the pain point. We will make sure they can use it and know how to use it, so there's a lot of hands-on time for them to see how it's structured and how to use it. But what we really hope people get out of these conversations is the skills to build more of this themselves. We kind of want to make ourselves obsolete. We want to continue to help people where they don't understand how they can be using AI, but our goal is for people to come out of the sessions feeling empowered to be experts in using these tools.
>> Yeah, you mentioned working with teams to solve problems. What's your take on the rush to stand up AI labs or centers of excellence, versus working with line-level teams to solve specific problems?
>> You mean when organizations build out internal AI task forces?
>> Yeah.
>> Yeah. I think in the future I hope more organizations start to think about having a centralized, coordinated group of people who are responsible for the AI initiatives at that company. And I do think the solutions need to be functional; they have to be specific to each team. So in the future, I think there are just going to be more roles, not "centers of excellence," which sounds kind of lame, but people who are really excited about making sure that when someone somewhere in the company has built a tool for themselves that they're getting real value out of, whoever that is, it gets highlighted as an opportunity for other teams for which that tool might be relevant. Then you multiply the impact of that tool by the number of other team members who could potentially benefit from it. This is where I say there really does need to be a coordinated approach to AI. Oftentimes when we're working with a client, we will come in and do that to some extent. But what we want to start to shape, by nominating AI champions and having people who are responsible for the output of their team and who share that information across teams, is that dynamic internally, where people see opportunities, share those opportunities, and then everyone benefits from them. So I guess, to answer your question: in the future I hope there are more coordinated, centralized approaches to how a company thinks about AI and supports internal AI initiatives, and I think there's plenty of opportunity, with most of the opportunity actually living in solving function-specific challenges.
>> As we come to a close, I'd love to ask you some questions that solicit advice for people listening. The first would be for teams that are just getting started: what's your go-to first move for them to build some momentum?
>> For teams that are just getting started: before you go and just play with AI, which we very much want to encourage, think about what the tedious, painful tasks are for you, then go and have a conversation with your enterprise-provisioned AI tool and ask it how it could help you address those specific tasks. I think that would be a good starting point, because, as I mentioned, you want to start solving for things you actually need support on. And while we want to encourage people to just play with AI, we want people to get real value out of it too. So if you think about what your real needs are, and then use AI to help you think about how it can help you address those needs, I would say that's a fantastic starting point.
>> Great. And I guess, conversely, what's something you see clients doing that they should not be doing?
>> I think, as I mentioned earlier, it's easier than ever to build products, and also to buy the AI product du jour that sounds like it's going to solve all of your data problems, or whatever solution says it's going to use RAG methodology to surface the answers you need, whatever it is.
The biggest challenges we're seeing companies run into are, one, building products they actually don't need, that don't solve a challenge the customer or the internal team has; and two, buying a set of AI products that promise to do something specific that, a lot of the time, the team is either underleveraging or could be handling with an enterprise LLM tool. So I would really start with Claude or ChatGPT. Make sure your team really knows how to get value out of those tools, and really understands what their limits are, and then, if there are additional tools or products to build on top of that, you go from there.
>> Awesome. Last question: who should reach out to Natalia and Every? What is a great client for you?
>> Yeah, great clients for us are usually enterprise teams. We work with technology companies, really big technology companies, private equity firms, and hedge funds that are ready, as an organization, to make that pivot into AI adoption, and that are prepared to take a company-wide approach: build a road map and execute that road map for the entire team. We love the clients we work with; we go deep with a few clients every year, and we're excited to support more companies that are ready to make that transition.
>> Awesome. Natalia, thank you for coming on Just Curious.
>> Thanks, Sue. It's good to see you.