The AI Agenda: GPT-5 leaks and the business of AI news — Steph Palazzolo, The Information
By Latent Space
Summary
## Key Takeaways

* The AI industry is experiencing rapid growth and significant investment, particularly in inference providers, leading to large funding rounds and the emergence of new startup categories. (0:15:00)
* Journalists covering AI face challenges in navigating hype cycles, verifying information from secretive companies, and understanding the business models behind cutting-edge technology. (0:08:30)
* Major tech companies like Meta are investing heavily in AI, with strategies ranging from open-sourcing models to pursuing "superintelligence," raising questions about their long-term business impact and consumer adoption. (0:25:00)
* The race for AI talent involves substantial compensation packages, with companies competing fiercely for skilled researchers and engineers, leading to significant financial outlays. (0:55:00)
* The trajectory of AI progress is a key focus, with ongoing debates about the effectiveness of scaling pre-training versus advancements in reinforcement learning and the development of new architectures. (1:05:00)

## Smart Chapters

* **00:00:00 The Business of AI Journalism:** Steph Palazzolo discusses her background as a journalist covering the AI beat for The Information, detailing her transition from tech investment banking and the publication's approach to in-depth tech and finance reporting.
* **00:08:30 Sourcing "Leaks" and Scoops in the AI Industry:** Palazzolo explains the process of gathering exclusive information in the fast-paced and often secretive AI industry, emphasizing the importance of building relationships and understanding sources' incentives.
* **00:15:00 Analyzing the GPT-5 Hype Cycle:** The discussion delves into the anticipation surrounding GPT-5, exploring its potential impact on the AI landscape and the broader trajectory of AI development, particularly in reasoning and creative capabilities.
* **00:25:00 Competition and Strategy: OpenAI vs. Microsoft vs. Google:** The conversation touches on the competitive dynamics between major AI players, examining their strategic decisions, product roadmaps, and the challenges they face in maintaining leadership.
* **00:35:00 The Economics of Foundation Models:** The financial aspects of developing and deploying large AI models are explored, including the significant costs associated with compute, data, and talent, and the emerging business models for inference providers.
* **00:45:00 Enterprise Adoption of AI and ChatGPT Agents:** The adoption of AI technologies within businesses is discussed, focusing on how companies are integrating AI tools like ChatGPT agents into their workflows and the potential impact on productivity and operations.
* **00:55:00 The AI Talent Wars: Hiring and Team Dynamics:** The intense competition for AI talent is highlighted, including discussions of high compensation packages, hiring strategies, and the internal dynamics of AI research teams.
* **01:05:00 Predictions for the Future of AI Models:** Palazzolo and the hosts speculate on the future direction of AI development, considering advancements in model architectures, reinforcement learning, and the potential for new breakthroughs.

## Key Quotes

* "I feel like I'm pretty new to the field and learning about how things work. And whenever I first joined, I think you kind of just assume that as a writer like most of your time would be spent writing, but that's actually probably like 20% of my day." (0:17:00)
* "I think first is thinking about the incentives of the person that you're talking to. So obviously if I'm talking to a PR person at OpenAI or Anthropic or one of the big labs, like they're obviously going to be pushing a certain narrative that's like probably positive about their company and maybe not so positive about other companies." (0:57:00)
* "I think the argument for them that I've heard from investors is just like, yeah, as AI apps become more popular, we're going to see the amount of money going into AI inference just explode and go way beyond the amount of money and compute being spent on training." (0:14:00)
* "I think the most interesting part coming out of the GPT-5 launch is not just thinking about, oh, what does this mean for OpenAI's standing in the AI race, but literally what does this mean for AI progress for every AI lab?" (1:03:00)
* "I think the media is fighting an uphill battle right now, and it's tough, and that's tech media. So I think covering politics is way, way worse." (1:37:00)

## Stories and Anecdotes

* Steph Palazzolo's first AI-specific piece was an accidental discovery about VCs' obsession with generative AI, which she wrote in the summer of 2022 after hearing about apps like DALL-E in conversations, a trend that exploded with the release of ChatGPT just months later. (0:05:00)
* A spokesperson for Meta reportedly pushed back on rumors of billion-dollar signing bonuses for AI researchers, highlighting the difficulty of verifying such claims and the potential for companies to strategically leak information to the media. (0:55:00)
* The Windsurf acquisition is described as a particularly controversial acqui-hire deal that generated significant outrage because of its structure and the perception that many employees were left behind, especially as Windsurf was seen as a company on the rise. (1:22:00)

## Mentioned Resources

* The Information: A tech and finance media publication known for its in-depth reporting and scoops. (0:02:30)
* AI Agenda: The Information's daily AI newsletter. (0:03:00)
* Business Insider: The publication where Steph Palazzolo previously covered AI. (0:03:30)
* DALL-E: An AI image generation model mentioned as an early example of generative AI. (0:05:00)
* ChatGPT: A prominent AI chatbot that significantly boosted the generative AI trend. (0:05:30)
* Jessica Lessin: Founder of The Information. (0:06:30)
* Fireworks, Modal, Together, Baseten, Fal: Companies operating in the AI inference provider space. (0:15:00)
* CoreWeave: A GPU reseller mentioned as a public company in the AI infrastructure space. (0:16:00)
* Cursor: An AI coding assistant. (0:17:00)
* Meta's AI research and Superintelligence labs: Discussed in relation to their strategic direction and potential for advanced AI development. (0:26:00)
* OpenAI: A leading AI research lab. (0:34:00)
* Anthropic: An AI safety and research company. (0:49:00)
* GPT-5: The anticipated next-generation model from OpenAI. (0:50:00)
* Llama 4: A model from Meta. (0:29:00)
* Claude (by Anthropic): An AI model known for its coding capabilities. (1:11:00)
* Codex (by OpenAI): OpenAI's coding assistant. (1:13:00)
* Chroma: A research group that published a paper on context utilization in AI models. (1:14:00)
* Devin (by Cognition): A coding agent mentioned for its high monthly cost. (1:17:00)
* Windsurf: A coding agent company that was acquired. (1:22:00)
* Cline: An open-source coding assistant. (1:29:00)
* Bolt, Lovable: Startups targeting non-technical users with coding assistants. (1:30:00)
* Signal, Telegram: Messaging apps often used for secure communication. (1:41:00)
* Mamba architecture: An alternative AI architecture. (1:50:00)
* RWKV: An alternative AI architecture. (1:50:00)
* xLSTM: An alternative AI architecture. (1:50:00)
* Reinforcement learning environments: A trending area in AI development. (1:51:00)
* Salesforce: A CRM platform that could be used as an environment for AI agent training. (1:52:00)
Topics Covered
- A Journalist's Day: 20% Writing, 80% Networking
- Are AI Inference Providers Just "GPU Resellers"?
- Meta Needs Practical AI, Not Just Superintelligence
- How Journalists Combat Misinformation and Protect Sources
- AI's Rapid Pace Exaggerates Societal Inequalities
Full Transcript
[Music]
Hey everyone, welcome back to the Latent
Space podcast. This is Alessio, partner
and CTO at Decibel, and I'm joined by
swyx, founder of Smol AI.
Hello. Hello. Today it's an
unusual podcast because we're in a
remote studio, and we're joined not by a
researcher or founder, but a journalist,
our first ever journalist on the
podcast. Welcome, Steph Palazzolo from
The Information.
Hi. Yeah, I'm super happy to be here.
Steph, I don't know if you remember, but
actually I think the first time I came
across your work, I did a
parody of it for the Sam Altman blip back
in like 2023, where I put
Adam Neumann as the prospective
CEO of OpenAI. That was pretty fun.
No, that was super funny. I remember
just seeing like a bunch of parodies of
that over the next couple days. It was
just like every single insane tech
personality you could think of. Um, so
yeah, that was a very insane week, but
that that meme definitely stands out to
me. I definitely remember that.
I was like, who's the funniest possible
CEO I could install in OpenAI? I know,
the WeWork guy. But yeah, maybe
perhaps you could introduce yourself to
our audience. How'd you get started in
reporting and basically covering AI for
the information?
Totally. Yeah. So, um, my name is
Stephanie Palazzolo. I cover AI here at
the information and so that means um
working on just stories about the space
whether that's about um investments or
big tech companies broader trends but I
also help to run our daily AI newsletter
um AI agenda so that comes out four
times a week and we get more into like
the weeds and talk a bit more about what
we're seeing with developers and and
researchers. Um, and so I've been here
for about two years. Before that, I was
covering the same beat at Business
Insider. And then before that, actually,
I I have a bit of a weird background
because I used to work in tech
investment banking. So I was like in
finance, doing the IPO
craziness during COVID. And yeah, I really
enjoyed working with tech um tech
companies and startups but I was like I
don't know if finance was my passion
which I don't think is like the craziest
thing for anyone to say but um I just
had always loved journalism growing up
and I was like what better time in my
life to try something kind of out there
and try to combine like tech and writing
and see how it goes then yeah I mean
everything's worked out so I'm very
happy. Yeah, in remarkable
fashion. Alessio, you wanted to ask
something.
What was your first story?
How did you get started
covering?
Yeah, I mean I think I mean I'm sure
that my first story was probably just
like some silly like write up of like a
report or something, but I think the
first story I remember doing about AI
was during summer 2022. I I live in New
York, so every couple months I go visit
SF. And specifically on that trip, I
went and everyone was like, "Oh my god,
let me show you this funny app
called DALL-E, and we're going to
make pictures of cats floating in
space, and it's so funny and cool."
And then it just started to come up in
so many conversations that I was like,
"Huh, like this is like a fun little
trend. Like maybe I should write
something about it." So then I ended up
doing a piece that was like why VCs are
obsessed with like this new area called
generative AI. And then obviously like 3
months later it all blew up and
ChatGPT was released. Um, but yeah, I
remember that was my like first
generative AI specific piece that I did,
which was like honestly kind of just
like an accident. Like I just happened
to talk with a bunch of VCs who were
excited about it. Um, but yeah, and then
ever since then, I've been been covering
the space. So
yeah, we slowly being pulled into the
orbit. Before we go deeper into your
work and how you cover AI, I just wanted
to discuss the information more more
broadly. Um, I actually remember when
Jessica Lessin left the Wall
Street Journal and
started The Information. I was
basically following her since then. Um
and basically I'm just kind of Yeah,
like I'm also a finance refugee by the
way. I I used to be in a hedge fund as
well.
Nice. All recovering
ex-finance people. Love that.
Yeah. I mean like look I I think we have
an interesting perspective in terms of
like covering these startups as
effectively stocks. like you know I I
don't view myself secretly as like you
know all all the writeups that I do are
industry sector reports that analysts
make cover except they're not publicly
listed companies. So like I'm curious
like how is the information organized
today? AI has taken over everything. So
is like everyone covering AI how do you
how do you sort of split the work and
and all that?
Yeah. No, it's funny. I feel like people
always ask me like oh my god AI is so so
crazy. How do you cover it by yourself?
And I'm like, we actually have like 10
different, you know, 10 different
writers who are basically covering like
different areas of AI. So I I get a lot
of help. Yeah. Specifically, so the
information, we've been around for over
a decade. We are kind of like a tech and
finance media publication.
uh specifically I would think of us as
like yeah kind of going above and beyond
to get you behind the scenes and to get
the scoops around what's happening at
some of the most important you know tech
media finance companies today and so
specifically like for me covering AI I'm
kind of part of like what you might
think of as like the enterprise tech
team and so we have reporters who are
covering like the major tech companies
like Google Amazon Microsoft but then we
also have reporters that are covering
like cloud more broadly or chips or like
crypto
And so we have like a mix of reporters
that cover specific companies, but then
also reporters that are covering like
larger beats like AI for instance. And
then of course we have another part of
the business that covers like finance
and deals and retail companies and
consumer companies. And so yeah,
basically like we just cover anything
and everything around tech and and
finance. So it's quite a wide array of
things. And we're always like looping in
like you know anytime I do a story
around like OpenAI's need for compute
like we're always looping in like my
colleagues Ana who covers cloud or
Chenner who covers chips and our
reporters who cover like the big cloud
providers. So it's very collaborative.
Yeah, I'm sure it has to be. Uh, okay,
so let's see what you do, right: you
write the AI Agenda, you
also appear on TITV, which is like your
new sort of TV channel,
you report scoops like once to
twice a week, I feel like. Uh,
how do you do it? What
else is part of your role?
Yeah. Yeah. I know it's kind of funny
because like I think for me cuz I came
from finance I never had done a
journalism job before that. Like I still
feel like I'm pretty new to the field
and learning about how things work. And
whenever I first joined, I think you
kind of just assume that as a writer
like most of your time would be spent
writing, but that's actually probably
like 20% of my day. Um, I think like the
vast majority of my day is spent maybe
like I don't know I don't know if VCs
would like this analogy, but maybe in
some ways like how a VC might where
you're constantly reaching out to
founders and interesting startups and
asking people like, "Oh, if I want to
learn about XYZ topic, who should I be
talking to next?" So like the vast
majority of my day is actually on on
calls with with with you know founders
and researchers and investors and
basically just like begging them and
being like hey like what's like the
hottest gossip that you guys know or
like what's like the coolest new trend
in AI that that you've seen.
And so I think it's just constantly
trying to stay ahead of what's in the
mainstream news about AI. And I think
for me two things that I really care
about are, you know, maybe you relate to
this, like previously working in finance
is thinking about like the business
models and the financials of these
companies. So we've done a bunch of
pieces or like I've done a bunch of
pieces around like inference providers and
like their gross margins and business
models, which which initially might
sound very boring, but I think is
actually really interesting. And then
also like on the tech side of things
like I think we're always trying to
understand what is the trajectory of AI
development like what are the obstacles
and how are the big AI labs trying to
get you know past those those
challenges.
Yeah we can get right into it. I was
actually going to start with the labs
first but I mean since since inference
is coming up. Uh, today's big news is
Modal is raising a billion dollars. I
saw the other day Fireworks is raising
at 4 billion. Things seem on fire
everywhere but like you know what's
going on in the inference market. Yeah.
Yeah.
Yeah.
What's the high-level take for
people kind of out of the
loop?
Yes. The high level take. So, yes.
Basically, I'm sure a lot of your
listeners already know this, but these
inference providers are companies that
are, you know, helping developers like
run and train and customize open source
AI models more easily than they would be
able to otherwise. And so, yeah, there's
just so many companies in the space. You
mentioned some of them. Fireworks,
Modal, there's Together, there's Baseten.
There's just a ton of these companies in
the space. So
Fal is another one. Yes. Um, and so yeah,
it's it's very interesting because a lot
of these companies have gotten so much
funding, but there's still a lot of like
skepticism around this space and like
whether these companies are actually
going to end up being like, you know,
like billions or trillions of dollars,
like trillion dollar company. Um so I
think a lot of people like think
basically just call these companies like
GPU resellers. So they're just like,
"Oh, all you're doing is just being like
a cloud basically,
you know, helping developers get access
to chips to run models on. So you're
like a knockoff Amazon or like a
knockoff like Azure or Google Cloud."
Um, which is like a bit derogatory. But
I think that's kind of the con
argument, the argument
against them. Um because also unlike
software companies, they have to like
spend so much money getting the you know
getting those chips and they have to
keep a bunch of them like unused at any
given time in case there's a sudden
spike in demand. I think like the
argument for them that I've heard from
investors is just like yeah like as you
know AI apps become more popular like
we're going to see the amount of money
going into AI inference to just like
explode and go way beyond the amount of
money and compute being spent on on
training. So we should have a bunch of
companies that are available here to
like help developers build these apps
and like access models really quickly.
And there are also
some companies in the space that are
starting to think about maybe
having their own data centers and moving
more into like the data center like
truly cloud provider world versus just
like I'm going to use chips from
existing clouds. So yeah, I don't know.
It's like a very interesting space and I
think it's just an example of like a new
category of startups that really didn't
exist like even 2 or 3 years ago that
are just popping up because of like the
demand for AI models.
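The idle-capacity point Palazzolo makes above can be made concrete with a bit of back-of-envelope arithmetic. A minimal sketch, where the hourly cost, resale price, and utilization figures are invented for illustration and not numbers from the episode:

```python
# Back-of-envelope sketch of inference-provider economics. All numbers
# are illustrative assumptions, not figures from the episode.

def gross_margin(price_per_gpu_hour: float,
                 cost_per_gpu_hour: float,
                 utilization: float) -> float:
    """Revenue is earned only on utilized hours, but cost accrues on
    every hour of the reserved fleet -- including the idle headroom
    kept on hand for sudden demand spikes."""
    revenue = price_per_gpu_hour * utilization
    cost = cost_per_gpu_hour  # paid whether the GPU is busy or idle
    return (revenue - cost) / revenue

# Hypothetical: resell at $2.50/hr a GPU that costs $1.50/hr to reserve,
# with only 70% of the fleet busy at any moment.
margin = gross_margin(2.50, 1.50, 0.70)
print(f"{margin:.0%}")  # ~14%, far from software-style 80% margins
```

This is why the "GPU reseller" framing stings: unlike software, every point of idle capacity comes straight out of gross margin.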
Yeah. And you know, I think, just to
point to another example that we didn't
mention, but CoreWeave is technically a GPU
reseller, um, and you know, a public
company. So like definitely a lot of
I mean I I would be curious for Allesio
like what you think about this space
like from kind of like the the VC side
of things. give us your hottest take.
I think on the VC side initially I think
everybody well not everybody obviously
because some people invested in it. I
think the first the timeline of when the
existing companies were going to catch
up I think people overestimated how
quickly they would do it. I think the
same thing happened in coding, where I
think a lot of the sentiment
initially was like, well, you know, I mean,
we had Cursor on our podcast like two
and a half years ago,
and the sentiment is like oh it's great
open is using it but like GitHub cannot
possibly let them run away with it you
know and I think with inference it was
like similar where it's like hey you
know obviously these tools are growing
quickly because you know the main
providers don't really offer these and
um blah blah blah and then there was
kind of like this question of is there
like a AI air pocket in pre-training
where you kind of have all these like
commitments on the GPU and the
hyperscalers but then they're going to
go back to inference and so it's kind of
going to bring the prices down. Can you
really build a company in the space? But
I think Fal especially has done amazing
because um media is still very open
source driven when it comes to image and
video generation in a way the language
is not as much so I think they they've
been doing great and you know very happy
for their announcement today but yeah we
haven't done any, but also we have a
smaller fund, and I think all of these
inference things are like big money
investments. Like you cannot do a $5
million round in like an inference
company, you know, you kind of need a
good chunk of capital to to run the
business. So obviously luckily the VC
model has evolved now to have these mega
funds that are able to support this, like
100 million. Like, you know, Fal raised 125
million today. You need a lot of
money to run these things, and Together
raised a huge amount of money. Like, it's
not easy to survive and I'm curious to
see where where they land in a in a few
years.
Yeah. No, it's crazy to just see like
casually like a hundred million dollar
plus round and being like, oh yeah,
that's so that's totally normal for an
early stage AI startup. So yeah,
I mean two billion seed rounds is normal
now. So you know
that's true. That's true. Yeah, the the
small the small seed round of of two
billion.
I mean we we'll go we'll get to the
labs, but I just wanted to stay focused
on on the infra side. Uh we did have a
chat with Eric uh from Mold and you know
we talked a little bit about how it's
almost a little bit like a bank, right?
Like, it's basically, what you're
doing is maturity transformation of
long-term contracts into shorter-term,
you know, usage of things. And you know,
I think whoever is the best
technologist uh and the best developer
experience is going to win there in
terms of serving out these GPUs for
providers and the other thing I always
think about from a finance point of view
that maybe people don't even think don't
think about that closely but obviously
push back if you if you disagree is that
it's basically a proxy for the gap
between open models and closed models
because if the open models do where um
the sort of the API hosts the inference
hosts do better and if open models fall
behind then they do worse. So far,
DeepSeek, obviously still one of the
biggest news of the year,
was a big gift to the
inference providers, and the sort of
relative decline of Llama and the
disappointment of Llama 4 was, uh, you
know, a problem for them. So yeah, that's
how I'd characterize it.
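The "bank" analogy above, committing to long-term capacity and reselling it as short-term usage, can also be sketched numerically. All rates and terms here are invented for illustration:

```python
# Toy model of the maturity transformation described above: a provider
# commits to a multi-year GPU contract at a discounted rate and resells
# the hours on demand. Numbers are invented for illustration.

def breakeven_utilization(committed_rate: float,
                          on_demand_price: float) -> float:
    """Fraction of committed GPU-hours that must be resold just to
    cover the committed cost; below this, the provider eats the gap,
    much like a bank whose deposits outrun its loans."""
    return committed_rate / on_demand_price

# Hypothetical: commit at $1.20/GPU-hr for 3 years, resell at $2.00/GPU-hr.
print(breakeven_utilization(1.20, 2.00))  # 0.6 -> 60% of hours must stay sold
```

The term mismatch is the risk: the commitment is fixed for years while demand (and the on-demand price) can move week to week.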
Yeah. Yeah. No, I think I think that's a
very fair point. I do think there's
probably a lot of usage of the inference
providers for not necessarily to get
state-of-the-art models, but because
like there's so many tasks
that you'd want to use a cheaper
model for. I know even whenever
the DeepSeek R1 model first came
out, one of the major startups in
the space was like, yeah, 75% of the usage
that we see for R1 is people trying
to distill it into whatever
use case they want, which I
think is like really interesting because
I don't know if I would think that like
distillation is like a like I I don't
know if I would say it's a good thing
that 75% of your usage is distillation,
but I mean you can't deny that like
whenever you see models come out like
DeepSeek like that that is a huge draw
to these inference startups for sure.
Yeah. Um I I think that's absolutely it
and also partially because Deepseek was
huge, right? There's a lot of these, like
Kimi K2 and, what's the other one, GLM
4.5, right now. All these are hundreds
of billions of parameters like very
costly to to serve for like sort of
day-to-day use. So you have to distill
it somehow.
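Since distillation keeps coming up, here is a minimal sketch of what that 75% of R1 usage is doing mechanically: training a small "student" model to match a large "teacher's" output distribution. This is the standard soft-label knowledge-distillation objective with toy logits, an illustration of the general technique, not DeepSeek's or any startup's actual pipeline:

```python
# Minimal sketch of the distillation objective: the student is pushed
# toward the teacher's full (temperature-softened) output distribution,
# which carries more signal than hard labels alone.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the student's softened distribution to the
    teacher's: zero when the student matches the teacher exactly."""
    p = softmax(teacher_logits, temperature)   # soft teacher targets
    q = softmax(student_logits, temperature)   # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))          # 0.0, student matches
print(distillation_loss([0.0, 0.0, 0.0], teacher))  # > 0, student is off
```

In practice this loss is averaged over tokens and usually mixed with a hard-label term, but the draw for inference startups is the same: one expensive teacher serving requests can bootstrap many cheap task-specific students.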
Yeah.
Yeah. Uh, so for the other more
recent news, I'm just going to go in
sort of reverse chronological order, but also
feel free to just do tangents on on
themes and stuff. Uh the next one is
meta. This is a fun one. I was super
happy to reach out to you because you
not only like breaking news and scooping
things, you actually put opinions into
your newsletter, which is I think very
brave as a journalist,
right? Because like sometimes you have
to like meet these people face to face
and you're out here saying Meta should
not have super intelligence. What's up
with that?
That's true. Yeah, I know. Whenever
we published that, with that
headline, I was like, "God, I feel like
I'm going to get some angry comments
with this." No, but like
I don't know. I think there's like two
ways to look at this. Like I think one
way is like obviously if you're going to
reach super intelligence like I think
every big tech company with the
resources like does not want to miss out
on that. So like it makes sense. Meta's
like, you know, if super
intelligence is the future, I want to
be part of that, and I want to be part of
that development. But
I think just
if you take a look at like how Meta
actually makes money today, right? It's
through social media apps and ads
and these things that have been
around for quite a while.
Oh yes, there's my lovely piece.
I'm providing visual aid for people and
also people can Google it. Yeah,
thank you. Yes,
but like if you think about it, a lot of
things like ads like you don't need
super intelligence. So whenever you
think about all the investment that Meta
is putting into AI today, which is great
and great for a lot of open source
developers like
maybe what's actually good for their P&L
and their finances, is to not focus so
much on building these like insane huge
state-of-the-art models, which they've
obviously struggled with more recently,
but it's to take things that are, you
know, 70 80 like 90% as good as the best
stuff out there, and figure out how to
use it in its apps. Like I think that's
like a problem I've seen from like Meta
and a lot of other companies where it's
like the models are getting better.
That's great. But it's like if I'm going
on Instagram and like the only AI stuff
I see is like, oh, do I want to talk to
an AI version of, like, the Hawk
Tuah girl, and I'm like, who's
using this? Like this cannot be
like where consumer AI ends up, right?
Like that's just so depressing and like
I don't know who's using this stuff. So
I'm like I feel like we've seen progress
in models and even for Meta like maybe
the I mean as we saw with Llama 4 even
the progress wasn't like amazing it
seems but like we've seen even less
progress like in actual products and
like innovation there. So I'm like I
would love it if Meta, you know, came up
with some really cool like AI apps that
I couldn't even think of versus just
like more and more models. But yeah, I'm
curious like what you guys think. And I
think people have very strong opinions
on whether Mark is making the right
moves with like some of these big hires
and the super intelligence labs and all
this craziness going on.
Uh yeah, I mean it's really hard to know
what to think. Obviously, so we're we're
recording this the day after Facebook
had their earnings report and Mark uh
put out this personal letter, which was
very inspired by Nat Friedman,
on personalized superintelligence. I
host the AI Engineer Summit and World's
Fair, and at the summit in February,
uh, Sum from Meta was also talking about
personal intelligence, which I think is
of a kind. Like, I think, um, that
group, that part of
Facebook, really believes in that
agenda. We're hoping to talk to Mark on
the podcast so we can ask him directly. A lot
of people think that it's basically he
just wants to be the dominant player in
every platform. So like you know if VR
is a thing, fine, we'll do VR. If crypto
is a thing, fine, we'll have a
crypto play, you know, and like this is
this is the AI play, but
obviously I think he's going after this
bigger than basically everything else
he's ever done, which is his right as a
as a young founder still still in
control of his company. But like it's
it's also very bold and it's very
inspiring. I mean, we haven't shipped
anything yet, so we don't know what like
what what it's going to look like, but I
think it makes sense. I don't know about
business model, but like you know, he
was happy to bleed like $20 billion a
year on VR, so like why not?
They got glasses. They just put AI in
the glasses and it's fine. We're all
gonna wear glasses.
Yeah.
Yeah.
Yeah. I mean it was I mean even like
reading his the kind of letter he put
out about personal super intelligence
and like hearing the way he talks about
super intelligence is very interesting
in contrast with like you know like
anthropic open AI these other players
that are like we want to solve like the
world's most dangerous like diseases and
you know solve all these like huge
problems we have in science. And then
yeah it's interesting. And I think Meta
has really just doubled down on this
idea of like, okay, like that's cool and
all, but just like, how about the
everyday person? Like, we want to help
them, too. Which I think I think is
good. I think it's good that they're not
trying to just do what the other labs
are doing, but I feel like I still have
to see how the personal super
intelligence like marketing and angle is
going to play out with like consumers
and stuff.
Yeah, totally. Well, the other big part
of MSL, which obviously everyone's very
excited about, is the pay packages that
people are getting. you guys have been
reporting very diligently on that. And
this is also, like, very breaking
news. I'm glad that we're doing this
today rather than like maybe last week
because I get to update
things, right? And we get some
fresh perspectives.
A Meta spokesperson pushed back on
the sort of billion-dollar offers
that have been rumored, right? Uh, so,
like, we're past the 100 million mark
right now. We're at a billion.
Um, and it seems like he deleted this
tweet, but basically he sort of
pushed back on the
pay packages. But, like, how do you know
when you're being
told the truth, right? You have a lot of
people leaking you information. There
are a lot of people accusing Sam Altman
of throwing out numbers in order
to play the media. I don't know how true
that is. It could be true. It could not
be. I just I just don't know. Like, how
do you know when you're being played?
Yeah, it's tough. It's something that every journalist has to struggle with and deal with. I think first is thinking about the incentives of the person you're talking to. Obviously, if I'm talking to a PR person at OpenAI or Anthropic or one of the big labs, they're going to be pushing a certain narrative that's probably positive about their company and maybe not so positive about other companies. But even some of the researchers might have very strong opinions about, say, the safety culture at Anthropic, or Mark's strategy at Meta. Everyone has biases, and they want to make themselves and their companies look good. So you first have to keep that in mind.

So with pretty much any tip like this, where it's a huge number that makes one company look good and another company not look so good, we wouldn't report it unless we had multiple sources, and we would want to know who the offer went to. If it's over multiple years, how much of it is cash? How much is stock? Is it over two years? Is it over four years? There are a lot of ways to manipulate a number like $1 billion and make it seem way more crazy, or way less crazy, than it actually is. So it's very tough, because there are crazy numbers like that getting thrown around, but as journalists we have a responsibility to get into the details and question a little bit more: okay, why is this person telling me this? Is it actually helpful for me to write this number, or are there other things I would also need for this to be a story that tells the full truth?

So yeah, the reporting on some of these comp numbers has been a little tough, because at this point anyone can throw out a hundreds-of-millions-of-dollars number, or a billion-dollar-offer number, and everyone goes, okay, that sounds reasonable. So I think journalists should include more details on exactly how these offers are structured, which could make them look better or worse.
Yeah, I have Dylan Patel pushing back on even the $100 million signing bonuses that I think you guys have reported. Unless you've seen the actual offer, it's really hard to know firsthand what's actually going on.
Yeah, that's true. And there are situations where Sam Altman goes on a podcast and says, yeah, Mark's been giving hundred-million-dollar offers to our researchers, and that's immediately a headline because of how important Sam Altman is. But it doesn't really go through the same scrutiny as a normal news story from a publication. So I think a lot of CEOs are very smart about this; they know how to throw stuff out so it immediately catches on with the news.
Yeah, because that opens the door for counter-narratives. To me it's fascinating how CEOs use the media as a tool to play games with other CEOs, and obviously we're part of it, so it's a very meta thing. But recently, I think this week, there was this story about how multiple of these offers have gone to Thinking Machines people and all of them have rejected it, right? And that's definitely a planted story from Thinking Machines. I don't know how else to say it: they just want to look good.
I mean, yeah, that story definitely makes Thinking Machines Lab look good, for sure.
Okay, cool. Before we move on from Meta, I just want to leave the door open: any other themes that you're watching on the Meta side?
Yeah, I mean, I think we covered a lot of the really interesting ones. One theme that Mark has even touched on recently is whether they'll continue open-sourcing models or start closed-sourcing stuff, and what that means for their business model. Because right now, Meta doesn't make money from an API the way that OpenAI and Anthropic do, but if they start closed-sourcing models, that could change the shape of how they think about making money from this. So I think that's very interesting. And for me as a consumer, I'm always really interested in consumer AI, because I feel like there's not as much coverage on it as I would like. I personally use a lot of AI stuff, but nothing that's truly transformed my non-work life. So I'm very curious what Meta will come out with in consumer AI.
And you know, if there's any lab that would do it, it's them and OpenAI, I guess. We did a podcast; we also had our first VCs on the podcast, Justine and Olivia Moore, who cover consumer AI, and that just seems like a fun job. They scroll TikTok for a living. That's what they do.
Oh wow. Okay. Well, yeah, I also scroll TikTok for a living, so I wish I could get paid for that. But no, it's a very tough job. Honestly, I think investing in consumer AI is in some ways way trickier than investing in enterprise AI. So props to them.
Yeah, I've come around. I think the Silicon Valley narrative is that consumer AI is kind of a crapshoot. Sure, every now and then you get a Facebook, but most of the time it flops. But I think the discussion, or vibe, in Silicon Valley has shifted a little bit, mostly influenced by ChatGPT: if you build a strong consumer business, you can get the scale and the diversity to build up infrastructure and then data and all that. It seems very powerful, but obviously also subject to a lot of fads. Anyway, I'm not a consumer guy, you can tell. Cool. What other lab is top of mind? I guess we're recording this with GPT-5 imminent. That's another big story everyone's talking about. Stepping back, how has your coverage been of OpenAI in general? A very tricky, large, secretive company to cover. What's it like?
Yeah, I mean, it's crazy. Well, we are hiring specifically an OpenAI writer. So that can tell you how much work goes into covering that company, and the fact that we...

Only OpenAI.

Yeah. We want to have one person that's just reporting on OpenAI, which is kind of crazy. I can't really think of past examples where you would have a journalist focus on one early-stage startup all by itself. But yes, it's a very complicated and somewhat secretive company, especially because there are so many angles to come at it from, right? There's the consumer product, there's the enterprise side, there's the work it's doing with the government now, and the regulation and laws and policy it's involved with. And then obviously there's stuff like Stargate and compute and the crazy data center projects they're working on. Really, any angle under the sun, you can write about it with OpenAI.
I think specifically whenever I think about GPT-5, which we recently wrote a story on, what's so interesting to me is that over the past year we've seen a lot of progress, especially in reasoning models. But underneath all of that has been the fact that GPT-4 has been kind of the leading GPT model for a very long time, and everyone's been asking, when is something worthy of the GPT-5 name going to come out? We saw 4.5, which was low-key kind of a flop, and I think it has actually been deprecated from the API now, which is crazy.
Yes. I'm very sad.
Yes. But as much as we see o1 and o3 doing super well, this has been the big question: what is going on with GPT-5? In the story that we wrote, we talked about how at least the people we've talked to who have tested it so far have been pretty impressed. They seem to think there have been improvements in a number of domains, both science and coding but also areas like creative writing and general knowledge.
But the question we have coming out of this is: what does the performance of GPT-5 say about the trajectory of AI? If GPT-5 is a flop for some reason, what does that mean for reasoning models and the good progress we've seen there? Is o4 also going to be a flop? On the flip side, if GPT-5 is good, is that necessarily because pre-training is still going super well and we've been able to scale it up nicely and everything's going smoothly? Or is it actually because reinforcement learning is the area we're getting the most improvements from now? And if that's true, how much further can we go with reinforcement learning and post-training? So I think that's going to be the most interesting part coming out of the GPT-5 launch: not just thinking about what this means for OpenAI's standing in the AI race, but literally what this means for AI progress for every AI lab.
Yeah. How do you think about poking into the other labs? So GPT-5 is coming out soon. Are you in parallel trying to see, okay, is Anthropic going to release something? Is Gemini going to release something? Is xAI? You know Elon always wants to drop things at the same time. What's the routine?
Yeah. I mean, we're always keeping an eye on all the labs, and there definitely have been some moments; I know a year or two ago, OpenAI released their multimodal model the day before Google released theirs. So there's always some fun little drama happening in the background. But for the most part, and I hope I don't jinx myself, right now GPT-5 is the main model on the horizon for us. Most of the other labs have somewhat recently released models, so I would expect it to be a couple of months before they come out with something. But, you know, we see Anthropic, for instance, raising a huge round right now, and I have no idea if that's at all affected by GPT-5 coming out and them wanting to really accelerate things because they're worried. But depending on how this model goes, we will definitely see more models released from some of these other labs, or them wanting to raise rounds and things like that to get the next model here faster.
Yeah. I mean, there's a cycle: you ship a big model, you get some traction and noise and excitement, and that helps you with the fundraise. Next thing you know... I'm happy to move on in terms of topics, but obviously OpenAI is very huge. I do think it's also very interesting when each lab CEO puts out a blog post, because it often tends to lead the fundraise. Most directly, for example, Machines of Loving Grace came directly before the hundred-billion-dollar fundraise for Anthropic, and I think there's some correlation there. When Dario shows up somewhere and says a third of the workforce is going away, that's also a very obvious fundraising sign. I don't know if people read the tea leaves like that, or am I being a bit of a conspiracy theorist?
Yeah. I guess I think about, what was it, the Anthropic memo where they're talking about taking money from the Middle East. Although I think that one's pretty obvious; I was like, okay, definitely something's going on.
That was wild. What was that like?
Yeah, that was very interesting. I mean, I don't know Dario personally or anything like that, but in some ways you've got to respect somebody who's that open with their employees, for better or for worse. Potentially for worse, because it leaked, and I think people were not super happy with them. It's interesting with AI; there are a lot of these conflicts. A lot of the companies, for instance, are removing from their terms of use things like, you can't use our AI for weapons or the military. And then sometimes you hear from them and they're basically saying, well, obviously I don't want our AI to be used to kill people, but if we take this out, then at least we can be in the room with the military or the government and be able to give them advice and shape what they're doing. So there are a lot of weird moral conflicts with these companies, around who they take money from or who they allow to use their models. That's very tough. But yeah, what did you think of the Dario memo? Were you surprised?
You know, you see these things, and everyone knows that when you put something like this out, it's going to be all over the press. So why would you put in anything potentially embarrassing? You don't have to feed your critics; you can just say, hey, we're looking at the Middle East. You don't have to say we're compromising our principles to go into the Middle East. Anyway, more broadly, I don't have a strong take on that. Everyone has to find their own way. Google, you know, had to deal with the "don't be evil" slogan they had in the early days, and at some point, what does evil mean to you? It doesn't mean the same thing to me if I live in a completely different country and context than you. So that's one form of the discussion. The other thing that happens a lot with big founders and big CEOs, and I've had offline conversations about this, is that a lot of them want to build the AGI that they approve of in order to fight the AGI that they don't approve of. The only thing that can beat a bad AGI is a good AGI, and so they'd rather at least be part of that debate than not be. I think that's a very motivational factor.
Yes, that's very true. I feel like it's a very tech thing to say, I know how to protect us from the evil tech.
I mean, somebody has to try, and if you don't try, someone else will.
That's true.
So either you want to be high-agency or you're not high-agency. And the other thing is, if you have the ability, then maybe you have the responsibility, you know, not to be too Spider-Man about it.
That's true. That is true.
So yeah, the other things about Anthropic: they made a real dent on the coding side. Are they going to be a coding-oriented lab? They've made some movements in financial services and other verticals, maybe defense, but coding is obviously a very key battleground. We just talked about how GPT-5 is maybe OpenAI pushing back on Claude Code and Sonnet and Opus. Is that a big part of the conversations you're having?
Yeah, definitely. It feels like Anthropic has decided that coding is their battleground, and so far they've been doing really well. One thing I've heard, and I'd be curious if this matches your opinion, since I'm obviously not a developer, is that one reason developers like Claude for coding so much is that it's not only good at competitive or academic programming tasks, but also at more practical, everyday things: working with huge codebases that have lots of old legacy code in them, or just being persistent with solving a coding problem and realizing, hey, I've tried X three times and it hasn't worked, so let me try something different. So as scientific as code feels, it seems like behind the scenes at Anthropic it's more art than science; there's a lot of shaping of the model happening to help with what engineers deal with day-to-day, versus academic, competitive programming questions.
I think the $200 a month for unlimited usage definitely helps push people toward Anthropic, with Claude Max and Claude Code. But yeah, the models are definitely very good at that. And I think they now have a great harness to get data to make the models even better. In Claude Code, it basically lets you respond and tell Claude what it got wrong if you don't want it to do the thing it says it's going to do. Obviously OpenAI wants to get the same type of data with Codex, but I think they're way behind on adoption, at least in our circles; I'm sure they have a lot of usage in other groups. But yeah, the models are good. And one thing: Chroma released this context rot paper recently about context utilization, and the Claude models are actually the best at using longer context. So maybe there's been this question of, you know, maybe the model isn't necessarily better at code; it's just better at using the context, and that day-to-day behavior ends up contributing a lot. So like you said, we don't yet know; it's better, but why is it better? It's unclear, but the vibes are definitely good.
Yeah, it's hard to explain in words. I mean, I know Anthropic has come out with some new news about rate limits, but how upset would you guys be if they raised the pricing from $200, or put some pretty intense usage-based pricing on it?
Well, you can pay as you go on top with the API. So I think that's what they're going to test: how much over do people go with the pay-as-you-go API pricing?
I don't know. I always said this with Cursor: I would definitely pay 200 bucks a month for Cursor. When it was 20 bucks a month, it just didn't make sense that it only cost 20. I don't know what the Claude number would be. I think, yeah, 400 I would probably pay.

Okay.

But above that, I don't know. Maybe once I start using the pay-as-you-go API on top of it and I realize how much I'd spend, I'd go, okay, maybe I should keep doing it. I don't know. It's going to be interesting. I don't know, Sean, what's your take?
Yeah, how much are you willing to pay for a Cursor, or, you know...
Look, I'll pay as much as I get value out of it. If my job is to produce a lot of code, and there's some top tier that's generally not just producing slop but actually productively creating code, then I would definitely pay thousands a month. When they came out with the $200-a-month plan for ChatGPT Pro, I forget the actual names of these plans, I was out here saying we're going to have $2,000-a-month tiers on ChatGPT; they'll just keep stuffing products and model value in there, and you'll get it. Similarly, I think Cognition's Devin is in the $500-to-$1,000-a-month tier, and I think prices will go up as long as you're using that inference productively. For Anthropic and Claude Code specifically, it's really more about, quote unquote, the margins, which is the meme I always post now every time there's an Anthropic story about how their revenue projections have gone from four billion this year to nine billion because of Claude Code. And I always post: what are the margins? You guys keep talking about top line, but if you're all negative margins, then it's not that exciting.
Yeah. No, I mean, we had a story yesterday about OpenAI and how they've reached about $12 billion in ARR, but their burn went from a projected $1 billion to something like $8 billion. So the question is, revenue is great, but how about those margins? I think that's very fair.
Yeah. And the major cost factors are going to be compute, personnel, which is now an increasingly key one, and data. We have this concept of the four wars of AI, which is all about the key battlegrounds people are fighting; the last one is multimodality. There's a lot of really interesting work being done there. It's a balancing act: you need a visionary founder pulling off successes, you know, two to three times a year, and you just keep the plates spinning while you're growing this thing. But I really wonder what the, quote unquote, terminal value, just to use the finance term, of all these things is.
Yeah. No, I mean, especially now that we saw the boomerang with the Claude Code leads going to Cursor and then coming back.
Yeah, what was that? Can you double-click on it?
To be honest, I know we kind of broke that story, but I think your guess is as good as mine. We know they went there; they were there for about two weeks and they came back. Our sense is that it wasn't a situation that was just about comp, where Anthropic just said, oh, we'll pay you more. But yeah, I don't know. It's a really interesting question. I don't know.
Okay. So, for example, you're saying it's not about comp in the sense that they were poached by Cursor, and then Dario raised this $100 billion and turns around and goes, hey, where are my Claude Code guys? And they're like, screw you, get them back.
I mean, yeah, my sense is that it wasn't that. I'm sure Cursor gave them a great package, and I'm sure if we heard some of those numbers, we'd be like, wow, that's awesome. But Anthropic has a lot of money, and they care a lot about their workers. If it was just about comp, they could have probably just given them a higher offer; they really wanted to keep them. So I think there was something else going on. I don't know if it's culture, or just how interesting the work is; something must have brought them back to Anthropic.
Image is the other thing, right? It's very damaging for somebody to go in, look at Cursor, and go, yeah, I'm not sticking around, and then leave. So, to be super clear to anyone listening, this is not insider information. These are just the things people say in the Valley about how it looks when nobody talks about the actual reason. People just speculate.
You did have a nice piece on the retention curves of these different coding agents.

Yeah, that's a big story.
We can talk about that. We've kind of segued from the model labs into the coding agent startups. What was your summary of the whole Windsurf situation? Let's just leave it at that: the summary of Windsurf.
Yeah, the summary of Windsurf. That's actually really funny, because it happened while I was on a boat in the Mediterranean. I came back and was like, oh my god, stuff went down while I was gone. But it's very interesting. I think it's another example of these strange acqui-hire deals that are happening. Windsurf got so much more attention, and honestly hate, from people in Silicon Valley and people on X than some of these past deals, because of the structure and how many people were left behind. There's also this sense that a lot of these acqui-hires have happened when the startup getting acquired is maybe not doing super well, or has kind of reached its peak and is on the way down. But it felt like Windsurf was on the up and up, and people were still really excited about them. Maybe they just got a deal that was too good to turn down, but I think a lot of people were disappointed that a company doing so well would choose to go the acquisition route. But yeah, I don't know; as developers, I'm assuming you guys would prefer Windsurf to stay an independent company so you have more coding assistants and options to choose from. Maybe that doesn't matter, but yeah.
I think Windsurf as a product is going to stick around, and people who really like Windsurf, and I was a Windsurf user for a long time, are going to keep using it because it fills a need; obviously Cognition bought it for a reason. So no particular thoughts there. I think people are mostly in drama about, one, the sort of weird, I've been calling them exec-hires, because it's not quite an acqui-hire, you're just hiring out the executives; and two, obviously the, quote unquote, startup founding-engineer employer contract, which is being called into question now with what happened. Yeah, the thing that is uncertain for me is, there have been like five of these: Character, Scale, all of them. Is this the new normal? Was the Windsurf deal any different? Because somehow the others drew a lot less outrage than this one, even though a lot of the surface-level details were the same. And I would really love to talk to one of the lawyers involved in these things: how did they negotiate the deal points? And technically the entire company is cashed out, so how do the investors feel about it?
Yeah, it's all super interesting. No, I mean, two things. First, talking to investors, I think they're kind of over these types of deals. They're like, hey, this is fine, but we don't invest in companies so they can end up in a weird acqui-hire situation a couple of years later. We're investing in them because we think they can go public, or at the very least get bought in a pretty good exit. So I think a lot of VCs are not super thrilled with these deals becoming the new norm. But to something you said earlier: there's a sense that whenever you join a startup, you're going to be compensated for the risk you're taking on by joining something unproven. And now you're having these situations where some people do get rewarded for taking on that risk, but other people who have been there just as long, and feel like they've put in a lot of work, are getting seemingly nothing out of the deal. So yeah, I think it's definitely messing with the trust between startup employees and CEOs and founders, for sure.
Yeah. The one footnote I would put there is that, even in the Windsurf deal, I think they did get offers at Google, but there was some kind of revesting thing going on. Cool. Any other coding agent commentary while we're still on the coding topic? Anything else you're watching? How do you cover coding in general if you're out in New York and we're over here?
Yeah, if I'm not a developer and I'm trying to understand what the heck is going on with coding startups.
Yeah.
Yeah. I mean, a lot of it is just finding sources and developers that I really trust and being very straight-up with them: hey, I'm not a developer, but show me a couple of examples of things you built with this; tell me your opinions about what they're good for or not good for.

One thing I'm interested in is the rise of some of these open-source coding assistants like Cline, which I know was on your podcast, and they're announcing their Series A today. And also the rise of coding assistants targeting non-technical users, like Bolt or Lovable, where you see a bunch of startups that have raised a decent chunk of funding saying, we want to target either people wanting to make prototypes of actual apps, or small businesses that want to make a website or something that's not super complex but will get them up and running. There have also been a lot of stories about what can go wrong with vibe coding: it can accidentally delete your code, it can leave out some security thing you need, and lots of stuff gets leaked. So yeah, vibe coding is great, but I think we're only just now starting to see some of the consequences of what happens when you're not careful with it.
Yeah, very mixed feelings on all of those. Go ahead, Alessio, on the Cline thing.

I was going to say, it was just funny that Forbes did an article and used the Latent Space bot screenshot as theirs.

What? Nice. For listeners, the screenshot guy is Alessio.

It's actually kind of cool that they stole his artwork, I guess. I don't know. Yeah, it was funny, anyway.

I feel like you guys should get a commission for that or something, you know.
It's Forbes, who cares. I mean, okay, I'll go there. I don't know if it's offensive or not, but look, Forbes and Business Insider, and you don't work there anymore, but Forbes and Business Insider really fell off. People are even saying TechCrunch really fell off, while The Information is actually up there. We will never have those people on; we'll obviously talk to The Information. But what's going on with legacy online tech media? It has declined really hard, and I don't know what's going on. Is there chatter on the reporter side?

Yeah. I mean, media is a very crazy industry, and I say that as somebody who still feels pretty new to it; there's just a lot going on. Business-model-wise, ads are not working as well as they used to. The Information does well because we have this subscription model, but still, I cannot tell you how many people complain to me, like, why is your subscription literally $500? That's insane. We do it because we need to stay alive as a business, and I want to get a salary, but you're also giving up some accessibility for people who want to read our stuff. Although we do have many free newsletters; I'll put that out there.
So the business model stuff is not going super well, and I think a lot of publications are struggling to find their niche the way they have in the past. The Information's whole thing is scoops and exclusives, but if you're a company that's more known for summarizing the daily news, or even for some sort of opinion or analysis, now there are people on X doing that, or you can just have ChatGPT send you a summary of the news every morning, so you're competing against so many other sources of news. And in general, a lot of people just distrust media now and are turning to influencers they trust, people on social media, people on X, for news, because they feel a personal connection with that person and think they're trustworthy. So I think the media is fighting an uphill battle right now, and it's tough. And that's tech media; I think covering politics is way, way worse.
Yeah. Well, that's out of her domain, but I always think it's fascinating. I hear some really awful stuff about TechCrunch and I'm like, not TechCrunch, you know? TechCrunch is an institution. Anyway, okay, there are two angles I really want to focus in on. I'll start with the first one, which is scoops and exclusives. You guys are amazing at it. Could you peel back the curtain a little bit, obviously not the whole secret sauce or anything, but how do you get people to talk to you? I try to get there a little bit as an amateur, but you're really good at it. Let's face it, you're really good at it.
Thanks. I mean, I don't know about that, but I think everyone kind of has their own approach, their own thing that they do. I know some reporters who are super blunt, very to the point, and they will just be super persistent, call you up, ask you straight about the deal, and kind of give off the vibe of, hey, I know what I'm talking about and I'm going to get this information, so you should help me. And then I know other people that are much more
That's Kara Swisher for sure, right?
Right. Oh, yes.
I mean, the Kara Swisher model.
Yes. And then also I know other people who are much more disarming, who say stuff like, "Oh, you know, I think I saw this tweet that said this, did you hear about that?" For me, I cannot be scary or aggressive; I unfortunately just don't have that gene, which I wish I did. So my thing is mostly to try to be super curious, and to talk about my newsletter, where I do cover things that are not scoops, and be like, "Hey, I love a good scoop as much as anyone else, but I also care about the tech and the science, so let's just start there and talk about that." So you need to figure out what works best for your style and who you are as a person, but then also think about the person you're talking to. If I'm talking to a researcher, maybe I don't want to just come out and be like, "Oh my god, is so-and-so buying so-and-so," but start with more of, "Hey, tell me about your research, I'm just curious about it." Versus if you're talking to a VC, it might be more transactional: "Hey, I heard about these five deals. Have you heard about them? Hopefully I'm helping you, so can you help me out too?" So it's a lot of different approaches. I think it's also just not being embarrassed about your job. I will literally cold call people I've never talked to and just be like, hello, I know you don't know who I am, but please tell me this piece of information. And sometimes you accidentally call their mom instead of them, or you get yelled at on the phone, and then you just have to be like, okay, that's just the way it is, it's just my job, and I'll move on. So there's a bit of accepting the fact that this is your job and not caring if sometimes people are like, "What is this annoying journalist doing calling me?"
I'm surprised you actually cold call. I will never pick up a cold call.
I know. Yeah. That's something that, especially for millennials and Gen Z, we hate calling people. So that was lowkey something I had to get over, because I used to hate calling people and think, I'll just text them and maybe they'll get back to me. But no, you're just aggressively calling people all the time now.
I have a new respect for your work. Also, how do you protect sources? People trust you, and they can get fired for telling you things they're not supposed to tell you.
Yeah.
Do you just, like, reword things? You get a second source. You say "many people close to the matter." What are the techniques here?
Yeah. I mean, for me, protecting my sources is my number one priority. There are times where I hold off on a story, or somebody tells me something and then they're like, actually, I'm really worried, I don't want you to write about that anymore, and I'm like, it's okay, this is not that big of a deal. I've done stuff like that because I care about my sources and about them obviously not getting in trouble or getting fired. But yeah, there's lots of stuff you can do. We always try to get multiple sources on facts. And we have a responsibility to our readers, and we want them to trust us, so we try to tell them as much as we can about how our source knows this information: whether they were in a meeting, or whether they saw a document. We try not to use stuff like "a person familiar with XYZ," because that's quite vague; you don't really know what it means. But we do try to give some information while keeping it as broad as possible. So instead of saying "investor," we might say "a person who's talked to the executives," or instead of saying "a person who worked on this team specifically," you might say "a person with knowledge of this product" or "an employee at this company." So there's always stuff you can do. And it's just building trust with your source over time. I want them to trust me because I want this to be a long-term relationship. So even if we talk completely off the record the first ten times, that's totally fine. I'm in it for the long run. I just don't want them to ever feel uncomfortable, or like they're putting their job at risk because of me.
Totally. I mean, we do the same for people that talk to me and Alessio. So obviously, anyone listening, if you have stuff for Stephanie, I'm sure she's easy to reach. A lot of people seem to use Signal or Telegram. I don't think it's that encrypted, but that's a tool of your trade, right?
I think the other thing for readers is also a cool peek behind the curtain of how these things get formed. I don't know if you want to say anything to readers on how to read the tea leaves on the stuff that you do. But, yeah, it's fascinating.
Yeah, that's a good question. I like to think that we're pretty straightforward and to the point with our writing, as much as possible.
You are. I really appreciate that.
Oh, thank you.
Yeah.
Yeah. Sometimes, I remember when I first started covering AI here and trying to think about how to explain model weights in layman's terms, I was like, this is impossible. But for readers, I think one thing that we, and every media publication, can always do better at is taking a step back from the individual scoops and understanding what it all means more broadly. We try to write about that in our newsletters and on TITV. But a lot of the time it's really easy to get caught up in, oh, this model developer raised, this inference provider raised, and it's very hard to think about, okay, what does this all mean, or what does it mean that five inference providers raised in the last three months? So whenever you're reading, just keep that in mind, and we will link to related stories all the time. Definitely let yourself go down that rabbit hole of understanding how a story isn't just a story or scoop on its own; it's connected to this long narrative of the progress of AI over the past couple of years. Try to think about what it means in this broader story we're trying to tell. Obviously we want to do that for you as much as we can, but I think that's also just a way of reading these scoops that will hopefully help you understand more of what's going on in the broader context of the tech world than just a single story.
Yeah, I definitely encourage people, the way to elevate the conversation instead of just repeating what you saw on Twitter is to tie it into a broader trend, to have an understanding of why now, of the economic forces at play, the agendas of different players. A lot of what I do is just trying to read into that. Honestly, I have a little chip on my shoulder; I don't think we do it that well. But I'm kind of curious, do you guys maintain a list of trends you're tracking, databases of stuff? I almost feel like I can't keep up with how many theses I'm tracking in my head, because it all comes through conversations, and in a single conversation you can touch on 20 different things.
Yeah, definitely. I mean, that's the hardest thing sometimes. There are some organizational things you can do. We have weekly meetings about AI, or about deals we're chasing. And some of that also rests on our editors, because they're not in the weeds chasing down stories all the time, so they're better at saying, okay, here are the five trends we're thinking of writing stories about, and I'm going to assign this one to you, Stephanie, and let's talk about it every week.
Yeah. And, I mean, this is obvious, but stuff I do: I have a notepad of all the major trends I'm chasing, so at any point I kind of know what the top five priorities are for myself. You can't remember all 30 trends and tips that are going on, but just keep in mind the top five you're thinking about, and think about how the tips and information you hear can tie into those.
I was hoping you had some magical AI tool that I could... Yeah.
Sorry. Yeah, it's like journalist AI. Well, if somebody wants to make an AI startup for that, that would be great. Although I don't know if journalists are a very lucrative market for companies to go after, but
I mean, yeah, meeting note-taking; like you say, you're in meetings all day. That's a huge market. It's basically Granola and Notion, and there's a long tail of others. There's Griptape, which I think is journalist-focused, which is kind of cool. But yeah, I think there's a lot. People want to get on top of their personal knowledge management; they want to build the second brain. I was a big part of that movement before the AI stuff. And more broadly, theme-wise, it's literally, again, going back to my finance days, that's what I did as a hedge fund analyst. I would track portfolio themes: here are all the news items coming in that change our bullishness or bearishness on a theme, and then you have to express that in your portfolio through the stocks that you pick. Your portfolio here is your coverage, right? Every guest we pick takes a slot from a guest we do not pick, because we want to emphasize a theme. For us, we tend to emphasize engineers and coding, so our lanes are relatively well defined, but yours are very wide.
Yeah, definitely.
Okay, cool. Well, how do you choose to cover smaller startups? A lot of people want to get on and be covered by The Information. A lot of it's the big labs and, of course, huge amounts of money, but occasionally you cover Vercel, and today you're covering Modal. How do you choose the smaller startup coverage? When should they reach out to you?
Yeah. No, it's funny about Modal; the world has gotten so crazy now that a billion-dollar startup is like a baby to us.
Oh, that's cute.
But New York tech.
Uh-huh.
True. True. Yeah, that was the key, because I
New Yorkers finally, like
not just Datadog and MongoDB.
Exactly. Exactly. No, but I mean, yeah, just reach out to me, and even if I don't cover... okay, so actually let me take a step back. If you want to get coverage, part of it is understanding that it's a long game. You might not get coverage right now; I might not have the room in the newsletter to cover your funding round right now, but I'm always looking to meet new startups, and hopefully we can find a way to do coverage, or have some sort of story around your startup later. So don't just come in and be like, "Hi, we've never met, but I need you to cover my $3 million seed round or something." Another way to pitch me is, again, what I just mentioned about taking that step back. Even if I don't cover smaller startups standalone, for instance, this week I did a story about the next step in data labeling and data curation, which is these startups building RL environments for agents. I listed out maybe four or five startups doing that, and some of them haven't even really raised funding; they're just pretty small. One of the startups I talked to, we had a really great conversation, and I included a lot of information about their company in the story, because they were just super helpful and clearly very well educated on the space. So if you're a startup and you think, "Hey, my space is really interesting and people aren't really understanding why this is important," don't think about just pitching your startup to me; pitch me on why your entire space is important. It's more likely that I'll write something about a trend or a new industry.
Yeah. And by the way, I had to withdraw from an in-person meeting, but I figured this conversation is great, so I just want to keep it going. I wanted to get that on the record, because I think people want to work with you but they just don't know how. And sometimes, you know, people want to work with us, and the worst thing is they get a PR firm that barely knows us, spells my name wrong, and gets in touch with me with a story that is completely not a story. They know it, I know it, this is just a formality, they're just getting paid to send the email; it's so routine. I think people want to know how to break through the noise and come to you with the good stuff, because a lot of people are working on good stuff. So, yeah.
Yeah, totally. I mean, don't just send me, yeah, don't have a PR team send me a really generic email and totally get your company wrong. It's better if it comes from you personally, and I'll appreciate it a lot more.
Yeah. I'll even say, for the early-stage startup founders listening, I think most of your PR teams are kind of pointless unless you really want to get on CNBC. Most of us are easily reachable; we screen tons of requests and it's fine. You're probably much better at pitching your own company than the PR team is, and we make a decision really quickly. It's not that complicated. Cool. So, more broadly, any other coverage areas or themes, or big hits, that you want to talk about? I don't get a sense of what your biggest hit is, or your general thematic overview, if that makes sense. There's robots, there's xAI, there's Perplexity, there's all these other things. What stands out to you as something you really want to chat about?
Yeah, I guess two things. The first thing, and I think this is what I'm most interested in, I did kind of touch on it with our conversation about GPT-5, but it's the trajectory of AI progress: first there was pre-training scaling, then reasoning models and reinforcement learning, and what comes after that? Is it RL environments for agents? Is it something different from the transformer? I think we're always curious what's coming next. And I think that's why I love to talk to more people at the big AI labs, because they're really on the forefront of that.
They can't tell you.
That's like a national secret.
Yeah. And a lot of times I'm not even trying to get a scoop. I'm just, as a person living in the world,
Yeah.
like, what do I need to be excited about and what do I need to be worried about? I'm just genuinely curious, not as a journalist, but as a human being living in today's world.
The second thing, I mean, this is not a topic I would necessarily write about, because our audience is business-focused, but I think the societal impact of AI is fascinating. I have a younger sibling who's going to college right now and has no idea what they should major in or what job they should have, because everything's just so up in the air. And I have friends working in investment banking who are being asked to test financial modeling software that could one day replace their jobs. That's without even getting into the whole AI boyfriends and girlfriends thing, people on Twitter going crazy because the AI is supporting their manic episodes. There's just so much of us shaping the AI and the AI shaping us and society. And sometimes you hear the CEOs say stuff, like Mark from Meta made a comment where he said that on average today people have, I don't know what it was, like four friends, but we really have demand for like 12, which is just such a crazy statement.
I didn't read that. Okay.
Yeah. So it's this back and forth between technology and society. And I'm just very curious what the average college student is going to look like 10 years from now, or what my job is going to be like 10 years from now, and how my friendships and work relationships are going to work.
I have a lazy joke, but it's not really a joke: everyone imagines that everyone else's job is going away but theirs, right? So Mark and Jason are like, everyone's job is at risk, but VCs will be fine. Dwarkesh is like, podcasting will be fine. That's a lazy throwaway joke. The more serious take is that AI, and technology in general, accelerates the difference between the haves and have-nots. There are relatively recent college grads who are now getting the $100 million offers from Meta, and there's also something crazy like 20-30% unemployment among fresh college grads who don't meet that bar. So I think it's just going to exaggerate inequalities; that's why people talk about things like universal basic income. Human society is just not made to adjust this quickly. Things have been accelerating, and our systems just don't adjust that fast; we have annual schedules, but these labs ship two or three major models a year. I do think at some point the automation of knowledge work really starts to hit GDP and society, and I think that's really scary.
Yeah. It's kind of like the whole joke of, instead of being like, oh, I'm related to the Vanderbilt family, kids in the future are going to be like, oh my god, my great-great-grandpa was a founding researcher at Meta Superintelligence Labs or something. And that's going to be the new status of being part of the elite, which is scary.
They traffic a lot of memes, so it's literally the "Daddy, what did we do to get so rich?" meme: well, you know, your dad took the Meta offer.
Exactly.
Well, yeah. My positive spin on this is that we used to freak out about kids wanting to grow up to be YouTube stars and TikTok stars; now they aspire to be ML researchers. Maybe that's not a bad thing, if you want more STEM. But coming back to the more technical stuff, because that's the meat and potatoes of what we do: most people I talk to are not seriously pursuing alternative architectures. There are some notable exceptions, primarily Together with the Mamba architecture, Recursal with RWKV, and there's the xLSTM created by Sepp Hochreiter. So there are alternatives, but mostly people are happy scaling it up, and sometimes, this is what's called the hardware lottery: it doesn't matter what the global best is; as long as everyone's optimized for one architecture, it's fine.
It's kind of like English isn't the perfect language, but it's good enough, and everyone speaks it.
It's got its quirks, but we did try to design the perfect language. It's called Esperanto, and nobody speaks it.
Yeah.
So the transformer as the lingua franca makes sense to me, and now people are just scaling up data and other techniques. I'm curious, you're the fifth person this week to talk about reinforcement learning environments. Is that just a theme? Apparently there are like 10 startups all doing this, and Applied Compute, the hypey one, is like three kids in a basement worth $500 million or something.
Yeah. No, I feel similar to you. This was me last week; I was like, "Oh my god, why have four people brought up this idea to me? I need to do a newsletter on it." But yeah, it's this idea of, let's just let agents and models roam free in this escape-room copy of whatever app and let them figure out for themselves how to, you know, put a lead into Salesforce or do these different things. It's this idea of having less human involvement in reinforcement learning and getting it to be more automated. But from what I've heard, it seems like it's still pretty early, and I think people are still trying to figure out how to actually get it working super well.
But when you say it's still pretty early, are you saying, "I expect this to work, I expect this to work immediately"? Is it not working?
No, no. I think it is working, but it's just very compute-intensive to have an AI agent taking a bunch of actions in this environment, and a lot of them don't work out; some of them do. So it's quite expensive. And at least the way somebody explained it to me, if I want to make a copy of Salesforce to teach an agent how to navigate it, it's not easy to just create a copy of Salesforce. There are a lot of different buttons and different databases it connects to, so it's just complicated to make a realistic environment for the model to play around in. But I agree, I think it's really interesting and very promising, and of the startups I've talked to, a decent chunk are already starting to work with the big AI labs, and these are quite early startups. So it seems like the AI labs are very excited about this approach.
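The "escape room" setup described above, an agent roaming a simulated app and getting rewarded only when it completes the task, can be sketched in a few lines. This is a hypothetical toy: the `FakeCRMEnv` class, the action names, and the reward scheme are all invented for illustration, not any lab's or startup's actual product.

```python
import random

class FakeCRMEnv:
    """Toy 'escape room' copy of a CRM app. The agent must open the
    leads page, type a name, and click save, in that exact order.
    Real RL environments replicate far more UI state (buttons,
    databases, error paths), which is why they're hard to build."""

    ACTIONS = ["open_leads", "type_name", "click_save", "click_random"]
    GOAL = ["open_leads", "type_name", "click_save"]

    def reset(self):
        self.progress = 0  # how many goal steps completed so far
        return self.progress

    def step(self, action):
        # Sparse reward: only completing the full sequence pays off,
        # and a wrong action sends the agent back to the start.
        if action == self.GOAL[self.progress]:
            self.progress += 1
        else:
            self.progress = 0
        done = self.progress == len(self.GOAL)
        reward = 1.0 if done else 0.0
        return self.progress, reward, done

def random_rollout(env, max_steps=1000, seed=0):
    """Let the agent 'roam free': sample random actions until the task
    succeeds or the step budget runs out. Most sampled actions are
    wasted, which is the compute-intensity point made above."""
    rng = random.Random(seed)
    env.reset()
    for t in range(1, max_steps + 1):
        _, _, done = env.step(rng.choice(env.ACTIONS))
        if done:
            return t  # number of steps it took to stumble onto the goal
    return None

steps = random_rollout(FakeCRMEnv())
```

Even in this tiny environment, a random policy burns many steps before hitting the three-action sequence; scaled to a realistic app with thousands of UI states, that waste is what makes these rollouts so expensive.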
Yeah, I hear the same thing. Cool. Thank you so much for taking the time. Any other parting thoughts, questions, calls to action?
No, this has been super fun, and I would just say, for all your listeners, I am always open to chatting, even completely off the record. I just find AI fascinating and I love talking to people about it. So please email or text me, and I would love to chat.
Yeah. Thank you for all the great work you're doing. Well, I'm sure we'll chat again.
Sounds good. All right. Thanks.
[Music]