The Race to Put AI Agents Everywhere
By The AI Daily Brief: Artificial Intelligence News
Topics Covered
- OpenClaw Ignites Agent Viability
- AI Replaces Computer Canvas
- Local Agents Unlock Real Work
- Agents Outpace Human Software Use
- Nvidia Enterprise-Proofs OpenClaw
Full Transcript
Today we are talking about the absolute race to productize agents and get them enterprise ready. Welcome back to the AI Daily Brief. We are coming up on the end of Q1, and as part of that I've been working on a big Q2 state of AI report. As you might expect, maybe the key story of Q1 was OpenClaw. Not even just because of OpenClaw itself, but because of what it represented. I think you can look at OpenClaw as the instantiation of the new capability set that shifted around the end of last year and which has really come to the fore this year. It's what I called on yesterday's episode AI's second moment, and it refers to the idea that agents are at this point actually viable, and that people are in the midst of a million experiments right now: giving agent systems access, building new types of systems for agents to interact with, and, especially, as we'll talk about today, solving some of the key challenges of agents to make sure they can diffuse across the entire business world. Part of the specific catalyst for today's show is Nvidia CEO Jensen Huang's speech at their annual GTC event yesterday, where Jensen said explicitly that every software company in the world needs to have an OpenClaw strategy, and where he began to show off their enterprise-grade version of the software.
Now, even before this, the clawification of the world was well underway. Kevin Simbach from Deli Labs recently wrote a post about all of the different variations and competitors, and started by claiming that OpenClaw opened the door. Kevin writes, "Before OpenClaw, agents were mostly technical experiments that produced nothing more than timeline slop. After OpenClaw, and with the advent of Opus 4.5 and 4.6, agents became accessible: just a Telegram message away, always on, actually doing helpful things, and kickstarting a new generation of digital opportunities. OpenClaw quickly proved two things at once. People don't want AI chat; they want to get work done. And giving an LLM broad access to your machine and/or personal info is both insanely useful and mildly terrifying." So, as he writes, the last month has been a weird kind of Darwinism, with builders shipping faster than slop posters, security people screaming into the void, and a growing cohort of people saying, "Oh crap, this is actually going to rewrite how software and digital businesses work."
And yet, as Kevin acknowledges, not everyone is sold on OpenClaw itself, and there has been a mad race to build or update alternatives. A bunch of them, like Nanobot, ZeroClaw, Picoclaw, or Nanoclaw, are attempts to reduce the overall complexity down to some specific useful feature set. And then there are others, like Open Fang, Hermes, Moltus, and Ironclaw, that are all trying to bring security to it through self-hosting. Yet if that represents one end of the spectrum of the clawification of AI, on the other end you have a huge number of companies, some AI native and some not, offering up what are effectively their own versions of OpenClaw. In other words, agents that are deeply integrated, and integrable, with some key set of systems and personal context. At the end of February, Notion introduced custom agents, which have a lot of features in common with OpenClaw, plus all of the context that comes from integration with Notion, where many companies are running all of their information. And of course, we also got Perplexity Computer.
Perplexity Computer is a very full-throated reimagining of Perplexity from the ground up into a complete problem-solution design system capable of spinning up complex systems of agents and sub-agents to get things done and build things that people want. In the couple of weeks since Perplexity released Computer, they've also released Computer for Enterprise, which can operate from within Slack and which also has direct connections, they claim, to more than 400 applications. And they even got in on the Mac Mini part of the theme with their launch of Personal Computer, which they call an always-on local machine running Perplexity Computer that works for you 24/7. Getting philosophical, Perplexity CEO Aravind Srinivas wrote a long post about why the AI is the computer. In it, he argues that AI models are becoming so capable that the products built around them have become the bottleneck on showing their true potential. The chat UI is good for answers, and agents are good for individual tasks. Meanwhile, the UI for entire workflows has always been the computer. Effectively, what Aravind is arguing is that the full potential of agentic systems requires the complete canvas of what your computer offers.
Bridging from local files to cloud systems and beyond. Which brings us to the not one, not two, not even really three, but closer to three and a half new entrants into this clawification-of-everything category that were announced just yesterday. Manis, which was purchased by Meta in December, was one of the early leaders in general-purpose agents throughout 2025. This week, they announced a new Manis desktop app, the key feature of which they call My Computer. Very much picking up on the new design pattern, they write, "It's your AI agent, now on your local machine." The use cases they point to include organizing thousands of unsorted photos, renaming hundreds of invoices, building desktop apps in Swift entirely on your computer with no code written manually, combining with existing connectors to create seamless automated workflows, and creating local routines with personal projects, agents, and scheduled tasks. In the blog post, without naming OpenClaw, they acknowledge the need to be able to bridge from cloud to local. They write, "The cloud sandbox has served Manis well. Inside an isolated, secure environment, it has everything an AI agent needs: networking, a command line, a file system, and a browser. This is the foundation of Manis' power as a general AI agent, always online and always ready to work. However, there has always been a fundamental limitation: your most important work happens on your own computer. Your project files, development environments, and essential applications all reside locally, not in the cloud." My Computer, then, is a way to close that gap. Now, one interesting thing about the Manis announcement is that they're thinking a little bit ahead in terms of the specific opportunities that come with desktop. For example,
doing something that I haven't seen from a lot of the other competitors, they're actually pushing the idea of building fully working Mac apps, not just cloud-based applications that other people would use. Cedric Chi writes, "Claude Code, Cowork, OpenClaw, Codex, and Manis all seem to be converging on the same idea: the agent lives on your machine." The second related announcement yesterday came from Adaptive. They wrote, "Introducing Adaptive Computer. We put AI inside of an always-on personal computer that it uses to get work done. Schedule agents,
create software, automate anything." By the end of this year, they write, "AI agents will use more software than humans do. You won't be the one clicking the button or browsing the web page. Your agent will. That requires a new kind of computer. We built one." "Most business software," they continue, "has the same problem. Someone has to sit there and operate it: moving data, updating records, filling out forms. That someone is usually you." The example they gave, interestingly, is the real-world business example of a hardware store owner who has 47 new products in a spreadsheet and needs them added to Square. Adaptive says, "Drag the file into Adaptive, tell it what you want, and it handles the rest." Out of scope for this particular show, but I think it's super interesting that you're seeing these very bleeding-edge tech companies trying to appeal to the hardware store owner use case. They then go on to pitch their secret sauce, which they call encoded memory. They write,
"What makes Adaptive different is what happens after. It encodes what it learned: how Square works, how your catalog is organized, and how you prefer things to be done. So the next week, when you ask for a daily sales report at 8 p.m., it builds the agent, schedules it, and pulls from Square data that it already knows." Now, anytime there's a new launch, it tends to be pretty hard to get good signal from Twitter, because so much of the discourse is either AI bots or undisclosed paid tweets. But Ole Lemon did write of a good experience he recently had with Adaptive. The example he gave was automating YouTube AI research.
Basically, his argument is that YouTube has a ton of really great videos on in-depth AI systems that are extremely up to date and current with the moment, but there is so much to filter through that it's hard to sit around and browse for the diamonds in the rough.
The prompt he gave Adaptive was, "Analyze YouTube videos about AI and Claude workflows from the last 24 hours that have at least 10,000 views. Pull
the full transcripts, extract the top three most tactical and actionable workflows, and send me a daily email report every morning."
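For the technically curious, the first stage of that prompt, the views-and-recency cut, is easy to picture in code. Here is a minimal sketch; the data shape is invented for illustration and this is in no way Adaptive's actual implementation:

```python
from datetime import datetime, timedelta

def pick_videos(videos, now, min_views=10_000, window_hours=24):
    """Keep only videos published within the last `window_hours`
    that have at least `min_views` views."""
    cutoff = now - timedelta(hours=window_hours)
    return [v for v in videos
            if v["views"] >= min_views and v["published"] >= cutoff]

# Tiny demo with a made-up video list.
now = datetime(2026, 3, 20, 8, 0)
videos = [
    {"title": "Agent workflows deep dive", "views": 25_000,
     "published": now - timedelta(hours=5)},
    {"title": "Old classic", "views": 90_000,
     "published": now - timedelta(days=3)},
    {"title": "Fresh but niche", "views": 400,
     "published": now - timedelta(hours=2)},
]
picked = pick_videos(videos, now)
```

The hard part of the workflow, of course, is the transcript analysis the agent does after this filter, but the filter is what keeps that analysis affordable.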
The third, and maybe biggest, OpenClaw and agent related announcement yesterday, however, came from Nvidia. The context for that quote we heard at the beginning, about every company needing an OpenClaw strategy, was the setup for Jensen introducing Nemoclaw. Now, functionally, this is not actually a standalone agent but rather a software toolkit built on top of the OpenClaw project. OpenClaw creator Peter Steinberger wrote yesterday, "Been so much fun cooking open shell and Nemoclaw with the Nvidia folks. Huge step towards secure agents you can trust." So what this is, basically, is an approach that adds privacy and security to OpenClaw instances by giving them an isolated sandbox to work in. The agent can still access resources as necessary, but the Nemoclaw stack formalizes access control. Specifically, it integrates policy-based security and other guardrails to theoretically allow it to operate safely within enterprises. Nemoclaw is model and hardware agnostic and lets users choose between cloud and local models.
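To make the idea of policy-based access control concrete, here is a toy sketch of the pattern: an explicit allowlist gate between the agent and the resources it wants to touch. This is purely illustrative, not Nemoclaw's actual API; the action and resource names are made up:

```python
# Toy policy gate: every (action, resource) pair an agent requests is
# checked against an explicit allowlist before anything runs.
ALLOWED = {
    ("read", "project_files"),
    ("write", "sandbox_tmp"),
}

def gated_access(action: str, resource: str) -> str:
    """Permit the request only if policy explicitly allows it."""
    if (action, resource) not in ALLOWED:
        raise PermissionError(f"policy blocks {action} on {resource}")
    return f"{action} on {resource}: ok"

granted = gated_access("read", "project_files")
```

The design point is default-deny: the agent keeps its broad capabilities, but anything not named in the policy is refused rather than silently allowed, which is the property enterprises are asking for.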
Encapsulating this whole shift, Jensen Huang said, "OpenClaw gave the industry exactly what it needed at exactly the right time. Just as Linux gave the industry exactly what it needed at exactly the right time, just as Kubernetes showed up at exactly the right time, just as HTML showed up, it made it possible for the entire industry to grab onto this open source stack and go do something with it." Now, what's been interesting about the response is that for most, although not all, this hasn't been a jump-the-shark, or jump-the-lobster, moment. Instead, people have been pretty enthusiastic about what Nvidia is trying to do. Kevin Simbach again writes, "Excited to dig into Nemoclaw. I've spent a good bit of my career in enterprise. I've been pretty vocal about OpenClaw not being enterprise ready, but the concept of an agentic workforce is a killer, and enterprises are going to want it, so this may be what really kicks it off." Tristan Rhodes writes, "I've been avoiding OpenClaw and waiting for it to mature. There have been countless variations and forks along the way, but Nvidia is the most valuable company in the history of the world. Does that mean Nemoclaw becomes the dominant variation of OpenClaw?" Eric Sue wrote an entire X article called "Nvidia Just Solved the One Problem Blocking AI Agents," of course all about the security concerns.
Now, one thing I will say has been interesting from our own experience: regular listeners know we have two different OpenClaw-related things going on right now. Claw Camp is an open, free, self-directed program that walks people step by step through setting up their own OpenClaw and gives them access to a community of other builders who can help them along the way; at this point, more than 7,000 people have signed up to participate. Enterprise Claw, meanwhile, is a managed six-week executive sprint meant to help individual enterprise leaders and teams get that same sort of learning in a much more in-depth and supported way. Now, as part of Enterprise Claw, we gave people the choice to either use OpenClaw or do a generic version of agent team building using Claude, Codex, Cursor, etc. Interestingly, it's about half and half in terms of who wanted to learn on OpenClaw versus who wanted to use other systems, meaning that even in the pre-enterprise-grade OpenClaw world, there is still demand for figuring out how to use this platform, which I think is certainly validation of everything that Jensen is saying. Now, Robert Scoble had an interesting note from the Nvidia GTC expo hall that was actually more about OpenAI than it was about Nvidia. He writes, "Visiting the expo hall shows you why OpenAI is changing strategy. All the big booths are enterprise. The biggest news here is how Nvidia is bringing OpenClaw to the enterprise."
Which brings us to another important story from yesterday. The Wall Street Journal reports that OpenAI is done with side quests and will refocus on nailing a core business, which is now more than ever centered on enterprise and coding. The Journal's reporting states that CEO of Applications Fiji Simo has delivered a wake-up call within the company, pointing out that their do-everything strategy has reduced their lead on the competition. Simo told staff last week, "We cannot miss this moment because we are distracted by side quests. We really have to nail productivity in general, and particularly productivity on the business front." Now, this is of course a big shift away from Sam Altman's traditional management approach, which he described as betting on a series of startups within the company, and which led to a fairly dizzying array of product bets, including the Sora app, the Atlas browser, and the yet-to-be-revealed Jony Ive device, just to name a few. As basically everyone on AI Twitter has done, the Journal compared that approach to Anthropic's very narrow strategy built around agentic coding and the way it expands into broader sets of knowledge work for the enterprise. Now, it's not new that OpenAI has decided to refocus efforts on similar themes. That's been the big story since GPT-5 was released and Codex came out, but there clearly seems to be a new urgency. Interestingly, according to Simo, the code red from last year is not over. Last week, she told staff, "We are very much acting as if it's a code red." And while a lot of people are speculating about what might get the axe because of that, for example the much-maligned ads approach, every day it seems we get some new announcement around Codex and their larger coding suite. The most recent one, which we got yesterday and which I think is coherent with all of these clawification themes, is the native integration of sub-agents into Codex.
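As a rough conceptual sketch of what sub-agent orchestration means in practice, and emphatically not OpenAI's actual API (the model names and task shapes here are invented), spinning up sub-agents in parallel and routing lower-complexity work to a cheaper model might look like:

```python
from concurrent.futures import ThreadPoolExecutor

def route(task: dict) -> str:
    # Cheap, fast model for low-complexity work; flagship for the rest.
    return "small-model" if task["complexity"] == "low" else "big-model"

def run_subagent(task: dict) -> tuple:
    # Stand-in for dispatching a sub-agent; a real system would call a
    # model here, each sub-agent getting its own fresh context window.
    return task["name"], route(task)

tasks = [
    {"name": "write tests", "complexity": "low"},
    {"name": "refactor core", "complexity": "high"},
]
# Sub-agents run in parallel, keeping the main agent's context clean.
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(run_subagent, tasks))
```

The appeal of the pattern is exactly what the announcements describe: parallelism across parts of a task, plus per-task control over which model does the work.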
The OpenAI Developers account writes, "You can accelerate your workflow by spinning up specialized agents to keep your main context window clean, tackle different parts of a task in parallel, and steer individual agents as work unfolds." LLM junkie Will writes, "In the next Codex update, multi-agents will get a massive flexibility upgrade. Hey Codex, when you implement this plan, I want you to delegate all of the lower-complexity tasks to GPT 5.3 Spark sub-agents. Instead of needing to create a hundred different custom agent roles for different situations, you can just prompt your agent to spawn whatever model or reasoning level you want with only natural language." Emanuel Dietro
went through some use cases for the sub-agent system: things like code review, where he argues you could have one agent per concern, or test coverage, with one sub-agent writing tests, another checking edge cases, another validating, and so on. And it's clear that even though the
foot is still firmly on the gas, the shift in OpenAI's strategy seems to be bearing some fruit. OpenAI President Greg Brockman wrote yesterday, "GPT 5.4 has ramped faster than any other model we've launched in the API. Within a week of launch: 5 trillion tokens per day, handling more volume than our entire API one year ago, and reaching an annualized run rate of 1 billion in net new revenue." Sam Altman showed a chart of Codex usage going very aggressively up and to the right, adding, "The Codex team are hardcore builders, and it really comes through in what they create. No surprise all the hardcore builders I know have switched to Codex."
Responding to the news about OpenAI shifting focus, Dwayne OnX writes, "I actually thought OpenAI were already doing a good job focusing on coding. Codex is amazing for coding. One area where they absolutely fail is UI. GPT 5.4 can't design to save its life. Even if you have a super-detailed skill to guide it, it has zero taste." And for what it's worth, I talked about this on my Operator show, and this has very much been my experience, to the point where I can't just give Codex guidelines; I literally have to give it the actual design files from Claude for it to copy exactly.
Although my experience with Codex when it comes to actually building has been really good. Summing all this up: if Q1 was the realization that agents are here, and a mass, wide-scale experimentation with the form factors and design patterns introduced by OpenClaw, Q2 is set up to be an absolute sprint to productize those agents and get them ready for broader diffusion, especially within the enterprise. One thing I will be watching closely is how much the old patterns of productization, where conventional wisdom was all about simplifying things for wider audiences, still hold. Given that the breakout was this incredibly complex system in OpenClaw, I'm not sure where the right complexity band is going to be, or whether it's going to be a spectrum of different types of complexity for different users, but I can guarantee that just about everything that can be tried will be tried in the quarter to come. For now, that is going to do it for today's AI Daily Brief. Appreciate
you listening or watching, as always.
And until next time, peace.