
What Manus and Groq Acquisitions Tell Us About AI Competition

By The AI Daily Brief: Artificial Intelligence News

Summary

Topics Covered

  • Agent Startups Race to Acquisition
  • China Exports AI Founders
  • Distribution Beats Model Quality
  • Nvidia Buys Speed Not Competition
  • Inference Chips Fuel Training Boom

Full Transcript

Welcome back to The AI Daily Brief. Today we are discussing what two major acquisitions tell us about the state of AI competition.

All right friends, we are back with the first main episode of The AI Daily Brief of 2026. And you might have heard that a few days ago I dropped my two-episode set about my AI predictions for the next year. Before the proverbial ink was dry on that episode, one, or kind of maybe even two, of them had already started to come to pass. I'm talking of course about the prediction that the first leading crop of generalist AI agent companies, specifically Genspark and Manus, were going to be massive acquisition targets for the big hyperscalers and labs in 2026. The logic was not about any sort of short-term need from Genspark and Manus. Both of those companies were doing extremely well, seeing their revenue grow incredibly quickly, presumably having access to lots and lots of private capital, but at the same time knowing that they were in a space that was going to be directly in the line of sight for all of the big labs.

As the companies pushing the first generation of actually performant general-purpose agents, they were in many ways softening the ground for the sort of interfaces and experiences that are presumably going to become a key part of what those major labs and current chatbots ultimately offer. Ultimately, my bet was, and is, that despite those companies racing to nine figures in ARR in just a number of months, they're still going to be staring down the barrel of competition so intense that I think it will make sense for them, from a strategic perspective, to get acquired by one of those partners. And obviously, I think from the perspective of the acquirers, getting all of that lived experience around how people are actually interacting with agents, and what for, is going to be worth effectively whatever price they pay for it.

As it turns out, the first company to go was Manus. Just before the end of the year, news broke that Mark Zuckerberg's Meta would be buying Manus for more than $2 billion. The former Scale leader, now Meta's chief AI officer, Alexandr Wang, tweeted, "Excited to announce that Manus has joined Meta to help us build amazing AI products. The Manus team in Singapore are world-class at exploring the capability overhang of today's models to scaffold powerful agents."

Now, by way of background on Manus, you might remember that at the beginning of 2025, a number of people thought that Manus' launch was sort of the DeepSeek moment 2.0. What I mean by that is that in January, when DeepSeek released their R1 model and their companion chatbot app to go with it, it really awoke people to the potential of Chinese labs as major competitors. A couple months later, in March, Manus' first general-purpose agent launch went completely viral, although it was nearly impossible to get an invite code. Building on that momentum, Manus raised money in April at a $500 million valuation, with the round being led by Benchmark, an investment that was somewhat controversial because of Manus' Chinese origins. Now, nine months on from that, Manus has proved that they were not just a hyped-up launch. In December, the company claimed a $125 million revenue run rate, and going from zero to $100 million in eight months by some estimates makes them the fastest-growing startup of that scale in history.

Now, it's very clear that in spite of all that, Manus' Chinese roots continue to loom large over the deal. Manus was originally launched out of offices in Beijing and Wuhan to a largely Western user base, and the company quickly relocated to Singapore to distance themselves from the US-China AI conflict. Meta went to great lengths to get ahead of the issue, providing a statement that said, "There will be no continuing Chinese ownership interest in Manus AI following the transaction, and Manus will discontinue its services and operations in China." Still, Manus' CEO is a Chinese national and will now take a prominent AI role at one of the largest US tech companies.

From the Chinese perspective, the acquisition is a huge validation of the Chinese AI startup ecosystem. Lie Jing, the founder of a Chinese startup incubator, told Bloomberg, "This is truly an exhilarating event, a big era that belongs to China startup founders."

Entrepreneur Hang Dong Shu said, "It's the best gift for the start of 2026. This is among the most significant news in recent times, a real boost for startup founders of Chinese ethnicity, especially those building businesses overseas." Tony Peng, the writer of the Recode China AI newsletter, suggested that Manus has created a new playbook for Chinese-founded startups, writing, "This isn't just another normal acquisition story. It's a blueprint for how a new generation of Chinese entrepreneurs can build world-class AI products, win over global capital and tech companies, and execute a clean exit. It's also a microscope through which we can observe the latest dynamics of US-China AI competition, where talent and technology flows across borders even as geopolitical walls rise higher." Po Xiao wrote, "China trains AI users but exports AI founders. Manus just became the latest proof." In another tweet, he wrote, "The question everyone in Chinese tech is asking: what if Manus had stayed instead of relocating to Singapore? The answer is uncomfortable but clear. In China's AI app market, big tech controls 70% of the top 20. ByteDance launched 11 AI products in 2024 alone. When a startup's product goes viral, incumbents clone it in days. Manus relocated 40 core engineers to Singapore. The move was a survival decision. The Singapore relocation gave Manus something critical: defensible traction. That's what Meta valued."

Now, holding aside the geopolitical dimension of this, the more interesting questions to me, frankly, are about the product itself and what it means for Meta's strategy. The product will continue to operate, with Manus CEO Xiao Hong stating, "Joining Meta allows us to build on a stronger, more sustainable foundation without changing how Manus works or how decisions are made." Tech analyst Rahar Jark wrote, "Meta has just opened the floodgate for the AI agentic application layer." He goes on to argue that Manus is more than just an LLM wrapper. Manus, unlike ChatGPT, he writes, was built to execute tasks rather than provide text answers. The goal is to assign it a high-level task so the agent can navigate different tasks autonomously to complete the job. The unique part is that instead of just talking about a problem, Manus writes a Python script on the fly to solve it, executes that script in a secure sandbox, and looks at the result.
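That loop — write code for the task, run it somewhere isolated, inspect the output — can be sketched in a few lines. To be clear, this is an illustrative toy, not Manus' actual implementation: the model call is stubbed out with a hardcoded script, and a subprocess with a timeout stands in for a real sandbox.

```python
import os
import subprocess
import sys
import tempfile

def fake_model(task: str) -> str:
    """Stand-in for the LLM call; a real agent would generate this code
    from the task description. Hardcoded here so the sketch is runnable."""
    return "total = sum(range(1, 101))\nprint(total)\n"

def run_in_sandbox(code: str, timeout_s: int = 5) -> str:
    """Run generated code in a separate interpreter process with a timeout.
    A real sandbox would add filesystem, network, and memory isolation."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout.strip()
    finally:
        os.unlink(path)

def agent_step(task: str) -> str:
    code = fake_model(task)        # 1. write a script for the task
    return run_in_sandbox(code)    # 2. execute it, then 3. read the result

print(agent_step("sum the integers from 1 to 100"))  # prints 5050
```

In a real agent the result would be fed back to the model, which decides whether the task is done or writes another script, repeating until it finishes.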

Now, in this way, it actually brings up another one of my predictions: Meta re-entering the AI competition conversation in a big way this year.

Basically, my argument was that if 2025 was a rebuilding year, with the recruitment of the superintelligence team and the changes to how AI was organized internally, we were going to see in 2026 the manifestation of that strategy come to the fore. Now, I don't think it's exactly clear what part of this whole pie Meta is going to go after. But perhaps with this Manus acquisition, we're starting to get a picture of what that might look like. Rahar Jark again continues, "This best fits into Meta's WhatsApp as an assistant they can offer both to consumers and businesses, and a strong play for their Meta Ray-Ban smart glasses, where you need an autonomous agentic system to run those glasses."

Ben Palladian writes, "Manus wasn't a vibes hire. It's capability overhang, to scaffolding, to real agents. This is how chatbots turn into labor." And I think some people's interpretation is that this is going to be Meta moving more into the enterprise, getting-work-done side of things. But I'm not so sure. I think FirstMark's Matt Turck is a little closer when he writes, "If you're Amazon, you need your Manus. If you're Shopify, you need your Manus. If you're Booking, you need your Manus. If you're a big consumer and commerce brand and don't own a major LLM, you need to build or acquire an agent, because consumer intent is going away from consumer apps."

And so, the point here is: what I'm using Manus' general-purpose capabilities for right now, i.e., building slide presentations and things like that, is probably not what Meta is interested in using Manus for in the future. To the extent that Matt is right, and consumer intent is moving away from consumer apps, and we will increasingly be deploying agents on our behalf to do the things that we do now around e-commerce and interacting financially on the internet, this is a way for Meta to build the next-generation way that its billions of users continue to use it as their starting point for everything that touches commerce on the internet.

Shaun Chahan writes, "Meta didn't pay $2 billion for Manus' technology. They paid for eight months of distribution proof. OpenAI has better models. Anthropic has better reasoning, but neither owns a workflow where 3 billion people already live. The agent war won't be won in benchmarks. It will be won in the apps users refuse to leave. Distribution is the new moat. Model quality is table stakes."

I don't think we know exactly how it's going to play out yet. I don't even think that Meta necessarily knows.

I just think that they knew that general-purpose agents are going to be an increasingly important part of not just the AI battle, but the internet landscape in general, and that by buying Manus for what is ultimately an incredibly cheap price, frankly, they were going to get a massive head start in this essential area.

Now, the second big story of the break period was also an acquisition, and this one happened just before Christmas. Well, technically it's a licensing deal, but honestly, it's an acquisition. Let's be clear. I'm talking, of course, about Nvidia agreeing to a licensing deal, with the biggest air quotes you can possibly imagine, with chipmaker Groq, paying them $20 billion for the use of their technology and the acquisition of several key executives. Groq, which is spelled with a Q, not to be mistaken for Elon Musk's chatbot Grok with a K, is a decade-old chip startup. The company was founded by former Google executive Jonathan Ross, who helped invent Google's TPU chip architecture. He took that knowledge to Groq and focused on producing high-speed inference chips.

Now, at this stage, Groq has carved out a small market share, largely producing chips for neoclouds servicing customers with specific latency needs. Their chips aren't necessarily better than Nvidia's general-purpose GPUs, but they can be as much as 10 times faster at producing tokens during inference. Jonathan Ross is among the executives who will be joining Nvidia, leaving Groq to continue as an independent company. That means, of course, that Nvidia will now have the creator of the TPU in house working on inference optimization. It's also not exactly clear how much of a company will be left over once the deal is closed.

But despite initial concerns that this was going to be another deal where the top executives get a major payday and the employees get left in the lurch, it appears that that actually won't be the case. Axios's Dan Primack tweeted, "Been a bunch of chatter about how Groq employees made out in the Nvidia deal. Made some calls to find out. In short, very, very well, even if not fully vested. Specifically, it sounds like around 90% of Groq employees are said to be joining Nvidia and will be paid cash for all vested shares. Unvested shares will be paid out at the $20 billion valuation, but via Nvidia stock that vests on its own schedule."

So, what is this acquisition about? Some of the early chatter suggested it was simply about Nvidia snuffing out the competition. And I don't think in this case that that's really accurate. At $20 billion, it's the largest acquisition in Nvidia's history and large enough to rank as a top-15 tech acquisition. It's roughly similar in size to the WhatsApp, Slack, and LinkedIn acquisitions. The sheer size of the deal has Wall Street concerned, given that it was framed as a non-exclusive licensing agreement. That raised a lot of red flags for investors who are already concerned about Nvidia's valuation. Nvidia's stock struggled over holiday trading sessions, suggesting that there isn't very much enthusiasm for the deal. Still, UBS nailed their colors to the mast and reiterated their buy rating for Nvidia just before the new year. They wrote that the deal, while coming at a substantial price tag, could, quote, "bolster Nvidia's ability to service high-speed inference applications, an area where GPUs are not ideally suited because of all the off-chip high-bandwidth memory. This would also be one of the fastest growing parts of the inference market. And we see this as another pivot to offering ASIC-like architectures in addition to its mainstream GPU roadmap." Now, despite

this being technical, it's worth unpacking just a little bit. Nvidia's GPUs are reliant on high-bandwidth memory, which is currently experiencing a price spike due to a global memory shortage. Groq's architecture, on the other hand, utilizes less costly SRAM and allows Nvidia to offer a completely different product. Effectively, the more mature that AI gets, the more that different workloads have different types of needs that can be optimized by different types of chips. The architecture of Groq chips is extremely relevant for things like low-latency applications, i.e., the sort of general-purpose agent interactions we were talking about before with the Manus acquisition, where people don't want to be sitting around waiting for a response. They want to be interacting as though the agent is actually an agent working on their behalf. It's also potentially relevant for other types of applied AI contexts, like edge devices running smaller models and, eventually, lower-power chips to put inside robots and embodied AI. There also is potentially a virtuous cycle.
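The memory point can be made concrete with a rough back-of-envelope: in the bandwidth-bound decode phase of inference, each generated token has to stream the model's weights from memory once, so tokens per second is roughly memory bandwidth divided by model size in bytes. All numbers below are illustrative assumptions for the sketch, not actual Nvidia or Groq specs.

```python
# Bandwidth-bound decode estimate: each new token streams the full set of
# model weights from memory once, so tok/s ~ bandwidth / model bytes.
def tokens_per_second(params_b: float, bytes_per_param: float,
                      bandwidth_tb_s: float) -> float:
    bytes_per_token = params_b * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / bytes_per_token

# Illustrative numbers only: a 70B-parameter model at 2 bytes/param (fp16).
hbm_tps = tokens_per_second(70, 2, 3.35)   # single accelerator's off-chip HBM
sram_tps = tokens_per_second(70, 2, 80.0)  # on-chip SRAM aggregated over many chips
print(f"HBM: ~{hbm_tps:.0f} tok/s, SRAM: ~{sram_tps:.0f} tok/s")
# prints: HBM: ~24 tok/s, SRAM: ~571 tok/s
```

The batch size is one here; real serving batches many requests, but the basic intuition, that on-chip memory buys an order of magnitude in per-stream token speed, is why SRAM-based designs win on latency.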

Here's Groq CEO Jonathan Ross.

>> Nvidia will sell every single GPU they make for training. Right now about 40% of their, you know, market is inference. If we were to deploy a lot of much lower-cost inference chips, what you would see is that same number of GPUs would be sold, but the demand for training would increase, because the more inference you have, the more training you need, and vice versa. You can almost say we're one of the best things that's ever happened to Nvidia, because they can make every single GPU that they were going to make and they can sell it for training, high margin, right, gets amortized across the deployment, and, you know, we'll take the low-margin, high-volume inference business off their hands, and they won't have to sell at a lower margin.

To sum it up: when Groq floods the market with cheap inference chips, everyone's going to need way more training to feed all that inference capacity. It's a perfect cycle; more inference equals more training needed.

Anyways guys, for my money, those are the two biggest stories from the holiday period. But of course, we are just at the beginning of the year, and I expect a lot more to happen in very short order. For now, that is going to do it for this first episode of The AI Daily Brief of 2026. Appreciate you listening or watching as always, and until next time, peace.
