An Unbalanced Utopia: How Meta's Open-Source AI Strategy Met Its Waterloo

By 硅谷101

Summary

## Key takeaways

- **Meta's AI restructuring: layoffs amid high-profile hiring**: Meta AI announced layoffs of 600 positions, including core research directors and AI executives, while simultaneously spending hundreds of millions to poach top AI talent, highlighting a contradictory strategy within the company's AI division. [00:01]
- **FAIR's utopian vision vs. GenAI's product focus**: Meta structured its AI strategy with FAIR focused on frontier research and AGI, while GenAI worked to integrate AI into products. This balance, ideal in a 'utopian state', began to falter as productization took priority over foundational research. [01:50]
- **Llama 4's failure: overemphasis on productization, neglect of reasoning**: Despite Llama 3's success, Llama 4's development prioritized multimodal capabilities for product integration, neglecting advances in reasoning techniques such as chain-of-thought (CoT). This shift, compounded by competition from DeepSeek and OpenAI's o1, led to a critical imbalance and the project's failure. [16:24], [27:41]
- **Management chaos and tight deadlines doomed Llama 4**: The rush to meet tight deadlines, exacerbated by unexpected competition and internal prioritization conflicts, led to team burnout and a significant drop in quality. This 'firefighting' approach, directed by managers perceived as lacking deep AI expertise, ultimately undermined the project. [20:03], [20:39]
- **Alex Wang's rise and the overhaul of Meta's AI division**: After Llama 4's failure, Meta's AI division underwent a major restructuring led by 28-year-old Alex Wang, who now heads the new MSL department and reports directly to Zuckerberg, consolidating power and aiming to rebalance core research with product integration. [24:08]

Topics Covered

  • Meta's AI strategy: Balancing frontier research with productization.
  • Llama's 'open weights' model redefined open source AI.
  • Llama 4's failure stemmed from prioritizing product features over core tech.
  • Internal communication breakdown hindered Llama 4's development.
  • New leadership structure consolidates AI power under Alex Wang.

Full Transcript

In late October 2025, Meta AI announced layoffs of 600 positions, including core research directors and executives in charge of the AI business, some of whom left or were marginalized. Even Turing Award winner Yann LeCun was said to be in a difficult position.

When I saw the news, I was shocked. On one hand, Zuckerberg was offering hundreds of millions of dollars in annual compensation to poach AI talent; on the other, he was laying off staff just as decisively.

What was behind this contradictory behavior? We interviewed Tian Yuandong, former FAIR research director and AI scientist at Meta; Gavin Wang, a former Meta employee who participated in Llama 3 training; a senior HR expert in Silicon Valley; and several anonymous sources, to try to reconstruct what happened to Meta's Llama open-source roadmap.

Why was Llama 3 so impressive, while Llama 4, just one year later, was so disappointing? What happened in between? Was Meta's open-source roadmap doomed from the beginning? Amid today's fierce competition among large AI models, can a utopian AI research lab still exist?

"We can't let people who don't understand it lead or plan Llama 4."

"During the planning, you could sense there might be some changes in the leadership's direction."

"Facebook has money, GPUs, people, and data; it has almost everything. So why isn't it doing well now?"

Let's get into today's video and see how Meta's open-source AI approach hit a roadblock.

First, let's look at the overall corporate structure behind Meta's AI strategy.

At the end of 2013, Zuckerberg began building Meta's AI team. Around that time, Google acquired Geoffrey Hinton's startup DNNresearch and brought Hinton on board, while Meta invited Yann LeCun to lead its AI development. Thus two of the three giants who would later share the Turing Award for deep learning stepped into industry to lead AI research and development. When Zuckerberg invited Yann LeCun to join Meta, LeCun set three conditions: first, he would not move from New York; second, he would not resign from his post at New York University; and third, he must conduct open research, publicly release all his work, and open-source the code.

So Meta's strategy was open source from the start. After joining Meta, Yann LeCun began working on cutting-edge AI and established the Fundamental AI Research lab, the renowned FAIR, to lead frontier research in artificial intelligence. FAIR's mandate is frontier research: exploring new ideas, approaches, algorithms, frameworks, and model architectures that may have no significant applications at present but could lead to major breakthroughs. That's the general logic.

However, for Meta the ultimate goal is to see AI's progress show up in its own products. So a group called Generative AI (GenAI) was set up in parallel with FAIR. GenAI contains different functional teams, including the Meta AI team, which develops the Llama open-source models and applies AI capabilities to products; the data center team, which builds AI computing infrastructure; and smaller departments such as Search, Enterprise, and video generation models. GenAI and FAIR are parallel entities, like the two sides of a balance scale: on one side is cutting-edge research, on the other is productization.

Ideally, cutting-edge research leads to better products, and profitable products give management a greater incentive to fund FAIR's R&D. FAIR supplies good ideas and work, which GenAI then incorporates into production and into the next generation of models. Many researchers set out to do something different: a different direction, a different kind of work. But whether they can truly achieve AGI (Artificial General Intelligence) is a big question. So FAIR's goal is AGI, while GenAI's goal is to integrate AI into Meta's existing products and make it effective.

One key piece is Llama. Llama is a very large model, and the question is how to make it work well in specific applications. Maintaining this balance is an ideal, utopian state, and its prerequisite is that Meta's models always remain at the frontier, or at least at the frontier of the open-source field, without falling too far behind closed-source models.

When was the happiest time at FAIR?

I think FAIR was very happy from when I joined until 2022. That period was happy because, after the advent of large language models, the entire ecosystem and the relationships between researchers changed. Computing power became the crucial factor, and with limited compute available, all kinds of problems and conflicts arose. Everyone wants to train a very large model, which leads to conflicts between teams: if one person gets more GPUs, someone else gets fewer, yet without enough GPUs you can't train a good model. For that reason, the situation after 2023 was certainly not as good as before.

How did Meta's AI balance tip? We can find clues in the release of Llama's fourth generation. Incidentally, Meta reportedly named its large language model "Llama" because the abbreviation "LLM" (Large Language Model) is hard to pronounce; adding vowels produces "Llama", which is easy to say, remember, and spread. That is how the name of large language models became associated with alpacas.

Let's look first at Llama 1, which laid the foundation for Meta's "open source" approach to large models. On February 24, 2023, Meta released the Llama model, emphasizing "smaller parameters, better performance" and releasing versions at multiple scales: 7B, 13B, 33B, and 65B. Meta stressed that the 13B model could outperform the 175B-parameter GPT-3 on multiple benchmarks. A week after the announcement, Llama's weights were "leaked" as a torrent on 4chan, sparking widespread discussion in the AI community about open-source models and even prompting a letter from a senator questioning Meta. Despite many dissenting voices, the industry unexpectedly embraced the "accidental leak," which was seen as reshaping the landscape of open-source large models and quickly spawned numerous grassroots fine-tuning projects.

Here, let's briefly clarify what "open source" means for large models. Strictly speaking, Meta isn't fully open source; Meta calls its approach "open weights." So what are weights? A machine learning model has three parts: the architecture, the weights, and the code. The weights are all the numerical parameters the model learns. After training, these parameters are stored in several large binary files containing the matrix values for each layer of the neural network. During inference, the model code loads these weight files and uses the GPU to perform matrix operations that generate text. "Open weights" therefore means releasing the trained parameter files to the public, so that external users can load, deploy, and fine-tune them locally. But that is not fully "open source": true open source also means disclosing the training data, the training code, permissive licenses, and so on, and Meta hasn't disclosed these.

Even the later Llama 2, 3, and 4 only opened the weights, with slight relaxations in licensing policy. The conclusion: although Llama is only "semi-open source," compared with completely closed companies like OpenAI, Anthropic, and Google, which expose model capabilities only through APIs, Llama brought considerable vitality to the open-source community.
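To make "open weights" concrete, here is a minimal sketch of what an external user can do with the released parameter files. It assumes the Hugging Face transformers library; the checkpoint name is illustrative, and downloading it requires accepting Meta's license for the gated repo.

```python
# Minimal sketch: load "open weights" locally and generate text.
# Assumes `pip install transformers torch` and an accepted Meta license
# for the gated repo (the checkpoint name below is illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"

# The config describes the architecture; the weights arrive as large
# binary shards holding the matrix values for each layer.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are just numeric tensors
    device_map="auto",           # place layers on available GPUs
)

# Inference: the code loads the weights and runs GPU matrix operations
# to produce text, exactly as described above.
inputs = tokenizer("Explain open weights in one sentence.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the files sit on your own machine, the same weights can also be fine-tuned or quantized locally, which is exactly the freedom that API-only closed models don't offer.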

On July 18, 2023, Meta, together with Microsoft, released Llama 2, a new generation of models in three parameter sizes (7B, 13B, and 70B). Still "open weights," Llama 2 was free for commercial use, whereas Llama 1 had been research-only and available by application, and its licensing restrictions were relaxed. Outlets like Wired noted that Llama 2 made the "open route" a reality in the face of the closed-model giants. Llama 2 quickly became popular in the developer community; its availability significantly expanded the ecosystem and AI development, making it the preferred model for many. Developers no longer had to live with the OpenAI API's rate limits, nor explain to customers why they had to pay extra usage-based fees. The key difference lies here: Meta and Microsoft's bold move changed the industry landscape. It forced other companies to be more open, because it set new industry standards for what a good model should look like and how open-source licensing should be done.

Then came Llama 3 in 2024, the most glorious and successful period for the Llama series. In the Llama 3 era, Meta became a top player in the AI open-source community. From April to September 2024, Meta released three model iterations. On April 18, 2024, Meta released two Llama 3 versions, 8B and 70B, claiming they "significantly surpassed Llama 2" at the same scale, and made them one of the foundations of the Meta AI assistant. On July 23, Meta launched three Llama 3.1 models, 405B, 70B, and 8B, claiming the 405B was among the "world's strongest openly available foundation models," with simultaneous launches on platforms such as AWS Bedrock and IBM watsonx. Just two months later, on September 25, 2024, Meta released Llama 3.2, focused on small but comprehensive multimodal capability: 1B and 3B lightweight text models plus 11B and 90B vision multimodal models aimed at terminal and edge scenarios, integrated with platforms such as AWS and the open-source framework Ollama, and able to run locally.

We interviewed Gavin Wang from the Llama 3 team, who worked on post-training for Llama 3. He told us the GenAI team was moving at "light speed" within Meta, with the feeling that "one day in AI is a year in the real world."

At that time, Llama 3.1 and 3.2 made significant progress, such as the release of the multimodal models and, later, the very lightweight 1B/3B models. I think they made a lot of progress for the product ecosystem, with a lot of support from the open-source community. I have friends on the Llama Stack team who specifically support the entire Llama ecosystem for enterprise and small business deployments.

The strong launch of Llama 3, especially the 405B version, was seen as closing the capability gap with the closed-source camp and as a way to rapidly drive the adoption of AI applications.

For Meta employees, especially the AI engineers on the Llama team, this was a project to be extremely proud of. The narrative at the time was that Meta was the only major company with an open-source model, making significant contributions to the entire open-source ecosystem.

I think many people felt it wasn't just a job; we were truly supporting the frontier of AI development. Everything we did felt very meaningful. I felt very proud at the time. When I went out and told people I worked on the Llama 3 team, founders of startups would say, "Thank you for your efforts." It felt like the entire tech community, especially the AI startup community, was counting on Llama.

Meta hoped the release of Llama 4 would further expand its influence in the AI developer community and maintain its position as "the only open-source model among the top large models." After the earnings call at the end of January 2025, Zuckerberg posted: "Our goal for Llama 3 was to make open-source models competitive with closed-source models; our goal for Llama 4 is to take the lead." But the release of Llama 4 three months later was a complete disaster, a Waterloo. On April 5, 2025, Meta released two versions of Llama 4, Scout and Maverick, claiming a significant leap in multimodal and long-context capabilities and prominently citing leading performance on the LMArena leaderboard in its promotional materials: the Maverick version ranked second only to Gemini 2.5 Pro, tied for second with GPT-4o and Grok 3 Pro.

However, feedback from the developer community was not positive; developers found Llama 4's performance below expectations. Rumors began to circulate that the version Meta sent to LMArena was a cheat: Llama 4's ranking was said to be based on a variant specially optimized for dialogue through reinforcement training, potentially misleading LMArena and overfitting the leaderboard. Meta executives quickly denied cheating, but the impact was immediate. On one hand, the media widely described it as a "bait and switch," using a specially tuned version to game the charts, and industry debate over the credibility and reproducibility of benchmarks intensified. On the other hand, the release of Meta's more advanced Behemoth version was delayed, severely damaging public relations and the overall timeline. As of this writing, Behemoth has not been released, and Meta has likely given up on it.

What followed was Zuckerberg's all-or-nothing acquisition of Scale AI, bringing in Alex Wang to lead a new AI organization, then spending hundreds of millions of dollars poaching talent and upending the Silicon Valley AI talent market. Then came the recent news that Wang had begun restructuring Meta's entire AI organization, laying off 600 people. But looking at this timeline, doesn't it still feel disjointed? What happened in the year between Llama 3 and Llama 4? Why did Llama 4 fail so suddenly? Wasn't it all too fast? Through our retrospective, we may have found some answers.

Remember the balance scale we described in Meta's internal AI structure? The reason Llama 4 failed is likely that the balance tipped. Back to Meta's AI organization: FAIR and GenAI are two parallel groups. Yann LeCun is in charge of FAIR, but he is often immersed in his own research; he sometimes argues with people online, such as Musk, and often says he is not optimistic about the LLM route, which troubles Meta considerably.

So in February 2023, Meta's senior management made Joelle Pineau, then head of Meta AI research, the global head of FAIR, leading it together with Yann LeCun. The head of the GenAI department was Ahmad Al-Dahle, who had worked at Apple for almost 17 years. Zuckerberg recruited him to combine AI with Meta's products, including AI integration in the Metaverse smart glasses and the chat tool meta.ai. After the success of Llama 2 and throughout Llama 3, Meta's senior management increasingly emphasized "using AI in our own products." Consequently, in January 2024, Meta's AI teams were restructured, with the two FAIR leaders reporting directly to Meta's CPO, Chris Cox.

Llama 1 through 3 belonged to an era when everyone was frantically pursuing the scaling law, and the industry was focused on improving foundation models and exploring the boundaries of large language models. But Meta's leadership, including Zuck (Mark Zuckerberg) and CPO Chris Cox, recognized early on that for LLM capabilities to land and generate real value, they had to start from product capabilities. So the core goal of GenAI during Llama 2 and Llama 3 was to productize and engineer research results. As a result, the middle and senior management leading the Llama team, the vice presidents and senior directors, essentially comprised people with product and engineering backgrounds. When Llama 3 launched successfully and top management began drafting the Llama 4 roadmap, all attention went to product integration, namely multimodal capabilities, and the importance of the model's reasoning ("inference") capability was neglected.

During the year of development between Llama 3 and Llama 4, on September 12, 2024, OpenAI launched the o1 series of models built on chains of thought. Then, in December 2024, China's open-source DeepSeek model emerged, using an MoE (Mixture of Experts) architecture to dramatically cut model costs while preserving reasoning capability.
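For readers unfamiliar with MoE, the cost saving comes from sparsity: a router activates only a few expert networks per token, so per-token compute grows with the number of active experts rather than with total parameters. Below is a deliberately simplified sketch of the idea in PyTorch; it illustrates the general technique, not DeepSeek's actual architecture.

```python
# Simplified sketch of a Mixture-of-Experts (MoE) layer: a router picks
# the top-k experts per token, so only k of n expert MLPs run per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # scores each expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (tokens, d_model)
        gates = F.softmax(self.router(x), dim=-1)
        topv, topi = gates.topk(self.k, dim=-1)       # top-k experts per token
        topv = topv / topv.sum(dim=-1, keepdim=True)  # renormalize gate weights
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e
                if mask.any():  # run each expert only on its routed tokens
                    out[mask] += topv[mask, slot, None] * expert(x[mask])
        return out

# Total parameters scale with n_experts, but per-token compute only with k.
layer = MoELayer()
y = layer(torch.randn(4, 512))
```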

Before you were pulled in to help with Llama 4, what were you researching?

We were doing research on the reasoning process, mainly on chains of thought: their form and how to train them. Actually, even before o1 came out last September, we had noticed that very long chains of thought would affect the scaling law of the whole model.
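As background, chain-of-thought simply means having the model emit intermediate reasoning steps before its final answer; the length of those emitted chains is what affects the compute trade-off mentioned above. Here is a minimal prompting sketch; the prompts are illustrative, and `generate` stands in for any LLM completion call, not the training-time methods FAIR was investigating.

```python
# Minimal chain-of-thought (CoT) illustration: a worked, step-by-step
# example in the prompt steers the model into emitting its own reasoning
# steps before the final answer. Longer emitted chains cost more
# inference-time tokens, which is the scaling trade-off discussed above.

question = "A train travels 60 km/h for 2.5 hours. How far does it go?"

# Direct prompt: the model must jump straight to the answer.
direct_prompt = f"Q: {question}\nA:"

# CoT prompt: the worked example induces step-by-step reasoning.
cot_prompt = (
    "Q: A store sells pens at $3 each. How much do 4 pens and a $5 notebook cost?\n"
    "A: Let's think step by step.\n"
    "   Four pens cost 4 * $3 = $12.\n"
    "   Adding the $5 notebook gives $12 + $5 = $17.\n"
    "   The answer is $17.\n\n"
    f"Q: {question}\n"
    "A: Let's think step by step.\n"
)

# def generate(prompt: str) -> str:  # placeholder for any LLM completion call
#     ...
# print(generate(cot_prompt))  # expected to reason: 60 * 2.5 = 150 km
```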

Researchers like Tian Yuandong in the FAIR group were already working on chain-of-thought research, but this frontier exploration of reasoning capability was not communicated to the Llama model engineering team in time.

When planning Llama 4, I sensed a shift in the leadership's direction. Overall, they still wanted to support Meta's core products, particularly Llama's ecosystem, so multimodality was the key focus. DeepSeek, launched in January, boasted exceptionally strong reasoning capability and was a topic of discussion, but given Meta's emphasis on multimodality, they didn't prioritize reasoning. After DeepSeek's emergence, there was reportedly discussion (I had already left the Llama team by then) about whether to refocus on reasoning.

This likely created prioritization conflicts, and with limited time, everyone was working overtime and trying many things at once. I believe DeepSeek's arrival caused some management chaos around resources and priorities. Another point: the model and organizational structure of Llama 1 through 3 largely continued the initial design, but the success of Llama 3 inspired everyone to take Llama 4 a step further and undertake much larger projects.

However, problems arose. My observation is that many of the company's senior managers, the vice presidents and senior directors, come from traditional infrastructure backgrounds, or perhaps computer vision, with less exposure to natural language processing. They lack a deep understanding of AI-native technologies and large language models. The people who truly understand these areas are often the research-oriented PhDs actually doing the work, especially the Chinese academic PhDs we are very proud of, who are generally very strong technically but often lack influence and resources inside companies. That leads to situations where outsiders manage insiders, with consequences no one foresaw.

The emergence of OpenAI's o1 series and DeepSeek threw Meta into disarray in early 2025, prompting top management to temporarily send FAIR's researchers to support Llama 4's development. It could only be described as "firefighting," and the firefighting team was led by Tian Yuandong.

A major lesson I learned is that for projects like this, you can't let people who don't understand the project lead it or do the planning. If something goes wrong, everyone should be able to agree that we can't release now and should postpone. It should be phased: release only when things are running smoothly. You can't set a deadline in advance; doing so makes many things impossible to do well. For example, many people on our team were exhausted. I was in California, and team members in the Eastern Time Zone would call me at midnight; it was 3 AM there and they were still working. It was incredibly tough, and I think that was a major problem. Why were they so tired? Because the deadlines were very tight: the version had to ship on schedule, by a fixed day.

Generally, in project management you work backward: for a release at the end of March, you look at what must be finished by the end of February or early March. But when you then discover that a model isn't working or the data has a problem, a major issue arises: how do you get everyone to stop on your word? If you say, "This data is unusable; we need to replace it," things get complicated, because the whole schedule slips by a week or two, and whether that is actually allowed is a big question. Under intense deadline pressure, the result is that the work can't be finished properly, or people don't dare voice objections, and quality drops significantly. That is a major problem.

Why did Meta feel such intense deadline pressure? Because it was already the leading open-source model, and of course DeepSeek's release at the start of the year was unexpected. But why was there such a hard deadline, "I must release by this date"?

There was a deadline set with high-level approval, but I can't go into details. You'd have to ask the people involved; those who know, know.

We can basically find some answers here. The "AI productization" approach set when Llama 4 was planned focused on multimodal, application-oriented models and on integrating applications and business lines, but it neglected reasoning capability and frontier research. That forced the FAIR team, on the other side of the scale, to "put out fires" for other groups, upsetting the balance. In reality, competition over frontier models was so fierce that FAIR's papers were hard to actually put to use; some were used, but communication problems remained.

When I was at FAIR, I felt that sometimes when I pinged GenAI people, they wouldn't reply.

Yes, after I moved to GenAI, I found I really couldn't keep up with messages from FAIR researchers. Why? Because we were too busy. If I didn't check my phone for half an hour, there might be 20 or 30 messages waiting, and I had to get through them. There were many people to contact and many decisions to make. So I can understand that in an environment like GenAI, it's hard to sustain long-term thinking.

So how did Zuckerberg fix this imbalance? He parachuted in a special forces team led by Alex Wang.

Let's return to Meta's AI business structure. After yet another restructuring, upper management went through a series of upheavals. Alex Wang led dozens of top researchers, hired at enormous salaries, to form a special group within Meta called TBD Lab, which enjoys nearly unlimited privileges and priority. TBD, FAIR, and GenAI together form the MSL department, Meta Superintelligence Labs, reporting directly to Wang, who in turn reports directly to Zuckerberg. That means Yann LeCun of FAIR now reports to Wang. Joelle Pineau, who had previously been made to report to Ahmad, the head of GenAI, left in May of this year to become Chief AI Officer at Cohere, and Ahmad has been quiet for a long time, given no important projects to lead. CPO Chris Cox has likewise been overshadowed by Wang and excluded from direct leadership of the AI team. The current structure leaves the 28-year-old Alex Wang as the sole leader.

I've heard various complaints inside Meta about Wang and his extraordinarily privileged group: people on the TBD team can go three years without performance reviews and can ignore messages from other VPs, and all AI papers must now be reviewed by TBD staff before publication. Many of TBD's members are quite young, which has caused considerable resentment among senior researchers, feeding internal politics and a sense that another wave of conflict is brewing.

Still, it's undeniable that these privileges come with performance targets. For Zuckerberg, the target isn't just to make Llama great again, but "Meta must win." In this AI race, this restructuring may be Zuckerberg's last and most important opportunity. In an internal email to the team, Wang described three changes he would make: first, strengthening the core basic-research capabilities of the TBD and FAIR teams; second, deepening the integration of product and application development, keeping the models focused on products; and third, establishing a core infrastructure team to support research bets.

The first point centralizes basic research in TBD Lab and FAIR and integrates them more closely. Some of the laid-off researchers mentioned in emails that their projects might not have been high-impact: they were doing cutting-edge research, but it wasn't relevant to current work, because much frontier research is highly abstract, mathematical and theoretical, and far removed from engineering.

So the first point is centralization, and the second is tighter integration of products and models. One of the people who joined alongside Alex Wang was Nat Friedman, the former CEO of GitHub. In effect, Zuckerberg brought in two senior leaders at once: Alex Wang to manage the models, and Nat Friedman on the product side, because products feed better signals back into the model, creating a flywheel effect in use.

Third, building a unified core infrastructure team centralizes the management of GPUs and data centers. Previously this was likely fragmented across several leaders, and you had to apply for GPUs; now they are centrally managed. So the email is quite clear. Whether Wang can live up to Zuckerberg's bet remains to be seen; perhaps we'll have the answer soon.

In summary, Meta led the open-source camp through the first three generations of Llama, rallying it against closed-source players like OpenAI and Google Gemini. After the great success of Llama 3, however, top management grew eager to fuse AI with products. In planning the roadmap they took a "product-driven R&D" approach, concentrating Llama 4's upgrades on engineering features such as multimodality, and missed the window for frontier reasoning techniques like CoT (chain of thought). FAIR's AI scientists, including Tian Yuandong, were already researching CoT at the time, but after DeepSeek caused a sensation, Tian's team was pulled from FAIR to optimize Llama 4's MoE architecture, which ironically disrupted their R&D on CoT and reasoning, leaving the scale between frontier AI research and product engineering completely unbalanced.

During the interviews, I kept thinking of once-brilliant frontier labs like Bell Labs, IBM Watson Research, and HP Labs, all of which declined because they couldn't balance frontier research with commercialization. FAIR, with its more than ten-year history, was once a utopia for idealistic AI scientists; now it has become another casualty of commercialization.

Our interview with Tian Yuandong had many more interesting parts, which we'll share in the next video in dialogue format. He talked about many things unrelated to Meta but very much related to a senior AI researcher's beliefs, interests, and frontier thinking on AI. I think it's very valuable and hope you'll find it helpful. So please don't forget to subscribe to our channel so you don't miss the update! I'm Chen Qian, co-founder of Silicon Valley 101. Your comments, likes, and shares are the best motivation for us to keep creating in-depth technology and business content. See you in the next video, bye!
