Data Centers in Space? A New Bubble, or the Next Gold Mine?

By 硅谷101

Topics Covered

  • Space Becomes Cheapest AI Data Centers
  • Space Gifts Uninterrupted Solar Power
  • Space Cooling Approaches PUE of 1
  • Edge Computing Verifies Space GPUs
  • Hybrid Ground-Space Computing Emerges

Full Transcript

Have you ever considered that the next generation of "computing power factories" might not even be on Earth?

In the past few years, AI has turned data centers into new "energy monsters."

Electricity, heat dissipation, water supply, and site selection have all become key bottlenecks restricting the evolution of AI.

Suddenly, a seemingly futuristic idea was brought to the forefront: moving data centers into space.

Building data centers in space might sound like a PowerPoint presentation designed to attract investors, but in reality, a land grab for "orbital computing power" has begun.

At the recently concluded Davos Forum, Musk declared that within the next two to three years, space will become the lowest-cost place to deploy AI data centers.

SpaceX's core goal this year is to verify the full reusability of Starship, followed by the launch of solar-powered AI satellites in the coming years, potentially expanding to a scale of hundreds of terawatts.

Musk then dropped another bombshell: on February 2nd local time, SpaceX announced the acquisition of the artificial intelligence company xAI, bringing its total valuation to $1.25 trillion.

Musk revealed that one of SpaceX's most important tasks after the merger is to promote the deployment of space data centers. Documents show that SpaceX has submitted a launch plan for up to 1 million satellites to the Federal Communications Commission.

The ultimate result is that space will become the lowest-cost place to deploy artificial intelligence, and this will become a reality within two years, or at the latest three.

Amazon founder Jeff Bezos's Blue Origin secretly assembled a development team over a year ago to build a dedicated satellite for orbiting AI data centers.

We will begin building these giant gigawatt-class data centers in space.

Google also recently announced a space data center plan called Suncatcher, which is expected to send the first "rack-level computing power" into orbit in 2027. I have no doubt that in about ten years, we will see this as a more normal way to build data centers.

And the big players aren't just staying in the research stage; some have already started taking real action.

Just recently, Nvidia, through the startup Starcloud, launched a satellite equipped with an H100 GPU into orbit and completed the first successful training of a NanoGPT model in space, signifying that space computing power construction has entered the practical verification stage.

This may herald the birth of a brand new industry: space data centers.

Therefore, the question of space data centers today seems to be no longer "whether to do it," but rather "who can do it first."

So why are tech companies willing to endure extremely high launch costs to send servers into space?

How should data centers be built in the vacuum hundreds of kilometers above the Earth's surface? Can AI truly run more cheaply and efficiently once computing power leaves the ground?

This video will take us into the world of space data centers.

To understand why data centers need to go to space, we must first look at how tough life is for a ground-based data center. If you ask Silicon Valley bigwigs what the ultimate bottleneck to AI evolution is, they probably won't say algorithms, talent, or even chips.

Instead, they'll say two fundamental physical limitations: electricity and heat dissipation. In a previous video about "The Real Bill of Data Centers," we broke it down in detail.

Although power supply and cooling equipment together account for less than 10% of the total construction cost of a data center, they are the real bottleneck for data centers today.

Modern ground-based data centers are essentially power-consuming behemoths.

The continuous power consumption of a hyperscale AI data center has jumped from tens of megawatts (MW) to hundreds of megawatts, even approaching 1 gigawatt (GW).

What does 1 gigawatt mean?

If a system operates at 1 gigawatt of power 24 hours a day, 365 days a year, the electricity consumed is about 8.8 terawatt-hours, roughly equivalent to the annual electricity consumption of a medium-sized city.
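The arithmetic behind that figure is a one-liner; a quick check of the continuous-operation assumption:

```python
# Energy consumed by a 1 GW load running continuously for one year.
power_gw = 1.0
hours_per_year = 24 * 365              # 8760 hours

energy_twh = power_gw * hours_per_year / 1000   # GWh -> TWh
print(f"{energy_twh:.2f} TWh per year")         # -> 8.76 TWh per year
```

8.76 TWh rounds to the "about 8.8 TWh" cited in the transcript.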

The problem brought by AI is not just "power consumption," but that almost all electricity will eventually turn into heat.

For example, the power consumption of a single high-end GPU like the H100 is close to 700 watts, and a training cluster often has tens of thousands of cards.

The direct result is that heat dissipation is becoming a more expensive system engineering project than computing power.

With the exponential increase in global AI computing power demand, traditional air cooling technology is struggling to meet the heat dissipation needs of high-density computing devices.

As a result, liquid cooling has become a necessity.

Data shows that a large data center often requires 1-2 liters of fresh water for cooling every kilowatt-hour of electricity consumed.

This means that a 100-megawatt AI data center may consume millions of liters of water per day.
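The two figures are mutually consistent; a back-of-the-envelope check using the transcript's own 1-2 L/kWh rate:

```python
# Daily cooling-water demand of a 100 MW data center at 1-2 liters per kWh.
power_mw = 100
kwh_per_day = power_mw * 1000 * 24       # 2.4 million kWh consumed per day

low_l_per_day  = kwh_per_day * 1.0       # at 1 L/kWh
high_l_per_day = kwh_per_day * 2.0       # at 2 L/kWh
print(f"{low_l_per_day/1e6:.1f} to {high_l_per_day/1e6:.1f} million liters per day")
# -> 2.4 to 4.8 million liters per day
```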

What's more troublesome is that as GPU power consumption continues to rise, the efficiency improvement of cooling systems is slowing down significantly.

However, AI still needs to rely on large-scale energy consumption to continue to advance. AI giants are racking their brains to obtain electricity: acquiring and transforming power plants, building their own power grids, snapping up gas turbines, and researching nuclear energy.

The ground has been drawn into an AI energy war.

Against this backdrop, a question naturally arises: is there a place with more abundant and stable energy, and where heat dissipation can be more direct and efficient?

The answer is space.

Beyond the atmosphere, space has prepared three great gifts for mankind, a "computing power paradise" that the ground can never provide.

The first great gift is energy.

On the ground, energy is a complex system issue involving power generation, transmission, energy storage, peak shaving, carbon emissions, land, and other aspects. Even the most ideal new energy system cannot escape weather changes and seasonal fluctuations.

However, the logic of solar energy in near-Earth orbit is completely different. Without atmospheric refraction, cloud cover, or the alternation of day and night, as long as the solar panels are large enough, theoretically you can obtain almost zero-cost clean energy that is uninterrupted 24 hours a day. Calculation data shows that the utilization efficiency of solar energy in Earth orbit is 8 to 10 times that on the ground.

This means that energy has become a "continuous variable" rather than an "intermittent resource" for the first time. This is extremely important for the development of AI, because the most critical factor for AI training and inference is not "cheap electricity," but rather long-term, stable, and uninterrupted power input. If we broaden our perspective further, we will find that "solar energy" is just the tip of the iceberg of the space energy gold mine.

The "solar energy" we use in space today is essentially just a byproduct of solar fusion reactions.

The sun itself is a natural nuclear fusion reactor that has been operating stably for 4.5 billion years.

The energy it releases every second far exceeds the total energy needed by the entire human society.

Today, in order to obtain energy, many investors are flocking to research and manufacture small-scale fusion reactors. Musk said that this is completely unnecessary because we already have a free, uninterrupted ultimate energy source hanging above our heads. This is like installing a mini ice maker in Antarctica and saying, "Look, we've made ice!"

I can only say, "Congratulations!"

After all, you're right next to a 3,000-meter-high glacier.

The second gift is heat dissipation.

On Earth, we need huge fans and expensive liquid cooling systems, but heat dissipation in space follows a completely different set of physical laws.

AI operation generates enormous heat, while the background temperature in space is only 3 Kelvin (about -270°C).

Simply facing the heatsink away from the sun is enough to achieve efficient natural cooling.

In a vacuum environment, heat doesn't need to be "moved away" but can be released into deep space through radiation.

We can directly throw waste heat into space using huge radiative heat sinks.
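For a rough sense of scale, radiative rejection follows the Stefan-Boltzmann law, P = εσA(T⁴ − T⁴_space). The emissivity and 300 K radiator temperature below are illustrative assumptions for this sketch, not figures from the video:

```python
# Radiator area needed to reject a heat load purely by radiation to deep space.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(heat_w, emissivity=0.9, t_rad=300.0, t_space=3.0):
    """Area in m^2 needed to radiate `heat_w` watts at radiator temp t_rad (K)."""
    flux = emissivity * SIGMA * (t_rad**4 - t_space**4)  # net W per m^2
    return heat_w / flux

# One H100-class GPU (~700 W) vs. a 1 MW compute module.
print(f"{radiator_area(700):.1f} m^2 for a 700 W GPU")
print(f"{radiator_area(1e6):,.0f} m^2 per megawatt of compute")
```

At these assumptions each megawatt of compute needs on the order of a few thousand square meters of radiator, which is why the transcript keeps returning to "huge radiative heat sinks."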

Former Microsoft Energy Strategy Manager Ethan Xu told us that this means the Power Usage Effectiveness (PUE), or energy efficiency, can approach 1 infinitely.

The temperature in space is extremely low, and we know that in traditional data centers, nearly 40% of the electricity is used for cooling rather than powering computation.

Therefore, in space, if the near-absolute zero-degree environment can be effectively utilized, the waste heat generated by data centers can be directly discharged into deep space through radiation.

In this way, the PUE of data centers can theoretically approach 1.

That is, almost all the electricity supplied to the data center goes to computation, not cooling.
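As a definition check, PUE is simply total facility power divided by IT power; a minimal sketch with made-up overhead numbers:

```python
def pue(it_power_mw, cooling_mw, other_overhead_mw=0.0):
    """Power Usage Effectiveness = total facility power / IT power (ideal = 1.0)."""
    return (it_power_mw + cooling_mw + other_overhead_mw) / it_power_mw

print(pue(100, 40))   # ground-style facility with heavy cooling overhead -> 1.4
print(pue(100, 2))    # radiative cooling in space -> 1.02, approaching 1.0
```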

The third gift is extremely low latency.

Light travels about 30% slower in optical fiber than it does in a vacuum.

Through laser links, space data centers can bypass complex terrestrial networks and submarine cables, achieving true "global computing power in seconds."

When computing nodes begin to appear in orbit, they are not "far from Earth"; in network topology terms, a space node can actually sit closer to users and respond faster.

Therefore, space simultaneously satisfies three conditions: continuous energy, heat dissipation approaching physical limits, and fast communication. These are precisely the three most scarce resources for AI computing power at present. However, this seemingly perfect solution faces a huge entry problem in reality: how can we cram servers heavier than pianos and more fragile than porcelain into rockets and then precisely deploy them into orbit?

How exactly should space data centers be built?

Currently, global exploration has gradually converged into two mainstream paths: "on-orbit edge computing" and "orbital cloud data centers."

One addresses "current problems," while the other bets on "future scale," solving problems at different levels and representing different stages of ambition.

Regarding these two paths, Zhejiang University and Nanyang Technological University of Singapore recently jointly published a new study in Nature, systematically proposing a complete technical framework for the first time.

We interviewed the paper's first author, Dr. Ablimit Aili, who will help us understand the differences between the two approaches and how they are built.

First, let's look at the on-orbit edge computing model.

An edge data center is not a complete "cloud."

Its core logic is relatively simple: instead of transmitting all the data collected by satellites back to the ground, AI accelerators are sent directly to already operational satellites, allowing the data to be analyzed, filtered, and compressed in space.

This is suitable for smaller, more specialized scenarios.

Edge data centers mainly consider single satellites or small satellite constellations.

For example, these satellite constellations may provide remote sensing or imaging services.

When we upgrade them, we add better computing power, such as AI accelerators, to enhance the satellites' specialized computing capabilities, such as image processing. This greatly reduces the amount of data the satellites need to transmit to ground stations, which in turn greatly reduces service latency and indirectly reduces the amount of data that ground data centers need to process.

A representative successful case of on-orbit edge computing is the collaboration between Starcloud and NVIDIA.

Last November, Starcloud launched the Starcloud-1 satellite, which carries an NVIDIA H100-class GPU.

The entire computing system weighs only 60 kilograms, roughly the size of a small refrigerator.

The satellite's mission is not to "demonstrate computing power," but rather to directly receive data from a synthetic aperture radar (SAR) satellite constellation, process it in real time in orbit, and transmit the results back to Earth.

So far, it has completed several important tasks in space.

First, it successfully invoked Google's open-source model Gemma and sent a friendly greeting to Earth, "Hi, Earthlings!"

as if it were an extraterrestrial intelligent life form.

Second, it used the complete works of Shakespeare to train OpenAI founding member Andrej Karpathy's NanoGPT, enabling the model to express itself in Shakespearean English.

Furthermore, it can read sensor data in real time and perform real-time intelligence analysis, such as instantly identifying wildfire heat signatures and promptly notifying ground personnel.

The success of Starcloud-1 also signifies that, for the first time, computing power in space is no longer merely an "auxiliary system" but directly participates in the computation itself.

The reason why "on-orbit edge computing" has become the first successful route for building space data centers is based on a very clear technical and commercial logic.

Firstly, the technical difficulty of on-orbit edge computing is relatively controllable.

This controllability is not because "sending GPUs into space" is easy, but because it involves extending existing technologies rather than a system-wide reconstruction.

Secondly, at the hardware level, this route does not invent new computing architectures; it still uses mature data center-grade AI accelerators, simply repackaged. Thirdly, at the system level, on-orbit edge computing doesn't pursue complex computing power scheduling or multi-node collaboration. Instead, each satellite corresponds to a specific type of mission, such as remote sensing image processing, meteorological disaster monitoring, or military reconnaissance.

Therefore, it's more like a "mission-specific computing device" than a distributed cloud system.

Because these missions are highly deterministic, the algorithms, computing power scale, power consumption, and heat dissipation can all be fully designed and verified before launch, rather than improvised in orbit.

Moreover, even if a computing satellite malfunctions, its impact is localized and isolated, unlike cloud data centers where a single issue can have a domino effect.

Furthermore, at the application level, its business model is very clear.

Through on-orbit computing, it can significantly reduce downlink bandwidth pressure, lower communication energy consumption, and significantly shorten decision latency, serving various missions.

Therefore, this is not a story of "future computing power," but rather immediately quantifiable efficiency and benefits.

In addition, Dr. Aili stated in the interview that the more significant meaning of "on-orbit edge computing" lies in the fact that this approach is helping to accomplish a crucial task : verifying whether computing power can operate stably and reliably in space for a long period of time, thus laying the foundation for the future construction of orbital cloud data centers.

It is a very important first step because you need to verify several things, the most important of which is the computing power of the GPU in space.

The environment in space is very different from that on Earth; the biggest difference is the presence of many high-energy particles, which have a much greater impact on computing devices.

First, they need to know if the GPU can provide the computing power they want, and then they also want to see if the GPU can withstand these particles and provide service for several years or even more than ten years.

However, because "on-orbit edge computing" mainly serves specific tasks, it also has a very clear ceiling: it's more suitable for image recognition, object detection, and event filtering than for general large-scale computing.

Furthermore, from a physical perspective, due to limitations in satellite size, power supply, and heat dissipation, it's impossible to infinitely stack GPUs, let alone train extremely large models.

Therefore, "on-orbit edge computing" is more of a verification and experimentation stage for space data centers. The goal of orbital cloud data centers, however, is more direct and ambitious: to build a truly meaningful cloud computing infrastructure in space.

This approach no longer revolves around a specific type of task but attempts to form a system with multiple computing nodes, high-speed inter-satellite communication, and unified scheduling and orchestration in orbit, ultimately allowing computing power in space to be called upon, allocated, and expanded like a cloud on the ground.

One of the most systematic orbital cloud concepts currently is Google's Suncatcher. The core idea of Project Suncatcher is to deploy relatively fixed computing platforms in orbit, providing supplementary computing power to ground-based data centers through continuous and stable solar power.

In this concept, space computing power is not an independently operating "alien system" but is integrated into the existing cloud computing system, becoming part of the ground-based cloud.

It does not pursue global mobile coverage, nor does it undertake direct communication with users. Its main task is to share the computing power pressure of ground-based data centers.

Simply put, you can think of it as a super-large-scale computing rack suspended in space.

In their published paper, dozens of satellites form a group. It does not cover an entire region; it is a cluster that maintains a roughly fixed formation.

I guess their consideration is to ensure that they can achieve data communication with ground-based data centers from a certain location in space.

The paper Google published on this project elaborates in great detail on the Suncatcher system's configuration, construction plan, and cost calculations. In terms of model, Project Suncatcher almost breaks ground-based data centers down into many small units and then "space-izes" them one by one.

Its idea is to deploy a batch of satellites with solar arrays in the more stable dawn-dusk orbit, with Google TPU accelerators on each satellite. The satellites are interconnected via free-space optical communication (FSO), and a more "intelligent" control system allows them to "fly close together" in space without colliding.

The paper also cites a very specific structure: a cluster of 81 satellites with a radius of 1 kilometer.

On the hardware side, Google developed a special version of the TPU specifically for its space data center. Radiation testing experiments on the Trillium TPU showed that, under a radiation dose equivalent to approximately 5 years of orbital mission lifespan, the TPU did not experience fatal failures.

Regarding cost, Google conducted a detailed learning-curve analysis based on SpaceX launch data, predicting that by the mid-2030s the cost of launching to low Earth orbit could drop to less than $200 per kilogram, and that if Starship becomes fully reusable, the launch cost could even drop to $60 or even $15 per kilogram.

According to Google's plan, two prototype satellites are expected to launch in early 2027, at which time they will test the actual operation of the TPU in space and verify the optical communication link.
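To see why the $/kg figure dominates the economics, here is the same payload priced at each of those projected rates; the 100-tonne segment mass is an illustrative assumption:

```python
# Cost to orbit a 100-tonne space data center segment at projected launch rates.
payload_kg = 100_000   # illustrative 100-tonne segment

scenarios = {
    "mid-2030s projection": 200,   # $/kg to LEO
    "fully reusable Starship": 60,
    "optimistic Starship": 15,
}

costs_musd = {label: payload_kg * rate / 1e6 for label, rate in scenarios.items()}
for label, cost in costs_musd.items():
    print(f"{label}: ${cost:,.1f}M")
```

A 13x drop in $/kg is a 13x drop in the launch bill, which is why every space data center roadmap leans so heavily on Starship reusability.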

If Google's approach is "starting from the data center, breaking it down into a satellite constellation, and then launching it into space," then SpaceX's approach is exactly the opposite: "starting from a satellite constellation and evolving the constellation into a computing cloud."

SpaceX possesses the largest existing low-Earth orbit constellation, Starlink.

Currently, Starlink has approximately 9,300 active satellites, representing about 65% of all operational satellites in orbit.

These satellites are interconnected at high speed via laser links.

This means that if a company wants to create a "distributed system" in space, SpaceX is one of the few companies that truly possesses a "distributed hardware foundation."

SpaceX's vision is to gradually evolve some Starlink satellites from "pure communication nodes" into nodes with both communication and computing capabilities.

This way, computing power will no longer be concentrated on a few fixed platforms but distributed throughout the entire orbital network.

But how will this be achieved?

In reality, the Starlink satellites already in orbit will not directly become data centers. They must be upgraded to a "next-generation satellite" to truly carry computing tasks.

The core mission of the Starlink satellites currently in orbit is communication; they are responsible for user access, data relay, and inter-satellite laser link forwarding.

While these satellites possess some computing power, they are not designed for high-density computing.

Therefore, directly "upgrading" them into data centers is not practical in engineering.

So, the next step: SpaceX is more likely to introduce a new type of modified "computation-enhanced satellite" in subsequent launches.

These satellites will undergo significant design changes, including higher power supply capabilities, dedicated heat dissipation structures for computing power, and stronger inter-satellite communication interfaces.

The core function of these satellites is to serve as computing nodes in the network, not just pure communication nodes. Once launched, these new satellites will connect with the existing Starlink constellation via inter-satellite laser links to form an on-orbit, hierarchical cloud system.

Dr. Aili stated in an interview that SpaceX's approach coincides with the orbital cloud data center construction method that their research team has been considering for many years: "Our proposed cloud data center framework is based on existing communication satellites, such as Starlink. We add general-purpose servers and other equipment, and increase the number of solar panels and cooling plates, as well as bandwidth capability. So, in terms of thinking, it is very similar to SpaceX's approach."

The core characteristic of their model is that it does not pursue building a super-large-scale computing center all at once, but rather relies on the existing Starlink constellation.

By continuously adding node capabilities, the orbital network itself gradually acquires computing attributes, eventually forming a globally covering, dynamically scheduled, distributed network.

Its advantages lie in lower evolution costs and controllable risks; even if a computing node fails, it won't drag down the entire communication network.

Besides on-orbit edge computing and constellation-based orbital clouds, there is a more direct and "ground-centric" exploration direction: building centralized data centers in space.

The core idea is simple: instead of distributing computing power across numerous satellites, it centrally deploys rack-level computing systems on space stations or large on-orbit platforms, essentially moving a small ground-based data center entirely into orbit.

Currently, this approach is mostly in the research and early engineering verification stage, but some institutions and startups are already making moves.

At the aerospace agency level, NASA and the European Space Agency have conducted experiments related to on-orbit computing, data processing, and edge computing aboard the International Space Station (ISS).

In addition, some commercial space companies are also studying the feasibility of embedding data centers in space stations, including Axiom Space and Voyager. The advantages of the space-station-based data center model lie in its centralized structure, clear maintenance logic, and an engineering approach that most closely resembles that of terrestrial data centers.

However, the costs are equally significant: extremely high launch and on-orbit construction costs, limited scalability, and a strong reliance on on-orbit maintenance capabilities.

Firstly, its computing power is relatively concentrated, similar to terrestrial data centers.

This concentration of computing power results in more reliable, faster, and lower latency communication between various racks or chips.

However, on the other hand, reliability issues may arise during operation and maintenance.

In a distributed data center, if a computing node on a small satellite fails, the dozens or hundreds of other nodes remain unaffected. In a centralized, large-scale data center, a single major problem could simultaneously knock out a large share of the computing power.

Here, we have already seen a fairly complete picture of space data center construction.

Some choose to start with the most pragmatic on-orbit edge computing, while others attempt to directly build a true orbital cloud computing system.

Although their paths and paces differ, they all point in the same direction: computing power is being seriously pushed into orbit.

However, as these routes move from blueprints to engineering and the real world, the real tests have only just begun.

Space has the sun and a vacuum environment, seemingly inherently suited for computing power, but the engineering aspects are far more complex.

This is a typical communication satellite.

The two large "wings" you see are solar panels, providing power to the entire satellite: its primary, and almost sole, energy source.

In the center of the satellite is a relatively compact "box": the satellite platform.

This includes attitude control, the propulsion system, power management, thermal control, and computing control units, ensuring a stable orbit and precise pointing towards the ground. The protruding structures in front of or below the satellite are communication payloads. They receive signals from the ground, perform basic processing and amplification, and then relay them back to Earth: the traditional communication method.

The satellite's design goals are very clear: minimize computation, heat generation, and power consumption, leaving complex calculations on the ground and acting only as a "signal relay." However, truly transferring computing power to the satellite involves more than just "adding an extra chip"; it requires a complete overhaul of the satellite's engineering logic, from energy and heat dissipation to structural design.

The first change will be to the energy system.

To support continuously operating computing units, the solar panels on a single satellite will need a much larger area, necessitating a more complex power management system. This is because computing power doesn't just require "average power" but a stable, continuous, and uninterrupted power input.

For example, a 100-megawatt solar power station on the ground covers an enormous area, roughly the size of 200 football fields. If the same generating capacity were placed in space, it's conceivable that the unfolded array would still require at least dozens of football fields.

Therefore, you need to find a workable solution: how to fold the solar panels using lighter and more efficient materials, launch them into space, and then unfold them.

During routine operation and maintenance, you also need automated methods, such as robots, to maintain the solar panels. This is completely different from sending a worker to troubleshoot and repair a problem on Earth.

Next, the satellite's "central hub" will change.

In traditional communication satellites, the central box mainly handles control and scheduling.

In computing satellites, this will house the actual computing payload: AI accelerators, storage modules, and data processing units, which will become the new "core organs."

This will lead to changes in the heat dissipation structure.

Communication payloads generate limited heat, but computing payloads continuously generate heat.

This means that specialized radiant heat sinks must be added to the outside of the satellite to stably dump heat into deep space. These changes will alter the satellite's weight and structural center of gravity, posing entirely new requirements for launch capabilities and the pace of constellation deployment.

Even if technically feasible, space data centers face a more pressing issue: whether the engineering complexity and construction cost can be afforded. While the construction process for terrestrial data centers is highly mature, with standardized pathways for design, construction, and power-on, space engineering is forced into a complex chain: from system-level design to modular manufacturing, multiple launches, on-orbit deployment, and integration, followed by operation, maintenance, and decommissioning.

Any one of these stages could determine whether all previous investments are wasted.

This necessitates an extremely conservative approach to the project itself.

In our previous video on data center construction costs, we analyzed that building a 1 GW terrestrial data center would currently require approximately 51.6 billion yuan. Building a space data center, however, involves several key components: the energy system (space solar arrays), the cooling system (including ultra-large-area radiant heat sinks), computing power, aerospace-grade system packaging, launch, and on-orbit assembly. The cost of launch and on-orbit assembly alone is almost comparable to that of an entire terrestrial data center.

This is because, in order to be "sent up," the computing power, energy system, and cooling system must be disassembled, weight-reduced, and repackaged.

This not only increases the manufacturing cost per watt of computing power, but once the scale reaches hundreds of megawatts or gigawatts, the number of launches becomes a significant cost multiplier. According to calculations by NASA, JPL, and other organizations, achieving a 1 GW-level continuous-power on-orbit energy system in space would require solar arrays covering millions of square meters, meaning the total system mass could reach tens of thousands of tons.

Even at SpaceX Falcon 9's lowest internal launch cost of approximately $15-28 million per launch, the total investment for this part alone reaches $20-30 billion.
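That total is consistent with the cited mass range at Falcon 9-class launch economics; in this sketch the 20,000-tonne system mass, the ~17-tonne payload per launch, and the $20M per-launch cost are illustrative assumptions drawn from the ranges above:

```python
# Launch bill for a ~20,000-tonne on-orbit energy system via Falcon 9-class rockets.
system_mass_kg        = 20_000_000   # "tens of thousands of tons": assume 20,000 t
payload_per_launch_kg = 17_000       # approximate Falcon 9 payload to LEO
cost_per_launch_usd   = 20e6         # within the cited $15-28M internal-cost range

launches  = system_mass_kg / payload_per_launch_kg
total_usd = launches * cost_per_launch_usd
print(f"{launches:,.0f} launches, roughly ${total_usd/1e9:.0f}B in launch costs")
```

Around 1,200 launches and roughly $24B, squarely inside the transcript's $20-30 billion estimate.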

Furthermore, terrestrial data centers can tolerate a certain percentage of failures because hardware can be replaced at any time. Space data centers cannot.

The computing system must operate stably for many years without maintenance.

This means larger-scale components, more stringent testing cycles, and a slower pace of technological iteration.

The end result is that each watt of computing power bears a higher "survival cost."

Therefore, even with a very conservative estimate, the construction cost of a 1GW space data center could reach hundreds of billions of dollars.

However, Ethan also noted that although building space data centers is still very expensive today, with significantly reduced launch costs and near-zero energy costs, they may outperform terrestrial systems in overall lifecycle cost.

I think that, fundamentally and from an economic perspective, space data centers aim to compensate for their initial-investment disadvantage with exceptionally low operating costs. If initial investment costs continue to fall, and operating costs keep declining over the next few decades, then overall, space data centers are likely to achieve a cost advantage over terrestrial data centers in the coming years.
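The capex-versus-opex trade described here can be framed as a simple lifecycle-cost crossover. The numbers below are invented purely to illustrate the mechanism, not estimates from this episode:

```python
# Illustrative lifecycle-cost comparison: high capex / low opex (space)
# versus lower capex / higher opex (ground). All numbers are made up.

def lifecycle_cost(capex, annual_opex, years):
    """Total cost of ownership over a span of years (no discounting)."""
    return capex + annual_opex * years

space = {"capex": 100.0, "annual_opex": 1.0}   # arbitrary units
ground = {"capex": 50.0, "annual_opex": 4.0}

# Find the first year at which space becomes cheaper overall
crossover = next(
    y for y in range(1, 101)
    if lifecycle_cost(space["capex"], space["annual_opex"], y)
    < lifecycle_cost(ground["capex"], ground["annual_opex"], y)
)
print(f"space wins after year {crossover}")  # prints "space wins after year 17"
```

As launch costs fall, the space capex shrinks and the crossover year moves earlier; this is exactly the dynamic the argument above relies on.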

While space data centers may eventually become cost-competitive with, or even cheaper than, terrestrial data centers, a significant challenge remains: regulation.

Regardless of the construction method, space data centers will inevitably lead to an order-of-magnitude increase in on-orbit equipment.

This means that, to achieve data-center-level computing power, satellite constellations could essentially surround the Earth.

Given the already congested low Earth orbit, this introduces systemic problems. First, orbital congestion: computing satellites tend to be heavier, have longer lifespans, and operate with greater complexity.

When satellites from different countries, companies, and of different types operate simultaneously in the same orbital layer, coordination becomes exponentially more difficult.

Second, there is the risk of collisions and space debris.

If a high-power computing satellite fails and cannot be deorbited in a timely and controlled manner, it could become a long-term source of debris. Once generated, debris travels at extremely high speeds in orbit, threatening not only individual projects but also the long-term safety of the entire orbital environment.

This means that the advancement of space data centers requires not only technology and capital but also a new orbital governance approach, stricter deorbiting and decommissioning standards, and long-term collaboration across borders and operators.

Having understood the series of technical, cost, and regulatory challenges facing space data centers, one conclusion becomes clearer: space data centers have never been a path to "short-term success."

From the perspective of the entire computing power system, the future role of space data centers is more likely not to replace terrestrial data centers, but to supplement them.

At least in the foreseeable future, terrestrial data centers will still have irreplaceable advantages: lower cost, faster deployment, more flexible maintenance, and a more mature ecosystem.

For most general computing tasks, placing computing power on the ground is still the most economical and efficient choice.

The significance of building space data centers is not that they are "cheaper today," but that they provide a computing power growth path that is no longer completely constrained by ground physical conditions.

As the scale of computing power continues to expand and terrestrial data centers begin to be increasingly constrained by energy supply, heat dissipation capacity, water pressure, and land resources, space provides a long-term feasible alternative.

Therefore, even if space data centers are truly built, the more realistic and likely form will not be "the entire computing power goes to space," but a hybrid computing power system where ground and space coexist.

Terrestrial data centers will continue to undertake the main computing power, core storage, and high-frequency interaction tasks, while space data centers will play a role in specific scenarios.

Space data centers should be very feasible in certain scenarios. For example, AI training requires a great deal of energy, but its customers are mainly researchers within companies, not ordinary consumers. Therefore, moving AI training into space could reduce its energy costs. More generally, for computing power demands that are not particularly sensitive to latency or reliability, placing them in space is feasible.

Furthermore, with the advancement of space technology, much of the data collected in space increasingly needs to be processed there.

Therefore, space data centers can exist as a form of edge data center.

If terrestrial data centers defined the growth of computing power over the past two decades, then space data centers are laying the groundwork for the next era of computing: an infrastructure yet to be launched.

Today they are still expensive, complex, and controversial, with a long way to go before scaling up. However, they address an increasingly real problem: as computing power needs keep growing, can the Earth-bound world continue to expand indefinitely?

Perhaps space data centers won't be the main focus in the short term, but they are reminding us that when humanity begins to seriously discuss sending the "cloud" into orbit, it means that computing power is being treated as a fundamental resource requiring consideration on a planetary scale.

The significance of space data centers may lie not just in when they will be realized, but in making us realize that the boundaries of human computing are no longer limited to Earth.

Thank you for watching this video.

That's all for this episode.

Your likes, follows, and comments are the best motivation for "Silicon Valley 101" to produce in-depth technology and business content.

I'm Chen Qian, see you in the next video!

Bye!
