On November 2, 2025, a 60-kilogram satellite carrying an NVIDIA H100 GPU reached orbit and did something no hardware had done before: it trained a large language model in space.
Starcloud's demonstration wasn't a publicity stunt. It was the first proof that data center-class compute can operate beyond Earth's atmosphere.
The timing isn't coincidental.
Data centers consumed 415 TWh of electricity globally in 2024—about 1.5% of the worldwide supply—and the IEA projects that to double to 945 TWh by 2030.
Virginia's data centers consume 26% of the state's electricity. Google's Iowa facility used 1 billion gallons of water in 2024.
A 2024 Fairfax County incident saw 60 data centers simultaneously switch to backup generation, creating a 1,500 MW loss that nearly triggered cascading grid failures.
Space offers unlimited solar power and natural radiative cooling. But whether orbital data centers make economic sense depends entirely on launch prices falling to levels that don't yet exist.
Here's what's real, what's hype, and what the next five years will actually look like.
Who's Building What
Starcloud currently leads the field.
Their November 2025 satellite carried an 80GB H100 and successfully ran Google's Gemma model—the first time an LLM has operated on high-powered NVIDIA hardware in orbit.
The company has raised over $21 million from Y Combinator, Andreessen Horowitz's scout fund, and In-Q-Tel (the CIA's venture arm).
A 2026 mission with Blackwell GPUs is planned, scaling toward a 40 MW orbital data center by the early 2030s. CEO Philip Johnston's prediction is aggressive: "In 10 years, nearly all new data centers will be built in outer space."
Google's Project Suncatcher takes a more methodical approach.
The company subjected its Trillium v6e TPUs to proton beam radiation testing and observed no hard failures even at doses well beyond the expected five-year mission exposure. Two TPU-equipped prototype satellites will launch in early 2027 in partnership with Planet Labs.
The concept envisions 81-satellite clusters flying 100-200 meters apart in sun-synchronous orbits, connected by laser inter-satellite links. Sundar Pichai has said publicly that "a decade or so away, we'll be viewing it as a more normal way to build data centers."
The field is getting crowded.
Aetherflux, founded by Robinhood co-founder Baiju Bhatt, has raised $60 million and targets Q1 2027 for commercial operation. Elon Musk confirmed SpaceX will build orbital data centers using scaled-up Starlink V3 satellites.
Jeff Bezos predicted gigawatt space data centers will be cheaper than terrestrial alternatives "in the next couple of decades." The EU's ASCEND study confirmed feasibility—with caveats we'll get to.
The Physics That Makes This Attractive
So, why space?
Most notably, the power advantage.
Solar irradiance in space runs about 1,366 W/m² versus roughly 1,000 W/m² at Earth's surface, and sun-synchronous orbits provide near-continuous sunlight—no night cycles, no clouds, no seasonal variation.
Google estimates orbital solar panels can be eight times more productive than terrestrial installations.
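A back-of-envelope check on that productivity claim, using the irradiance figures above plus two assumed duty-cycle values (the orbital duty cycle and terrestrial capacity factor below are illustrative assumptions, not Google's model):

```python
# Rough sanity check on the orbital solar advantage.
# Duty-cycle figures are illustrative assumptions, not Google's numbers.
SPACE_IRRADIANCE = 1366.0      # W/m^2, solar constant above the atmosphere
GROUND_IRRADIANCE = 1000.0     # W/m^2, peak irradiance at Earth's surface
ORBIT_DUTY_CYCLE = 0.99        # dawn-dusk sun-synchronous orbit: near-continuous sun
GROUND_CAPACITY_FACTOR = 0.20  # typical fixed-tilt terrestrial solar farm (assumption)

orbital_yield = SPACE_IRRADIANCE * ORBIT_DUTY_CYCLE        # time-averaged W per m^2
ground_yield = GROUND_IRRADIANCE * GROUND_CAPACITY_FACTOR  # time-averaged W per m^2
print(f"Orbital panels: ~{orbital_yield / ground_yield:.1f}x more energy per m^2")
```

With these round numbers the ratio lands near 7x, the same order as Google's 8x estimate; a lower terrestrial capacity factor closes the remaining gap.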
Cooling works differently but potentially better.
The vacuum of space serves as an infinite heat sink through radiative cooling. No evaporative towers consuming billions of gallons of water. No chiller systems drawing 30-40% of facility power. No grid negotiations or land acquisition.
Communication is less constraining than assumed. Starlink's optical inter-satellite links achieve 100 Gbps per transceiver. MIT has demonstrated 200 Gbps space-to-ground links. LEO latency runs 20-50 milliseconds round-trip—and light travels 47% faster through vacuum than fiber.
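The "47% faster" figure follows directly from the refractive index of silica fiber. A quick sketch, where the index value is a typical assumption rather than a measurement of any specific cable:

```python
# Light in fiber travels at c/n; vacuum links skip the slowdown entirely.
C = 299_792_458      # m/s, speed of light in vacuum
FIBER_INDEX = 1.468  # typical refractive index of a silica fiber core (assumption)

fiber_speed = C / FIBER_INDEX
advantage = C / fiber_speed - 1  # fractional speed advantage of vacuum over fiber
print(f"Vacuum propagation is ~{advantage:.0%} faster than fiber")
```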
The Engineering Challenges That Aren't Showstoppers
Thermal management is the most misunderstood constraint.
Without air, heat escapes only via radiation. At typical operating temperatures, a black surface radiates roughly 300-460 W/m²—meaning a 100 kW system needs 100-200 square meters of radiator surface.
Scale to a gigawatt and you need 1-3 million square meters of radiators. Solvable, but mass-intensive. The ISS sheds about 70 kW using eight billboard-sized ammonia-loop radiator wings.
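The radiator arithmetic above is just the Stefan-Boltzmann law. A minimal sizing sketch, with emissivity as an assumed value and absorbed sunlight and Earth albedo ignored for simplicity:

```python
# Radiator sizing via the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
SIGMA = 5.670e-8  # W/(m^2 K^4), Stefan-Boltzmann constant
EMISSIVITY = 0.9  # high-emissivity radiator coating (assumption)

def radiator_area_m2(heat_watts: float, temp_kelvin: float) -> float:
    """Ideal radiating surface needed to reject heat_watts at temp_kelvin,
    ignoring absorbed sunlight and Earth albedo."""
    flux = EMISSIVITY * SIGMA * temp_kelvin ** 4  # W emitted per m^2 of surface
    return heat_watts / flux

# A double-sided deployable panel radiates from both faces,
# so panel area is roughly half the surface area printed here.
for watts in (100e3, 1e9):
    print(f"{watts:,.0f} W at 300 K -> {radiator_area_m2(watts, 300):,.0f} m^2")
```

At 300 K this gives about 242 m² of radiating surface per 100 kW (roughly 121 m² of double-sided panel, consistent with the 100-200 m² range above) and around 2.4 million m² per gigawatt.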
Radiation is less terrifying than headlines suggest.
Google's TPU testing found no hard failures up to 15 krad(Si)—twenty times the expected five-year mission dose. Commercial chips can survive their 3-5 year operational lifespan with standard aluminum shielding. ECC memory and software fault tolerance handle most single-event upsets.
The real constraint is the mass of the supporting infrastructure. Independent analysis found that a 40 MW orbital data center requires up to 22 Starship launches, not one. Thermal radiators and solar arrays outweigh the actual servers by roughly 4x.
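That mass budget can be sketched in a few lines. The per-megawatt server mass and Starship payload below are illustrative assumptions; only the 4x support ratio comes from the analysis above:

```python
import math

# Rough mass budget behind multi-launch estimates for a 40 MW station.
STARSHIP_PAYLOAD_KG = 100_000  # aspirational Starship payload to LEO (assumption)
SERVER_KG_PER_MW = 10_000      # server/rack mass per MW of IT load (assumption)
SUPPORT_RATIO = 4.0            # radiators + solar arrays vs. servers (from article)

def launches_needed(power_mw: float) -> int:
    server_mass = power_mw * SERVER_KG_PER_MW
    total_mass = server_mass * (1 + SUPPORT_RATIO)  # servers plus support structure
    return math.ceil(total_mass / STARSHIP_PAYLOAD_KG)

print(launches_needed(40))  # -> 20 with these assumptions
```

Twenty launches with these round numbers; heavier radiation shielding or a lower realized payload pushes the figure toward the 22 quoted above.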
The Economics Equation
Everything depends on launch costs.
Falcon 9 achieves roughly $1,600-2,000 per kilogram to LEO—a 27-34x reduction from the Space Shuttle's $54,500/kg.
Google's Suncatcher analysis identifies a clear threshold: space data centers become viable when launch costs fall below $200/kg. That requires another 8-10x reduction from the current best-in-class.
SpaceX targets sub-$100/kg by 2030 with routine Starship reusability.
Across the broader launch market, realized prices today run closer to $500-10,000/kg depending on vehicle and configuration. The gap is substantial, and history suggests caution: the Space Shuttle was projected to fly 64 times per year but averaged fewer than five.
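The cost figures in this section reduce to a couple of ratios, worth seeing in one place (all inputs are from the article):

```python
# Launch-cost gap: how far current prices sit from the viability threshold.
SHUTTLE_USD_PER_KG = 54_500
FALCON9_USD_PER_KG = (1_600, 2_000)  # current best-in-class range to LEO
VIABILITY_USD_PER_KG = 200           # Google's Suncatcher threshold

for cost in FALCON9_USD_PER_KG:
    vs_shuttle = SHUTTLE_USD_PER_KG / cost
    gap = cost / VIABILITY_USD_PER_KG
    print(f"${cost}/kg: {vs_shuttle:.0f}x cheaper than Shuttle, "
          f"{gap:.0f}x above the $200/kg threshold")
```

The output reproduces both figures quoted above: a 27-34x improvement over the Shuttle, with another 8-10x still to go.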
Independent analysis challenges company projections significantly.
Starcloud suggested a single Starship launch could create a 40 MW space data center for $8.2 million, a claim widely criticized as physically implausible. Working through actual mass budgets, independent estimates land between $110 million and $2.2 billion.
The EU ASCEND study confirms viability only if Starship achieves roughly $10 million per launch AND new ultra-low-carbon launchers are developed. Neither condition is currently met, whatever industry leaders claim.
Insurance adds uncertainty.
Space asset premiums run 15-25% of asset value for launch. In-orbit coverage for novel technologies commands 10-20%+. Orbital data centers have no actuarial precedent.
The Environmental Math Is Messier Than Proponents Claim
An NTU Singapore study suggests carbon payback within five years. Starcloud claims 10x CO₂ savings over a data center's lifetime. Eliminating water consumption addresses a real crisis—U.S. data centers consumed 17 billion gallons in 2023, projected to double or quadruple by 2028.
Those are the optimistic numbers. On the skeptical side, a Saarland University study found orbital systems incur "up to an order of magnitude more" carbon when accounting for launch emissions and upper-atmosphere effects.
Rocket exhaust particles deposited high in the atmosphere have warming effects 500 times greater than lower-altitude particles. The EU ASCEND study explicitly requires launchers emitting 10x less carbon than current rockets. No such vehicle exists.
Space debris complicates the picture.
LEO contains over 36,500 tracked objects larger than 10 centimeters. SpaceX already performs 4,000+ collision avoidance maneuvers monthly for Starlink. Preventing Kessler syndrome requires an estimated $2-4 billion annual investment through 2040.
What Workloads Actually Make Sense
Earth observation processing is the obvious candidate—Planet Labs captures 30 TB of imagery daily that currently must be downlinked for ground processing.
Processing in orbit reduces bandwidth requirements and enables near-real-time disaster response analysis.
Batch AI training works well too: latency-tolerant, benefits from continuous power, runs for extended periods without intervention.
Other workloads don't belong in space. Interactive AI inference (chatbots, recommendations) requires sub-100ms response times, and a 20-50ms orbital round-trip consumes most of that budget before any compute happens. Financial trading operates on microseconds. Gaming and video conferencing need millisecond responsiveness.
The decision framework: if your workload tolerates more than 500ms latency, originates in space, runs as batch processing, and requires minimal real-time ground interaction, orbital compute might eventually make sense. Everything else stays terrestrial.
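That framework is simple enough to state as a predicate. The function below is a sketch of the four criteria listed above, not anyone's production placement logic:

```python
# Decision sketch: does a workload plausibly belong in orbit?
def suits_orbital_compute(latency_tolerance_ms: float,
                          originates_in_space: bool,
                          batch: bool,
                          needs_realtime_ground_io: bool) -> bool:
    """All four criteria from the framework must hold."""
    return (latency_tolerance_ms > 500
            and originates_in_space
            and batch
            and not needs_realtime_ground_io)

# Earth-observation preprocessing: latency-tolerant, space-native, batch.
print(suits_orbital_compute(60_000, True, True, False))   # -> True
# Interactive chatbot inference: tight latency, constant ground interaction.
print(suits_orbital_compute(100, False, False, True))     # -> False
```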
Realistic Timeline
Near-term milestones are concrete.
Google and Planet launch Suncatcher prototypes in early 2027. Aetherflux targets Q1 2027 commercial operation. Axiom Space plans orbital data center nodes by late 2025.
Economic viability arrives much later—if at all. That’s the risk.
Google's analysis suggests the mid-2030s as the earliest timeframe when launch costs could reach $200/kg. The EU ASCEND project targets 1 GW deployment before 2050—notably conservative compared to startup timelines.
Skeptics make valid points.
Quentin A. Parker, Director of the Laboratory for Space Research at HKU, argues the analysis "doesn't really stand up to scrutiny... The terrestrial solutions are still there, and they're still probably a lot cheaper." Terrestrial data centers continue improving through efficiency gains, renewable energy, and nuclear power.
The target is moving.
The Bottom Line
Space data centers aren't science fiction; Starcloud has already proved commercial GPUs work in orbit. The physical advantages are real: unlimited solar power, natural radiative cooling, freedom from grid constraints. Google's testing shows commercial TPUs survive LEO conditions.
But economics rest almost entirely on Starship achieving cost reductions that remain unproven. Independent analysis suggests company projections may be 10-20x optimistic. The environmental case requires "green launchers" that don't exist.
Space data centers will likely emerge as specialized capabilities for space-native workloads—Earth observation, satellite operations, latency-tolerant batch training—rather than a replacement for terrestrial hyperscalers. The 2027-2030 window will prove whether these concepts scale beyond demonstration.
The question isn't whether space data centers are technically possible. They clearly are.
The question is whether economics can ever compete with terrestrial alternatives that aren't standing still.
