Starcloud Raises $170M to Build AI Data Centers in Space
Starcloud secures $170M Series A led by Benchmark and EQT Ventures, hitting $1.1B valuation as it builds NVIDIA H100-powered orbital AI data centers.
TL;DR
Starcloud, a startup building AI data centres in space, has raised $170M in a Series A led by Benchmark and EQT Ventures, hitting a $1.1B unicorn valuation. The company has already launched a satellite carrying an NVIDIA H100 GPU that trained an LLM in orbit — powered entirely by solar energy. The fresh capital will fund its next-generation satellites launching in October 2026.
Starcloud Becomes a Unicorn With $170M Series A: The Startup Putting NVIDIA H100s Into Orbit Is Redefining AI's Future
Space has always been humanity's final frontier, but for the first time in history, it is also becoming the next great frontier for artificial intelligence infrastructure. In one of the most ambitious and talked-about AI funding news stories to emerge from the tech world this year, Starcloud — a Washington-based startup building orbital data centres for AI inference — has secured $170 million in a Series A funding round, catapulting its valuation to $1.1 billion and officially earning it unicorn status. This is not just another headline in the ever-growing catalogue of AI funding rounds; it is a signal that the global race to solve one of AI's most pressing structural problems — energy — is now extending beyond the bounds of Earth itself.
The round was led by two of the most respected names in venture capital: Benchmark and EQT Ventures. Joining them as new investors in this landmark raise are Macquarie Capital, Seven Seven Six, Manhattan West, Adjacent, Carya, GSBackers, Link Ventures, Harpoon, New Vista Capital, and Goldman Sachs board member Kevin Johnson, who participated as an angel investor. Existing backers including NFX, Y Combinator, FUSE, Soma Capital, 3C AGI Partners, and Nebular also doubled down on their confidence in the company's vision. As part of the deal, Benchmark general partner Chetan Puttagunta will join Starcloud's board of directors, bringing with him a wealth of experience in scaling high-growth technology companies. At The AI World, we see this AI funding milestone as a defining moment not just for Starcloud, but for the entire trajectory of AI infrastructure globally.
A Bold Idea Born Out of Necessity
To understand why Starcloud's story resonates so deeply with the AI industry, one must first understand the problem it is trying to solve. Artificial intelligence, particularly the large language models and generative AI systems that have captured the world's imagination over the past few years, is extraordinarily energy-intensive. Training and deploying these models requires vast computational infrastructure, and that infrastructure demands equally vast amounts of electricity. Hyperscalers like Microsoft, Google, and Amazon are already grappling with power procurement challenges at an unprecedented scale, with some estimates suggesting that AI data centres could consume as much electricity as entire countries within the coming decade.
Starcloud was founded in January 2024 — originally operating under the name Lumen Orbit — by a trio of experienced technologists who had each spent time at the cutting edge of aerospace and technology. Philip Johnston, the company's CEO, is a former McKinsey & Company associate. Ezra Feilden, CTO, brings with him expertise from Airbus. And Adi Oltean, who serves as Chief Engineer, previously worked on SpaceX's Starlink programme. Together, they arrived at a thesis that is as audacious as it is logical: if energy constraints are the defining bottleneck for AI compute on Earth, why not take that compute to where energy is effectively unlimited? Their answer was orbital infrastructure — satellites equipped with high-performance GPUs, powered by continuous solar energy in low Earth orbit, and cooled naturally by the radiative properties of space itself.
The company's early momentum was remarkable. Within just three months of its founding, Starcloud closed a $2.4 million pre-seed round, followed by a $21 million seed round in late 2024. It graduated from Y Combinator's prestigious Summer 2024 batch and secured backing from In-Q-Tel, the strategic investment arm associated with the US intelligence community, the NVIDIA Inception Programme for AI startups, and the World Economic Forum's Technology Pioneers programme. Each of these endorsements added layers of credibility to what might otherwise have sounded like science fiction. The latest $170 million AI funding round now places Starcloud firmly in the company of the world's most well-capitalised AI infrastructure startups — and it has done so faster than almost any comparable company in history.
From Concept to Orbit: The Starcloud-1 Mission
The most powerful validation of Starcloud's thesis, however, has come not from its investor roster but from actual hardware orbiting Earth. In November 2025, Starcloud partnered with SpaceX to launch Starcloud-1, a 130-pound (approximately 60-kilogram) satellite roughly the size of a small refrigerator. What made this satellite extraordinary was what was packed inside it: an NVIDIA H100 GPU — the same class of chip that powers the world's most advanced AI data centres on the ground, now functioning reliably in the harsh vacuum of outer space.
The engineering challenge of achieving this cannot be overstated. Space presents an extraordinarily hostile environment for sensitive electronics. Cosmic radiation, extreme temperature fluctuations, the absence of convective cooling, and the mechanical stresses of launch all conspire to make high-performance computing in orbit exceptionally difficult. To overcome these obstacles, Starcloud's team developed radiation shielding to protect the GPU from space's particle environment, and adapted a cooling system drawing on technology originally developed for the International Space Station to manage the intense heat generated by the H100 during computation. The result was a satellite that could offer what the company describes as 100 times more powerful GPU compute than any previous space-based operation — a remarkable benchmark in the history of orbital hardware.
The AI world sat up and took notice when Starcloud followed up the launch with an even more striking announcement: Starcloud-1 had not only run AI inference in orbit, it had trained a large language model in space for the first time. The model in question was a small GPT built with nanoGPT, the educational training codebase created by Andrej Karpathy, one of OpenAI's founding researchers, and it was trained on the complete works of William Shakespeare while orbiting Earth. The satellite then demonstrated inference capabilities by running Google DeepMind's Gemma model on the H100 chip. These are not just symbolic milestones; they represent genuine proof-of-concept for the idea that orbital hardware can handle demanding, real-world AI workloads. As co-founder Philip Johnston described it, this was "the first LLM in space" and a tangible step toward harnessing "the near limitless energy of our Sun" for artificial intelligence. This achievement, widely discussed across AI funding news circles, confirmed that Starcloud had moved beyond theory into verifiable engineering reality.
Beyond these landmark demonstrations, Starcloud-1 is already performing commercial work. The satellite is processing satellite imagery on behalf of Capella Space, an Earth observation company, executing AI-powered analysis that can help identify lifeboats from capsized vessels at sea and detect forest fires in remote regions. These are applications with direct, immediate humanitarian value, and they illustrate that Starcloud's platform is not merely a research curiosity — it is a live, functioning orbital compute platform delivering results for paying customers.
The Space Advantage: Energy, Cooling, and Scale
One of the most compelling aspects of Starcloud's proposition — and one of the key reasons this AI funding round has attracted such interest from the global investment community — is the structural cost and sustainability advantage that space-based computing offers relative to terrestrial alternatives. To appreciate why this matters, it helps to understand how Earth-based data centres actually work and why they are becoming increasingly difficult to build and operate at the scale that AI demands.
On Earth, every kilowatt of compute power requires not only electricity to run but additional electricity to cool. The power usage effectiveness (PUE) ratio of even the most efficient terrestrial data centres means that significant energy overhead is devoted purely to keeping hardware cool. Water-based cooling systems consume enormous volumes of fresh water, creating both cost and environmental concerns. Land in suitable locations — close to power infrastructure, in cool climates, with reliable connectivity — is scarce and expensive. Permitting, grid connection timelines, and regulatory hurdles can delay the construction of new data centre capacity by years.
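The scale of that cooling overhead is easy to quantify. As a rough illustration (the figures below are generic assumptions, not Starcloud or hyperscaler numbers), PUE is defined as total facility power divided by IT power, so everything above 1.0 is energy spent on cooling and power distribution rather than computation:

```python
# Power usage effectiveness (PUE) = total facility power / IT power.
# Everything above 1.0 is overhead (cooling, power conversion, etc.).
# Example PUE values below are illustrative assumptions only.

def overhead_power(it_power_mw: float, pue: float) -> float:
    """Return the non-IT (cooling/distribution) power draw in MW."""
    return it_power_mw * (pue - 1.0)

# A 100 MW IT load at a fairly efficient PUE of 1.2 still burns
# roughly 20 MW on overhead; at PUE 1.5 that rises to 50 MW.
print(round(overhead_power(100, 1.2), 1))
print(round(overhead_power(100, 1.5), 1))
```

Even at the best terrestrial PUE figures, that overhead compounds across gigawatt-scale fleets, which is precisely the cost line that passive radiative cooling in orbit aims to eliminate.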
Space eliminates many of these constraints at a stroke. In low Earth orbit, solar energy is abundant and continuous, with satellites positioned to receive sunlight for up to 90% of their orbital period. Thanks to the absence of atmosphere and night, a solar panel in space can harvest between five and eight times as much energy as an equivalent panel on the ground. The vacuum of space, rather than being a liability, becomes an asset for cooling: without air to conduct or convect heat, thermal management is achieved through radiative cooling, the emission of infrared energy into the cosmic background, with no need for water or energy-intensive refrigeration systems. Starcloud has claimed that this combination of free solar energy and passive radiative cooling could translate into electricity cost savings of up to 90% compared with conventional ground-based data centres. For any business tracking AI funding news and infrastructure economics, those numbers represent a potential step-change in the cost structure of AI compute.
The long-term vision articulated by Starcloud's founders is genuinely transformative in scale. The company has publicly outlined plans to develop a 5-gigawatt solar-powered orbital data centre spanning four kilometres — a structure that would dwarf any existing terrestrial facility and could eventually offer compute capacity on a scale that is simply not achievable on the ground given current energy and land constraints. One square kilometre of orbital solar collecting surface can theoretically generate approximately one gigawatt of continuous power, equivalent to a large terrestrial power plant but without fuel costs, emissions, or transmission losses. While such a vision remains distant, the trajectory from pre-seed to unicorn in just over two years suggests that Starcloud's team has the capability and the backing to pursue it seriously.
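The "one square kilometre, one gigawatt" figure can be sanity-checked against the solar constant, which is about 1361 W/m² above the atmosphere. A square kilometre therefore intercepts roughly 1.36 GW of raw sunlight; the electrical output depends on cell efficiency, and the 25% used below is an assumed figure, not one disclosed by Starcloud:

```python
# Back-of-envelope check on "1 km^2 of orbital solar ~ 1 GW".
# The 25% cell efficiency below is an assumption for illustration.

SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU, above the atmosphere

def orbital_solar_gw(area_km2: float, efficiency: float = 1.0) -> float:
    """Power in GW intercepted (or converted, given `efficiency`)
    by `area_km2` of continuously sun-facing collecting surface."""
    return SOLAR_CONSTANT * area_km2 * 1e6 * efficiency / 1e9

print(round(orbital_solar_gw(1.0), 2))        # incident: ~1.36 GW
print(round(orbital_solar_gw(1.0, 0.25), 2))  # at 25% cells: ~0.34 GW
```

In other words, the one-gigawatt-per-square-kilometre figure corresponds roughly to the incident sunlight; delivered electrical power scales down with conversion efficiency, which is why the planned 5-gigawatt facility calls for arrays on the multi-kilometre scale.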
What the $170M Will Build Next
The immediate deployment of the freshly raised capital is focused on a clear objective: developing Starcloud's third satellite. The company's second satellite, already in progress, represents an upgrade on the capabilities demonstrated by Starcloud-1, and the third will take that further still. The next major launch milestone is currently planned for October 2026 and will feature multiple NVIDIA H100 GPUs as well as integration of NVIDIA's newer Blackwell platform, which is expected to deliver up to ten times the AI performance of the Hopper architecture that powers the H100. This roadmap reflects a deliberate strategy of incremental but rapid capability expansion — proving the technology at each step before scaling to the next order of magnitude.
Starcloud has also confirmed that its upcoming satellite will feature a module enabling customers to deploy and manage AI workloads from space using familiar cloud infrastructure tools, specifically through an integration with Crusoe's cloud platform. This is a significant commercial development because it lowers the barrier to entry for organisations wishing to leverage orbital compute — customers will not need to develop bespoke space-qualification workflows but will instead be able to manage their workloads through interfaces similar to those they already use with terrestrial cloud providers. For enterprises and AI labs tracking the latest AI funding news and evaluating their infrastructure options, this ease of access could prove to be a decisive commercial differentiator.
The broader context for this funding round is equally important. NVIDIA itself has recently launched a dedicated Space Computing initiative, with new modules designed for orbital deployment offering up to 25 times more AI compute performance than the H100 GPU for space-based inferencing. This institutional validation from one of the world's most influential semiconductor companies underscores that orbital AI infrastructure is no longer a fringe hypothesis but a mainstream technology trajectory being actively invested in by the industry's most powerful players. At The AI World, we believe this is precisely the kind of convergence — between AI, space technology, and venture capital — that will define the next decade of the global AI industry.
Investors, Credibility, and the Unicorn Milestone
Starcloud's elevation to unicorn status within roughly two years of its founding places it in rare company, but perhaps more revealing than the valuation itself is the composition of the investor group that has backed this round. Benchmark, the San Francisco-based venture firm whose portfolio includes Uber, Snap, Twitter, and Dropbox, is not known for making speculative bets. Its decision to lead this round — and to have a general partner take a board seat — reflects a conviction that Starcloud is addressing a real and scalable market opportunity. EQT Ventures, the venture arm of one of Europe's largest private equity firms, brings both capital and a global network that could prove critical as Starcloud seeks to expand its customer base beyond the United States.
The participation of Macquarie Capital, an infrastructure-focused investment bank with deep experience in long-duration, capital-intensive assets, is particularly telling. Infrastructure investors typically apply rigorous scrutiny to the economics and operational feasibility of the projects they back. The fact that Macquarie has chosen to participate in this round suggests a view that Starcloud's orbital compute model stacks up not just as a technological experiment but as a viable long-term infrastructure business. The involvement of angel investor Kevin Johnson, a sitting board member at Goldman Sachs, adds yet another dimension of financial credibility to a round that already reads like a who's who of serious capital.
For followers of AI funding news, this round is a reminder that the most consequential investments in artificial intelligence are increasingly being made not at the model or application layer but at the infrastructure level. The companies building the physical substrate on which AI runs — the chips, the data centres, the power infrastructure, and now the orbital platforms — are attracting some of the most significant capital flows in the technology industry. Starcloud's $170 million raise is both a reflection of this trend and an acceleration of it.
This is a story that The AI World will continue to follow closely. As Starcloud advances toward its next satellite launch, expands its commercial customer base, and moves along its roadmap toward gigawatt-scale orbital infrastructure, it represents one of the most fascinating and genuinely novel intersections of artificial intelligence, space technology, and venture capital that the industry has yet produced. The company that once existed as a twelve-person team in Redmond, Washington, dreaming of putting NVIDIA H100s into orbit, has now demonstrated that the dream works — and the world's top investors have voted with $170 million to say they believe in what comes next.