Nebius Raises $3.75B to Power Europe's AI Cloud
Nebius Group secures $3.75B in convertible notes to build AI data centres and expand GPU infrastructure, eyeing $7–9B revenue and 2GW capacity by end of 2026.
TL;DR
Nebius Group has raised $3.75B through convertible notes to fund new AI data centres and expand its GPU fleet. Backed by a $27B infrastructure deal with Meta and a $2B strategic investment from NVIDIA, the Amsterdam-based cloud firm is targeting $7–9B in annual revenue and 2 gigawatts of capacity by end of 2026, making it Europe's strongest challenger to US hyperscalers.
Nebius Raises $3.75 Billion in Convertible Notes to Power Europe's AI Cloud Ambition
The global race to dominate AI infrastructure is no longer just a Silicon Valley story. Amsterdam-based Nebius Group, a Nasdaq-listed AI cloud company that grew from the foundations of Russian tech giant Yandex, has just made one of the boldest financial moves in European tech history — announcing a $3.75 billion private offering of convertible senior notes. This major AI funding development arrives at a time when demand for GPU computing, enterprise AI services, and large-scale data centre capacity is hitting new peaks, and the world's biggest technology companies are scrambling to secure long-term infrastructure partnerships. At The AI World Organisation, we track the most consequential AI funding news so professionals and decision-makers can stay ahead of the curve, and this one is unmistakably a landmark.
Nebius is not a newcomer navigating its first funding round. It already carries the weight of a historic partnership portfolio — including a $27 billion infrastructure agreement with Meta and a strategic $2 billion investment from NVIDIA — and is now doubling down with fresh capital to cement its position as the definitive AI cloud provider in Europe and beyond. The $3.75 billion raise is structured as two tranches: $2 billion in convertible senior notes maturing in 2031 and $1.75 billion maturing in 2033, with an option for underwriters to purchase an additional $562.5 million in notes, potentially pushing the total capital raised even higher. This is not incremental funding — this is a foundational statement about where the future of AI infrastructure is heading.
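As a quick sanity check on the tranche arithmetic above, the two note series plus the underwriters' option sum as follows (figures as announced; the sketch simply assumes the option is exercised in full):

```python
# Tranche sizes from the announcement, in billions of USD.
tranche_2031 = 2.0           # convertible senior notes maturing 2031
tranche_2033 = 1.75          # convertible senior notes maturing 2033
underwriter_option = 0.5625  # additional notes underwriters may purchase

base_raise = tranche_2031 + tranche_2033         # headline figure
max_raise = base_raise + underwriter_option      # if the option is fully exercised

print(f"Base raise:    ${base_raise}B")   # $3.75B
print(f"Maximum raise: ${max_raise}B")    # $4.3125B
```

The headline $3.75 billion therefore understates the potential total: full exercise of the option would take the raise to roughly $4.31 billion.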
From Yandex Roots to a Global AI Infrastructure Giant
To understand why this AI funding news matters so deeply, you have to trace Nebius back to its origins. The company was born out of Yandex N.V., the Russian internet and technology conglomerate that Arkady Volozh co-founded in 1997. For over two decades, Yandex built some of the most sophisticated search, cloud, and AI technologies in the world, rivalling the likes of Google in scope and technical ambition. When geopolitical pressures forced a strategic restructuring, the international AI infrastructure operations were separated and reborn as Nebius Group — a clean break that let the company chart its own global trajectory without the constraints of its former parent.
What Nebius inherited from Yandex was not just capital or brand recognition but something far more valuable: deep engineering talent with real-world experience building hyperscale systems. The teams that designed Yandex's cloud infrastructure, AI training pipelines, and GPU clusters are the same people now powering Nebius's global expansion. Arkady Volozh, who remains the guiding vision behind the company, has been clear about the mission: to build a full-stack AI cloud that developers and enterprises can rely on from the earliest stages of model training all the way through to production-scale deployment. The absence of legacy technical debt and the presence of battle-tested talent have given Nebius a structural advantage that more established cloud providers often struggle to match.
This background is directly relevant to understanding the current round of AI funding. Nebius is not raising capital to figure out its business model — it already has one, and it works. The fresh convertible notes are fuelling an expansion strategy that is already producing results, with signed contracts worth tens of billions of dollars and a growing roster of the world's most demanding AI customers on its books.
The $3.75 Billion Capital Structure and What It's Actually For
When companies raise billions through convertible notes, the specifics of how that capital will be deployed say a great deal about their actual priorities. In Nebius's case, the intended use of the proceeds is straightforward and strategically coherent: building new data centres, acquiring high-performance GPUs, and expanding the AI cloud platform. These are not vague corporate ambitions — they map directly onto the company's existing client commitments and its need to scale capacity in advance of delivering on them.
The data centre construction programme is already underway. Nebius has active projects in the United Kingdom, Israel, New Jersey, and at various other locations across the United States and Europe, with mid-2026 delivery targets for several of these facilities. The speed at which these sites need to be brought online is directly tied to the contractual obligations Nebius has signed with Meta and other major clients. When a company has committed to delivering $12 billion in dedicated AI computing capacity starting in 2027, the construction timelines become non-negotiable. This is precisely why the AI funding round has been structured at this scale and with this urgency.
On the GPU side, Nebius's shopping list is not modest. The company is building its fleet around NVIDIA's most advanced hardware, including the RTX PRO Blackwell, B200, B300, and GB300 accelerators, the same chips that AI labs and large language model developers are competing fiercely to access globally. By securing a strategic partnership and investment from NVIDIA itself, Nebius has positioned itself for early and preferential access to Rubin-generation hardware, NVIDIA's next platform after Blackwell, as well as Vera CPUs and BlueField DPUs. This is a significant competitive moat: access to the latest NVIDIA silicon before the broader market gets it means Nebius can offer its clients performance advantages that rivals simply cannot match until much later.
The convertible notes structure also reflects smart financial engineering. By designing the notes with high effective conversion premiums, Nebius has sought to limit potential dilution for existing shareholders while still raising substantial capital at favourable terms. This kind of financial discipline in a high-growth AI funding context is unusual and speaks to the maturity of the management team running the company.
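To see why a high conversion premium limits dilution, consider the mechanics of convertible notes. The sketch below uses entirely hypothetical figures, not Nebius's actual deal terms, which have not been disclosed in this detail:

```python
# Hypothetical illustration of conversion-premium mechanics.
# None of these figures are Nebius's actual deal terms.
principal = 2_000_000_000   # note principal, USD (illustrative)
reference_price = 100.0     # assumed share price when the notes priced

def shares_on_conversion(principal: float, ref_price: float, premium: float) -> float:
    """Shares issued if the notes fully convert at the given premium."""
    conversion_price = ref_price * (1 + premium)
    return principal / conversion_price

low_premium = shares_on_conversion(principal, reference_price, 0.30)   # 30% premium
high_premium = shares_on_conversion(principal, reference_price, 0.60)  # 60% premium

print(f"Shares issued at 30% premium: {low_premium:,.0f}")
print(f"Shares issued at 60% premium: {high_premium:,.0f}")
# A higher premium raises the conversion price, so fewer new shares are
# issued for the same principal, meaning less dilution for existing holders.
```

The trade-off is that a higher premium makes conversion less likely, which is why issuers with strong growth expectations can extract better terms.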
The Meta and NVIDIA Deals That Changed Everything
No discussion of this AI funding news is complete without understanding the two landmark partnerships that transformed Nebius's commercial standing in just the past few weeks. Together, these deals don't just validate Nebius's technology — they redefine the company's role in the global AI economy.
The Meta deal, announced just days before this convertible notes offering, is staggering in its scale. Meta Platforms has agreed to purchase up to $27 billion in AI infrastructure from Nebius over a five-year period. The first tranche of $12 billion covers dedicated capacity that Nebius will begin delivering in early 2027, with Meta holding options to claim up to an additional $15 billion in capacity that is currently being constructed. To fulfil this agreement, Nebius will build large-scale AI clusters using NVIDIA's Vera Rubin hardware, to which Meta will initially have exclusive access. For context, this single contract is one of the largest infrastructure deals Meta has ever signed, and it positions Nebius not as a niche European cloud player but as a critical component of the global AI supply chain.
The NVIDIA investment, announced on March 11, 2026, adds another dimension entirely. NVIDIA invested $2 billion in Nebius via pre-funded warrants, alongside an engineering partnership that gives Nebius early access to NVIDIA's most advanced computing platforms. Jensen Huang's confidence in Nebius is not purely financial: it reflects NVIDIA's strategic interest in having trusted infrastructure partners that can deploy its hardware at multi-gigawatt scale around the world. The partnership envisions Nebius deploying over five gigawatts of capacity by 2030, supported by a continuous pipeline of cutting-edge NVIDIA technology. This relationship essentially means Nebius has a guaranteed supply of the world's most sought-after AI chips at a time when GPU scarcity is one of the most significant bottlenecks in the entire AI industry.
These two deals — $27 billion from Meta and $2 billion from NVIDIA — together with a previously announced multi-billion dollar agreement with Microsoft, have fundamentally changed the risk profile of investing in or partnering with Nebius. The company now has revenue visibility and strategic backing that most AI infrastructure startups can only dream of. Against this backdrop, the current AI funding round becomes even more logical: the company isn't gambling on demand — it's investing in capacity to serve demand that has already been committed to in signed contracts.
The Technology Stack Powering Nebius's Competitive Edge
One of the things that distinguishes Nebius from the many GPU cloud startups trying to carve out market share is the depth and completeness of its technology offering. Rather than simply reselling computing resources, Nebius has built what it describes as a full-stack AI cloud — an end-to-end platform that covers everything from raw GPU compute to object storage, data management, serverless inference, and AI development tooling. This is a critical distinction in a market where enterprise clients increasingly want integrated solutions rather than a collection of disconnected infrastructure components.
The GPU infrastructure at the core of the platform is built around NVIDIA's latest generations of accelerators, giving clients access to hardware that can handle the most demanding training and inference workloads in production. Above that layer, Nebius provides the networking, storage, and orchestration systems needed to run those GPUs efficiently at scale. High-performance InfiniBand networking, Kubernetes and Slurm orchestration, and purpose-built storage systems work together to ensure that clients extract maximum performance from their AI workloads rather than being bottlenecked by infrastructure inefficiencies.
What makes the engineering culture at Nebius genuinely differentiated is the legacy it has inherited. The engineers who built Yandex's own AI training infrastructure, which at its peak was processing billions of queries per day with sophisticated machine learning models, are the people designing Nebius's systems today. They understand, from direct experience, what it means to run AI at genuine hyperscale. That institutional knowledge is not something a startup can acquire through hiring alone. It's baked into the design philosophy and technical decisions that define every layer of the Nebius platform.
The company has also made deliberate investments in areas beyond pure cloud computing. Through its subsidiaries and investments, Nebius has exposure to robotics cloud services through Avride, online education technology through TripleTen, data analytics through ClickHouse, and AI training-data services through Toloka. These adjacent capabilities are not distractions from the core business; they represent deliberate extensions of the AI cloud into verticals where the convergence of infrastructure and application will create enormous value in the coming years.
The Road Ahead: Data Centres, Gigawatts, and a $7–9 Billion Revenue Target
The ambition Nebius is pursuing with this latest round of AI funding is not incremental — it is transformational. The company has publicly set itself a target of reaching two gigawatts of contracted capacity and generating between $7 billion and $9 billion in annual recurring revenue by the end of 2026. These are numbers that, if achieved, would place Nebius firmly in the conversation alongside the world's most significant cloud infrastructure providers, and well ahead of where most observers expected a Yandex spinoff to be at this stage of its development.
The data centre expansion plans give some indication of the physical scale involved. By mid-2026, Nebius expects to have operational facilities running across the UK, Israel, New Jersey, and multiple other US and European locations. Current capacity stands at around 800 megawatts, and the company is targeting one gigawatt of operational capacity by the end of 2026. Looking further out, the NVIDIA partnership envisions more than five gigawatts by 2030, a trajectory that would require a sustained and accelerating build programme over the next four years.
The most striking single element of the long-term expansion plan is the announcement of a 2.5-million-square-foot AI hub in Independence, Missouri, scheduled to open in 2028. This is a facility of genuinely enormous scale — larger than many of the world's biggest existing data centres — and its location signals that Nebius is committed to becoming a significant infrastructure player inside the United States, not just a European challenger peering in from outside. The site is linked to a municipal utility capable of supporting a 1.2-gigawatt campus, which means the power infrastructure to support the facility's AI workloads has already been planned and approved.
For the global AI ecosystem, this level of investment and infrastructure buildout has consequences that go well beyond Nebius's own financial performance. As more capacity comes online, it eases the GPU access bottleneck that has been slowing AI development across research institutions, enterprise teams, and startups alike. Companies that previously struggled to access the computing resources they needed to train large models or run demanding inference tasks at scale may find the landscape significantly more accessible as Nebius and its peers bring new capacity to market. This is exactly the kind of AI funding news that The AI World Organisation believes its global community of AI leaders needs to understand — because the infrastructure decisions being made today will shape the pace and direction of AI development for years to come.
As the world watches Nebius execute on its extraordinary pipeline of partnerships and capital commitments, the central question is no longer whether Europe can produce a true hyperscale AI cloud competitor. Nebius has already answered that question. The question now is how quickly it can build, how reliably it can deliver, and whether its deep engineering heritage can sustain the quality of its platform as it scales from hundreds of megawatts to multiple gigawatts of capacity worldwide.