Eridu Raises $200M+ to Fix AI's Network Wall
Eridu exits stealth with $200M+ in AI funding to revolutionise data centre networking for large-scale GPU clusters and next-gen AI infrastructure.
TL;DR
Eridu, a stealth-mode networking startup, has raised over $200M to fix one of AI's biggest hidden problems: the internal network that connects thousands of GPUs in data centres can't keep up with modern AI demands. Rather than patching old systems, Eridu built its networking architecture entirely from scratch, promising faster, leaner, and massively scalable AI infrastructure for hyperscalers, AI labs, and sovereign cloud projects alike.
Eridu Emerges from Stealth with $200M+ in AI Funding to Demolish the "Network Wall" Choking Modern AI Infrastructure
The artificial intelligence industry has spent the better part of the last decade solving for compute power. Billions of dollars have poured into chip development, GPU clusters, and data centre construction — all in service of training and running increasingly sophisticated AI models. And yet, as the AI funding news landscape continues to evolve at a breathtaking pace, a glaring and largely underappreciated problem has been quietly building in the background: the network. Not the internet as we know it, but the internal networking fabric that connects thousands of GPUs inside a data centre — the very backbone upon which modern AI computation depends. For all the innovation in chips and software, the infrastructure connecting those chips has remained stubbornly rooted in architectural paradigms designed for a different era of computing entirely. That is, until now.
A company called Eridu has officially stepped out of the shadows, announcing a funding haul of more than $200 million and a mission to fundamentally redesign how AI data centres communicate at scale. This is not an incremental improvement on what already exists. Eridu is building its networking technology entirely from the ground up, purpose-built for the unique and demanding requirements of large-scale artificial intelligence workloads. The announcement has sent ripples through the AI infrastructure community, and for good reason — the problem Eridu is tackling sits at the very heart of what will determine how fast and how far AI can scale in the years ahead. At The AI World Organisation, we continue to track the most pivotal developments in global AI funding, and Eridu's emergence is one of the most significant stories to break in the AI funding news space this year.
The Silent Crisis Inside AI Data Centres
To understand why Eridu's arrival matters so deeply, it helps to first understand the scale of the problem it is trying to solve. When AI researchers and engineers talk about training a large language model or running inference at scale, the conversation almost always gravitates toward GPUs — how many you have, how fast they are, how efficiently they can be parallelised. But what rarely makes it into the headline is the role the network plays in holding all of those GPUs together into a coherent, high-performance computing fabric.
In a modern AI data centre, thousands of GPUs need to communicate with each other continuously and at extremely high speeds. Every time a model is being trained, gradients and parameters shuttle back and forth across the network millions of times per second. If the network cannot keep up — if it introduces latency, drops packets, or simply cannot handle the sheer volume of data being exchanged — the entire training run slows down. The GPUs, no matter how powerful, are left waiting. Efficiency craters. Costs balloon. And the gap between what AI researchers want to do and what the infrastructure can actually support grows wider.
This is what insiders have started calling the "network wall" — a ceiling on AI performance imposed not by compute limitations but by the networking infrastructure meant to bind compute together. It is a structural problem that has existed for years but has grown increasingly urgent as AI models have scaled from millions to billions to trillions of parameters, and as the GPU clusters required to train them have grown from dozens to thousands of units. Traditional networking systems, which were designed to serve the needs of cloud computing workloads — web traffic, database queries, file transfers — are simply not built for the intensity and pattern of communication that AI training demands. Retrofitting those legacy systems has produced diminishing returns. The AI funding news world has watched investment pour into chips, software, and data infrastructure — but the network layer has remained the elephant in the room.
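To make the "network wall" concrete, here is a back-of-the-envelope sketch of how much time a single training step can spend purely on gradient synchronisation in data-parallel training. The figures (a hypothetical 70-billion-parameter model, fp16 gradients, 1,024 GPUs, 400 Gb/s links) are illustrative assumptions chosen for this example, not specifications from Eridu or any vendor; the ring all-reduce volume formula itself is standard.

```python
# Back-of-the-envelope estimate of per-step gradient sync cost in
# data-parallel training. All concrete numbers below are illustrative
# assumptions, not Eridu or vendor specifications.

def allreduce_bytes_per_gpu(model_params: float, num_gpus: int,
                            bytes_per_param: int = 2) -> float:
    """Bytes each GPU pushes through its link in one ring all-reduce.

    A ring all-reduce moves 2 * (G - 1) / G times the gradient size
    across each GPU's network link per training step.
    """
    grad_bytes = model_params * bytes_per_param
    return 2 * (num_gpus - 1) / num_gpus * grad_bytes

def sync_time_seconds(model_params: float, num_gpus: int,
                      link_gbps: float) -> float:
    """Seconds the network spends on gradient sync each step."""
    bytes_on_wire = allreduce_bytes_per_gpu(model_params, num_gpus)
    return bytes_on_wire * 8 / (link_gbps * 1e9)

# Hypothetical: 70B parameters, fp16 gradients, 1024 GPUs, 400 Gb/s links.
t = sync_time_seconds(70e9, 1024, 400)
print(f"~{t:.1f} s of pure network transfer per training step")
```

At these assumed numbers the network alone accounts for several seconds per step; unless that transfer overlaps almost perfectly with computation, the GPUs sit idle for that entire window, which is exactly the stall the article describes.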
Eridu Emerges: A $200M Bet on Purpose-Built AI Networking
Into this gap steps Eridu, a startup that has been working quietly in stealth mode to develop what it believes is the first networking platform built from scratch exclusively for the demands of generative AI. The company's official emergence from stealth, accompanied by a funding announcement of over $200 million, marks a defining moment in the AI infrastructure conversation. This is not a pivot from a legacy networking company trying to adapt its products for a new market. Eridu was conceived with a singular mission: to build the network that AI actually needs, not the network that already exists.
The company was founded by Drew Perkins, a seasoned figure in the networking technology space with a track record of building and scaling infrastructure businesses. Perkins and his team have spent years developing a networking architecture that specifically addresses the communication patterns of large-scale AI training and inference. Rather than layering additional hardware on top of existing switching and routing infrastructure, Eridu has gone back to fundamentals — reimagining how packets move, how switching decisions are made, and how the overall network topology is structured to support the kind of all-to-all communication that AI workloads require.
The $200 million-plus in AI funding announced alongside the company's public debut includes a Series A round led by Socratic Partners, a venture firm with deep roots in deep-tech and infrastructure investing. The round drew participation from a remarkably high-profile group of backers including legendary venture capitalist John Doerr, algorithmic trading firm Hudson River Trading, Capricorn Investment Group, and Matter Venture Partners. Additional participants in the round include Bosch Ventures, semiconductor giant MediaTek, Eclipse Ventures, Fusion Fund, and TDK Ventures — a coalition of investors that signals just how seriously the technology and investment community is taking this problem.
The diversity of the investor base is itself telling. When a round attracts participation from a range of players spanning venture capital, corporate strategic investment, and domain-specific deep-tech funds, it is usually a strong signal that the technology in question is being viewed not as a niche play but as foundational infrastructure. In the context of the current AI funding landscape, where capital continues to flow toward companies building the picks and shovels of the AI economy, Eridu's raise stands out as one of the most strategically significant.
The Architecture That Could Reshape AI Infrastructure
What makes Eridu's technology compelling is not just that it is new — it is that it takes a fundamentally different approach to the architectural problem at hand. Conventional data centre networks are built on a multi-layered hierarchy of switches: access switches that connect servers, aggregation switches that connect groups of access switches, and core switches that sit at the top of the hierarchy and route traffic between aggregation layers. This layered model, known as a fat-tree or Clos network, was designed for general-purpose cloud workloads and is effective for the kind of bursty, asymmetric traffic patterns typical of web applications and enterprise software.
AI training, however, generates a very different kind of traffic. When thousands of GPUs are working together on a distributed training run, they engage in dense, symmetric, all-to-all communication — every GPU needs to talk to every other GPU simultaneously and continuously. The traditional multi-layer hierarchical network introduces unnecessary hops, adds latency at each switching layer, and creates bottlenecks at the points where traffic from one part of the hierarchy converges with traffic from another. For AI workloads, this architecture is structurally mismatched.
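The hop-count penalty of the hierarchical design can be sketched with a simple model. The function below counts switch traversals for a packet in a classic three-tier (access/aggregation/core) Clos fabric versus a fully flattened single-tier fabric; it is a deliberate simplification for intuition, not a model of Eridu's actual architecture.

```python
# Illustrative switch-hop comparison: three-tier Clos fabric versus a
# hypothetical flattened single-tier fabric. A simplification for
# intuition only, not a description of Eridu's design.

def clos_switch_hops(same_rack: bool, same_pod: bool) -> int:
    """Switch traversals for one packet in a 3-tier Clos network.

    - same rack: one access switch
    - same pod:  access -> aggregation -> access
    - otherwise: access -> aggregation -> core -> aggregation -> access
    """
    if same_rack:
        return 1
    if same_pod:
        return 3
    return 5

def flat_switch_hops() -> int:
    """In a fully flattened single-tier fabric, every pair is one hop."""
    return 1

# In all-to-all traffic across a large cluster, most GPU pairs sit in
# different pods, so most packets pay the worst-case Clos path.
print(clos_switch_hops(same_rack=False, same_pod=False), "vs", flat_switch_hops())
```

Each extra switch traversal adds queueing and serialisation latency, and the aggregation and core layers are precisely where all-to-all flows converge and contend, which is why flattening the topology attacks both the latency and the bottleneck problems at once.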
Eridu's approach dramatically reduces the number of networking layers required in a large-scale AI data centre. By flattening the network topology and designing its switching and interconnect hardware specifically around the communication patterns of AI training, the company says it can achieve substantially lower latency, reduce the power consumed by networking equipment, and allow data centres to scale to GPU clusters far larger than what today's infrastructure can support effectively. The company has said its networking technology can support clusters of thousands of GPUs within a single network domain and, in distributed configurations, could scale to millions of GPUs working in concert across multiple facilities. That kind of scalability is not just an incremental improvement — it represents a step change in what AI infrastructure can realistically achieve.
Gregory Waters, managing partner at Socratic Partners and the lead investor in Eridu's Series A, captured the significance of the moment well. Speaking about the company's technology, Waters noted that the disruptive demands of AI create an urgent need for completely rethinking high-speed interconnect and packet switching. He described Eridu's novel architecture as dramatically improving throughput and characterised it as a platform that nearly all next-generation AI will depend on. Coming from the investor who led the round, that is a remarkable statement — and one that reflects the extent to which serious infrastructure investors view networking as the next major battleground in the AI infrastructure race.
A $200 Billion Market and the Race to Own AI's Plumbing
The ambitions behind Eridu's launch are matched by the scale of the market opportunity it is pursuing. The company has identified what it describes as a $200 billion AI networking market, a figure that reflects the massive and accelerating investment being made by hyperscalers, cloud providers, and AI research labs in the infrastructure required to train and operate next-generation AI models. This is not a speculative number. The world's largest technology companies — from Microsoft and Google to Amazon and Meta — are collectively committing hundreds of billions of dollars to AI infrastructure buildout over the next several years. A meaningful and growing portion of that capital will need to go toward networking equipment as GPU cluster sizes continue to expand.
Eridu's primary target customers are the organisations that sit at the very top of the AI infrastructure pyramid: hyperscale cloud providers building their own AI training and inference platforms, frontier AI research labs pushing the boundaries of model scale and capability, and a new generation of cloud providers that have been built specifically to serve AI workloads rather than general-purpose enterprise applications. These are organisations that are already acutely aware of the networking bottleneck and have been engineering custom solutions internally to try to address it — precisely the kind of customers who will understand and appreciate what Eridu is offering.
But the company's ambitions extend beyond just the hyperscaler tier. Eridu has also signalled that its networking technology is relevant for sovereign cloud projects — government-backed or nation-level cloud infrastructure initiatives that are increasingly being developed by countries seeking to maintain data sovereignty and build domestic AI capability. As AI moves to the centre of national economic and security strategy for governments around the world, the infrastructure needed to support sovereign AI computing will require the same high-performance networking solutions that hyperscalers demand. Enterprise AI data centres are also in scope, as large corporations continue to build out private AI infrastructure to support their internal model development and deployment efforts.
From the perspective of the broader AI funding news ecosystem, Eridu's emergence is a reminder that the most consequential investment opportunities in AI are not always found at the model layer. Foundation model development gets the bulk of the media attention, but the infrastructure layer — the hardware, networking, power, and cooling systems that make model development possible — is where some of the most durable and defensible businesses in the AI economy are being built. The AI World Organisation has consistently highlighted this dynamic through its global summits and research initiatives, observing that the AI funding story is as much about infrastructure as it is about algorithms.
What This Means for the Global AI Infrastructure Landscape
Eridu's emergence from stealth is part of a broader and accelerating trend in the AI funding landscape: the recognition that purpose-built AI infrastructure is not a luxury but a necessity. The era of adapting existing tools, protocols, and architectures for AI workloads is giving way to a new generation of companies that are designing infrastructure from first principles with AI as the primary design constraint. This shift is happening across multiple layers of the stack — in chip design, in cooling systems, in power delivery, and now, decisively, in networking.
The implications of this shift are profound. As AI models continue to scale and the GPU clusters required to train them continue to grow, the performance ceiling imposed by legacy networking infrastructure becomes not just a technical inconvenience but a genuine bottleneck to scientific and commercial progress. The companies that succeed in building the networking platforms that can support the next generation of AI infrastructure will occupy an extraordinarily important position in the AI economy — one that generates revenue not just from a single customer or use case, but from the foundational layer of a technology that is reshaping virtually every industry on the planet.
For the global community of AI leaders, investors, researchers, and builders who gather at The AI World Organisation's flagship summits and events, Eridu's story is a compelling case study in the kind of deep-tech, infrastructure-level innovation that deserves as much attention as the latest developments in model capabilities. The AI funding news flow of recent months has made clear that capital is following opportunity across the full stack of AI development — and the network layer, long overlooked, is finally getting its moment in the spotlight.
As the AI World Organisation continues to convene conversations among the world's most influential AI stakeholders — from the AI World Summit to the Global AI Awards and beyond — the emergence of companies like Eridu serves as a powerful reminder that the future of AI will be built not just by those who write the algorithms, but by those who build the infrastructure on which those algorithms run. In an industry where the pace of change is measured in months rather than years, Eridu has arrived at exactly the right moment — with the right technology, the right team, and more than $200 million in AI funding to make its vision a reality.