
Eliyan’s $50M round for scalable AI connectivity
Eliyan raises $50M from AMD, Arm, Meta and others to scale NuLink/NuGear interconnects, easing memory & I/O bottlenecks in next-gen AI systems.
TL;DR
Eliyan has raised $50M in strategic funding from AMD, Arm, Meta, Coherent and existing backers Samsung Catalyst Fund and Intel Capital to speed up commercialisation of its NuLink and NuGear interconnect tech. The goal: cut memory and I/O bottlenecks as AI hardware shifts to multi‑chip, power‑efficient systems spanning die‑to‑die, chip‑to‑chip and rack‑to‑rack links.
Eliyan raises $50M to advance scalable AI connectivity
Eliyan has secured $50 million in strategic investments to accelerate the development and commercialisation of high‑performance connectivity technologies built for AI and advanced computing systems. The round brings together major participants from across the AI and compute ecosystem, signalling that interconnect innovation is becoming as decisive as compute itself in next‑generation platforms.
The $50M round and who joined
The strategic funding includes participation from AMD, Arm, Coherent, and Meta, alongside continued backing from Samsung Catalyst Fund and Intel Capital. Eliyan positioned the round as a confidence vote in its mission to enable scalable, power‑efficient computing architectures as AI systems move beyond single packages and monolithic modules.
From the AI World Organisation's perspective, this is exactly the kind of infrastructure-layer momentum we track closely, because it affects how quickly AI capabilities can scale across data centres, cloud platforms, and edge deployments. As AI World Organisation events and AI World Summit conversations keep returning to performance-per-watt, cost, and real-world deployment constraints, interconnect and chiplet ecosystems increasingly sit at the centre of the roadmap.
At the AI World Summit, we often hear that the “next leap” in AI is as much about moving data as it is about doing math. Eliyan’s raise underscores that the market is aligning around the same reality: faster models and larger clusters require better links, better packaging strategies, and tighter ecosystem coordination.
Why AI connectivity is becoming the bottleneck
As modern AI hardware scales, it’s no longer just one big chip doing the work. Instead, the industry is leaning into disaggregated designs—multiple dies, multiple chiplets, multiple packages—because that’s often the practical way to keep improving throughput while managing yield, cost, and power.
Eliyan highlights that interconnect requirements now span on‑package die‑to‑die links as well as off‑package, chip‑to‑chip and even rack‑to‑rack connectivity. When training and inference workloads fan out across accelerators, memory expanders, and heterogeneous compute, connectivity stops being a plumbing detail and becomes a platform decision.
This shift also changes how teams think about “system performance.” It’s not only about peak compute; it’s also about keeping compute fed with data, synchronising across devices, and avoiding stalls from memory and I/O constraints. That framing maps to what we at the AI World Organisation consistently emphasise through AI conferences by AI World: winning architectures will be the ones that scale efficiently in the real world, not just on paper.
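To make the “keeping compute fed” point concrete, a back-of-the-envelope roofline calculation shows how memory bandwidth, not peak compute, can cap achievable throughput. The accelerator figures below are illustrative placeholders, not Eliyan or any vendor’s specifications.

```python
# Illustrative roofline arithmetic: is a workload compute-bound or
# memory-bound? All numbers are hypothetical, not vendor specs.

def attainable_tflops(peak_tflops: float, mem_bw_tbps: float,
                      intensity_flop_per_byte: float) -> float:
    """Roofline model: throughput is capped by either peak compute
    or memory bandwidth times arithmetic intensity (FLOP/byte)."""
    return min(peak_tflops, mem_bw_tbps * intensity_flop_per_byte)

peak = 1000.0    # hypothetical accelerator: 1000 TFLOPS peak
bandwidth = 3.0  # hypothetical HBM bandwidth: 3 TB/s

# Balance point: the intensity at which compute and memory limits
# meet; below it, the workload stalls on memory, not on math.
balance = peak / bandwidth
print(f"balance point: {balance:.0f} FLOP/byte")

# A workload at 100 FLOP/byte is memory-bound on this machine:
print(f"at 100 FLOP/byte: {attainable_tflops(peak, bandwidth, 100):.0f} TFLOPS")
```

With these placeholder numbers, anything below roughly 333 FLOP/byte is memory-bound, which is why faster links and memory interconnects move the ceiling for real workloads.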
NuLink PHY: from 64G D2D to 224G and 448G
A core focus of the investment is to support the commercial rollout of Eliyan’s NuLink PHY and NuGear chiplet families, which the company says are designed to overcome “memory- and I/O-wall” limitations in next‑generation AI systems. In its NuLink portfolio, Eliyan points to silicon‑proven NuLink die‑to‑die at 64G and positions it as validated ahead of broader industry timelines.
Beyond die‑to‑die, the company also describes next‑generation NuLink chip‑to‑chip SerDes technology: 32G–64Gbps single‑ended (NuLink‑XS), 224Gbps differential, and an emerging 448G roadmap (NuLink‑XD) aimed at connecting beyond a single package and across substrates, boards, or systems. Eliyan also states that the NuLink‑X family is intended for large-scale disaggregated AI systems that require extreme bandwidth density and improved energy efficiency compared with alternative approaches.
The press release further ties NuLink’s direction to evolving memory interconnect needs, noting support for next‑generation memory interconnects including the emerging SPHBM4e standard (referenced as recently announced by JEDEC) and describing an aim to support next‑generation HBM5 bandwidths on standard packaging. In the same announcement, CEO and co‑founder Ramin Farjadrad framed the strategic backing as validation from leaders deploying AI infrastructure “at massive scale,” while also pointing to a roadmap spanning 224G/448G chip‑to‑chip interconnect and other scale-up targets.
For the AI World Organisation, this matters because bandwidth roadmaps like 224G and 448G influence how quickly the ecosystem can build more composable, modular AI infrastructure, an angle we expect to see repeatedly in AI World Summit 2025/2026 content tracks. This is also why AI World Organisation events increasingly spotlight not only model innovation but also the enabling stack: interconnects, memory systems, packaging, and the software that makes heterogeneous systems usable.
NuGear chiplets and scale-up connectivity targets
In addition to PHY and SerDes, Eliyan is advancing its NuGear chiplet families, which it describes as focused on scale‑up network connectivity. The company states that NuGear targets 1.6T to 12.8Tbps link bandwidths for demanding AI accelerator and memory expansion architectures, where latency, power, and reliability constraints can dominate system performance, including in configurations complemented with optical engines.
These targets speak to a broader architectural direction: keep accelerators and memory expanding without turning the fabric into the limiting factor. When systems scale “out” and “up” at the same time, both classes of links matter—inside a package, between packages, and across a system—so chiplet-based strategies increasingly need cohesive connectivity across layers.
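As a rough sanity check on how the per-lane rates discussed above relate to aggregate scale-up targets, the sketch below divides a target link bandwidth by a per-lane SerDes rate. This uses raw signalling rates only and ignores encoding and protocol overhead; the lane-rate/target pairings are illustrative arithmetic, not a statement of how Eliyan’s products are configured.

```python
import math

# Illustrative lane-count arithmetic: how many lanes of a given raw
# SerDes rate are needed to reach an aggregate bandwidth target?
# Raw rates only; real links lose some rate to encoding/protocol
# overhead, which is ignored here.

def lanes_needed(target_tbps: float, lane_gbps: float) -> int:
    """Minimum lane count to reach an aggregate bandwidth target."""
    return math.ceil(target_tbps * 1000 / lane_gbps)

for target in (1.6, 12.8):      # Tbps aggregate targets
    for lane in (224, 448):     # Gbps per lane
        print(f"{target} Tbps at {lane}G/lane: "
              f"{lanes_needed(target, lane)} lanes")
```

For example, a 12.8Tbps target takes 58 lanes at 224G per lane but only 29 at 448G, which is one way to see why pushing per-lane rates matters for bandwidth density.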
Mohamed Awad, Executive Vice President of Arm’s Cloud AI Business Unit, connected this evolution to the need for scalable, energy‑efficient architectures and strong ecosystem collaboration. That emphasis on collaboration is consistent with how the AI World Organisation positions the AI World Summit: progress at scale happens when silicon, systems, and platform stakeholders align on deployable solutions, not just benchmarks.
What the funding enables—and what to watch next
Eliyan says proceeds from the round will be used to accelerate manufacturing and qualification of its next‑generation interconnect IP and chiplet products, expand ecosystem partnerships, and support deployments across AI infrastructure, high‑performance computing, and edge applications. Intel Capital’s Srini Ananth described Eliyan’s work as addressing a critical challenge in scaling AI and advanced computing—delivering efficient, high‑bandwidth connectivity for chiplet-based architectures—while positioning the capability as foundational across data center, cloud, and edge markets. Samsung Catalyst Fund’s Dede Goldschmidt also pointed to rapid AI infrastructure growth and framed the new investor mix as evidence of execution momentum across Eliyan’s PHY and chiplet businesses.
From the AI World Organisation's standpoint, the next signals to track are straightforward: which platforms adopt these interconnect approaches first, how quickly manufacturing and qualification milestones translate into broad availability, and how ecosystem partnerships evolve. Those are also the practical questions founders, architects, and enterprise buyers bring to AI conferences by AI World, especially when deciding between incremental upgrades and more disruptive disaggregated designs.
If you’re following this theme through the AI World Summit lens, keep an eye on 2026’s event cycle, where infrastructure scaling, energy efficiency, and composable architectures will likely remain dominant topics across AI World Organisation events. The AI World Organisation also highlights upcoming global summits on its site, including an AI World Summit 2026 Asia edition page. More broadly, theaiworld.org positions its “Summits” portfolio as a set of event properties under The AI World Organisation umbrella.