SambaNova Raises $350M, Unveils SN50 AI Chip
SambaNova secures a $350M Series E and launches the SN50, an AI chip it claims runs 5x faster than competitors, in a bold bid to challenge Nvidia's AI hardware dominance.
TL;DR
SambaNova Systems has raised $350 million in Series E funding and launched the SN50 AI chip, which the company says runs 5x faster than competing chips at 3x lower cost. Backed by Intel, Vista Equity Partners, and others, the chip can connect up to 256 units and handle models with 10 trillion+ parameters. Japan's SoftBank Corp. is already its first enterprise customer, signaling strong real-world confidence in the technology.
SambaNova Raises $350M in Series E Funding, Launches SN50 AI Chip It Claims Runs 5x Faster Than Rivals
The AI chip race just got a whole lot more competitive. SambaNova Systems, one of the most closely watched names in enterprise AI infrastructure, has announced a landmark $350 million Series E funding round alongside the unveiling of its most powerful chip to date — the SN50. This development represents a significant turning point in the broader AI hardware landscape, where the dominance of traditional GPU-based systems is increasingly being challenged by purpose-built inference architectures. In the world of AI funding news, this deal stands out not just for its size, but for the bold technological promise attached to it — a chip that the company claims runs five times faster than competing solutions at a fraction of the cost.
The funding round was led by Vista Equity Partners and Cambium Capital, with notable participation from Intel Capital, Battery Ventures, and investment accounts advised by T. Rowe Price Associates, Inc. The strong roster of investors signals growing confidence in the idea that AI inference — rather than model training — is where the next wave of enterprise value will be created. As AI funding continues to accelerate across the global tech ecosystem, SambaNova's latest round reinforces the narrative that specialized hardware companies are becoming mission-critical players in the AI stack, not just supporting actors behind the more glamorous names in foundation model development.
What the $350M AI Funding Round Means for the Industry
To understand why this round of AI funding is generating such buzz, it helps to zoom out and look at what is happening across the broader AI infrastructure market. For the past several years, the dominant narrative in AI hardware has revolved around Nvidia and its GPU-based platforms, which became the default choice for training and deploying large language models. However, as more companies have moved beyond the research-and-development phase and into full-scale production deployments, the limitations of GPU-centric infrastructure have started to show. The cost of running AI workloads at scale — particularly inference workloads where models respond to live user queries in real time — has become a pressing concern for enterprises, model providers, and governments alike.
SambaNova's $350 million Series E arrives at precisely this inflection point. The company has articulated a clear thesis: the future of AI in data centres belongs to inference-optimized hardware, and the SN50 is its answer to that challenge. The funds raised will be deployed across three main areas: scaling up production of the SN50 hardware, expanding cloud services to reach more enterprise customers, and deepening software integrations to make it easier for businesses to plug SambaNova's infrastructure into their existing AI pipelines. This is not a company burning capital on speculation; it is a company that has identified a structural gap in the market and is moving aggressively to fill it. In the context of recent AI funding news, this is one of the most strategically coherent fundraises of 2026 so far.
The round also speaks volumes about the appetite of institutional investors for infrastructure-layer AI bets. While AI funding headlines over the past two years have been dominated by foundation model companies and application-layer startups, there is a quiet but powerful shift underway toward the hardware and systems layer. Investors are beginning to realize that no matter how impressive a frontier model becomes, it is only as useful as the infrastructure that can serve it to users quickly, reliably, and affordably. SambaNova is positioning itself at the centre of that realization, and its latest funding round is a clear vote of confidence from some of the most sophisticated players in the venture and growth equity world.
The SN50 Chip: Built for the Age of Agentic AI
At the heart of this story is the SN50 itself, a chip that SambaNova describes as purpose-built for the demands of large-scale, real-time AI inference. Unlike general-purpose GPUs that were originally designed for graphics rendering and later adapted for AI workloads, the SN50 is designed from the ground up around the specific computational requirements of modern AI inference tasks. The chip delivers up to five times more computing performance than competing chips on the market, and it offers four times greater network bandwidth than SambaNova's previous-generation hardware. These are not incremental improvements; they represent a generational leap in what is possible for organizations that need to run AI at scale.
One of the most striking technical specifications of the SN50 is its ability to connect up to 256 chips through a high-speed interconnect fabric. This means that rather than being limited by the performance of a single chip, enterprises can build massively parallel inference clusters that distribute workloads intelligently across hundreds of processors simultaneously. The result is dramatically reduced latency, higher throughput, and the ability to handle much larger AI models with far longer context windows. Specifically, the SN50 can handle models with over 10 trillion parameters and context lengths exceeding 10 million tokens — capabilities that were simply not feasible at acceptable cost or speed on conventional GPU infrastructure.
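To make that scale concrete, here is a rough back-of-envelope sketch in Python. It uses only the figures quoted above (10 trillion parameters, 256 chips); the 1-byte-per-parameter weight size (8-bit quantization) is an illustrative assumption, not a published SN50 specification:

```python
# Illustrative arithmetic only: bytes_per_param is an assumption for the
# sake of example, not a SambaNova spec.
def min_memory_per_chip_gb(params: float, chips: int, bytes_per_param: float) -> float:
    """Memory each chip must hold just for model weights, evenly sharded."""
    total_bytes = params * bytes_per_param
    return total_bytes / chips / 1e9

# The article's figures: a 10-trillion-parameter model on a 256-chip fabric,
# assuming 8-bit (1-byte) quantized weights.
per_chip = min_memory_per_chip_gb(params=10e12, chips=256, bytes_per_param=1)
print(f"{per_chip:.1f} GB of weights per chip")  # → 39.1 GB of weights per chip
```

Even under aggressive quantization, each chip must hold roughly 39 GB of weights alone, before accounting for activations or the key-value cache that multi-million-token contexts require, which is why a high-speed multi-chip fabric is essential rather than optional for models of this size.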
The timing of this product launch is also telling. AI is entering what many in the industry are calling the "agentic era," where instead of AI simply responding to individual queries, systems are expected to autonomously plan, reason, and execute complex multi-step tasks. Agentic AI workloads are significantly more demanding than traditional inference tasks — they require sustained, low-latency performance over longer interaction chains, and they place enormous pressure on infrastructure to remain responsive under high concurrency. The SN50 has been designed with exactly these requirements in mind. According to SambaNova, its new chip can run agentic AI workloads at roughly three times lower cost than comparable GPU-based systems, a claim that — if it holds up at scale — could make it one of the most compelling value propositions in enterprise AI infrastructure today. Rodrigo Liang, co-founder and CEO of SambaNova, captured this vision succinctly: "The real race is about who can light up entire data centres with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud."
Intel Partnership Signals a New Competitive Dynamic in AI Hardware
Perhaps the most strategically significant element of this announcement is not the funding round itself, nor even the impressive specs of the SN50 — it is the multi-year strategic collaboration between SambaNova and Intel. In an industry where Nvidia has long enjoyed near-monopolistic dominance in AI hardware, the emergence of a well-funded, technically credible alliance between SambaNova and one of the world's most established chip companies is a development that deserves serious attention. This partnership is not simply a co-marketing arrangement or a superficial integration exercise — it is a deep, infrastructure-level collaboration aimed at building a compelling alternative to the GPU-dominated AI stack.
Under the terms of the partnership, Intel will make a direct investment in SambaNova as part of this funding round, channeled through Intel Capital. The two companies will work together to develop an AI cloud powered by Intel technology, combining SambaNova's full-stack AI systems and inference cloud platform with Intel's leadership across compute, networking, and memory. The collaboration will specifically focus on integrating SambaNova's inference architecture with Intel Xeon processors, while also leveraging Intel's global marketing and distribution channels to accelerate go-to-market reach. The broader vision is to build advanced AI data centres that blend multiple types of hardware — CPUs, GPUs, and specialized inference accelerators — creating a heterogeneous compute fabric that can serve the diverse needs of AI-native organizations, model providers, and government customers.
Kevork Kechichian, Executive Vice President and General Manager of Intel's Data Center Group, articulated the strategic logic clearly: "Customers are asking for more choice and more efficient ways to scale AI. By combining Intel's leadership in compute, networking, and memory with SambaNova's full-stack AI systems and inference cloud platform, we are delivering a compelling option for organisations looking for GPU alternatives to deploy advanced AI at scale." This statement is important because it signals that Intel — a company that has struggled to find its footing in the AI chip race over the past few years — is now making a deliberate and well-resourced bet on inference-optimized hardware as the path to regaining relevance in the AI infrastructure market. For the broader AI ecosystem, this partnership introduces meaningful new competitive pressure on Nvidia, which has benefited enormously from the lack of credible alternatives at enterprise scale.
SoftBank Becomes First SN50 Customer, Eyes Asia-Pacific Deployment
The credibility of any new hardware platform depends enormously on who is willing to bet their production infrastructure on it, and SambaNova has secured a flagship customer that would make any chip company envious. SoftBank Corp., the Japanese telecommunications and technology giant with one of the most aggressive AI investment programs in the world, has committed to becoming the first enterprise customer to deploy the SN50 in its next-generation AI data centres in Japan. This is a deployment that will directly serve sovereign and enterprise customers across the entire Asia-Pacific region — a market of enormous scale and strategic significance.
SoftBank's decision to standardize on the SN50 is not a casual pilot project. The company is building what it describes as an "AI inference fabric for Japan," a national-scale AI infrastructure designed to deliver low-latency inference services for both open-source and proprietary frontier models while meeting strict performance and data sovereignty requirements. This is precisely the kind of mission-critical, high-stakes deployment that validates SambaNova's technical claims at the highest possible level. If the SN50 can deliver on its promised performance and cost advantages in SoftBank's demanding production environment, it will be a powerful proof point that the technology is ready for the most demanding enterprise workloads on the planet.
Hironobu Tamba, Vice President and Head of the Data Platform Strategy Division at SoftBank Corp., explained the rationale: "By standardizing on SN50, we gain the ability to deliver world-class AI services on our own terms — with the performance of the best GPU clusters, but with far better economics and control." That phrase, "far better economics and control," encapsulates exactly why this moment matters. For large enterprises and national governments deploying AI at scale, cost and sovereignty are not secondary concerns. They are primary requirements. The SN50's combination of superior performance, lower operating costs, and integration into a broader Intel-backed ecosystem makes it an increasingly attractive choice for organizations that need to move beyond the limitations and dependencies of a single GPU vendor.

As the global AI infrastructure market matures and competition intensifies, SambaNova's latest moves, backed by $350 million in fresh AI funding, position it as one of the most important companies to watch in the months and years ahead. For those following AI funding news closely, this is a story that signals not just a single company's ambitions, but a broader realignment in the global race to power the next generation of intelligent systems.