Hosted.ai Raises $19M to Fix GPU Waste Crisis
Hosted.ai secures a $19M seed round led by Creandum to unlock 5x GPU utilisation gains for neoclouds and regional AI infrastructure providers.
TL;DR
Hosted.ai, founded by ex-Nvidia veterans, has raised $19M in seed funding led by Creandum to tackle a largely ignored crisis: GPU waste. On average, 60% of paid GPU capacity sits idle during AI workloads. The company's software platform fixes this by pooling GPUs, enabling multi-tenant workload placement, and allowing overcommit, delivering up to 5x better utilisation for neocloud operators without touching a single piece of hardware.
Hosted.ai Raises $19M in Seed Funding from Creandum to Solve the GPU Waste Crisis Holding Back AI Infrastructure
The artificial intelligence revolution runs on graphics processing units, but the infrastructure powering that revolution is shockingly inefficient. For every dollar spent on GPU compute today, a significant portion is burned on idle capacity that no workload ever touches. Hosted.ai, a San Jose-based startup founded by veterans of Nvidia and other top infrastructure companies, has set out to change that. On March 19, 2026, the company announced it had closed a $19 million seed round led by Creandum, with participation from Repeat VC, People Ventures, Z21 Ventures, Golden Sparrow, Hersir Ventures, and Tekton. The round marks a bold step toward rewriting the economics of the GPU cloud industry, and it comes at a moment when the world can least afford to ignore the problem.
The GPU Waste Problem No One Is Talking About Loudly Enough
If you follow AI funding news, you have heard an awful lot about GPU scarcity. The narrative has dominated headlines for years: there are not enough GPUs to go around, chips are backlogged, and data centres cannot keep up with demand. But according to Ditlev Bredahl, the CEO of Hosted.ai, that story misses the real crisis entirely. "The GPU market has a waste problem, not a scarcity problem," Bredahl said in a statement accompanying the funding announcement. "Customers are forced to rent GPU as a static resource, because that is the only way operators can sell it — and AI workloads leave about 60% of that GPU idle, on average."
That figure is remarkable: on average, 60% of the GPU capacity that AI developers and enterprises purchase and pay for simply sits unused during active workloads. The waste compounds at the infrastructure layer. Neoclouds, cloud operators that have poured tens of millions of dollars into building data centres full of high-end GPUs for AI workloads, are watching large portions of their capital investment generate zero return. When a single GPU card can cost upwards of $30,000 and entire racks go underutilised for hours every day, the financial damage is severe. Neoclouds are losing significant money on idle compute, and that loss ultimately flows back to customers as higher prices and back to operators as narrowing margins that threaten the viability of the entire neocloud sector.
The existing structure of the GPU cloud market forces this inefficiency. Today, GPUs are rented at the card, server, and rack level, a paradigm inherited from the early days of compute, when workloads were far more homogeneous than the spiky, bursty, inference-heavy jobs that modern AI applications generate. For large language model inference, real-time AI API services, fine-tuning jobs, and enterprise AI pipelines, compute requirements fluctuate wildly within seconds. A static resource model cannot serve these patterns efficiently, yet the entire industry has been stuck with it. Hosted.ai's founding insight was that fixing this was not primarily a hardware problem; it was a software problem waiting for the right team to solve it.
How Hosted.ai Builds the Operating System for the GPU Economy
Hosted.ai describes its platform as "the operating system for the GPU-powered AI economy", and that framing captures what the company is building. Rather than competing directly with hyperscalers or building new hardware, Hosted.ai provides the software layer that sits between physical GPU infrastructure and the workloads running on top of it. The platform's core technology centres on three interlocking capabilities: GPU pooling, optimised multi-tenant workload placement, and GPU overcommit.
GPU pooling allows multiple physical GPUs to be managed as a unified, elastic resource rather than as individual, siloed units. Workloads can be spread dynamically across that pool based on real-time availability, priority, and efficiency requirements. The multi-tenant workload placement engine then handles the scheduling logic of running multiple customer jobs on shared hardware simultaneously, securely and without interference. Finally, GPU overcommit lets operators sell more GPU capacity than is physically present at any given moment, much as cloud providers have long oversold traditional compute based on statistical utilisation averages. Together, these three mechanisms deliver up to a fivefold improvement in GPU utilisation over the static renting model that dominates the market today. A toy sketch of how the three pieces interact appears below.
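Hosted.ai has not published its scheduler internals, so the following is only a minimal, illustrative sketch of the three mechanisms working together. The class names, the 2.0x overcommit factor, and the least-loaded placement rule are all assumptions made for the example, not details of the product.

```python
# Toy illustration of GPU pooling, multi-tenant placement, and overcommit.
# All names and parameters here are assumed for illustration.
from dataclasses import dataclass, field

@dataclass
class Gpu:
    name: str
    reserved: float = 0.0            # fraction of the card promised to tenants
    tenants: list = field(default_factory=list)

@dataclass
class Pool:
    gpus: list
    overcommit: float = 2.0          # assumed: sell up to 2x physical capacity

    def place(self, tenant: str, demand: float):
        """Place a workload on the least-loaded card that still has headroom
        under the overcommit ceiling; None means the pool is exhausted."""
        candidates = [g for g in self.gpus if g.reserved + demand <= self.overcommit]
        if not candidates:
            return None
        best = min(candidates, key=lambda g: g.reserved)
        best.reserved += demand
        best.tenants.append(tenant)
        return best.name

pool = Pool([Gpu("gpu0"), Gpu("gpu1")])
# Four tenants each reserving 0.6 of a card: 2.4 cards of demand lands on
# 2 physical cards, which a static per-card rental model could not accept.
for i in range(4):
    print(f"tenant{i} -> {pool.place(f'tenant{i}', 0.6)}")
```

In a static rental model the third tenant would be turned away; under pooling with overcommit, the scheduler keeps packing jobs against the statistical headroom left by idle reservations.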
The implications for neocloud economics are transformative. A neocloud that achieves five-times better utilisation on the same physical hardware can either serve five times as many customers or cut its CAPEX requirements by 80% while maintaining the same revenue, as the quick calculation below shows. For a market in which the barrier to entry has historically been a nine-figure investment in hardware alone, this kind of leverage is genuinely disruptive. Hosted.ai's platform does not just improve margins; it restructures the business model of running a GPU cloud, making it accessible to regional service providers who previously could not justify the capital commitment required to compete.
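A back-of-envelope check of the 80% figure, using the ~$30,000 per-card price cited earlier: the 5x utilisation gain is Hosted.ai's claim, while the 100-card fleet size is an assumed, illustrative number.

```python
# Back-of-envelope check of the CAPEX claim. The 5x figure is Hosted.ai's;
# the 100-card fleet is an assumed size; ~$30,000 is the per-card price
# cited earlier in the article.
CARD_PRICE = 30_000
fleet = 100
improvement = 5                        # claimed utilisation gain

cards_needed = fleet / improvement     # same workload on 1/5 the hardware
saving = 1 - cards_needed / fleet
print(f"CAPEX: ${fleet * CARD_PRICE:,} -> ${int(cards_needed) * CARD_PRICE:,} "
      f"({saving:.0%} reduction)")
# CAPEX: $3,000,000 -> $600,000 (80% reduction)
```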
Beyond the core platform, Hosted.ai is also developing a complementary product called packet·ai, a neocloud built directly on the optimised GPU infrastructure of the company's own customers. Packet·ai delivers GPU-as-a-Service at market-leading prices by drawing on spare capacity from the Hosted.ai network, while simultaneously generating direct demand for partner GPUs. The company also plans to offer an open-source neocloud portal that any operator can deploy immediately, dramatically reducing time-to-market for new entrants into the AI cloud space.
Creandum Backs the Infrastructure Layer of AI's Next Chapter
For Creandum, a leading European venture capital firm behind companies like Spotify and Klarna and a growing portfolio of AI infrastructure plays, the decision to lead Hosted.ai's seed round reflects a clear thesis about where value is being created in the AI stack. Rather than betting on another application-layer AI product or foundation model, Creandum is investing in the pipes and engines that every AI product ultimately depends on. Carl Fritjofsson of Creandum noted that the "GPU shortage" narrative misses a much larger efficiency opportunity at the infrastructure layer, and that Hosted.ai's technology is positioned to capture that opportunity at scale.
This round signals something important for the broader funding landscape as well. Investors are increasingly looking beyond the headline-grabbing model companies and towards the infrastructure startups that make deploying AI economically viable for businesses of all sizes. Hosted.ai's $19 million seed is a bet that the real long-term winners of the AI era may not be the companies writing the most impressive prompts, but the companies ensuring that the hardware powering those prompts is not wasted minute by minute. As interest in AI funding continues to surge globally, deals like this one highlight a growing maturity in how the venture community thinks about the AI value chain.
The round also drew support from a strong network of specialist investors. Repeat VC, People Ventures, Z21 Ventures, Golden Sparrow, Hersir Ventures, and Tekton all joined as participants, reflecting both broad conviction behind Hosted.ai's thesis and the diversity of investors watching the GPU infrastructure space closely. For a seed round, the syndicate is an unusually rich mix of repeat enterprise software investors and hardware-adjacent venture partners, a strong signal that the founding team commands credibility across the investment community.
The Founding Team: 25 Years of Infrastructure DNA
Hosted.ai was founded by a team with deep roots in building and scaling infrastructure software for service providers. The four co-founders, Ditlev Bredahl, Julian Chesterfield, Narendar (Naren) Shankar, and James Withall, bring more than 25 years of combined experience across Nvidia, VMware, Expedia, XenSource, OnApp, Sunlight, and UK2. The ex-Nvidia thread in the founding team's background is particularly relevant: having built and operated at Nvidia's scale means these founders understand both the technical architecture of modern GPU systems and the commercial realities facing the operators who deploy them at enterprise level.
Bredahl, as CEO, has been the most visible public voice for the company, making the case consistently that the GPU economy needs smarter software around the chips that already exist, not just more chips. Shankar, as Chief Commercial Officer, has driven the company's partnerships and go-to-market strategy, including a notable partnership with Maerifa that bundles hardware sourcing, financing, and Hosted.ai's software platform in a single end-to-end solution for new neocloud builders. Withall and Chesterfield round out the founding team's technical and operational capabilities, ensuring the platform's engineering roadmap keeps pace with the rapidly evolving demands of AI infrastructure clients globally.
The company was founded in 2024 and launched publicly in 2025, meaning it has moved quickly from concept to working product to funded growth stage in under two years. With teams now distributed across the United States, EMEA, and Asia-Pacific, Hosted.ai has already taken on a global posture well ahead of its Series A — a reflection of the genuinely international market opportunity the company is pursuing. The speed of that growth trajectory is itself a testament to how acute the GPU efficiency problem has become in the eyes of both operators and enterprise AI customers worldwide.
Competing in a Maturing GPU Marketplace and What Comes Next
Hosted.ai is entering a market that already has established names. Platforms like Together Computer, Runpod, and Vast.ai have each built meaningful positions in the on-demand GPU compute space, and all three are well-capitalised and actively growing. Hosted.ai's competitive differentiation lies in the layer at which it operates: rather than targeting end developers directly as its primary customer, Hosted.ai sells to the neoclouds and service providers who themselves serve those developers. This B2B2C model means Hosted.ai's technology is multiplied across every customer its clients serve, and its success is tied to the health and growth of the regional and independent cloud ecosystem as a whole.
The company's next major product milestone is GPU Mesh, a federated resource exchange that lets service providers publish spare GPU capacity and subscribe to capacity from other providers in the network. This extends the geographic reach of any individual neocloud without additional hardware investment, and effectively creates a liquid spot market for GPU compute across a decentralised network of providers; a toy sketch of the publish/subscribe flow follows below. For European companies and public sector organisations with strict requirements around data sovereignty and regional data processing, GPU Mesh offers a compelling alternative to the hyperscaler-dominated landscape, delivering cost-effective, low-latency compute without requiring data to cross jurisdictions.
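Hosted.ai has not published GPU Mesh's protocol or APIs, so the sketch below is a minimal illustration of a federated capacity exchange under assumed semantics: the Offer and Mesh names, the cheapest-offer matching rule, and the region pinning are all hypothetical.

```python
# Toy sketch of a federated capacity exchange in the spirit of GPU Mesh.
# The protocol, names, and matching rule are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Offer:
    provider: str
    region: str            # e.g. "eu-central" for sovereignty-bound buyers
    gpus: int
    price_per_hour: float  # USD per GPU-hour

class Mesh:
    """Providers publish spare capacity; buyers subscribe to the cheapest
    matching offer, optionally pinned to a region so data never has to
    cross a jurisdiction boundary."""
    def __init__(self):
        self.offers: List[Offer] = []

    def publish(self, offer: Offer) -> None:
        self.offers.append(offer)

    def subscribe(self, gpus: int, region: Optional[str] = None) -> Optional[Offer]:
        eligible = [o for o in self.offers
                    if o.gpus >= gpus and (region is None or o.region == region)]
        if not eligible:
            return None
        best = min(eligible, key=lambda o: o.price_per_hour)
        best.gpus -= gpus                     # reserve the capacity
        return best

mesh = Mesh()
mesh.publish(Offer("neocloud-a", "eu-central", gpus=8, price_per_hour=2.10))
mesh.publish(Offer("neocloud-b", "us-west", gpus=16, price_per_hour=1.75))
match = mesh.subscribe(gpus=4, region="eu-central")   # sovereignty-pinned buy
print(match.provider if match else "no capacity")     # -> neocloud-a
```

The region filter is what makes the sovereignty argument concrete: a cheaper offer in another jurisdiction is simply never eligible for a pinned request.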
In a world where AI funding headlines are increasingly dominated by billions flowing into model training and hyperscaler infrastructure, Hosted.ai's story is a powerful reminder that some of the most consequential bets in the AI economy are being made at the software optimisation layer. Making existing hardware five times more efficient is, in practical terms, equivalent to deploying five times as many data centres at a fraction of the cost and carbon footprint. As the AI World Organisation continues to track and highlight transformative developments across global AI markets, from investment rounds and infrastructure breakthroughs to policy shifts and startup ecosystems, the emergence of companies like Hosted.ai represents exactly the kind of foundational innovation that will determine whether the AI economy can scale sustainably and equitably into the next decade.