
C2i Semiconductors Raises $15M for AI Power Efficiency
C2i Semiconductors raised $15M to cut AI data-center power losses. Learn what it means and join the ai world summit 2026 by the ai world organisation.
TL;DR
Bengaluru-based C2i Semiconductors raised $15M led by Peak XV, with TDK Ventures and Yali Deeptech joining, to build power-management tech that reduces conversion losses in AI data centres as workloads push power limits. It plans a U.S. presence, Taiwan applications engineering, and near-term chip tapeouts for enterprise server platforms and cloud operators.
$15M round and why it matters
C2i Semiconductors’ new $15 million financing round is led by Peak XV Partners, with Yali Deeptech and TDK Ventures also participating. The Bengaluru-based startup had earlier raised $4 million from Yali Capital in November 2024, and this new round is positioned as the fuel for a much bigger push—product execution plus global go-to-market.
The immediate use of funds is tightly linked to customer proximity and deployment readiness: C2i has said the proceeds will support global expansion, including setting up a U.S. office to stay close to customers and key decision-makers, and later creating an applications and systems engineering team in Taiwan to support ODMs and protect design wins. In data-centre infrastructure, these “close to the customer” teams are often what turns promising silicon into qualified, repeatable deployments, because validation cycles and platform roadmaps move with the hyperscaler and OEM calendar, not with a startup’s preferred timeline.
C2i is aiming at a segment that is suddenly in the spotlight: the energy conversion path inside AI data centres. A core argument highlighted in public discussions around the company is that a lot of energy gets lost while converting high-voltage power down to the low-voltage levels GPUs actually use, and that this conversion overhead is now too large to ignore at scale. In plain terms, every wasted watt is paid for multiple times—once in electricity, again in cooling, and again in capacity planning that limits how many accelerators a facility can run without expensive electrical upgrades.
For enterprise leaders tracking AI infrastructure strategy, this kind of funding round is a signal that "power semiconductors + system architecture" is returning to centre stage alongside GPUs, networking, and cooling. That is also why this story fits naturally into the conversation themes we see across the ai world organisation community, where builders and operators increasingly discuss end-to-end AI deployment constraints, not just model performance. It is also a timely agenda thread for the ai world summit 2025 and 2026 discussions, where infrastructure efficiency and real-world scaling challenges tend to attract strong interest from both technical and business audiences.
What C2i is building: grid-to-processor power delivery
C2i was co-founded in June 2024 by Ram Anant, Vikram Gakhar, Preetam Tadeparthy, Dattatreya Suryanarayana, Harsha S B, and Muthusubramanian N V. The company is developing power management solutions for AI data centres and cloud infrastructure, with a stated focus on system-level innovations that rethink how power flows from the grid to the processor core.
One way to understand the “system-level” framing is to contrast it with point optimisations. Many approaches try to improve one stage of conversion, one board, or one regulator. C2i’s message, instead, is about redesigning power delivery as a coordinated stack—conversion plus control plus packaging and integration—so the total path from incoming power to GPU/processor demand is treated as one engineered system. This is also why TechCrunch notes C2i’s “grid-to-GPU” positioning and describes the company as building plug-and-play, system-level power solutions designed to reduce energy losses.
The technical pain point is easy to relate to if you think in voltage terms: power entering a data centre at very high voltage must be converted down to very low voltage for accelerators and other compute components. Each conversion step can introduce losses and transient instability, and at AI scale the cumulative effect becomes a business issue, not just an electrical one. When organisations run large training jobs or high-throughput inference, they often experience spikes and swings in demand that stress power delivery; if power is less stable or less efficient, utilisation drops and operating cost rises.
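The compounding effect of those conversion steps can be sketched with a quick back-of-envelope calculation. The per-stage efficiencies below are illustrative assumptions, not C2i figures, but they show how a handful of individually reasonable stages multiply into the 15–20% loss range cited for today's facilities:

```python
# Back-of-envelope sketch of cascaded conversion losses.
# Stage efficiencies are illustrative assumptions, not C2i or vendor data.

def end_to_end_efficiency(stage_efficiencies):
    """End-to-end efficiency of a conversion chain is the product of
    the per-stage efficiencies, so small per-stage losses compound."""
    eta = 1.0
    for stage in stage_efficiencies:
        eta *= stage
    return eta

# Hypothetical four-stage path: facility AC/DC rectification, rack-level
# DC/DC conversion, board-level intermediate bus, point-of-load regulator.
stages = [0.97, 0.96, 0.95, 0.93]
eta = end_to_end_efficiency(stages)
print(f"end-to-end efficiency: {eta:.1%}")          # ~82.3%
print(f"lost in conversion:    {1 - eta:.1%}")      # ~17.7%, inside the cited 15-20% band
```

Even though no single stage here loses more than 7%, the chain as a whole wastes close to a fifth of the incoming energy, which is why treating the path as one system rather than optimising stages in isolation can matter.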
C2i has also been explicit about why this matters specifically for modern AI infrastructure: the company’s focus is to improve efficiency and performance outcomes in environments where GPUs dominate both power draw and value creation. TechCrunch similarly emphasises that much of the strain in data centres comes from converting electricity efficiently inside facilities, and cites an estimate that this conversion process wastes roughly 15% to 20% of energy today. If that order-of-magnitude loss is even directionally right in real deployments, it explains why investors and operators see power delivery as an “AI scaling lever” with immediate ROI potential.
Execution plan: tapeouts, fabs, and early customer pull
C2i has said it intends to move quickly from design to silicon, with its first product scheduled for tapeout in April and a second product following in July. The company has also indicated that one chip will be manufactured at Tower Semiconductor in Israel, while another will be fabricated at GlobalFoundries, either in Singapore or Dallas. For a young semiconductor startup, publishing this level of manufacturing intent is meaningful because it shows the roadmap is organised around real production timelines, not just lab prototypes.
On the customer side, C2i has said it is already in discussions with three to four enterprise server customers to define components for their next-generation platforms, and that it is targeting global enterprise players. That matters because server and data-centre power design is deeply intertwined with platform qualification; it’s not enough to have a better chip if it can’t be engineered into racks, power shelves, and board designs that meet reliability and serviceability requirements.
TechCrunch also points to a near-term validation moment, noting that C2i expects its first two silicon designs to return from fabrication between April and June, after which the company plans to validate performance with operators and hyperscalers that have requested data reviews. Peak XV’s Rajan Anandan is quoted describing the “feedback loop” as relatively short and suggesting the market will know more within about six months, anchored around the upcoming silicon results and customer validation. In other words, this is not a story that will take years before any concrete signal emerges; there are clear checkpoints in 2026.
The company’s operating footprint is also expanding in a way that mirrors how infrastructure hardware companies scale. TechCrunch reports the Bengaluru-based startup has built a team of about 65 engineers and is setting up customer-facing operations in the U.S. and Taiwan as it prepares for early deployments. This matches C2i’s own stated plan to establish a U.S. office and later build an applications and systems engineering team in Taiwan. For buyers, this kind of footprint matters because support, integration, and on-site collaboration can be the difference between a promising pilot and a production design win.
From an ecosystem lens, this also connects to broader platform conversations that show up repeatedly at the ai world summit: real adoption depends on execution under constraints—power budgets, procurement cycles, qualification test plans, supply chains, and services. The ai world organisation’s event programs, including multiple summits across regions, are designed around exactly these cross-functional realities, bringing engineering, operations, and leadership into the same room. That is why “ai conferences by ai world” and ai world organisation events can be practical places to track how infrastructure bottlenecks evolve, and how solution approaches move from claims to proofs.
The economics: efficiency, performance, and ROI
C2i’s public claims focus on measurable operational outcomes. The company says its system-level architecture can recover 8% to 10% efficiency, improve GPU performance by about 3%, and significantly extend server lifetimes. It also states that a 10% efficiency gain can translate into nearly 1 kW saved per server tray, scaling to hundreds of kilowatts across large deployments and improving ROI for operators.
TechCrunch adds a similar framing from the platform perspective: by integrating conversion, control, and packaging as one platform, C2i estimates it can cut end-to-end losses by around 10%, described as roughly 100 kilowatts saved for every megawatt consumed, with knock-on effects for cooling, utilisation, and overall data-centre economics. This is the kind of improvement that can change planning assumptions—how much compute you can fit per rack, how close you can run to facility limits, and how quickly you hit the “need a new substation” moment.
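The claimed numbers above are easy to sanity-check with simple arithmetic. The sketch below reproduces the "~100 kW per MW" and "~1 kW per tray" figures from the stated ~10% gain; the tray power and electricity price are hypothetical assumptions added for illustration, not company data:

```python
# Sanity-check of the cited savings figures. Only the 10% gain, the
# per-MW and per-tray claims come from the article; tray power and
# electricity price are hypothetical assumptions for illustration.

facility_load_kw = 1000.0   # 1 MW of facility load
efficiency_gain = 0.10      # claimed ~10% cut in end-to-end losses
tray_power_kw = 10.0        # assumed ~10 kW server tray (hypothetical)
price_per_kwh = 0.10        # assumed $0.10/kWh electricity (hypothetical)

saved_per_mw_kw = facility_load_kw * efficiency_gain    # 100 kW per MW
saved_per_tray_kw = tray_power_kw * efficiency_gain     # ~1 kW per tray

hours_per_year = 24 * 365
annual_savings = saved_per_mw_kw * hours_per_year * price_per_kwh

print(f"saved per MW:   {saved_per_mw_kw:.0f} kW")      # 100 kW
print(f"saved per tray: {saved_per_tray_kw:.1f} kW")    # 1.0 kW
print(f"annual electricity savings per MW: ${annual_savings:,.0f}")  # $87,600
```

Under these assumptions, a single megawatt of load yields on the order of $90K per year in electricity alone, before counting the knock-on savings in cooling and the deferred capacity upgrades the article describes.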
It’s also important to interpret these numbers in the context of AI’s broader capex cycle. The Entrackr report cites market research suggesting AI infrastructure capex could reach $500–$600 billion over the next 12–18 months and potentially grow to $1 trillion by 2030. If investment flows anywhere near those ranges, even marginal improvements in power conversion and delivery become a huge economic lever for cloud providers, data-centre operators, and large enterprises modernising their stacks.
For operators, the logic is not just saving electricity; it’s protecting performance consistency under real workload conditions. If power delivery is unstable or inefficient, you can lose effective throughput and pay more per unit of useful compute. That is why the discussion is increasingly about “grid-to-chip” thinking rather than isolated component swaps, and why investors are willing to fund approaches that tackle the full chain.
From the perspective of enterprise AI buyers, there is also a strategic angle: power constraints can become procurement constraints. Many organisations are discovering that “getting more GPUs” is only half the battle; the other half is whether their facility can power and cool the systems reliably at the density and utilisation AI requires. This is one reason power delivery innovation is becoming a mainstream topic at forums and leadership gatherings, including the ai world summit 2025 / 2026 tracks that increasingly blend infrastructure, governance, and business ROI into one narrative.
What this signals for India’s deeptech and the AI World agenda
C2i’s story also fits a larger pattern: India is producing more companies that start with deep technical advantages and target global infrastructure categories. TechCrunch explicitly frames C2i’s momentum as part of the maturing semiconductor design ecosystem in India and notes that government-backed design-linked incentives have lowered the cost and risk of tapeouts, contributing to the viability of building globally competitive semiconductor products from India. Peak XV’s Rajan Anandan is also quoted drawing an analogy for semiconductors in India as being “like 2008 e-commerce,” indicating an early but accelerating phase.
For the ai world organisation, this is exactly the kind of development that belongs in a broader “AI impact on ground” conversation—because AI’s impact is not just models and apps, but also infrastructure, energy, supply chains, and talent. The organisation’s programming spans multiple flagship event formats, including AI World Summit editions across regions, and other summit properties listed on its site. On the events calendar, for example, the AI World Organisation lists AI World Summit 2026 Asia scheduled for 28 May 2026 in Singapore, alongside other upcoming events and cities.
For readers who follow ai conferences by ai world, this funding round is a good case study to watch through 2026: it has near-term tapeout milestones, identified manufacturing plans, and a clear hypothesis about measurable gains in efficiency and GPU performance. It also illustrates a broader theme that keeps surfacing in operator conversations: the next wave of AI advantage may come from “boring” infrastructure improvements that unlock more useful compute per megawatt, rather than from compute alone.
If you’re an enterprise leader, founder, or infrastructure architect, this is a moment to update your mental model of the AI stack. GPUs still matter, but so do power shelves, converters, telemetry, packaging, and system-level coordination—because those layers can decide whether your AI investment performs as planned in production. This is why the ai world organisation continues to position its platform and ai world organisation events as a place to connect the dots between investment signals, engineering realities, and go-to-market execution, and why “ai conferences by ai world” are increasingly relevant to hardware and infrastructure builders—not just software teams.