
OPAQUE raises $24M for Confidential Enterprise AI
OPAQUE raises $24M Series B at ~$300M valuation to push Confidential AI for enterprise teams, enabling verifiable privacy, governance, and auditability.
TL;DR
OPAQUE raised $24M in a Series B round at a ~$300M valuation to accelerate delivery of its Confidential AI platform for enterprises. The focus: keeping sensitive data and models protected at runtime with verifiable privacy, policy enforcement, and audit-ready controls, so teams can move from AI pilots to secure production deployments, even in regulated cloud environments.
OPAQUE has announced a $24 million Series B round at an approximately $300 million post-money valuation, positioning “Confidential AI” as a practical trust layer for enterprises that want to run advanced AI on sensitive, regulated, or proprietary data without losing governance and control. The round was led by Walden Catalyst and included returning investors Intel Capital, Race Capital, Storm Ventures, and Thomvest, with the Advanced Technology Research Council (ATRC) joining as a new investor and strategic partner, bringing OPAQUE’s total funding to $55.5 million.
At the ai world organisation, we track signals like this because they reveal what’s becoming “table stakes” for enterprise AI adoption, especially for the leaders and practitioners who join ai conferences by ai world and the ai world summit to benchmark real-world deployment readiness. As the market shifts from experimentation to production, themes like runtime verification, cryptographic proof, and audit-ready policy enforcement are moving from “nice to have” to business-critical—exactly the kind of governance-by-design conversation that will shape agendas for the ai world summit 2025 / 2026 and other ai world organisation events.
Momentum behind Confidential AI platforms is accelerating, and OPAQUE is framing its approach as a response to the “trust gap” that stalls many enterprise AI programs before they reach production scale. In OPAQUE’s view, enterprises want AI agents and models running on their most valuable data, but concerns about data leakage, compliance exposure, and limited auditability keep CISOs, compliance teams, and legal stakeholders cautious.
The $24M raise and what it signals for enterprise AI
OPAQUE describes itself as a Confidential AI company focused on establishing a trust layer for enterprise AI, and it is using this Series B announcement to emphasize that the critical blocker to scaling AI is not model capability alone—it’s whether an organization can prove its data and models stayed protected and governed during execution. The company says its latest financing will accelerate the delivery of its Confidential AI platform and expand its ability to provide verifiable evidence of privacy and policy enforcement before, during, and after runtime.
The investor mix also tells an important story about where Confidential AI is headed. Alongside a venture lead (Walden Catalyst) and returning institutional backers (Intel Capital, Race Capital, Storm Ventures, Thomvest), OPAQUE highlighted ATRC as both a new investor and a strategic partner—an indicator that confidentiality and sovereign-aligned governance are becoming part of the broader enterprise AI infrastructure conversation.
Young Sohn of Walden Catalyst is cited in the announcement emphasizing that enterprises will struggle to bring AI into production without verifiable guarantees for sensitive data and models, and he frames OPAQUE’s capabilities—privacy, policy enforcement, and model integrity—as rapidly becoming non-negotiable. That phrasing matters because it mirrors what many enterprise decision-makers already experience: AI pilots can be easy to launch, but the step from pilot to production becomes difficult when governance and security can’t be demonstrated to stakeholders who carry regulatory and legal accountability.
From an ecosystem standpoint, OPAQUE is leaning into the idea that Confidential AI has moved quickly from “emerging theory” to “enterprise mandate,” and it points to adoption and endorsements from companies such as NVIDIA, AMD, Intel, and Anthropic, as well as major hyperscalers including Google Cloud, Microsoft Azure, and AWS. OPAQUE also references Gartner’s view that Confidential AI techniques are essential for securing GenAI workflows and protecting sensitive data used by AI models, reinforcing the notion that confidentiality mechanisms are becoming a foundational control plane rather than a niche add-on.
For enterprise leaders attending the ai world summit and other ai world organisation events, the signal is clear: the next phase of AI advantage will depend as much on trust engineering as on model selection. In practical terms, this means the buying conversation will increasingly include questions like “Can we prove what happened at runtime?” and “Can we verify policy enforcement without relying on assumptions?”—not just “Can the model answer well?”
Why “Confidential AI” is suddenly urgent, not optional
Enterprise AI adoption is accelerating, but the biggest barrier is often not a shortage of use cases or budgets—it’s governance risk attached to sensitive inputs and outputs. OPAQUE describes a widening “trust chasm” where organizations want to deploy AI agents on proprietary data to drive productivity and competitive advantage, yet they pause deployments because the risk of leakage or compliance violations is difficult to eliminate and difficult to prove away.
This tension is especially acute in regulated sectors and data-rich industries. Financial services, insurance, healthcare, and high-tech enterprises frequently sit on large volumes of sensitive or proprietary data, and those datasets can unlock meaningful value if AI can use them safely—yet the consequences of mishandling are immediate (regulatory action, contractual issues, reputational damage, and operational disruption).
OPAQUE’s messaging aligns with a broader market realization: security controls that stop at “data at rest” and “data in transit” are no longer sufficient when AI workloads process data in memory and generate outputs that must remain governed. The enterprise question is shifting from “Is it encrypted?” to “Can we continuously verify confidentiality and policy compliance during computation?”
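To make that shift concrete, here is a minimal, purely illustrative Python sketch of the “verify before you compute” pattern: sensitive data is released to an AI workload only after an attestation report shows the runtime matches an approved code measurement. The report format, function names, and HMAC-based check are simplifying assumptions for readability (real TEE attestation uses hardware-rooted asymmetric signatures), and this is not OPAQUE’s API.

```python
import hashlib
import hmac

# Illustrative only: release data to an AI workload only after verifying
# an attestation report that proves the runtime matches approved code.
# Real attestation (e.g., TEE quotes) uses hardware-rooted asymmetric
# signatures; a shared-key HMAC stands in here for brevity.

EXPECTED_MEASUREMENT = "approved-workload-hash"  # hash of the vetted image

def verify_attestation(report: dict, trusted_key: bytes) -> bool:
    """Authenticate the report, then check the workload measurement."""
    payload = report["measurement"].encode()
    expected = hmac.new(trusted_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report["signature"]):
        return False  # not signed by the trusted attestation service
    return report["measurement"] == EXPECTED_MEASUREMENT

def release_record(report: dict, trusted_key: bytes, record: bytes):
    # Data crosses the governance boundary only with runtime proof in hand;
    # in practice you would wrap a decryption key for the enclave instead.
    return record if verify_attestation(report, trusted_key) else None
```

The design point is that the decision to expose data is made against evidence about the running system, not against documentation about it.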
The platform’s emphasis on proof is important because many organizations are trying to modernize governance without slowing down innovation. In high-velocity AI roadmaps, it’s common for teams to build prototypes quickly, only to discover that they cannot satisfy internal assurance requirements when it’s time to integrate with sensitive systems or share outputs across business units.
This is exactly where Confidential AI becomes a strategic lever instead of a defensive control. If an organization can demonstrate runtime confidentiality and governance controls, it can often expand the scope of AI usage to higher-value datasets, higher-impact workflows, and more autonomous agentic systems—without taking an unacceptable risk posture.
From the ai world organisation perspective, this trend is likely to shape enterprise AI playbooks showcased across the ai world summit 2025 / 2026 cycle. The executive conversation will increasingly focus on how to unlock “sensitive data AI” while maintaining verifiable protections, and how to measure trust and compliance as engineering outputs rather than policy documents.
What OPAQUE says its platform delivers: proof-driven governance
OPAQUE positions its Confidential AI platform as secure and auditable infrastructure designed to give enterprises verifiable proof of privacy, model integrity, and policy enforcement across an AI system’s lifecycle and runtime. The company states that the goal is to move from trust-by-assumption to trust-by-evidence: showing that sensitive data remains confidential during computation, model parameters are not exposed, and policies are enforced as written.
In the announcement, CEO Aaron Fulkerson is cited arguing that trust is the real barrier to scaling enterprise AI, and that the key is verification rather than assumption. He ties Confidential AI to runtime proof—evidence that the data remained private during processing, the model weights were not revealed, and organizational policies were enforced correctly throughout execution.
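As a rough illustration of trust-by-evidence, the sketch below wraps every model call in a policy gate and appends each decision to a hash-chained audit log, so enforcement can be demonstrated after the fact rather than asserted. The names (AuditLog, policy_allows, governed_inference) and the toy policy are assumptions for illustration; this is a generic pattern, not OPAQUE’s implementation.

```python
import hashlib
import json
import time

# Generic "trust by evidence" pattern: every model call passes a policy
# gate, and each decision is appended to a hash-chained audit log so
# enforcement can be demonstrated after the fact.

class AuditLog:
    def __init__(self):
        self.entries: list[dict] = []
        self._prev_hash = "0" * 64

    def append(self, event: dict) -> None:
        record = {"ts": time.time(), "prev": self._prev_hash, **event}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self._prev_hash = digest
        self.entries.append(record)  # editing any entry breaks the chain

def policy_allows(user_role: str, data_class: str) -> bool:
    # Toy policy: only analysts may run models over restricted data.
    return data_class != "restricted" or user_role == "analyst"

def governed_inference(model, prompt: str, user_role: str,
                       data_class: str, log: AuditLog):
    allowed = policy_allows(user_role, data_class)
    log.append({"action": "inference", "role": user_role,
                "data_class": data_class, "allowed": allowed})
    if not allowed:
        raise PermissionError("policy denied this request")
    return model(prompt)
```

Because each entry hashes the previous one, silently editing or deleting a record breaks the chain, which is what turns ordinary logging into audit-ready evidence.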
OPAQUE also frames Confidential AI as a pathway to help organizations overcome the “pilot trap,” where programs stay in limited trials because stakeholders cannot sign off on privacy and compliance posture at production scale. The company says it enables teams to deploy AI with confidence and transparency while meeting rising compliance expectations.
A particularly notable detail is OPAQUE’s claim that its approach extends traditional data governance tooling with real runtime verification, and that this can enable teams to move from pilot to production 4–5 times faster. Whether or not every enterprise experiences the same acceleration, the underlying idea is compelling: reducing the cost of assurance reduces the friction of scaling, which can transform governance from a bottleneck into an accelerator.
OPAQUE’s “About” section also offers background that helps explain its credibility claims in privacy and security engineering. The company says it was born from UC Berkeley’s RISELab and that its mission is to solve the security and compliance concerns that block AI adoption at scale, specifically around preventing data leaks and compliance violations when AI touches sensitive data.
On ecosystem traction, OPAQUE lists customers and partners including ServiceNow, Anthropic, Encore Capital, Accenture, and leaders across high tech, financial services, insurance, and healthcare. This customer set reinforces that the target market is not just research labs, but organizations trying to operationalize AI in environments where governance requirements are strict and the data is commercially or legally high-stakes.
For practitioners attending ai conferences by ai world and sessions at the ai world summit, the practical takeaway is that “Confidential AI” is best evaluated as an operational capability, not just a security slogan. Enterprise buyers will want to ask how proof is generated, what is auditable, what policies can be enforced, how workflows integrate with existing governance and compliance frameworks, and what the operating model looks like for security, legal, and engineering teams working together.
Where OPAQUE plans to expand next: post-quantum, training, sovereign cloud
OPAQUE states that it is expanding into post-quantum security, confidential AI training, and sovereign cloud environments, framing these as essential next steps for scaling AI across sensitive enterprise workloads. These three areas represent a broader “future-proofing” posture: preparing for evolving cryptographic threat models, enabling confidentiality beyond inference into training workflows, and supporting deployments where data residency and sovereignty controls are core requirements.
The mention of sovereign cloud is especially notable because it ties Confidential AI to geopolitical and regulatory pressures, not just technical security needs. In many regions and sectors, sovereignty requirements influence cloud procurement, cross-border data movement, and even AI model governance, and Confidential AI technologies can be positioned as enabling controls for these constraints.
OPAQUE includes commentary from Dr. Najwa Aaraj, CEO of the Technology Innovation Institute (TII), emphasizing that “sovereign AI” requires verifiable guarantees for how data, models, and policies are protected and governed. In the announcement, she positions OPAQUE’s foundation as cryptographically verifiable and supportive of operating AI systems with stronger sovereignty assurances, aligning with why ATRC is described as both investor and partner.
OPAQUE also connects this new funding to a recent product milestone: the launch of OPAQUE Studio. The company describes OPAQUE Studio as a development environment that helps enterprises build and deploy Confidential AI agents with runtime-verifiable privacy, policy compliance, and auditability.
This product emphasis matters for the enterprise adoption curve because developer workflows often determine whether security and governance are “baked in” or bolted on later. By focusing on a studio environment and agent deployment pathway, OPAQUE is implicitly acknowledging that the next enterprise AI wave is defined not only by LLM chat interfaces but by operational agents and workflows that act on sensitive systems and data.
In the language enterprises use, this is where “risk” becomes tangible: agents can fetch, transform, and generate outputs that may trigger downstream actions, making auditability and policy enforcement essential. If Confidential AI can make these controls verifiable at runtime while preserving developer velocity, it becomes attractive not only to security leadership but also to engineering leaders judged on time-to-value.
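One way to picture runtime enforcement for agents, sketched below under hypothetical names: every tool the agent can invoke is wrapped so that side-effecting calls are checked against an allow-list and recorded before they execute. A production system would also attest the runtime itself and enforce data-level policies, as discussed above.

```python
from typing import Callable

# Hypothetical sketch of gating agent actions at runtime: each tool is
# wrapped so calls are checked against an allow-list and recorded before
# they execute. Names and tools are illustrative.

ALLOWED_ACTIONS = {"read_ticket", "draft_reply"}  # no "send_payment" here

def governed_tool(name: str, fn: Callable, audit: list) -> Callable:
    def wrapper(*args, **kwargs):
        permitted = name in ALLOWED_ACTIONS
        audit.append({"tool": name, "args": repr(args), "allowed": permitted})
        if not permitted:
            raise PermissionError(f"agent may not call {name}")
        return fn(*args, **kwargs)
    return wrapper

# Usage: wrap tools before handing them to the agent framework.
audit_trail: list = []
read_ticket = governed_tool("read_ticket", lambda tid: f"ticket {tid}", audit_trail)
send_payment = governed_tool("send_payment", lambda amt: "paid", audit_trail)

print(read_ticket("T-42"))   # allowed, and logged
try:
    send_payment(100)        # blocked, and logged
except PermissionError as err:
    print(err)
```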
For the ai world organisation community, this evolution is directly relevant to how enterprise AI maturity is defined. As ai world organisation events bring together buyers, builders, and governance leaders, the expectation will be that “secure AI” includes enforceable controls and measurable proof, and that the approach extends across inference, training, and cross-cloud deployment scenarios—exactly the direction OPAQUE is signaling.
What this means for enterprise leaders—and why it belongs on AI World Summit agendas
This Series B announcement underscores a broader reality: enterprise AI success increasingly depends on trust architecture, not just model performance. OPAQUE is explicitly framing “verifiable guarantees” as the missing piece that helps organizations move beyond pilots when sensitive data and strict compliance requirements are in play.
For enterprise leaders, there are several practical implications. First, governance and security should be evaluated as engineering capabilities that can be proven at runtime, not as a checklist completed once at procurement time. Second, the organizations that figure out how to safely unlock proprietary data for AI are likely to widen their advantage, because data depth and uniqueness often matter more than marginal differences between foundation models.
Third, the rise of Confidential AI suggests that many enterprise programs will shift to architectures that assume data sensitivity by default. Instead of limiting AI to sanitized datasets or narrow use cases, teams will aim to operate on real, high-value data—while maintaining privacy, governance, and auditability as measurable outputs.
This is precisely why the ai world summit and the ai world summit 2025 / 2026 roadmap should include deeper sessions on Confidential AI, confidential computing patterns, governance-by-design, and how to operationalize proof-based compliance for AI agents. At the ai world organisation, we see this as a bridge between strategy and implementation: boards and executives want AI value, but security and compliance leaders need defensible assurance, and engineering teams need tools that keep velocity high.
For readers planning their year of learning and networking, ai conferences by ai world and broader ai world organisation events are designed to connect these stakeholders—CXOs, CISOs, product leaders, data teams, and AI engineers—around what it takes to go from prototype to production responsibly. The AI World Organisation’s events and summits focus on convening the ecosystem around practical adoption, innovation, and future-facing governance conversations, which is exactly where Confidential AI fits as the enterprise trust layer matures.
If you’re building on sensitive data—or want to—this is the moment to treat confidentiality and runtime verification as first-class product requirements. The companies that succeed will likely be the ones that can demonstrate not only performance, but proof: proof of privacy controls, proof of policy enforcement, and proof of governance across the AI lifecycle.