
Overmind raises €2.3M to secure AI agents
Overmind, founded by a former UK intelligence AI specialist, raised €2.3M to monitor and secure AI agents in regulated sectors such as FinTech and healthcare.
TL;DR
London-based Overmind, founded by former MI5 AI specialist Tyler Edwards, raised a €2.3M seed round led by Osney Capital to build a supervision layer for AI agents. The product monitors agent actions in live systems, detects behaviour drift and adversarial abuse, and helps teams intervene, targeting regulated sectors like legal, healthcare, and FinTech.
Introduction: Agentic AI is moving into production—fast
Over the last year, “agentic AI” has shifted from demos to real deployments, where software agents don’t just generate text but also take actions: querying tools, touching customer data, triggering workflows, and interacting with critical internal systems. Once an agent is allowed to act, the security conversation changes—because the risk is no longer only about a model producing a wrong answer, but about a system doing the wrong thing at the wrong time, in the wrong place, with the wrong permissions.
That’s the gap Overmind is aiming to address as it scales up following a newly announced €2.3 million (£2 million) seed round. The company positions itself as the “deployment-layer infrastructure” that helps businesses observe and control what agents do in live environments, rather than focusing only on model-level attacks and prompts.
For teams in legal services, healthcare, and FinTech, the promise of agentic AI is obvious: faster case research and drafting, streamlined patient and clinical operations, quicker compliance checks, and better customer service automation. But these are also the domains where regulatory compliance and data privacy expectations are highest, making real-time supervision and auditability a prerequisite for adoption—not a “nice to have.”
The funding round: Who invested and what it’s for
Overmind says the seed financing will be used to expand technical teams, speed up product development, and scale go-to-market work in legal, healthcare, and FinTech. The round was led by specialist cybersecurity investor Osney Capital, with participation from 14Peaks, Portfolio Ventures, Antler, and Endurance Ventures.
The company’s pitch is built around a blunt observation: AI models can be probed, tricked, and manipulated, and adversarial inputs are an enduring part of how these systems behave in the wild. Overmind’s CEO argues that the industry spends too much energy trying to “fix” an unfixable property—because model vulnerability is not something teams can eliminate entirely—while missing a more urgent issue: what happens when an agent is live in production, interacting with real systems, and its behaviour begins to drift.
In that scenario, the consequences can be concrete: data mishandling, policy violations, operational disruption, and reputational risk—often before humans even realize the system has started to deviate from expectations. Overmind’s stated goal is to provide a monitoring-and-intervention layer that gives teams visibility into agent interactions and the ability to step in before damage occurs.
From a market perspective, this seed round is another signal that “agent security” is becoming a defined category rather than a feature bolted onto existing security tooling. As the AI World organisation continues to highlight practical, business-facing AI adoption at the AI World Summit and related events, this is exactly the kind of category that deserves deeper attention, because it sits at the intersection of engineering, risk, compliance, and leadership decision-making.
What Overmind is building: Supervision as a devtool
Overmind describes itself as a devtool that helps AI learn from production data, turning real-world agent behaviour into continuous improvement. A core element of the approach is “pattern of life analysis,” which—put simply—means establishing what normal behaviour looks like for an agent in its real environment and then detecting deviations as they occur.
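To make the idea concrete, here is a deliberately simplified sketch of pattern-of-life analysis. Everything in it (the function names, the tool-count features, the z-score rule) is our own illustration of the general technique, not Overmind's implementation: normal behaviour is summarised as per-run tool-call frequencies, and a new run is flagged when it uses a tool never seen before or calls a known tool far more or less often than the baseline.

```python
from collections import Counter
from statistics import mean, stdev

# Toy "pattern of life" model: summarise normal agent behaviour as the
# per-run frequency of each tool it calls, then flag deviations.

def build_baseline(runs):
    """runs: list of tool-name lists observed during known-good operation."""
    tools = {tool for run in runs for tool in run}
    baseline = {}
    for tool in tools:
        counts = [Counter(run)[tool] for run in runs]
        sigma = stdev(counts) if len(counts) > 1 else 0.0
        baseline[tool] = (mean(counts), sigma)
    return baseline

def flag_deviations(baseline, run, z_threshold=3.0):
    """Return (tool, reason) alerts for a new run of tool calls."""
    alerts = []
    observed = Counter(run)
    for tool in observed:
        if tool not in baseline:
            # A tool never used during normal operation is always suspicious.
            alerts.append((tool, "never seen in baseline"))
    for tool, (mu, sigma) in baseline.items():
        count = observed[tool]
        if sigma == 0.0:
            if count != mu:
                alerts.append((tool, f"count {count} vs constant baseline {mu}"))
        elif abs(count - mu) / sigma > z_threshold:
            alerts.append((tool, f"count {count} deviates from mean {mu:.2f}"))
    return alerts
```

A production system would track far richer features (data accessed, timing, call sequences) and update the baseline continuously, but the shape of the problem is the same: learn what normal looks like, then alert on deviation.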
This matters because agents are not static scripts; they are systems that can behave differently depending on context, tool availability, data changes, and the evolving complexity of user requests. In practice, many organisations struggle to answer basic questions once an agent is deployed: What did the agent try to access, what tools did it call, how did its actions differ from what we expected, and where is the audit trail when compliance asks for proof?
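The audit-trail part of that question can be sketched in a few lines. The decorator name and log format below are hypothetical, shown only to make "where is the audit trail?" tangible: every tool an agent may call is wrapped so that each invocation, including failures, leaves a record.

```python
import functools
import time

AUDIT_LOG = []  # in practice this would be durable, append-only storage

def audited(tool_name):
    """Record every invocation of a tool, including failures."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            entry = {
                "ts": time.time(),
                "tool": tool_name,
                "args": repr(args),
                "kwargs": repr(kwargs),
                "status": "error",  # overwritten on success
            }
            try:
                result = fn(*args, **kwargs)
                entry["status"] = "ok"
                return result
            finally:
                AUDIT_LOG.append(entry)  # log even when the call raises
        return wrapper
    return decorator

@audited("lookup_customer")
def lookup_customer(customer_id):
    # Stand-in for a real tool the agent is allowed to call.
    return {"id": customer_id, "tier": "gold"}
```

With something like this in place, "what did the agent touch?" becomes a query over the log rather than a forensic reconstruction after the fact.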
Overmind’s platform is positioned as a way to achieve “complete visibility” into agent behaviour, with detection and prevention of deviations in real time—faster than humans could react in the moment. The company also claims its system doesn’t only keep agents safe but makes them better, using reinforcement learning to improve performance and accuracy over time.
To be clear, none of this removes the need for strong fundamentals—like least-privilege access, secure data handling, and robust internal controls—but it responds to a real operational pain point: even well-designed agents can drift when the environment changes. That’s why supervision becomes a product layer of its own, rather than a single toggle inside an LLM framework.
From the perspective of the AI World conference ecosystem, the Overmind story is a timely case study because it reframes “AI safety” from a philosophical debate into a deployment checklist for leaders who want agents to deliver business value without creating silent operational liabilities. It also fits the AI World Summit 2025/2026 narrative: moving beyond hype into the practical, tactical questions that founders, agencies, and enterprises need answered to scale results responsibly.
Founders and team: Intelligence background meets scale-up experience
Overmind was founded in 2025 and is led by co-founder and CEO Tyler Edwards. Edwards previously spent eight years building AI systems for British intelligence agencies, including MI5, MI6, and GCHQ; that experience strongly shapes the company’s “intelligence-grade” positioning for agent supervision in production.
The leadership team also includes CTO Akhat Rakishev, who previously led machine learning infrastructure at Monzo and Lyst. On the commercial side, Overmind’s CRO Sam Brunt is described as having scaled go-to-market at three unicorns: Funding Circle, Pipe, and Vertice.
This mix—deep security and operational context from intelligence plus “build-and-scale” execution from high-growth technology companies—helps explain why specialist cybersecurity investors were willing to lead the round. An investor from Osney Capital frames the thesis in terms of competitive advantage: in autonomous AI, agent security, performance, and execution become decisive differentiators, and Overmind’s tooling is meant to let teams scale deployment with confidence while iteratively improving performance.
Another investor, Antler, similarly emphasizes the category-level importance: as autonomous agents become more common, supervision and security become a major bottleneck, and Overmind’s team is positioned to help define the standard for safe deployment in production. Whether or not Overmind becomes the standard, the narrative is consistent: the market is starting to treat supervision as essential infrastructure for the next phase of AI adoption.
The wider funding context: A fast-forming category in Europe
Overmind’s seed round lands in a broader European funding moment for agentic AI and AI security/governance tooling. In the same period, several other startups raised rounds that illustrate how quickly investors are backing guardrails, monitoring, and adjacent infrastructure for autonomous systems.
Examples cited alongside Overmind include: London-based Archestra securing €2.8 million in pre-seed funding focused on safety guardrails and safe connections to internal data; Italy-based Equixly raising €10 million to scale an AI-driven platform for automated API security testing; and France-based Qevlar AI raising €9.1 million to build agentic AI for security operations centres. The report also notes adjacent funding in broader AI adoption support—such as Ranketta’s €1 million for analytics around brand visibility in LLM outputs and Omnia’s €3.5 million for agent-driven marketing platforms—showing that the capital inflow is not limited to pure security products.
Taken together, the rounds referenced are described as representing more than €26 million invested into startups addressing autonomous agents or the security and governance challenges they create. This kind of aggregation matters because it signals “category certainty”: multiple teams, multiple approaches, and sustained investor interest, all converging on the same conclusion that agentic AI must be supervised to be scaled.
There’s also a practical business driver behind this momentum: organisations are excited by automation, but leaders need assurance that agents won’t quietly introduce compliance failures or operational incidents. In fact, Gartner is cited as estimating that 40% of agentic AI projects will be cancelled by 2027, with inadequate risk controls as a major factor. Even if a company believes that estimate is too high or too low, the implication is clear: risk controls and governance can decide whether agentic AI becomes mainstream or stalls under the weight of preventable incidents.
For the AI World organisation, this is a strong editorial opportunity: the story is not merely “startup raises funding” but “here is the missing operational layer the market is now paying for.” And for the AI World Summit and broader AI World events, it maps cleanly to agenda themes audiences care about: deploying agents responsibly, building for regulated industries, and aligning innovation with compliance and trust.