ManaMind Raises €1.2M to Automate Game QA with AI
ManaMind secures €1.2M in pre-seed funding, led by Sure Valley Ventures, to build autonomous AI agents that automate game quality assurance testing.
TL;DR
London-based startup ManaMind has raised €1.2 million in pre-seed funding, led by Sure Valley Ventures, to build AI agents that autonomously test video games and detect bugs — just like a human tester would, but at machine speed. The platform auto-generates bug reports too, promising up to 80% faster QA for game studios tired of slow, costly manual testing.
ManaMind Secures €1.2 Million in Pre-Seed Funding to Revolutionise AI-Powered Game Quality Assurance
The global gaming industry has long wrestled with one of its most persistent behind-the-scenes challenges: quality assurance. For every visually stunning open world or fast-paced multiplayer title that reaches players, there are thousands of hours of painstaking manual testing that most people never see or think about. Bugs, glitches, gameplay inconsistencies, and performance failures all need to be caught before a game ships, and traditionally that responsibility has fallen on human testers sitting through the same sequences repeatedly, documenting the same types of errors, and submitting reports that developers then have to sift through. It is time-consuming, expensive, and frankly one of the least glamorous parts of building a video game. That is exactly the problem London-based startup ManaMind has set out to fix, and it just raised €1.2 million (approximately $1.5 million) in pre-seed funding to push that vision forward.
The round, which closed in late April 2026, was led by Sure Valley Ventures (SVV), a well-regarded technology-focused venture capital firm. Participating alongside SVV were EWOR, Ascension, SyndicateRoom, and Heartfelt, a diverse mix of investors whose involvement signals growing confidence in the AI-powered game testing space. The funding arrives at a moment when autonomous agents are a central talking point across nearly every technology vertical, and ManaMind's approach represents one of the more compelling real-world applications of agent-based AI to date.
What ManaMind Actually Does — And Why It Matters
At its core, ManaMind builds autonomous AI agents that are trained specifically to play video games and identify problems within them. Rather than relying on scripted test sequences or traditional automation frameworks that require developers to manually define test parameters, ManaMind's agents observe, interact with, and make decisions inside a game's environment using only video and audio inputs — exactly the way a real human player would experience it.
This is a meaningful technical distinction. Most existing game testing automation tools are brittle — they work within narrow pre-defined conditions and fail the moment a game's environment changes. ManaMind's agents, by contrast, adapt to the game dynamically. They explore, make gameplay decisions, and identify anomalies the same way an experienced human tester would, but they do so at machine scale, meaning they can run continuously, across multiple sessions simultaneously, without fatigue, without distraction, and without the need for shift rotations.
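ManaMind has not published its internal interfaces, so the loop it describes can only be sketched in outline. The following is a minimal illustration, assuming a hypothetical Frame type, a random stand-in policy, and a toy anomaly heuristic in place of the company's learned components:

```python
"""Illustrative pixels-in, actions-out test-agent loop.

ManaMind's architecture is not public; everything here is an assumption
standing in for the learned components a real agent would use.
"""
import random
from dataclasses import dataclass


@dataclass
class Frame:
    pixels: list[float]   # stand-in for a decoded video frame
    audio_rms: float      # stand-in for an audio-loudness feature


def capture_frame() -> Frame:
    # A real system would grab video/audio from the running game build.
    return Frame(pixels=[random.random() for _ in range(16)],
                 audio_rms=random.random())


def looks_anomalous(frame: Frame) -> bool:
    # Placeholder heuristic; a trained model would score the frame instead.
    return frame.audio_rms > 0.98


def run_session(max_steps: int = 200) -> list[dict]:
    findings = []
    for step in range(max_steps):
        frame = capture_frame()
        if looks_anomalous(frame):
            findings.append({"step": step, "note": "possible audio glitch"})
        # The chosen action would be injected as controller/keyboard input.
        action = random.choice(["move", "turn_left", "turn_right", "jump"])
    return findings


if __name__ == "__main__":
    print(run_session())
```

The point of the sketch is the shape of the loop: there is no scripted path through the game, only continuous observation, action, and anomaly flagging, which is what lets such an agent keep working when the environment changes.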
The platform does not just find bugs — it also generates detailed, actionable bug reports automatically. For development studios, this is arguably just as valuable as the detection itself. A bug report that is vague or poorly documented can waste just as much time as finding the bug in the first place. ManaMind's system produces reports that are structured, readable, and immediately usable, allowing developers to skip the documentation phase and move directly into resolving the issue. According to the company's own estimates, the platform can deliver up to an 80% improvement in overall QA efficiency — a number that, if validated at scale, would represent a fundamental shift in how games are brought to market.
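What one of these reports actually contains is not public either. As a purely hypothetical illustration of "structured and immediately usable", here is one plausible shape for the record an agent might emit, with every field name assumed:

```python
import json
from dataclasses import asdict, dataclass, field


@dataclass
class BugReport:
    # All field names are hypothetical; ManaMind's actual schema is not public.
    title: str
    severity: str                    # e.g. "critical", "major", "minor"
    build: str                       # game build the agent was testing
    steps_to_reproduce: list[str]
    observed: str
    expected: str
    evidence: list[str] = field(default_factory=list)  # clip/screenshot paths


report = BugReport(
    title="Player clips through terrain after ledge jump",
    severity="major",
    build="0.9.3-rc1",
    steps_to_reproduce=["Load level 3", "Sprint toward the east ledge", "Jump"],
    observed="Camera falls below the map and the character respawns",
    expected="Player lands on the ledge collider",
)
print(json.dumps(asdict(report), indent=2))
```

Serialising to JSON as shown is one obvious way such reports could flow straight into a studio's existing issue tracker without a human rewriting them first.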
The company was founded in 2025 by Emil Kostadinov and Sabtain Ahmad, and in a short period it has already established design partnerships with Included Games and Crazy Labs, two companies with significant footprints in the mobile gaming space. These early partnerships are important proof points for ManaMind's go-to-market strategy — they indicate that studios are not just interested in the concept but are actively investing time and resources into integrating the platform into their development workflows.
The Problem With Manual QA in Modern Game Development
To appreciate why this round matters, it helps to understand just how challenging quality assurance has become for modern game developers. Games today are not the contained, linear experiences they once were. Open-world titles can have hundreds of square kilometres of explorable terrain. Live-service games are constantly updated with new content, new mechanics, and new systems that all need to be tested before they reach players. Mobile games face fragmentation across thousands of device configurations, screen sizes, and operating system versions. The scope of what needs to be tested has grown exponentially, but the tools and processes used to do it have struggled to keep pace.
Manual QA remains the industry standard for a large portion of the market, particularly among mid-sized and smaller studios that cannot afford the infrastructure or headcount that the biggest publishers maintain. For these teams, QA is often a bottleneck. Deadlines get pushed. Launches get delayed. Patches ship with known issues because there simply was not enough time or manpower to catch everything before release. The human cost is also real — QA testers are among the most overworked and underpaid members of game development teams, frequently working on short-term contracts with little job security.
This is the landscape ManaMind is entering, and it is one where the demand for a better solution is genuine and urgent. Automation tools for software development have attracted significant investment across the board, but the gaming sector has specific requirements that make general-purpose testing solutions a poor fit. Games are visual, interactive, and dynamic in ways that typical enterprise software simply is not. Building AI agents that can genuinely navigate and evaluate these environments requires a purpose-built approach, which is exactly what the ManaMind founding team has focused on from the outset.
Emil Kostadinov, CEO and co-founder of ManaMind, has spoken directly about the company's philosophy. In his view, game development should be fundamentally a creative endeavour, and the repetitive, mechanical aspects of QA should not be consuming the time and attention of human developers. "We're automating the manual, time-consuming parts so studios can focus on building amazing worlds," he has said. He added that the company developed its own proprietary visual model for virtual environments because the precision demanded by gaming requires a level of specialisation that general-purpose models cannot provide.
The Technology Behind the Agents
What makes ManaMind technically interesting is the architecture of its AI system. The platform uses three AI agents working in coordination to test games effectively. Each agent plays a distinct role in the testing process — one handles gameplay navigation and exploration, another focuses on identifying deviations from expected behaviour, and the third generates the structured reports that developers ultimately receive. This division of responsibility allows the system to cover more ground more efficiently than a single monolithic model could.
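Taking that description at face value, the division of labour can be sketched as a three-stage pipeline. Only the explorer, observer, and reporter roles come from the description above; the toy game state, the frame-rate check, and the report format below are invented for illustration:

```python
def explorer(num_steps: int):
    """First agent: drives the game and yields observations."""
    for step in range(num_steps):
        # Toy stand-in for a real game state; every 7th frame stutters.
        yield step, {"fps": 25 if step % 7 == 0 else 60}


def observer(step: int, state: dict, min_fps: int = 30):
    """Second agent: flags deviations from expected behaviour."""
    if state["fps"] < min_fps:
        return {"step": step, "kind": "performance", "fps": state["fps"]}
    return None


def reporter(anomalies: list[dict]) -> str:
    """Third agent: turns raw anomalies into a developer-facing report."""
    lines = [f"[{a['kind'].upper()}] step {a['step']}: fps fell to {a['fps']}"
             for a in anomalies]
    return "\n".join(lines)


anomalies = [a for step, state in explorer(50) if (a := observer(step, state))]
print(reporter(anomalies))
```

The benefit of the split is the one described above: each stage can be improved, scaled, or run in parallel independently of the others.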
The proprietary visual model that the company has built in-house is central to the platform's performance. In internal evaluations, this model reportedly outperformed models from major AI labs in the specific task of detecting in-game bugs — a finding that speaks to the value of domain-specific training data and fine-tuning. General-purpose vision models are not trained on the visual language of video games — the way environments are rendered, the way characters move, the way UI elements behave — so they miss things that a purpose-trained model would catch.
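Nothing concrete about ManaMind's visual model has been disclosed beyond the claim that it is purpose-trained, but the general recipe of specialising a generic vision backbone on domain imagery is well established. A minimal PyTorch sketch of that recipe, using random tensors in place of real labelled game frames and an assumed binary clean-versus-buggy label:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a generic pretrained backbone (downloads ImageNet weights),
# then specialise only the final layer on game frames.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                    # freeze generic features
model.fc = nn.Linear(model.fc.in_features, 2)      # clean frame vs buggy frame

optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random stand-ins for a labelled dataset of rendered game frames.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

logits = model(frames)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"one fine-tuning step, loss={loss.item():.3f}")
```

Whether ManaMind's model resembles this at all is unknown. What the sketch captures is why domain data matters: the frozen backbone carries only generic visual features, so everything game-specific has to come from the new training signal.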
This technical foundation is what gives ManaMind a defensible competitive position in a space that larger players could theoretically enter. Building a model that genuinely understands game environments takes time, proprietary data, and domain expertise that is not easily replicated. The team's decision to develop this capability in-house rather than building on top of commercially available models reflects both ambition and a clear-eyed understanding of where the technical moat in this business actually lies.
From a broader AI perspective, this is also a compelling case study in applied autonomous agent design. The challenges of building agents that can operate reliably in complex, unpredictable visual environments are relevant far beyond gaming, and ManaMind's work is directly contributing to the state of the art in this area. At The AI World Organisation, we have been tracking the rapid maturation of autonomous AI agents across verticals, and ManaMind represents exactly the kind of focused, domain-specific application that tends to produce durable value rather than hype.
What the Funding Will Be Used For — And What Comes Next
With €1.2 million in pre-seed AI funding secured, ManaMind has laid out a clear set of priorities for deploying the capital. The company plans to expand its engineering team, bringing in additional talent to accelerate the development of its proprietary models and deepen the platform's capabilities. It will also use the funds to support expansion into key international markets — a logical next step given that the gaming industry is a global one, with major studios and publishers concentrated in the UK, continental Europe, North America, and Asia.
The design partnerships already in place with Included Games and Crazy Labs will play an important role in this phase of development. These relationships give ManaMind direct access to real-world testing environments, real development workflows, and real feedback loops that will shape how the product evolves. For a company at the pre-seed stage, having paying or engaged design partners is a significant indicator of product-market fit and commercial viability.
Looking further ahead, Kostadinov has made clear that gaming is a launchpad rather than a ceiling for ManaMind's ambitions. The company's stated long-term vision is to build the autonomous testing layer not just for video games but for all software — and ultimately for robotics. This is a bold but logical progression. The core capability that ManaMind is building — AI agents that can reliably navigate complex visual environments and identify failures within them — has obvious applications in any context where traditional automated testing falls short. Software interfaces, physical robot systems, and any environment that is too dynamic for scripted test automation all represent potential future markets.
This kind of platform thinking, where an initial vertical use case serves as the proving ground for a broader horizontal capability, is a well-trodden path in enterprise software. What makes ManaMind's version of it credible is that the gaming use case is genuinely hard. If the company can build agents that reliably test games, it will have demonstrated a level of visual reasoning and adaptive behaviour that translates directly into adjacent industries. The €1.2 million raised in this round is the fuel for that foundational work.
For anyone tracking AI funding news in the European startup ecosystem, ManaMind is a name worth keeping on the radar. The combination of a clear pain point, a differentiated technical approach, strong early investor backing from Sure Valley Ventures, and a founding team that appears to have both the vision and the domain expertise to execute makes this one of the more interesting pre-seed stories to emerge from the UK tech scene in 2026. The games industry has needed a genuinely autonomous testing solution for years. ManaMind may well be the team that delivers it.