
Blockbrain raises €17.5M for knowledge AI agents
Blockbrain secures €17.5M Series A to build secure knowledge bots and digital twins, helping enterprises retain expertise with compliant AI agents.
TL;DR
Blockbrain raised €17.5M in Series A funding to scale its ‘knowledge AI agents’—tools that capture employee expertise as secure, governed knowledge bots, often set up in weeks. The aim is to stop critical know-how from walking out the door, help new hires ramp faster, reduce time wasted searching for answers, and roll out compliant AI across teams as it expands in Europe and the UK.
The real cost of “knowledge drain” in modern enterprises
In many organizations, the most valuable operational advantage isn’t just a dataset or a patent—it’s the lived, hard-earned expertise people build over years: how decisions are made, why certain trade-offs work, which steps prevent failure, and what “good” looks like in a regulated environment. When experienced employees move on, that tacit knowledge often leaves with them, and teams end up rebuilding methods, workflows, and decision logic from scratch—losing time, lowering productivity, and introducing risk right when businesses can least afford it.
This challenge is especially acute in knowledge-intensive and highly regulated industries, where internal know-how directly shapes quality, speed, and compliance outcomes, and where talent shortages make replacement cycles slower and more expensive. It’s also one of the reasons enterprise leaders are looking beyond “general-purpose” AI assistance and toward specialized systems that can preserve institutional memory, enforce governance, and deliver reliable outputs for high-stakes work.
From the perspective of the ai world organisation, this is the kind of practical, operational AI story that matters: not AI as a demo, but AI as infrastructure for real teams with real accountability.
What Blockbrain is building: knowledge bots and digital twins
Blockbrain is a Stuttgart-based company building an AI platform designed to create “digital knowledge twins” and AI agents—referred to as Knowledge Bots—based on expert input, with the goal of sharing that expertise across teams while keeping information accurate and secure. The core idea is straightforward: systematically digitize specialist knowledge and experience, then make it usable across the organization as knowledge bots that can store ways of thinking, decision-making logic, methods, and repeatable processes.
A key promise is speed to value: knowledge twins can be set up within weeks and then used across the organization, turning otherwise hidden expertise into an asset that can be made “permanently available,” continuously improved, and even automated. In at least one example cited, Seifert Logistics reports weekly time savings of up to 15% by using a digital knowledge twin, which highlights how “knowledge capture” can translate into measurable operational impact rather than remaining a theoretical benefit.
Blockbrain positions this as a shift in the automation journey: after years of focusing on transactional automation (for example, RPA), the next frontier is augmenting and automating knowledge work—pulling value from both structured and unstructured internal information to create competitive advantage. The company’s approach is no-code oriented, aiming to help enterprises deploy AI agents without turning every knowledge-management initiative into a long, expensive custom engineering project.
Why trust, security, and governance are central to the product strategy
In enterprise environments, especially regulated ones, adoption doesn’t hinge only on whether a system can answer questions—it hinges on whether leaders can trust it, govern it, and defend it under audit. Blockbrain emphasizes its security and governance architecture, stating that the platform is GDPR-compliant, ISO 27001-certified, and designed to be ready for EU AI Act requirements.
Another element is data sovereignty and vendor independence: the platform is described as multi-model and model-agnostic, allowing customers to choose the AI models that fit their needs while meeting baseline requirements, and avoiding dependence on a single AI provider. The reporting also notes that, if needed, customer data can be stored in highly secure, regional cloud environments controlled by the customer organization—an important point for enterprises with strict residency or security mandates.
Blockbrain also highlights work done “before the LLM,” including knowledge extraction, indexing, and retrieval, positioning that layer as critical IP for reducing hallucinations and improving answer quality. Reliability features are part of the roadmap as well, including automated replacement and fallback processes intended to improve response consistency as systems scale across more teams and use cases.
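To make the “before the LLM” idea concrete, here is a minimal, purely illustrative sketch of a retrieve-then-generate flow with a confidence-based fallback. The snippet names, keyword scoring, and threshold are assumptions for illustration only—not Blockbrain’s actual pipeline, which the coverage does not detail:

```python
# Illustrative sketch (NOT Blockbrain's implementation): retrieval runs before
# any model call, and a low retrieval score triggers an explicit fallback
# instead of letting the model answer ungrounded.

from dataclasses import dataclass

@dataclass
class Snippet:
    source: str
    text: str

# Toy knowledge base standing in for extracted, indexed expert content.
INDEX = [
    Snippet("onboarding-guide", "New hires must complete the safety briefing in week one."),
    Snippet("qa-playbook", "Release candidates require two sign-offs before deployment."),
]

def retrieve(query: str, top_k: int = 1) -> list[tuple[float, Snippet]]:
    """Score snippets by keyword overlap with the query (a stand-in for real
    vector or hybrid search)."""
    q_terms = set(query.lower().split())
    scored = []
    for snip in INDEX:
        overlap = len(q_terms & set(snip.text.lower().split()))
        scored.append((overlap / max(len(q_terms), 1), snip))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]

def answer(query: str, min_score: float = 0.2) -> str:
    """Only hand retrieved context onward when retrieval is confident;
    otherwise fall back rather than risk a hallucinated answer."""
    hits = retrieve(query)
    if not hits or hits[0][0] < min_score:
        return "No grounded answer found; escalating to a human expert."
    _, snip = hits[0]
    # In a real system, the snippet would be passed to an LLM as context.
    return f"[{snip.source}] {snip.text}"
```

The design point this sketch makes is the one the article attributes to Blockbrain: answer quality is decided largely by what happens before generation, and a deliberate fallback path is what keeps behavior predictable as usage scales.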
Security posture is backed (at least in part) by an external benchmark mentioned in the coverage: a test by Giesecke+Devrient in which Blockbrain reportedly scored 92 out of 105 points, compared with a next-best score of 58 for other enterprise AI solutions. Whether or not a single benchmark settles the broader debate, the intent is clear: Blockbrain wants buyers to see its product as “enterprise-grade AI infrastructure” rather than a lightweight overlay on top of generic assistants.
The €17.5M Series A: who backed it, and what traction looks like
Blockbrain has now raised €17.5M ($18.5M) in Series A funding, with the round led by Alstin Capital and 13books Capital, with participation from HARTING Family Foundation, Giesecke+Devrient Ventures, LBBW Ventures, and Mätch VC. The company plans to use the new capital to accelerate product development, sales, and scaling—particularly across Europe and the UK—while also expanding the team, including forward-deployed AI engineers who can help customers implement and operate AI agents in real environments.
On the commercial side, Blockbrain is described as having quintupled revenue in 2025, and it has added clients such as Bosch, Roland Berger, and Kärcher. The customer base highlighted spans knowledge-intensive and regulated sectors, which is consistent with the platform’s focus on governance, compliance, and controlled deployment patterns rather than “move fast and experiment” usage.
The company was founded in 2022, and the coverage identifies the founding team as including Antonius Gress (CEO, ex-Bosch), Mattias Protzmann (CTO, previously a co-founder of Statista), and Honza Ngo (with VC experience, including Antler and WHU). While funding headlines often focus on the number, the combination of stated traction, regulated-industry customers, and a governance-first posture helps explain why investors are leaning into “knowledge AI” as a category: it targets a persistent enterprise pain point with a clear ROI narrative.
The next phase is also described as product-led expansion: ramping security and compliance, rolling out more specialized research automation agents, exploring broader use cases such as sales, manufacturing, and onboarding, and expanding across the EU and UK. In other words, the company appears to be moving from “prove the model” to “scale distribution and deepen product capabilities,” which is where many enterprise AI vendors either mature into durable platforms or stall under integration and trust requirements.
What this signals for enterprise AI—and why it matters to AI World audiences
This funding story is a reminder that the enterprise AI market is segmenting quickly: alongside general assistants, a growing class of vendors is being built around narrow, high-value outcomes—like preserving institutional knowledge—with clear governance guarantees and implementation support. If Blockbrain’s model continues to work, it suggests “knowledge capture” may become a standard pillar of enterprise AI strategy, similar to how CRM became a default system of record for customer relationships—except here the asset is decision logic and expertise, not just contact history.
It also underscores a practical point many leaders are learning the hard way: the biggest challenge isn’t always model capability; it’s how you extract, structure, retrieve, and operationalize knowledge so that outputs are accurate, traceable, and safe to use in production workflows. That’s why themes like retrieval quality, indexing, fallback mechanisms, and multi-model governance are gaining importance, especially for organizations that must comply with privacy and security obligations while still moving fast.
For readers and builders connected to the ai world organisation, this is exactly the kind of case study that belongs in the broader dialogue around the ai world summit: the shift from “AI adoption” as an experiment to “AI operations” as a disciplined, auditable capability. That lens also fits naturally into ai world organisation events, ai world summit 2025 / 2026 programming conversations, and the global community of ai conferences by ai world—because enterprise teams don’t just want inspiration; they want repeatable playbooks, governance patterns, and proof that the ROI is real.