Potpie Raises $2.2M for AI Code Knowledge Graph
Potpie secures $2.2M in pre-seed AI funding to build a knowledge graph for code, enabling AI agents to operate effectively in complex enterprise systems.
TL;DR
Potpie, a startup that builds a structured knowledge graph connecting code repositories, logs, issue trackers, and docs, has raised $2.2M in pre-seed funding led by Emergent Ventures. The platform helps AI agents actually understand complex enterprise codebases — cutting root-cause analysis from days to minutes — rather than just generating code blindly.
Potpie Raises $2.2M to Build a Knowledge Graph for Code and Deploy Smarter AI Agents
The world of enterprise software development has been evolving at a pace that few tools have genuinely been able to match. As engineering teams grow larger and codebases become more sprawling and interconnected, the gap between what AI agents can theoretically accomplish and what they can practically deliver inside real production environments has remained frustratingly wide. A young but technically ambitious startup called Potpie is setting out to close that gap, not by building yet another AI coding assistant layered on top of a large language model, but by creating the foundational infrastructure that allows AI agents to understand how complex software systems are built, how they behave, and how they change over time. In a notable piece of AI funding news from the enterprise developer tools space, Potpie has announced the close of a $2.2 million pre-seed funding round. The capital will go toward accelerating enterprise deployments, growing the engineering team, and continuing to develop the core context and agent infrastructure that defines the platform's value proposition.
The round was led by Emergent Ventures, with participation from All In Capital, DeVC, and Point One Capital. This AI funding milestone carries broader significance beyond the numbers themselves. It reflects a growing recognition within the venture community that the most critical bottleneck in deploying AI agents inside large engineering organizations is not the raw intelligence of the underlying model—it is the chronic absence of structured, unified, and usable context. Potpie's architecture is designed precisely to solve this problem, by pulling together information from across the entire engineering toolchain and making it accessible to both human engineers and autonomous agents working alongside them.
The Core Problem: Why AI Agents Struggle in Enterprise Codebases
To appreciate why Potpie's work carries such significance, it helps to think carefully about what makes enterprise software development genuinely difficult for AI systems operating in the real world. Unlike most other knowledge workers—designers who spend their days in design tools, finance professionals who live in spreadsheets, or sales teams who rely on customer relationship management platforms—software developers are unique in that their work spans an enormous and fragmented array of tools simultaneously. A single feature might require writing and reviewing code in a version control repository, investigating runtime errors in a logging and monitoring platform, coordinating tasks in a project management system, referencing architecture decisions in a documentation tool, and communicating with colleagues across multiple channels. Every one of these systems holds a fragment of the broader picture, but none of them individually holds the complete story.
The connective tissue that binds all of this information together has historically lived not in any tool but inside the minds of the most experienced engineers on a team. These are the people who know why a particular architectural decision was made several years ago, what a seemingly obscure microservice actually does in the context of the overall system, and how a change in one module will propagate through dozens of downstream dependencies. This institutional knowledge is extraordinarily valuable—and extraordinarily fragile. It cannot be transferred by writing documentation alone, it erodes rapidly during periods of team turnover, and it simply cannot be handed to an AI agent that has no structural understanding of the system it is supposed to be working inside.
This is the structural gap that Potpie was founded to address. Co-founded by Aditi Kothari and Dhiren Mathur, the startup spent close to two years building the foundational infrastructure that allows machines to understand codebases the way seasoned engineers do. Rather than training a specialized model on proprietary code or attempting to improve retrieval performance with better vector search, Potpie takes a fundamentally structural approach: constructing a graph representation of entire software ecosystems that explicitly maps the relationships between functions, classes, services, dependencies, and the broader array of tools that engineering teams rely on every day. The resulting system is what the company calls a knowledge graph for code: a living, continuously updated, structured model of an organization's complete technical environment.
From a 2022 Hypothesis to a Fully Operational Platform
The intellectual foundations of Potpie reach back to early 2022, during the period when the first generation of GPT-3-powered applications was beginning to attract mainstream interest and capture the imagination of investors and founders alike. While the majority of entrepreneurs active at that moment were focused on building AI tools for writers, marketers, business analysts, and other text-centric knowledge workers, Aditi Kothari and Dhiren Mathur were asking a harder and less obviously commercial question: how could generative AI be made genuinely useful for software developers working inside large, complex, production-grade systems?
The challenge they identified was not merely a matter of making AI better at producing syntactically correct code. It was a deeper architectural mismatch between the way large language models process and represent information and the way real software systems are actually structured. Code is not linear. It does not flow from a beginning to an end the way a document or a financial report does. It is deeply hierarchical, richly interconnected across multiple layers of abstraction, and distributed across systems that may have been built, extended, and refactored over years or even decades. A language model that has learned from vast quantities of text is fundamentally ill-equipped to reason about a 40-million-line enterprise codebase with the same fluency and confidence that a principal engineer who has spent five years working deeply inside that system can bring to bear.
Rather than trying to patch over this limitation through clever prompting or better retrieval heuristics, the founding team chose to build real, durable infrastructure. Beginning in October 2023, Kothari and Mathur spent roughly 22 months of focused research and development constructing the underlying architecture that would eventually become Potpie's commercial product. The company launched publicly in January 2025 and has since moved quickly to establish enterprise customer relationships, build genuine community momentum through open-source contributions, and, with this latest round of AI funding, secure the resources needed to execute on its broader ambitions.
Emergent Ventures' decision to lead the round speaks directly to the credibility of both the founding team and their technical approach. Anupam Rastogi, Managing Partner at Emergent Ventures, articulated the investment rationale by pointing out that in large enterprises, the real challenge of using AI is not generating code; it is understanding a system deeply enough to change it safely and confidently. Potpie's ontology-first architecture, combined with its rigorous approach to context curation and spec-driven agent behavior, creates a structured and continuously maintained model of an entire engineering ecosystem. This enables AI agents to reason across services, dependencies, issue tickets, and live production signals with the clarity of a senior engineer, a capability suited to high-stakes tasks such as complex root-cause analysis, impact assessment before major changes, and risky feature work inside codebases exceeding 50 million lines of code.
How the Knowledge Graph for Code Operates in Practice
At its technical core, Potpie functions by ingesting data from across the full spectrum of tools that engineering teams use—code repositories, logging and observability platforms, issue trackers, documentation systems, API registries, and more—and linking that information together into a coherent, structured, and queryable knowledge graph. Critically, this graph does not merely store or index raw data. It actively infers behavior, maps relationships between components, and surfaces the kind of deep structural understanding that would normally take years of immersion in a specific system to develop organically.
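Conceptually, the linking step described above can be sketched as a typed graph whose nodes are artifacts drawn from different tools and whose labeled edges record the relationships between them. The following is a minimal, hypothetical illustration of that idea, not Potpie's actual implementation; the node types, edge labels, and the `downstream` traversal are invented for this sketch.

```python
from collections import defaultdict, deque

class CodeKnowledgeGraph:
    """Hypothetical sketch: typed nodes for artifacts from different
    tools, labeled edges for the relationships between them."""

    def __init__(self):
        self.nodes = {}                 # node id -> node type
        self.edges = defaultdict(list)  # src id -> [(label, dst id)]

    def add_node(self, node_id, node_type):
        self.nodes[node_id] = node_type

    def add_edge(self, src, label, dst):
        self.edges[src].append((label, dst))

    def downstream(self, start):
        """Everything reachable from `start`: the artifacts a change
        to `start` could plausibly affect, found by breadth-first
        traversal instead of grepping across disconnected tools."""
        seen, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            for _, dst in self.edges[node]:
                if dst not in seen:
                    seen.add(dst)
                    queue.append(dst)
        return seen

# Illustrative artifacts spanning code, deployment, and issue tracking.
g = CodeKnowledgeGraph()
g.add_node("auth.validate_token", "function")
g.add_node("billing.charge", "function")
g.add_node("billing-service", "service")
g.add_node("JIRA-142", "ticket")
g.add_edge("billing.charge", "calls", "auth.validate_token")
g.add_edge("billing.charge", "deployed_in", "billing-service")
g.add_edge("billing-service", "tracked_by", "JIRA-142")

print(sorted(g.downstream("billing.charge")))
# -> ['JIRA-142', 'auth.validate_token', 'billing-service']
```

The point of the sketch is the shape of the data, not the traversal: once functions, services, and tickets live in one structure, a question like "what does this change touch?" becomes a graph query rather than a search across four separate tools.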
One of the most consequential aspects of this architecture is that it is not a static snapshot—it is a continuously evolving representation that updates itself as the underlying codebase and its associated tools change. When a pull request is opened, Potpie automatically updates the relevant documentation and the tickets associated with the work being done. When a new development cycle begins, it generates system designs that accurately reflect the current state of the architecture. It writes release notes. It maintains what the company calls Agent.md files—structured, versioned documents that define precisely how AI agents should behave when operating inside a given codebase. And it maintains a richly tagged, fully searchable index across APIs, services, and databases that dramatically compresses the search space any agent needs to navigate to perform its work effectively.
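The article does not show what an Agent.md file actually contains, so the following is a purely hypothetical illustration of the concept: a versioned, structured document that tells agents how to behave inside a specific codebase. Every section name and rule below is invented for illustration.

```markdown
# Agent.md — billing-service (hypothetical example)
version: 3

## Scope
- May modify: src/billing/**, tests/billing/**
- Must not modify: src/auth/**, infrastructure manifests

## Required checks before opening a PR
1. Run the billing unit test suite.
2. Look up downstream dependents of any changed public function
   in the knowledge graph and list them in the PR description.

## Conventions
- All currency amounts are integer cents, never floats.
- New endpoints must be registered in the API index.
```

Because such a file is versioned alongside the code, agent behavior can be reviewed and evolved through the same pull-request workflow as the code itself.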
The measurable impact of this approach in real-world enterprise deployments is substantial. One customer operating a codebase of approximately 40 million lines reduced root-cause analysis time from close to a week to roughly 30 minutes after deploying Potpie across their engineering organization. Another customer, working with legacy systems deeply integrated with hardware infrastructure over several decades, used the platform to generate and continuously update their test suites in the background, compressing multiple sprints' worth of effort into a fraction of the time those tasks had previously required. These are not incremental efficiency improvements; they represent a qualitative shift in what engineering teams can accomplish and how confidently they can operate inside complex, high-stakes systems.
AI Funding and the Infrastructure Layer That Enterprises Actually Need
The AI funding news around Potpie is part of a larger and increasingly visible trend in how sophisticated investors and enterprise technology buyers are beginning to evaluate opportunities in the artificial intelligence space. For the better part of the past several years, the lion's share of capital flowing into AI companies has been directed toward model development, foundation model training at massive scale, and application-layer tools designed to expose model capabilities through consumer-friendly interfaces. That investment wave has been genuinely important, but it has also progressively exposed a structural gap: the models are extraordinarily capable in controlled conditions, but they consistently struggle to operate effectively inside the kinds of complex, multi-tool, deeply historical, and constantly changing environments that real enterprises actually depend on.
This is precisely the moment in which infrastructure-layer startups like Potpie are finding their opening. Rather than competing on model quality—a contest that demands billions of dollars and is increasingly dominated by a small number of extremely well-resourced incumbents—Potpie competes on context quality. The underlying argument is both simple and compelling: an AI agent is only as useful as the information it can access and the tools it can use effectively. By owning the layer that manages both the quality of context and the behavior of agents within that context, Potpie positions itself as a foundational piece of the enterprise AI stack rather than a replaceable application running on top of someone else's infrastructure.
This distinction carries significant implications from both a product design and a business model perspective. Potpie is not selling a developer tool that helps individual engineers write slightly better code or debug marginally faster. It is building infrastructure that becomes more deeply integrated and more strategically valuable as the engineering environment grows in complexity and as AI agents become more capable. Every additional tool integrated into the Potpie knowledge graph makes the graph richer and more powerful. Every additional agent that relies on Potpie's context layer deepens the platform's integration into the customer's core workflow. This is the kind of compounding value—where the product improves as usage scales—that enterprise software investors recognize as the foundation of durable, defensible, and ultimately very large businesses.
Building for a Future Where AI Is a Native Engineering Participant
Potpie currently operates with a team of 12 and aims to grow to approximately 18 people by the end of the year. The capital raised in this pre-seed AI funding round will be allocated across three areas: accelerating enterprise customer deployments, expanding the engineering team with people who can advance the platform's core capabilities, and deepening the context and agent infrastructure at the center of everything Potpie does.
The community traction the company has built in parallel with its commercial development adds an important dimension to its enterprise story. Crossing 5,000 GitHub stars on open-source projects is a meaningful signal in the developer tooling space—it indicates that engineers across the broader community are actively engaging with the platform, testing it in their own environments, and finding it valuable enough to recommend to colleagues and collaborators. In the world of developer infrastructure, open-source adoption frequently precedes and meaningfully shapes enterprise sales cycles, and Potpie's trajectory here suggests that its commercial pipeline is benefiting from genuine grassroots enthusiasm rather than purely top-down marketing.
Looking further ahead, the vision that Potpie's leadership has articulated extends well beyond being an AI efficiency tool for engineering departments. The long-term ambition is to become the foundational layer that engineering organizations of all sizes rely on to build, operate, and continuously evolve complex software systems—with AI functioning not as an experiment or a productivity add-on but as a fully integrated, first-class participant in every phase of how software gets made. Aditi Kothari has framed this vision by drawing a distinction between tools that handle straightforward, well-defined tasks and platforms that are capable of taking on genuinely non-trivial, open-ended problems. In her framing, the former category is where most developer tools currently live. Potpie is deliberately building toward the latter—a platform with the depth, the coverage, and the structural sophistication to set standards and support the creation and maintenance of enterprise-grade systems within a single, unified environment.
As AI funding continues to flow into companies tackling the hardest problems in enterprise technology, Potpie represents exactly the kind of infrastructure bet that has historically proven most valuable over long time horizons. The $2.2 million pre-seed is an early chapter in what promises to be a significant story in the evolution of how large organizations build and operate software with artificial intelligence at the center. At The AI World, we will continue following this space closely as the intersection of AI agents, enterprise codebases, and structured context intelligence reshapes what it means for a software team to operate at the frontier.