Tower Raises $6.4M to Fix AI Data Pipeline Gaps
TL;DR
Tower, a Vienna-based startup, has raised $6.4M in AI funding to tackle the "last-mile" problem: the gap between AI-generated code and actually getting it to run in production. Built on open standards, its platform lets data teams deploy Claude-powered pipelines reliably without wrestling with legacy cloud tools like Snowflake or Databricks.
The world of data engineering has been undergoing a quiet but profound transformation over the past few years. The arrival of powerful AI coding assistants, particularly Anthropic's Claude, has dramatically lowered the barrier to building complex data applications. What once required months of specialised expertise can now be prototyped in hours. Yet, for all the excitement surrounding AI-assisted development, a critical and largely overlooked gap has persisted right at the final stage of the data pipeline — the point where AI-generated code must transition from concept to a reliable, production-ready system. This gap, increasingly referred to in engineering circles as the "last-mile" problem, has proven to be a stubborn bottleneck that neither legacy cloud platforms nor modern AI coding tools have managed to adequately address. That is precisely the challenge that Tower, a Vienna-based data infrastructure startup, has set out to solve — and it has just secured $6.4 million in AI funding to accelerate that mission.
The latest AI funding news coming from Europe's growing deep-tech ecosystem confirms that investors are paying close attention to foundational infrastructure plays, particularly those that sit at the intersection of artificial intelligence and enterprise data operations. Tower's successful raise, which spans both pre-seed and seed rounds, signals growing confidence in the idea that the future of data engineering won't just be about writing smarter code — it will be about building smarter systems that make that code run reliably, at scale, and without constant human intervention. For organisations that have already invested heavily in AI-assisted tooling, Tower's emergence couldn't be more timely.
The Last-Mile Problem That Was Holding Data Teams Back
To fully appreciate what Tower is building, it helps to understand the "last-mile" concept in the context of data engineering. When a data engineer uses an AI coding assistant like Claude to build a data pipeline or application, the coding phase itself has become remarkably smooth. AI can generate sophisticated logic, handle complex transformations, and even suggest architecture decisions — all within minutes. But what happens next? Where does that AI-generated application actually run? How is it tested, debugged, deployed, and maintained in a live production environment? These questions, deceptively simple on the surface, expose a gaping hole in the current data infrastructure landscape.
The legacy cloud platforms that dominate enterprise data operations today — think Snowflake, Databricks, and Microsoft Fabric — were designed for a world where data pipelines were built by specialised engineers following well-trodden workflows. These platforms were not designed to serve as the production runtime for AI-generated code that may be authored in minutes by someone with limited traditional engineering experience. Similarly, tools like GitHub Copilot and Cursor are powerful for writing code, but they stop short of providing the infrastructure layer needed to deploy, run, and iterate on that code in the real world. The result is a frustrating paradox: AI has made building data applications easier than ever, but getting those applications to work reliably in production remains just as hard as it has always been. This is the last-mile problem in full view — and it represents one of the most significant unsolved gaps in modern data engineering.
Serhii Sokolenko, co-founder of Tower, articulated the issue clearly when he noted that the tooling for data engineers had changed dramatically with the open-source movement and the rise of Python-native development, but the surrounding infrastructure had failed to keep pace. The platforms powering data workloads at scale — Snowflake, Databricks, and their peers — remained largely unchanged, still optimised for an earlier era of big data architecture that prioritised complexity and scalability over simplicity and developer experience. When AI-assisted coding entered the picture and made building data applications accessible to an even wider audience, the disconnect became impossible to ignore.
How Tower Is Redefining Data Infrastructure for the AI Era
Tower's approach to solving this problem is both pragmatic and architecturally elegant. Rather than layering AI capabilities on top of an existing data warehouse or analytics platform, Tower was built from the ground up as a unified environment that combines storage, compute, and collaboration into a single, developer-friendly platform. At the core of Tower's architecture is the Apache Iceberg open table format, a widely adopted standard that enables seamless interoperability with platforms like Snowflake, Databricks, and others. By anchoring its storage layer on Iceberg, Tower ensures that data teams retain full ownership of their data, independent of any single vendor's proprietary formats or lock-in strategies.
One of the most distinctive features of Tower is how it manages the relationship between AI agents and human engineers. Traditional data platforms treat AI as an external tool that can be used to write queries or generate code, but the execution and deployment of that code still falls largely on human engineers navigating complex infrastructure setups. Tower inverts this dynamic by making AI-agent collaboration a native feature of the platform itself. The system provides AI with access to fresh, company-specific data in real time, which dramatically reduces the hallucination and error rates that often plague AI-generated data applications when they're run against stale or incomplete datasets.
In terms of performance and cost, Tower's architecture offers significant advantages over its legacy counterparts. Its multi-tenant execution engine is optimised for rapid iteration, allowing data teams to test and deploy changes at a pace that simply isn't possible with traditional cloud data warehouse setups. Sokolenko has noted that Tower is not only easier to use and better integrated with the tools developers already rely on, but also substantially more cost-effective due to its simpler, open-standards-based architecture. For startups and growing businesses that are looking to build sophisticated data capabilities without the overhead of maintaining a Snowflake or Databricks deployment, this combination of simplicity, openness, and affordability is genuinely compelling.
The platform's traction since launch also speaks for itself. Tower has already processed over 200,000 jobs across more than 30,000 applications and has recorded over 70,000 downloads of its Python SDK. These numbers, achieved in a relatively short time since the company's founding in late 2024, reflect both the genuine unmet demand in the market and the platform's ability to deliver immediate value to data engineers who are ready to move beyond the limitations of legacy infrastructure.
Breaking Down the $6.4M AI Funding Round and Key Investors
The AI funding news around Tower's raise offers an interesting window into how European venture capital is evolving in response to the AI infrastructure boom. The total $6.4 million came in two tranches — a pre-seed round led by DIG Ventures, followed by a seed round led by Speedinvest, one of Europe's most active early-stage technology investors. Both firms have strong track records in identifying and backing transformative infrastructure plays across the continent, and their involvement lends considerable credibility to Tower's mission and market thesis.
Beyond the lead investors, the round drew participation from a diverse and strategically valuable group of backers. Flyer One Ventures, Roosh Ventures, Celero Ventures, and Angel Invest all contributed to the round, alongside notable angel investors including Jordan Tigani, the founder of MotherDuck and a former Google BigQuery lead, and Olivier Pomel, the co-founder and CEO of Datadog. The presence of these two angels is particularly significant. Both have deep experience building and scaling data infrastructure companies, and their decision to back Tower suggests that practitioners who understand the space at the deepest level see something genuinely differentiated in what Tower is building.
This latest AI funding news arrives at a moment when investor interest in data infrastructure — specifically the kind that enables AI agents to operate reliably in production environments — is accelerating rapidly. As enterprises across every industry move from experimenting with AI tools to deploying them in mission-critical workflows, the demand for robust, developer-friendly infrastructure that bridges the gap between AI-generated code and production-grade systems is only going to grow. Tower's positioning as the go-to last-mile platform for data teams working with AI coding assistants places it squarely in the path of this demand.
The funding will be deployed primarily to expand Tower's go-to-market capabilities, bring more enterprise customers onto the platform, and continue enhancing the product for Vertical AI and SaaS use cases. Given the company's early traction and the calibre of investors it has attracted, the next 12 to 18 months are likely to be defining ones for the company's growth trajectory.
A New Kind of Data Engineer Is Emerging — And Tower Is Built for Them
Perhaps the most fascinating dimension of Tower's story is what it reveals about the changing nature of data engineering as a profession and a practice. For years, the field was gatekept by complexity — building a reliable data pipeline required deep knowledge of distributed systems, cloud infrastructure, SQL optimisation, and a dozen other specialised disciplines. The entry point was high, and the tools reflected that. Snowflake, Databricks, and similar platforms were powerful, but they were designed for teams of specialists who would spend significant time configuring, optimising, and maintaining their infrastructure.
The rise of AI coding assistants has begun to tear down that gatekeeping. As Sokolenko observed, a new generation of users is now building data pipelines, AI agents, and interactive dashboards — people who would have had no meaningful way to engage with these workflows even a year ago. Business founders, marketing managers, product managers, and analysts are increasingly picking up Python and using tools like Claude to build sophisticated data applications without ever having worked in data engineering before. This democratisation of data development is genuinely exciting, but it also creates a new set of challenges. These non-traditional users are even less equipped to navigate the complexities of legacy production infrastructure than experienced data engineers are, which means the last-mile gap becomes an even bigger obstacle for them.
Tower addresses this directly by designing its platform around simplicity and accessibility without sacrificing the robustness that production workloads require. The combination of a Python-native SDK, Iceberg-based open storage, and AI-agent-friendly execution makes Tower the rare platform that can serve both experienced data engineers looking to move faster and newer users who are building data applications for the very first time. This broad accessibility, combined with the platform's focus on production reliability, positions Tower as something genuinely new in the market — not just another data tool, but a foundational layer for the next era of data-driven application development.
The team behind Tower reflects this global, cross-disciplinary ambition. With over 12 nationalities represented among its staff — spanning countries including Greece, Turkey, Sri Lanka, the United States, Canada, and the United Kingdom — Tower brings a diversity of perspectives to the problem it is solving. That breadth of experience is not incidental; building infrastructure that works across different organisational contexts, engineering cultures, and technical environments requires exactly this kind of diverse thinking.
Tower's Vision: Where Humans and AI Agents Work as One
Looking beyond the immediate funding round and the near-term product roadmap, Tower's long-term vision is both ambitious and philosophically interesting. The company is not trying to replace data engineers with AI, nor is it simply trying to make existing engineering workflows marginally more efficient. Instead, Tower is working towards a fundamentally different model of how data systems get built and maintained — one in which human engineers and AI agents collaborate as equal participants within a shared environment, each contributing what they do best.
In this vision, the AI agent handles the generation of code, the exploration of data structures, and the rapid prototyping of new pipelines, while the human engineer provides strategic direction, validates outputs, and handles the kinds of nuanced judgment calls that AI is not yet equipped to make reliably on its own. Tower's platform provides the shared workspace where both can operate simultaneously, with real-time data access, consistent execution environments, and the collaborative tooling needed to turn AI-generated ideas into dependable production systems.
This is, ultimately, what the AI funding ecosystem needs more of right now — not just more tools for generating content or automating tasks, but foundational infrastructure that makes AI genuinely useful in complex, high-stakes operational contexts. Tower is building that infrastructure for the data engineering world, and its early traction suggests that the market is ready for it. As AI continues to transform how software is built and how businesses operate, platforms like Tower that focus on the unglamorous but essential work of making AI outputs production-ready will become increasingly central to the technology stack of every data-driven organisation. The $6.4 million raised today is just the beginning of what promises to be a significant chapter in the evolution of AI-native data infrastructure.