
Lab-Grown Neurons for AI: TBC Raises $25M
The Biological Computing Co. says living-neuron compute can boost AI efficiency. What the $25M round means and why it matters for AI World Summit 2026.
TL;DR
The Biological Computing Co. (TBC) raised $25M in seed funding led by Primary Ventures to commercialize “biological compute” that links living neurons with AI models. The system encodes images and video into neuron cultures and decodes their activity into useful representations. TBC is also opening a San Francisco lab and aims to launch hybrid neuro-silicon cloud clusters in 2027.
The Biological Computing Co. (TBC) says it has raised $25 million in seed funding and is commercializing a biological computing platform that integrates living neurons with modern AI workflows, targeting computer vision and generative AI use cases.
A $25M seed round betting on “post-silicon” AI
TBC’s announcement ties two moves together: a $25 million seed round led by Primary Ventures and the commercial launch of what it describes as a biological computing platform aimed at computer vision and generative AI workloads. The company says its approach is designed to complement (not simply replace) today’s silicon-heavy stacks by connecting living neurons with modern AI infrastructure so that frontier systems become more stable, scalable, and efficient while compute costs come down. TBC also notes it previously operated under the name Biological Black Box, and it is positioning this launch as a shift from research narrative to deployable infrastructure.
A major part of the story is the “why now” argument: TBC’s CEO Alex Ksendsovsky frames the company as emerging at the intersection of rapid progress in neuroscience, mounting constraints on current AI approaches, and worsening climate and energy pressures that make brute-force scaling harder to justify over time. In that framing, the core challenge isn’t that large models don’t work; it’s that the dominant recipe (scale up, optimize repeatedly, then scale again) gets exponentially more expensive, because each performance step-up demands more hardware, more power, and more capital. This is the kind of debate that keeps coming up in enterprise AI roadmaps, and it’s exactly the kind of “compute meets capability” tension we track at the ai world organisation as we curate themes for the ai world summit and broader ai world organisation events.
TBC’s investor messaging reinforces that this isn’t being pitched as a novelty project, but as an alternative architecture thesis: Primary’s Brian Schechter and Gaby Lorenzi argue silicon has taken AI far, yet the next breakthrough may require different compute primitives, with biological computing singled out as a credible path for step-change gains in demanding workloads such as computer vision and world models. That “world model” reference matters because it hints at ambitions beyond narrow demos—toward systems that maintain coherence over time, integrate perception and memory, and support longer-horizon reasoning in dynamic environments. For builders and buyers watching the infrastructure market, the company also points at the sheer scale of spend behind the status quo, citing a forecast that traditional AI infrastructure markets dominated by large GPU clusters could reach $1.7 trillion by 2030.
How living neurons fit into an AI pipeline
TBC’s technical pitch, as described by the company, is not “a brain in a box runs your app,” but a pipeline where real-world inputs are encoded into neuronal cultures and the resulting neural activity is decoded into representations that can be mapped onto frontier AI models. Specifically, TBC says its neuroscience and engineering team can encode data such as text, images, and video into living neurons, then decode neural activity into rich representations that connect to state-of-the-art models via modular adapters. In parallel, the company describes a second track, an “Inspired Compute” or “Algorithm Discovery” effort, where biologically derived principles inform new system designs, creating a new layer of compute intended to strengthen existing architectures rather than replace them outright.
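TBC has not published an SDK or any interface details, so the shape of such a pipeline can only be guessed at. Purely as an illustration of the encode → culture → decode loop described above, a modular adapter might look something like the sketch below; the class and method names are invented, and the living culture is replaced by a noisy random projection standing in for recorded neural activity:

```python
import numpy as np

# Hypothetical sketch only: TBC has not published an API. The names
# (BioAdapter, encode, decode) are invented for illustration, and the
# "culture" here is a fixed random projection, not real neurons.

class BioAdapter:
    """Maps model-space tensors to stimulation patterns and back."""

    def __init__(self, input_dim: int, culture_dim: int, feature_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.to_culture = rng.standard_normal((input_dim, culture_dim)) / np.sqrt(input_dim)
        self.from_culture = rng.standard_normal((culture_dim, feature_dim)) / np.sqrt(culture_dim)

    def encode(self, x: np.ndarray) -> np.ndarray:
        # In a real system this step would drive electrode stimulation;
        # here it is just a bounded linear projection into "culture space".
        return np.tanh(x @ self.to_culture)

    def culture_response(self, stim: np.ndarray) -> np.ndarray:
        # Stand-in for recorded neural activity: identity plus noise.
        return stim + 0.01 * np.random.default_rng(1).standard_normal(stim.shape)

    def decode(self, activity: np.ndarray) -> np.ndarray:
        # Decoded representation a downstream foundation model would consume.
        return activity @ self.from_culture

adapter = BioAdapter(input_dim=64, culture_dim=256, feature_dim=32)
frame = np.random.default_rng(2).standard_normal((1, 64))  # e.g. a flattened image patch
features = adapter.decode(adapter.culture_response(adapter.encode(frame)))
print(features.shape)  # (1, 32)
```

The design point the sketch tries to capture is the one TBC itself emphasizes: the biological element sits behind a narrow encode/decode interface, so downstream ML tooling only ever sees ordinary tensors.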
Translated into plain operational language, the promise is a hybrid approach: keep the best of modern ML engineering (foundation models, adapters, and production toolchains), but introduce a biological “processing layer” that may extract patterns differently, learn continuously, or represent information more efficiently than purely silicon-based methods. TBC’s leadership argues this can reduce the compute required to produce outputs by leveraging natural neuronal dynamics, and it claims neurons can support continuous learning and improved memory with less training than rigid, stateless silicon-only approaches. It also argues neurons can help with high-dimensional pattern extraction, enabling better generalization from fewer examples while consuming a fraction of the power used by traditional chips. These are bold claims, and they should be treated as hypotheses until reproducible results and scaled deployments validate them; they are nonetheless gaining attention precisely because the AI industry is running into practical constraints around cost, energy, and iteration speed.
TBC also links this to product-readiness through concrete deployment language: the company says it is opening a flagship lab in San Francisco’s Mission Bay to support customer deployment, implying it intends to move beyond internal experimentation toward external usage, evaluation, and iteration with partners. From a market adoption perspective, “customer deployment” is where reality shows up: uptime requirements, repeatability, service-level expectations, and integration complexity tend to expose the gaps between a compelling architecture and a production-grade platform. That’s why the AI World community—through ai conferences by ai world and the ai world summit—typically scrutinizes not just model demos, but the stack’s operational story: tooling, monitoring, safety, governance, and total cost of ownership at scale.
What TBC says it can do today
Alongside the platform narrative, TBC points to early proof points that are meant to feel familiar to ML practitioners who live in today’s ecosystems. The company says it has already applied its technology to enhance AI models in multiple ways, including “VAE efficiency adapters” that improve reconstruction quality and representation efficiency in frontier models. It also claims it has demonstrated improvements in long-horizon coherence for generative AI video and “world models,” which is a meaningful target because coherence across longer sequences is still a major friction point in video generation and simulation-like applications.
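The announcement does not explain what a “VAE efficiency adapter” actually does. One conservative reading is a small learned module that compresses a model’s latent space while preserving reconstruction quality. The toy below illustrates that general idea with a PCA-style low-rank projection on synthetic latents; it is a stand-in for the concept, not TBC’s method, and all dimensions and data are invented:

```python
import numpy as np

# Toy illustration only: a low-rank "efficiency adapter" on a latent
# space, used as a stand-in for whatever TBC's adapters actually do.
rng = np.random.default_rng(0)

# Fake latents: 512-dim vectors that really live on a 16-dim subspace.
basis = rng.standard_normal((16, 512))
latents = rng.standard_normal((1000, 16)) @ basis

# "Train" the adapter: top-16 principal directions of the latents.
_, _, vt = np.linalg.svd(latents - latents.mean(axis=0), full_matrices=False)
adapter = vt[:16]                      # (16, 512) down-projection

compressed = latents @ adapter.T       # 512 -> 16 dims (32x smaller)
restored = compressed @ adapter        # back up to 512 dims

# Near zero here, because the synthetic latents lie exactly in the subspace.
err = np.linalg.norm(restored - latents) / np.linalg.norm(latents)
print(f"relative reconstruction error: {err:.6f}")
```

In practice the interesting claim is exactly the part this toy cannot show: whether a biological layer finds a more efficient representation than a linear projection would on real model latents.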
Importantly, the company’s description suggests a strategy that doesn’t require developers to abandon current foundation models overnight. Instead, it positions biological compute as something that can integrate “directly with foundation models” to improve performance and reduce compute cost, which implies a pragmatic adoption path: hybrid experimentation, targeted workloads, and incremental integration where benefits can be measured. That kind of approach is often how enterprise adoption actually happens—one workload at a time, in narrow slices where performance-per-watt or cost-per-inference can be improved without rewriting the whole architecture.
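None of TBC’s efficiency numbers are public, so any economics here are hypothetical, but the evaluation logic a buyer would apply to one narrow workload is simple arithmetic: compare cost per inference on the incumbent stack against the hybrid stack. Every figure below is invented for illustration:

```python
# Hypothetical cost-per-inference comparison. Every number below is
# invented for illustration and does not come from TBC or any vendor.

def cost_per_1k_inferences(power_watts, seconds_per_inference,
                           usd_per_kwh, hourly_instance_usd):
    """Energy cost plus amortized instance cost for 1,000 inferences."""
    hours = 1000 * seconds_per_inference / 3600
    energy_kwh = power_watts * hours / 1000
    return energy_kwh * usd_per_kwh + hours * hourly_instance_usd

# Baseline GPU instance vs. a (hypothetical) hybrid neuro-silicon one.
# Whether the hybrid wins depends entirely on these made-up inputs.
baseline = cost_per_1k_inferences(700, 0.05, 0.12, 4.00)
hybrid = cost_per_1k_inferences(150, 0.05, 0.12, 3.50)

print(f"baseline ${baseline:.4f}  hybrid ${hybrid:.4f} per 1k inferences")
```

The takeaway is the shape of the decision, not the numbers: lower power draw only wins if the hybrid instance price and throughput do not eat the savings, which is why per-workload measurement matters more than headline efficiency claims.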
There’s also a broader narrative of credibility building via recognized voices. In the announcement materials, investor and industry quotes frame TBC as pursuing a plausible “north star,” with Scott Belsky describing brain-like compute as a promising direction, and Tim Gardner emphasizing that TBC is using living neuronal cultures to discover learning rules for next-generation AI rather than merely borrowing metaphors. These quotes don’t prove the technical claims, but they do show how the company wants to be categorized: not as speculative sci-fi, but as a serious attempt to open a new compute category.
For the ai world organisation audience, this is a prime example of why “AI infrastructure” is no longer just about faster GPUs or better clusters. The infrastructure conversation now includes alternative architectures, new compute substrates, and hybrid stacks that blur the lines between hardware innovation, ML system design, and even biology. That is the kind of cross-disciplinary theme that fits naturally into the agenda of ai world summit 2025 / 2026, where founders, researchers, enterprise buyers, and policymakers can evaluate what’s real, what’s premature, and what the next two-to-five years of capability might look like.
The scaling question: from lab demo to cloud economics
Even supporters of the idea acknowledge that scaling is the hard part. One industry analyst, Holger Mueller, argues enterprises are spending billions on infrastructure and any idea that reduces those costs will draw interest, but he also cautions the company must prove not only that the approach works, but that it can scale to the kinds of AI applications used today. That’s the key test for every alternative compute platform: it’s not enough to show a one-off improvement, because production AI is constrained by throughput, reliability, supply chain, maintainability, and the boring-but-decisive realities of operations.
TBC’s own roadmap implicitly recognizes this by pointing to hybrid deployment rather than immediate replacement. The company’s plan, as described, is to disrupt the GPU-cluster-dominated AI infrastructure market by launching hybrid neuro-silicon clusters designed for cloud environments, with a target launch date of 2027. “Hybrid neuro-silicon” is a revealing phrase: it suggests the near-term goal is not to throw out conventional compute, but to pair biological elements with silicon in a way that yields measurable efficiency or capability gains. That approach also aligns with how the cloud market tends to adopt new hardware: as specialized instances or accelerators for certain workloads, introduced carefully and priced based on demonstrated performance benefits.
From an engineering perspective, scaling a biological substrate introduces questions that are different from conventional compute scaling. You don’t just rack more servers; you must ensure consistent behavior, longevity, reproducibility, and quality control of the biological component under operational constraints. You also need an interface layer that ML teams can use without becoming biologists, which is why TBC’s emphasis on modular adapters and integration with existing models is not a detail—it’s central to whether adoption is even plausible. In other words, the “product” isn’t only the neurons; it’s the full abstraction that makes those neurons usable by normal ML workflows.
This is where community ecosystems matter. The ai world organisation exists to convene exactly these debates—what it takes to operationalize frontier ideas responsibly, how buyers evaluate risk, and how builders think about adoption curves—across the ai world summit and other ai world organisation events. If biological compute is going to be a category, it will need shared benchmarks, transparent evaluation norms, and open discussion about failure modes and ethics, not just promotional claims. That’s also why event programming at ai conferences by ai world increasingly benefits from multidisciplinary panels: hardware, ML engineering, neuroscience, ethics, and enterprise procurement in the same room.
Why this matters for AI World Summit 2025 / 2026
In the near term, TBC’s story matters because it signals a rising willingness—by founders and investors—to challenge the assumption that AI progress must be chained to silicon scaling alone. The company is explicitly arguing that compute innovation may come from “what comes after silicon,” and it is backing that with a commercialization narrative, a dedicated lab for deployment support, and a roadmap toward cloud-oriented hybrid clusters by 2027. Whether or not TBC ultimately delivers on its strongest claims, the direction of travel is clear: AI infrastructure discussions are broadening into alternative architectures, energy constraints, and new paths to stability and efficiency.
For the ai world organisation, the practical editorial angle is straightforward: enterprise leaders and builders should pay attention to any credible attempt to reduce compute cost and energy usage while improving stability and scalability, because those factors are increasingly deciding who can ship AI features profitably. At the same time, the community should maintain healthy skepticism and insist on evidence—repeatable benchmarks, transparency on methodology, and clear articulation of what workloads benefit and which don’t. This is exactly the kind of conversation we aim to host at the ai world summit and across ai world organisation events: separating hype from signal, and translating deep tech into decisions that CTOs, product leaders, and research teams can act on.