
Adaption Labs raises $50M for adaptive AI
Adaption Labs raises $50M to build adaptive AI that learns during use—cutting retraining and prompt overhead. What it signals for enterprise teams.
TL;DR
Adaption Labs, founded by ex-Cohere research leader Sara Hooker and Sudip Roy, raised a $50M seed led by Emergence to build cheaper, more adaptive AI. The team is exploring inference-time techniques like gradient-free learning, adapters, and dynamic decoding so models can adjust to tasks without constant fine-tuning, prompt rewrites, or runaway compute bills.
A newly launched AI startup called Adaption Labs has raised a $50 million seed round, positioning itself as a direct challenge to the industry’s “bigger model equals better model” reflex and betting instead on systems that adapt in real time. The company is co-founded by AI researcher Sara Hooker—formerly VP of research at Cohere and previously part of Google DeepMind—and Sudip Roy, who previously led inference computing at Cohere, with a shared focus on making advanced AI cheaper to run and easier to tailor to real business tasks.
This is exactly the kind of shift we track closely at the ai world organisation: not just who raised money, but what technical and economic assumptions they’re trying to overturn, and what that means for builders, operators, marketers, founders, and enterprise teams. In the run-up to the ai world summit and across ai world organisation events, we see a consistent theme: teams want AI that is useful in messy, changing environments—where requirements evolve daily—without a new “model project” every time something changes.
A key promise from Adaption Labs is straightforward but ambitious: build AI systems that can learn continuously without costly retraining or fine-tuning, and without the heavy prompt and context “gymnastics” that many companies rely on today to make generic models behave like domain experts. Hooker frames this as one of the biggest unsolved challenges in AI, and the startup’s product direction reflects a belief that the next performance leap won’t come simply from scaling up training runs, but from making models and systems more adaptive at the moment they’re being used.
A seed round that challenges “bigger is better”
Adaption Labs’ $50 million seed financing is being led by Emergence Capital Partners, with participation from Mozilla Ventures, Fifty Years, Threshold Ventures, Alpha Intelligence Capital, E14 Fund, and Neo. The company is based in San Francisco. The startup declined to share its valuation following the fundraise, which is common for early rounds but still noteworthy given how competitive AI funding has become.
What makes this round stand out is less the number and more the thesis: Hooker has long argued that raw scale is hitting diminishing returns, and she has described a broader “reckoning point” where progress can’t rely solely on quadrupling model size year after year. In plain terms, the pitch is that the industry has poured enormous resources into pretraining massive models, but many real deployments are being throttled by cost, latency, customization overhead, and brittleness when models or prompts change.
That “deployment reality” is where this story connects directly to what we explore at the ai world summit. Most organizations don’t get value from AI because they own the biggest model; they get value when the system reliably solves the next ticket, the next customer query, the next marketing brief, the next analytics ask—quickly, safely, and within budget. Adaption Labs is effectively saying: the next frontier advantage will come from systems that adapt to tasks and users, not systems that are merely larger on paper.
Why continuous learning is the hard part
In AI research circles, “continuous learning” refers to systems that can keep improving or adjusting as new data, new tasks, and new user behaviors appear—without requiring a full retraining cycle that takes weeks or months and burns massive compute. Today’s dominant approach still tends to lock a model’s internal parameters after training, and any meaningful change often means expensive fine-tuning, careful dataset curation, and repeated evaluation cycles to avoid regressions.
For enterprises, that creates a familiar frustration: by the time you’ve adapted a model to how your teams actually work, the underlying base model may have updated, your prompts may have drifted, your products may have changed, and your compliance or brand requirements may have tightened. Adaption Labs is explicitly targeting this pain by trying to reduce reliance on retraining, fine-tuning, and constant prompt rewrites.
The company isn’t alone in believing this is where the next breakthroughs must happen. The broader “neolab” movement—new frontier AI labs emerging after the successes of older leaders—includes other efforts aimed at cracking continuous learning via new architectures and training approaches. In the same ecosystem, a senior OpenAI researcher, Jerry Tworek, recently left to found a new startup called Core Automation with an interest in continuous-learning methods, and former Google DeepMind researcher David Silver has launched a startup called Ineffable Intelligence focused on reinforcement learning approaches that can, in some configurations, support continuous adaptation.
From a market perspective, this is a signal: investors and researchers are increasingly hunting for the post-scaling playbook. If the “just make it bigger” strategy is getting more expensive while delivering smaller marginal gains, then it becomes rational to invest in approaches that reallocate compute and intelligence—toward inference-time efficiency, better data strategies, and more adaptive system design.
The three pillars: data, intelligence, interfaces
Adaption Labs says it is organizing its work around three pillars: adaptive data, adaptive intelligence, and adaptive interfaces. Even if you’re not deep in model architectures, these categories map cleanly to what businesses actually experience when deploying AI: data readiness, cost/performance tradeoffs, and user workflow adoption.
Adaptive data, as described, is about systems that can generate or manipulate the data they need on the fly instead of depending purely on a large, static training dataset. In practice, this points toward a future where AI systems become better at assembling the right context at the right moment—pulling from internal knowledge bases, structured tools, and user-provided signals—without demanding huge, repeated training pipelines for every new scenario.
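To make "assembling the right context at the right moment" concrete, here is a minimal sketch of query-time context building: instead of retraining on new documents, the system ranks a small knowledge base against the incoming query and injects only the most relevant snippets. The scoring function, knowledge base, and prompt layout are all invented for illustration; Adaption Labs has not published an API.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def score(query: str, doc: str) -> int:
    """Crude relevance score: number of shared words."""
    return len(tokens(query) & tokens(doc))

def build_context(query: str, knowledge_base: list[str], k: int = 2) -> str:
    """Pick the k most relevant snippets and prepend them to the query,
    so fresh knowledge reaches the model without any retraining."""
    ranked = sorted(knowledge_base, key=lambda d: score(query, d), reverse=True)
    snippets = "\n".join(ranked[:k])
    return f"Context:\n{snippets}\n\nQuestion: {query}"

kb = [
    "Refund requests are approved within 14 days of purchase.",
    "Enterprise plans include a dedicated support channel.",
    "The mobile app supports offline mode on Android and iOS.",
]
prompt = build_context("When are refund requests approved?", kb)
```

Updating `kb` is a data operation, not a training run: the moment a policy document changes, the next query sees the new text.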
Adaptive intelligence focuses on automatically adjusting how much compute a system spends based on the difficulty of the problem. This matters because not every query deserves the same spend: some tasks are simple and repetitive, others require deeper reasoning, longer context, or higher accuracy—and most companies don’t want to pay “maximum cost” for “minimum complexity” requests. If a system can allocate compute dynamically, it can reduce operating cost while still performing well where it counts.
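A toy routing sketch shows the shape of this idea: estimate how hard a query is, then spend compute accordingly. The difficulty heuristic, thresholds, and tier names below are assumptions made up for this example, not Adaption Labs' design.

```python
def estimate_difficulty(query: str) -> float:
    """Toy heuristic: longer queries with reasoning cues score as harder."""
    cues = ("why", "compare", "analyze", "step by step", "prove")
    base = min(len(query.split()) / 50, 1.0)
    bonus = 0.5 if any(c in query.lower() for c in cues) else 0.0
    return min(base + bonus, 1.0)

def route(query: str) -> str:
    """Map estimated difficulty to a compute tier."""
    d = estimate_difficulty(query)
    if d < 0.2:
        return "small-model"                  # cheap, low-latency path
    if d < 0.6:
        return "medium-model"                 # default path
    return "large-model-with-reasoning"       # spend extra compute where it counts

route("What are your hours?")   # short FAQ-style query takes the cheap path
route("Compare these two contracts step by step and analyze the risks")
```

In production such a router would likely be learned rather than hand-written, but the economics are the same: the simple query never touches the expensive tier.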
Adaptive interfaces are about learning from how users interact with the system, not just from curated datasets. Hooker has also said the company plans to hire designers to explore interfaces beyond the standard “chat bar,” suggesting a belief that AI adoption will accelerate when the user experience fits real workflows rather than forcing every task into a conversational box.
For practitioners attending the ai world summit 2025 or ai world summit 2026, these three pillars translate into a useful checklist. Are you investing only in a model, or are you investing in the full adaptive system—data flows, compute policy, and interfaces that fit your teams? The organizations that win won’t just “use AI”; they’ll operationalize adaptation.
Gradient-free learning and inference-time adaptation
A central technical direction Adaption Labs is investigating is “gradient-free learning,” positioned as an alternative to the standard gradient descent-based training that dominates modern neural networks. In traditional training, models adjust billions of internal weights step by step to reduce error, a process that can require enormous compute and time—and once trained, those weights are typically fixed.
When a company wants a model to behave better for a specific task, it often turns to fine-tuning: further training on a smaller curated dataset that updates the model’s weights again, cheaper than pretraining but still costly and slow. When fine-tuning is too expensive or slow, many teams resort to increasingly elaborate prompts and context engineering, but those prompts can be brittle and may break when new model versions arrive.
Adaption Labs’ approach, as described, aims to shift the locus of adaptation to “inference time,” meaning the moment the system generates an answer. The idea is to update behavior without altering the underlying weights, allowing the system to respond differently depending on the task while keeping the base model stable.
Two reported methods illustrate what this can look like in practice. One is “on-the-fly merging,” where a system selects from a repertoire of adapters—often small models trained on smaller datasets—that shape the primary model’s response depending on the user’s query. Another is “dynamic decoding,” which changes how the model chooses among probable outputs based on the task, again without changing the core weights.
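Both ideas can be sketched in a few lines. The toy below combines frozen base weights with a small task-specific delta for a single call ("on-the-fly merging"), then picks a token either greedily or by sampling depending on the task ("dynamic decoding"). Every name, shape, and number here is invented for illustration; the base weights are never modified.

```python
import math
import random

W_BASE = [0.2, -0.1, 0.4]                 # frozen base weights (1-D toy)
ADAPTERS = {
    "legal":   [0.05, 0.00, -0.02],       # small deltas trained separately
    "support": [-0.03, 0.04, 0.01],
}

def forward(x: list[float], domain: str) -> list[float]:
    """Score with base + adapter, computed per call; W_BASE stays untouched."""
    delta = ADAPTERS.get(domain, [0.0] * len(W_BASE))
    merged = [w + d for w, d in zip(W_BASE, delta)]
    return [xi * wi for xi, wi in zip(x, merged)]   # toy "logits"

def decode(logits: list[float], task: str) -> int:
    """Dynamic decoding: deterministic for extraction, sampled otherwise."""
    if task == "extraction":
        return max(range(len(logits)), key=logits.__getitem__)
    total = sum(math.exp(l) for l in logits)
    weights = [math.exp(l) / total for l in logits]
    return random.choices(range(len(logits)), weights=weights)[0]

token = decode(forward([1.0, 1.0, 1.0], "legal"), "extraction")
# Adaptation happened entirely at inference time: a different domain or
# task changes the answer, yet W_BASE is byte-for-byte the same afterward.
```

Real adapter merging operates on weight matrices inside a neural network (for example, low-rank deltas) rather than a three-element list, but the control flow is the point: selection and merging happen per request, not per training run.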
Stepping back, the technical vision is broader than a single model: it’s a system that changes in real time based on interaction and task context. That framing matters because it aligns with where enterprise AI is heading: tool-using systems, modular adaptation, and workflow-level intelligence—rather than a monolithic model that you endlessly retrain.
The economic argument behind this shift is also explicit. Hooker has argued that pretraining compute is typically the most expensive compute—because it’s massive, long-running, and resource-intensive—while inference compute can deliver more “bang for the buck” if you use it efficiently for real-time adaptation. Roy’s background in inference computing is positioned as a key complement here, especially if real-time adaptation demands highly optimized GPU performance to keep latency low.
What this means for enterprises and the AI World Summit 2026
If Adaption Labs and similar “adaptive-first” startups succeed, enterprise AI strategy could shift in three important ways. First, organizations may spend less effort “teaching the model” through heavyweight retraining cycles and more effort designing adaptive systems—where data, compute, and interface are tuned to the moment of use. Second, procurement and platform choices may tilt toward solutions that make ongoing change cheaper: models that can adjust to new products, new policies, and new user patterns without a long queue of ML engineering work.
Third, it could reshape how teams measure AI capability. Instead of asking only “How smart is the base model?”, buyers will ask “How quickly can it adapt to my task, with minimal overhead, and at a cost I can defend?” That’s not a purely technical KPI; it’s operational, financial, and organizational—which is why these topics belong at the ai world summit, not only in research labs.
At the ai world organisation, we frame this moment as an opportunity for builders and decision-makers. In ai world organisation events and ai conferences by ai world, the most practical conversations often revolve around reducing friction: lowering inference costs, minimizing prompt maintenance, tightening feedback loops, and building systems that stay useful as the business changes. Adaption Labs’ thesis sits right in the middle of that reality—continuous learning without constant retraining, adaptation without prompt acrobatics, and performance improvements that come from architecture and system design rather than pure scale.
This is also a reminder that “smaller, smarter models” are not a compromise; they can be a deliberate strategy. Hooker has built a reputation for challenging “scale is all you need,” including work arguing that hardware constraints can shape which AI ideas succeed and research suggesting improved training techniques can allow smaller models to outperform larger ones. At Cohere, she also championed the Aya project, described as a collaboration involving thousands of computer scientists across many countries to improve language coverage using relatively compact models—an example of how creative data and training approaches can offset raw size.
None of this guarantees that adaptive architectures will replace today’s dominant approach overnight. But a $50 million seed round led by a major venture firm signals real confidence that the next phase of AI value creation will come from flexibility, efficiency, and real-time alignment with user needs—especially in enterprise settings where cost and reliability decide what ships.
For readers following ai world summit 2025 and ai world summit 2026 programming, this story is a strong prompt for the next set of questions you should ask any AI vendor, platform, or internal team. How will the system adapt when your policy changes? How will it adapt when your brand guidelines change? How will it adapt when user behavior shifts or new edge cases appear? And crucially, what will adaptation cost you—in engineering hours, in compute spend, and in time-to-value?