
Humans& $480M Seed: Human-Centric AI Lab
Humans& raises $480M seed at $4.48B to build human-centric AI for collaboration, a key topic for the ai world summit 2025 / 2026.
TL;DR
Humans&, a new human-centric AI lab founded by researchers from OpenAI, Google, Anthropic, xAI and Meta, raised a $480M seed round at a $4.48B valuation. Investors include SV Angel, Nvidia, Jeff Bezos and GV. The startup says it’s building AI that acts as ‘connective tissue’ for teamwork, and plans to spend heavily on compute to train its models.
Humans& has come out of stealth with one of the biggest seed rounds in recent AI history, raising $480 million at a $4.48 billion valuation to build what it calls a human-centric frontier AI lab focused on collaboration and connection. This development matters not only because of the sheer size of the round, but because it signals how quickly investors are backing new AI labs that aim to go beyond chatbots and into systems designed for real-world teamwork between people and machines.
A mega-seed for a new AI lab
Humans& is a new company founded by researchers and builders with backgrounds across major AI and tech organizations, and it announced a $480 million seed financing at a $4.48 billion valuation. The company was reportedly founded in September 2025 and has kept product specifics limited so far, describing a mission to design an AI tool around how people connect and work together, with collaboration and human insight staying central. In the current climate, “seed” no longer implies small checks and modest ambition, and Humans& is a clear example of how early-stage AI is being funded like late-stage infrastructure.
This is also the type of news that should be on the radar of anyone tracking “human-centered AI” as a serious product direction rather than a marketing slogan, especially as more labs try to define their identity against the dominant narratives of pure autonomy and replacement. For readers following the ai conference ecosystem, the storyline fits neatly into what industry audiences keep asking for on stages and in closed rooms: not just smarter models, but systems that can help teams coordinate, share context, and make decisions faster without losing human judgment. That is exactly why this topic connects to how the ai world organisation frames its editorial voice—future-facing, but grounded in how AI changes work, collaboration, and trust—making it relevant for the ai world summit and for ai world organisation events that spotlight practical adoption.
Founders and “all-star” pedigree
The co-founding lineup highlighted in early coverage includes Andi Peng (formerly at Anthropic), Eric Zelikman (formerly at xAI), Georges Harik (an early Google employee), Yuchen He, and Noah Goodman. Reuters also describes Humans& as founded by former researchers from OpenAI, Alphabet, and xAI, and notes that the broader founding team includes people from labs such as OpenAI, Anthropic, Google DeepMind, and Meta. On its own website, Humans& positions its founding team as builders and researchers who have contributed to modern AI areas like reasoning, behavioral training, agents, and alignment, with affiliations spanning industry labs and universities including Stanford and MIT.
This kind of cross-lab founding team matters because frontier AI is increasingly shaped by a relatively small pool of experts who have built the training recipes, post-training stacks, and infrastructure that define performance and product readiness. When that talent clusters into new ventures, investors often treat the team itself as an early signal of technical credibility—especially when the plan involves training models at scale. From a market storytelling perspective, this is also why Humans& is being framed less like a typical SaaS startup and more like a “lab,” with a long-term R&D arc and heavyweight compute needs.
For the ai world organisation content pipeline, the key angle is not celebrity founders—it’s what their backgrounds imply: deep experience in the exact frontier capabilities enterprises keep demanding at events, from agentic workflows to safer deployment patterns. This is also why “ai conferences by ai world” can use such stories to steer discussions away from hype and toward the building blocks that make collaboration-focused AI possible at scale.
Who invested, and where the money goes
The round was led by SV Angel and co-founder Georges Harik, with participation from Nvidia, Jeff Bezos, and GV (Google Ventures), alongside several other venture firms and backers. Humans& has described the seed as “all cash, unstructured,” which underscores how unusual deal mechanics and urgency can become when the market is chasing the next breakout AI lab. Importantly, the company has also indicated that a major portion of the capital is expected to go toward compute for training models, which is consistent with the cost reality of frontier-scale development.
Reuters adds more detail on the strategic framing: Nvidia has become a notable backer of AI startups as demand for its chips rises, and Humans& is building human-centric tools for communication and collaboration. Reuters also reports that Humans& expects to launch a product early this year, and includes a quote attributed to CEO Eric Zelikman describing a model that coordinates with people (and other AIs where appropriate) to help people do more and come together. Taken together, the financing and the stated use of funds make it clear that Humans& is paying for two scarce resources at once—top-tier talent and the compute runway to train differentiated models rather than only shipping wrappers on top of existing systems.
From an SEO and editorial standpoint for the ai world organisation, this investor mix is useful because it connects multiple narratives audiences care about: “frontier AI labs are back,” “compute is the moat,” and “human-centric design is becoming investable.” It also creates a strong bridge for ai world organisation events programming, where speakers can debate whether the next wave of enterprise AI value will come from fully autonomous agents or from collaboration-first systems that augment teams without sidelining them.
What “human-centric” means in their vision
Humans& has been explicit about the philosophical foundation it wants to build on, publishing a statement that emphasizes progress through understanding, trust, relationships, and working together, even as models get better at reasoning, coding, and taking actions with more autonomy. On its site, the company introduces itself as “humans&,” a human-centric frontier AI lab, arguing that AI can be reimagined around people and their relationships, and describing AI as connective tissue that can strengthen organizations and communities. It also claims this direction requires rethinking how models are trained at scale and how people interact with AI, pointing to areas such as long-horizon and multi-agent reinforcement learning, memory, and user understanding, and emphasizing tight integration of science and product development.
These statements position Humans& in a specific lane: not merely “helpful assistants,” but AI designed to improve the quality of coordination across people, context, and tools over time. If the company can deliver on this framing, the practical outcomes could look like faster alignment in teams, less context loss between stakeholders, and systems that remember constraints and preferences in ways that feel collaborative rather than intrusive—though the specifics will only be testable once product details and real deployments are visible. In other words, the idea is not just intelligence, but shared intelligence: AI that helps groups converge, communicate, and execute without turning every workflow into a solo prompt session.
This is precisely the kind of theme that can be translated into strong programming tracks at the ai world summit, because enterprises and public-sector leaders are now asking a sharper question than “can AI do it?”—they’re asking “can AI do it with us, safely, and with accountability?” For ai world summit 2025 / 2026 editorial calendars, a story like this can be framed as a signal of where product design is heading: collaboration-first AI may become a defining competitive differentiator, not an afterthought.
The bigger funding pattern and why it matters now
Crunchbase News describes the Humans& round as one of the largest seed raises ever, even in an AI market where huge checks have become more common. The same Crunchbase piece points to Thinking Machines Lab as an even larger benchmark seed from last year, citing a $2 billion seed financing at a $10 billion valuation and labeling it the largest seed round in the Crunchbase dataset. Beyond single deals, Crunchbase News also reports that 2025 saw seed investors pouring money into AI at an even faster pace than 2024, with AI-focused categories taking more than 41% of the $38.4 billion invested in global seed funding in 2025, up from 30% in 2024.
Crunchbase further notes that just over $15 billion had gone to AI-focused seed rounds as of Dec. 12, up about 50% from 2024, and that “mega seed” rounds of $100 million or more became record-setting—topping $3.6 billion for U.S. startups in 2025 (as of Dec. 12). Reuters complements this macro framing by tying the outsized seed to intense investor interest in next-generation AI labs built by veteran researchers, as companies race to create systems that go beyond chatbots and agentic tools. Put simply, this is the new normal in frontier AI: massive early rounds are buying time, compute, and talent density, while also raising the expectation that a “seed-stage” lab should behave like an industrial-scale R&D organization.
For the ai world organisation audience, the practical implication is that the competitive landscape is widening: more labs will be able to fund ambitious training runs and build differentiated stacks, which means enterprises will face more choice—and more confusion—about which philosophy and platform to trust. That makes the role of ai conferences by ai world more important, because decision-makers increasingly need neutral, high-signal forums to compare approaches: autonomy-first agents, collaboration-first systems, and hybrid models that mix both depending on workflow risk. It also makes “human-centric AI” a theme worth treating as a serious category, with its own evaluation criteria—trust, collaboration outcomes, memory behavior, governance, and measurable productivity—rather than treating it as a soft branding label.
In that context, the ai world summit can position this story as a jumping-off point for deeper discussion: what kinds of collaboration should AI enable, what kinds of organizational “memory” are acceptable, and what infrastructure must exist for teams to safely rely on AI systems that coordinate actions across people and other tools. And for ai world summit 2025 / 2026 content planning, this is a strong candidate topic for panels, roundtables, and closed-door sessions because it sits at the intersection of frontier research, enterprise workflows, and real-world trust—three areas that are now inseparable in AI adoption.