
Outtake Raises $40M to Fight Identity Fraud
Outtake raised $40M to automate identity-fraud detection and takedowns. Read why it matters for enterprises, and why it's on the ai world organisation's summit agenda.
TL;DR
Outtake, a startup building agentic tools to spot and remove online impersonation (fake accounts, lookalike domains, rogue apps, and scam ads), raised a $40M Series B led by ICONIQ, with backing from Satya Nadella, Bill Ackman, and other top operators. The bet: as generative AI makes fraud more believable and easier to scale, enterprises need faster, automated takedowns.
Outtake raises $40M to automate identity-fraud takedowns
Outtake, an AI security startup focused on detecting and dismantling identity fraud across the public internet, has raised a $40 million Series B round led by ICONIQ’s Murali Joshi. The deal also drew a rare lineup of individual backers, including Microsoft CEO Satya Nadella, Palo Alto Networks CEO Nikesh Arora, Pershing Square’s Bill Ackman, Palantir CTO Shyam Sankar, Anduril co-founder Trae Stephens, former OpenAI VP Bob McGrew, Vercel CEO Guillermo Rauch, and former AT&T CEO John Donovan.
For the ai world organisation, this is more than a funding headline—it’s a signal that “digital trust” and “brand impersonation defense” are rapidly becoming board-level priorities, especially as AI makes scams faster, cheaper, and more convincing to execute. And it’s exactly the kind of real-world, enterprise-facing shift we track and unpack across the ai world summit, ai world organisation events, and broader ai conferences by ai world as the ecosystem prepares for ai world summit 2025 / 2026 conversations.
A funding round packed with signal
Outtake’s Series B totals $40 million and is led by ICONIQ’s Murali Joshi, who has been associated with investments across high-growth cloud and security companies. What makes the round stand out is not only the check size, but also how many operators and executives—spanning Big Tech, next-gen defense, security platforms, and public-market finance—chose to invest personally.
That kind of syndicate typically forms when backers believe a category is shifting from “nice-to-have tooling” into “must-have infrastructure.” In this case, the core belief is straightforward: identity abuse and digital impersonation aren’t isolated incidents anymore—they’re industrialized workflows, increasingly powered by AI, and enterprises can’t afford to rely on slow, manual clean-up.
The broader point matters for the ai world organisation because our community spans founders, enterprise leaders, marketers, and technologists who all feel the same pressure from different angles: customers must trust what they see online, and businesses must defend every surface where trust can be exploited. This is why the ai world summit consistently returns to practical questions—how AI reshapes risk, how teams build resilience, and what “responsible automation” looks like when adversaries also automate.
The real problem: “outside-the-firewall” identity fraud
Outtake’s focus sits in a part of cybersecurity that many organizations experience daily but struggle to operationalize: impersonation accounts, lookalike domains, rogue apps, fraudulent ads, and other public-facing misuse that mimics a legitimate brand or identity. Unlike purely internal threats, these attacks often happen in places a company doesn’t control—social platforms, app marketplaces, ad networks, domain registrars, and the wider open web—making enforcement messy and time-consuming.
Historically, the painful reality has been that detection and takedown are labor-heavy: people search for abuse, gather proof, navigate platform policies, fill out forms, and follow up (often repeatedly) while attackers spin up replacements. ICONIQ's own framing of the opportunity is that the internet is becoming more AI-driven and therefore more vulnerable to AI-driven impersonation and fraud, pushing the market toward "digital trust platforms" rather than one-off point solutions.
AI adds fuel to the fire because it reduces the effort needed to create convincing fakes at scale, from cloned pages and synthetic personas to high-volume campaigns that test what slips through. The more believable and numerous the fakes become, the more quickly response times have to shrink—because the cost isn’t only financial loss, but also customer trust, executive reputation, and long-term brand equity.
From the ai world organisation perspective, this is one of the most important reframes for modern security and marketing leaders alike: trust is now a measurable operational metric, not a vague brand concept. And as we head into ai world summit 2025 and ai world summit 2026 programming cycles, “trust infrastructure” is becoming a cross-functional conversation—security teams, legal teams, marketing, and comms are increasingly forced into the same room.
Turning takedowns into an “agentic” workflow
Outtake positions its product as an agentic cybersecurity platform built to detect, investigate, and take down identity fraud, aiming to automate a process that has long depended on manual effort. The practical value proposition is simple: compress the time between spotting an impersonation and removing it, while reducing the human workload required to keep pace.
OpenAI has publicly profiled Outtake, describing how the company’s agents operate in complex, high-stakes environments and emphasizing the importance of model reasoning accuracy for making reliable decisions across varied platforms and formats. In other words, the “AI” here isn’t just copywriting or chat; it’s decision-making and enforcement workflow—detecting subtle patterns, linking related signals, and generating outputs that stand up to scrutiny.
This matters because takedowns are rarely one-click events. Each platform has different policies, evidence requirements, and escalation pathways, and an approach that works for a malicious domain may not apply to a fake app or a fraudulent ad. The promise of an agentic system is that it can handle high-volume enforcement with speed and consistency, while humans focus on edge cases, strategy, and oversight.
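Outtake's internals are not public, but the per-platform workflow described above can be sketched in a few lines. Everything in this snippet (the `PLATFORM_POLICIES` table, the `Detection` record, and `build_takedown_request`) is hypothetical and purely illustrative of the pattern: encode each platform's evidence requirements, file automatically when the evidence is complete, and route edge cases to a human.

```python
from dataclasses import dataclass, field

# Hypothetical per-platform policies; real platforms' evidence requirements
# and escalation paths differ, and Outtake's actual pipeline is not public.
PLATFORM_POLICIES = {
    "domain_registrar": {"evidence": ["whois_record", "screenshot"], "escalation": "abuse_email"},
    "app_store":        {"evidence": ["listing_url", "trademark_id"], "escalation": "developer_report_form"},
    "ad_network":       {"evidence": ["ad_creative", "landing_page"], "escalation": "policy_violation_form"},
}

@dataclass
class Detection:
    platform: str                                  # where the impersonation was found
    target: str                                    # offending URL, app ID, or ad ID
    evidence: dict = field(default_factory=dict)   # collected proof, keyed by type

def build_takedown_request(detection: Detection) -> dict:
    """Assemble a platform-specific takedown request, or flag the case for
    human review when the required evidence is incomplete."""
    policy = PLATFORM_POLICIES[detection.platform]
    missing = [e for e in policy["evidence"] if e not in detection.evidence]
    if missing:
        # Edge cases go to a human rather than producing a half-formed filing.
        return {"action": "human_review", "target": detection.target, "missing": missing}
    return {
        "action": "file_takedown",
        "route": policy["escalation"],
        "target": detection.target,
        "evidence": {e: detection.evidence[e] for e in policy["evidence"]},
    }

# A lookalike domain with complete evidence files automatically:
d = Detection(
    platform="domain_registrar",
    target="examp1e-brand.com",
    evidence={"whois_record": "...", "screenshot": "..."},
)
print(build_takedown_request(d)["action"])  # file_takedown
```

The point of the sketch is the division of labor: high-volume, evidence-complete cases flow straight into each platform's own enforcement channel, while anything ambiguous surfaces for human strategy and oversight, which is the split the article describes.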
If you zoom out, the excitement around Outtake reflects a wider enterprise shift: organizations increasingly want “outcomes,” not dashboards. It’s no longer enough to show a SOC analyst a queue of suspicious URLs; leaders want to know what got removed, how quickly, what risk it reduced, and what it cost in time and money. That outcomes-first lens shows up repeatedly in ai conferences by ai world, because it’s the same measurement story executives demand across AI adoption: value, safety, and speed—proved, not promised.
Founder network, credibility, and customer pull
Outtake was founded in 2023 by Alex Dhillon, described across coverage as a former Palantir engineer. Several participating angels have direct ties to Palantir's leadership ecosystem, including Palantir CTO Shyam Sankar and Anduril co-founder Trae Stephens (who also spent time at Palantir in its early years).
The startup’s customer list has been reported to include OpenAI, Pershing Square, AppLovin, and federal agencies. OpenAI’s public case study further reinforces that Outtake’s work is not theoretical: the company is presented as an example of an “agentic” business built on reasoning-oriented models and evaluated on accuracy and reliability in high-stakes settings.
Outtake also reports strong momentum: annual recurring revenue up sixfold year-over-year, a customer base that has grown more than tenfold, and systems that scanned 20 million potential cyberattacks over the past year. While any private-company metrics should be read with healthy skepticism, the combination of customer logos, public profiling, and repeatable enforcement outcomes helps explain why investors view the space as urgent.
For the ai world organisation, this pattern—credible founder network plus measurable enterprise pull—is exactly what separates durable AI companies from “demo-first” excitement. In ai world organisation events, we consistently see that the market rewards teams that can convert AI capability into operational reliability, compliance readiness, and clear ROI.
Why this matters for enterprises—and for AI World conversations
Identity fraud and digital impersonation are no longer niche threats reserved for celebrities or consumer brands. They now affect SaaS companies, finance, marketplaces, enterprise tech, and public-sector agencies—any organization with a recognizable identity that can be copied and weaponized. As AI makes deception easier to scale, the defense conversation naturally shifts toward automation, agentic workflows, and “trust platforms” that can operate continuously, not just during incident response.
From a leadership standpoint, this is where security strategy meets business strategy. Faster takedowns reduce fraud losses and support brand integrity, but they also influence customer acquisition, partner confidence, and even executive communications—because a public impersonation incident can become a reputational crisis in hours.
For readers following the ai world summit, the Outtake story offers a concrete lens for what “agentic AI” means in the enterprise: it’s not just assisting humans; it’s actively executing a multi-step process with guardrails and oversight, in a setting where mistakes carry real consequences. That is why the ai world organisation treats trust, security, governance, and real-world deployments as core themes across ai conferences by ai world—because the next wave of AI value will be won by teams that can automate responsibly and prove reliability at scale.
And as we look toward ai world summit 2025 and ai world summit 2026, stories like this help shape the questions that matter most: which workflows should become agentic first, how to evaluate accuracy and failure modes, and how to build systems that keep trust intact even when attackers iterate just as fast as defenders.