Selector Raises $32M: AI Funding for Observability
Selector raises $32M in AI Funding to scale AI observability, cut downtime, and expand product, GTM, and customer success for enterprises.
TL;DR
Selector raised $32M in AI Funding (led by AVP), doubling its valuation to $375M. This AI funding news shows growing demand for AI observability that uses LLMs, knowledge graphs, and causal reasoning to spot issues earlier, pinpoint root causes faster, and reduce enterprise downtime via smarter correlation and automation.
Selector raises $32M as AI Funding shifts toward always-on reliability
Selector, a Santa Clara, California–based AI-driven observability and network intelligence platform, has raised $32 million in new AI funding, reportedly doubling the company’s valuation to $375 million. This AI funding news matters because it highlights a clear market direction: investors are backing AI systems that don’t just “analyze,” but actively reduce downtime by accelerating detection, diagnosis, and resolution across complex enterprise environments.
The deal at a glance: what was announced and who backed it
Selector’s $32 million AI Funding round was led by AVP, with participation from Ansa Capital, Two Bear Capital, Sinewave Ventures, Singtel Innov8, and other existing investors. The announcement positions Selector inside the fast-growing category of AI observability and network intelligence, where the focus is on making operations more resilient as infrastructure becomes more distributed and toolchains become more fragmented.
AVP is described as an independent global investment platform focused on high-growth technology companies, spanning deep tech and tech-enabled businesses across Europe and North America, and the firm reports managing more than €2.5 billion in assets across venture, early growth, growth, and fund-of-funds strategies. For readers tracking AI Funding trends, the presence of a growth-oriented lead plus a mix of specialized and strategic backers signals that “reliability AI” is moving from a nice-to-have to a board-level priority, especially for enterprises where minutes of downtime can cascade into reputational risk and lost revenue.
What Selector builds: AI observability that connects signals to actions
Selector is led by CEO Kannan Kothandaraman and offers an AI-powered observability and network intelligence platform designed to unify data, correlation, and automation across domains. In practical terms, observability becomes far more valuable when it can connect telemetry (metrics, logs, traces, events, and network signals) into a coherent explanation of what is happening, why it is happening, and what to do next—without forcing teams to swivel-chair between multiple tools.
The company’s approach combines large language models, knowledge graphs, and causal reasoning to help teams detect, diagnose, and resolve issues faster. That mix matters because LLMs can help interpret and summarize technical signals, knowledge graphs can preserve relationships across services and dependencies, and causal reasoning aims to move beyond correlation into “likely root cause” pathways that are actionable during incidents.
Selector also positions the platform as adaptable to enterprise workflows and as supporting predictive maintenance, causal inference, and LLM-based AI correlation. This blend is increasingly central to AI funding news in enterprise infrastructure, because organizations want AI that fits existing ITSM and NOC/SOC processes rather than forcing an expensive rip-and-replace of operational habits, escalation paths, and governance controls.
From a customer standpoint, Selector primarily serves Fortune 1000 organizations, including several Fortune 20 customers. That enterprise footprint is important in the AI Funding narrative because it implies the platform is being deployed in environments with strict requirements around security, compliance, latency, change management, and auditability—areas where “demo-grade AI” often fails, and where operational AI must prove itself continuously.
Why this AI funding news fits the broader observability shift
Even when companies invest heavily in monitoring, many incident-response cycles still break down at the same points: alert storms that overwhelm teams, noisy signals that lack context, and handoffs between network, infra, and application owners that slow resolution. When that happens, the cost isn’t only technical; it’s organizational—teams lose trust in dashboards, executives lose patience with recurring issues, and customers experience inconsistent performance.
This is why AI Funding is increasingly flowing into platforms that reduce mean time to understand, not just mean time to detect. Selector’s positioning—unifying data, correlation, and automation across domains—maps to what enterprise buyers actually want: fewer fragmented “truth sources,” more shared context, and faster agreement on what to do next. In other words, the best AI funding news stories aren’t just about model capability; they’re about whether AI can drive reliable operational outcomes under pressure.
It’s also notable that the category is evolving from “observability as dashboards” toward “observability as decision support.” Selector’s use of LLMs, knowledge graphs, and causal reasoning is aligned with that transition, because it’s aimed at turning raw signals into explanations and workflows. The moment a platform can recommend a likely cause, show the evidence chain, and automate safe remediation steps (with human approvals when needed), it stops being a reporting layer and starts being part of the operating system for modern IT.
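The “automate safe remediation steps, with human approvals when needed” pattern described above can be sketched in a few lines. This is an assumed design for illustration only, not Selector’s API; the remediation names and the `safe_to_automate` flag are hypothetical.

```python
# Minimal sketch of approval-gated remediation: actions flagged as safe
# run automatically; everything else is held until a human approves it.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Remediation:
    name: str
    action: Callable[[], str]
    safe_to_automate: bool  # e.g. restarting a stateless cache vs. a DB failover

def execute(rem: Remediation, approved_by: Optional[str] = None) -> str:
    if rem.safe_to_automate:
        return f"ran '{rem.name}' (auto): {rem.action()}"
    if approved_by:
        return f"ran '{rem.name}' (approved by {approved_by}): {rem.action()}"
    return f"'{rem.name}' queued: waiting for human approval"

restart = Remediation("restart-cache", lambda: "ok", safe_to_automate=True)
failover = Remediation("db-failover", lambda: "ok", safe_to_automate=False)

print(execute(restart))                            # runs automatically
print(execute(failover))                           # held for approval
print(execute(failover, approved_by="sre-oncall")) # runs once approved
```

The design choice the sketch highlights is that the approval gate lives in the execution path, not in documentation: risky actions physically cannot run without an attributable approver, which is what makes automation auditable enough for the environments described above.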
Where the $32M goes: product, go-to-market, and customer success
According to the announcement, Selector will use the new AI Funding to accelerate AI innovation, expand product development, scale global go-to-market efforts, and enhance customer success initiatives. This breakdown is typical of AI funding news in growth-stage enterprise software: the technology must keep advancing, but the real unlock often comes from operational execution—packaging, pricing, repeatable sales motions, onboarding playbooks, and post-deployment outcomes.
“Accelerate AI innovation” and “expand product development” suggest continued investment in the platform’s core intelligence and automation capabilities, including how it applies LLMs, knowledge graphs, and causal reasoning to real-world operations. In the observability market, product depth often shows up in edge-case handling: messy telemetry, partial instrumentation, brittle integrations, multi-cloud variability, and the “unknown unknowns” that trigger the worst incidents.
“Scale global go-to-market” indicates the company expects demand across regions and verticals, which is consistent with how enterprise reliability challenges are universal even when tech stacks differ. For buyers, the GTM scale-up usually translates into clearer implementation pathways, stronger partner ecosystems, and more localized support—important factors when the platform is tied to mission-critical operations.
Finally, “enhance customer success initiatives” is a meaningful signal in this AI Funding story, because operational tools only succeed if teams adopt them daily, trust them during incidents, and can measure improved outcomes over time. In many enterprises, customer success isn’t just training; it’s ongoing alignment between platform capabilities and evolving operating models, including changes in architecture, security posture, and governance.
What it means for enterprises—and how AI World frames the conversation
For enterprise leaders, this AI funding news reinforces a practical takeaway: the next wave of AI value will be judged less by novelty and more by reliability, traceability, and speed-to-action in real environments. Selector’s platform is positioned around faster detection, diagnosis, and resolution, and it emphasizes cross-domain unification and automation—exactly the pressure points that show up when organizations scale digital services and can’t afford blind spots.
At The AI World Organisation, this is also the kind of AI Funding story that belongs in the larger ecosystem conversation: how do we move from experimentation to impact at scale, and how do we make AI deployments accountable, measurable, and operationally safe? The AI World Organisation describes itself as a global AI ecosystem, with a mission to bridge cutting-edge AI innovation and real-world application, recognize AI pioneers, and build collaboration through events and community. It also states it is the apex body of 5000+ AI leaders globally, working across principles including “AI for Good,” “AI for All,” and “AI for Innovation and Impact,” with activity across 25+ countries and 70+ cities.