
Lucend raises €2.7M to scale U.S. data centers
Lucend raises €2.7M to expand in the U.S., bringing Transparent AI that cuts PUE, power and water use—insights for the ai world summit 2026.
TL;DR
Amsterdam-based Lucend raised €2.7M ($3.3M) in seed funding, led by Remarkable Ventures Climate with Mitsubishi Electric’s Innovation Fund and others, to expand into the U.S. Its “Transparent AI” uses existing data-centre sensor data to deliver operator-reviewed, prescriptive actions—aimed at higher uptime and lower PUE, power and water use—without new hardware or CapEx.
Lucend Raises €2.7M to Take Transparent Data Centre Optimisation Into the U.S.
Amsterdam-rooted data centre optimisation company Lucend has secured fresh seed funding—€2.7 million (about $3.3 million)—to accelerate its move into the U.S. market and scale what it calls “transparent” AI-driven operational intelligence for data centres. The round is led by Remarkable Ventures Climate (RVC) and includes participation from Mitsubishi Electric’s Innovation Fund, New Climate Ventures, Avesta, Stepchange, and existing backer 4impact capital. For operators dealing with rising compute demand, tighter power constraints, and stakeholder pressure around efficiency, the core promise is simple: clearer daily decisions using the data that data centres already generate—without needing new sensors, new hardware, or disruptive rebuilds.
From the perspective of the ai world organisation, this is exactly the kind of practical, operator-focused AI story that deserves attention across our ai world organisation events and discussion tracks at the ai world summit. The ai world summit is built around what it takes to make AI usable in the real world, and data-centre efficiency is one of the most immediate, measurable areas where applied AI intersects with cost, reliability, and sustainability outcomes. As we look across ai conferences by ai world and plan programming for ai world summit 2025 / 2026, deals like this help signal where infrastructure software is headed: toward explainable recommendations, measurable impact, and deployment models that don’t require massive capex cycles.
Why data centres are looking for “daily intelligence,” not more dashboards
Data centre operations teams sit on a flood of telemetry—temperatures, fan speeds, valve positions, humidity, power draw, setpoints, and more—yet many facilities still struggle to turn that raw data into consistent actions that reduce risk and waste. Lucend positions its product as a bridge between “knowing there’s data” and “knowing what to do next,” emphasizing that its platform is designed to deliver a daily briefing of recommended actions rather than leaving teams to interpret endless charts and alerts.
The platform’s framing is important because it targets a common operational reality: teams often spend more time reacting than optimizing, especially when alarms, vendor work, and stakeholder updates dominate the day. In that environment, the difference between a generic analytics layer and a system that proposes specific, reviewable interventions becomes material—particularly when the goal is to improve efficiency without compromising uptime. Lucend’s messaging leans into a “human-in-the-loop” model, where recommendations are generated by the software but approval and execution stay with operators, with the added claim that each recommendation includes traceability that shows how the insight was derived.
For enterprise buyers, this matters because optimization is not just a math problem—it’s also a governance problem. When a platform can explain why it is recommending a change, and when it allows teams to accept, defer, or reject changes, it reduces the perceived risk of adopting AI in systems that cannot afford surprises. In practical terms, this approach aligns with how many high-reliability environments adopt automation: decision support first, then carefully bounded automation later—if at all.
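The accept/defer/reject governance model described above can be pictured as a review queue in which every proposed change carries its own evidence trail. The sketch below is purely illustrative—the class, field names, and example action are hypothetical and do not reflect Lucend’s actual API or data model:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    ACCEPTED = "accepted"
    DEFERRED = "deferred"
    REJECTED = "rejected"

@dataclass
class Recommendation:
    """One prescriptive action, with the evidence trail attached."""
    action: str                  # proposed intervention, in plain language
    predicted_impact_kwh: float  # model's estimated daily energy saving
    evidence: list[str]          # sensor series the insight was derived from
    status: Status = Status.PENDING

    # Nothing executes automatically; operators set the status explicitly.
    def accept(self): self.status = Status.ACCEPTED
    def defer(self): self.status = Status.DEFERRED
    def reject(self): self.status = Status.REJECTED

# An operator reviews the item, checks the evidence, then approves it.
rec = Recommendation(
    action="Raise chilled-water supply setpoint by 1 °C",
    predicted_impact_kwh=120.0,
    evidence=["chiller_1_power", "cold_aisle_temp_avg"],
)
rec.accept()
```

The design point is that the recommendation object itself carries both the predicted impact and the inputs it was derived from, which is what makes “showing its work” auditable rather than a marketing phrase.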
For the ai world organisation, this “transparent + operator-controlled” direction is also a useful lens for conversations at the ai world summit and across ai conferences by ai world. In many industries, AI adoption stalls not because the model can’t predict something, but because teams can’t validate the recommendation, don’t trust the outcome, or can’t fit it into existing compliance and change-management workflows. A product category that insists on proof trails and operator control is, at minimum, responding to that trust gap directly.
The funding round: who backed Lucend and what it’s meant to enable
Lucend’s seed funding totals $3.3 million, and the company is explicitly tying the round to U.S. expansion—growing operations, sales, and customer support to serve American facilities. While the news is often summarized as “€2.7 million,” the underlying announcement positions the financing as fuel for scaling a data-centre optimization offering in a market where the company argues the opportunity is unusually large.
The investor mix also signals how the story is being framed. Remarkable Ventures Climate led the round, with participation from Mitsubishi Electric Innovation Fund, New Climate Ventures, Avesta, and Stepchange, alongside continued support from 4impact capital. In the announcement, Mitsubishi Electric’s Executive Officer (VP, Business Innovation) Komi Matsubara is quoted emphasizing that Lucend is competitive in AI-driven optimisation and that Mitsubishi Electric aims to combine its hardware and infrastructure control technologies with Lucend’s platform to deliver more value and strengthen competitiveness in the data centre business. That combination—software optimization plus infrastructure control expertise—reflects a broader direction in the market, where operational intelligence increasingly needs to integrate with how facilities are actually managed, not just how they are reported.
Lucend also highlights momentum markers intended to validate product maturity beyond a typical early-stage narrative. The company notes that in September 2025, Coolgradient (Lucend’s former name) won the Model IT category at Yotta 2025’s Innovate Arena in Las Vegas. It also emphasizes that the technology has been deployed since 2023 across dozens of data centres in multiple cities, including Melbourne, Singapore, Paris, London, Amsterdam, and Chicago. Those deployments are described as spanning different climates, designs, and system types—ranging from closed-loop systems to adiabatic cooling designs, and including cooling and electrical assets such as chillers, IACs, UPSs, and generators.
From the ai world organisation viewpoint, this blend of funding + field validation is what makes the story relevant for ai world organisation events. The ai world summit is not only about frontier models; it’s also about operational AI that can be adopted in environments with strict uptime expectations and complex stakeholder requirements. As ai world summit 2026 discussions increasingly focus on infrastructure readiness, this kind of expansion story is a reminder that “AI growth” is inseparable from “data center optimization,” because the economics of compute ultimately show up as facility-level power, cooling, and reliability decisions.
What Lucend says its “Transparent AI” actually does in operations
Lucend describes its platform as a system that connects to existing data centre infrastructure—without requiring new hardware—to turn static environments into adaptive, self-learning ones. It states that the software can map relationships across an enormous sensor history (described as 300 billion sensor readings) and that its AI analyzes billions of data points daily to generate prescriptive recommendations. The company’s product pages reinforce this positioning by describing a daily intelligence briefing that delivers specific recommendations backed by impact predictions and verifiable data trails.
A key operational design choice is the insistence on control and reviewability. Lucend repeatedly frames recommendations as optional, with the operator choosing whether and when to implement them, and it emphasizes “showing its work” through data trails. This is relevant in data-centre contexts because the simplest efficiency move is not always the safest move, and because facilities operate within site-specific constraints—maintenance schedules, redundancy designs, contractual SLAs, and compliance requirements.
Lucend also foregrounds speed-to-value, citing a “time to value” of roughly six weeks and suggesting that the software can plug into existing systems quickly. The “integration without disruption” claim is paired with the message that it can work with sensor data from common facility management stacks such as BMS, DCIM, or SCADA environments. Alongside that, Lucend notes compatibility with operational workflows, including integrations with ITSM tooling like ServiceNow and support for compliance processes through pre-filled documentation.
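Because the pitch is to ride on telemetry that BMS, DCIM, or SCADA stacks already expose, the underlying integration pattern is essentially “pull existing time series, analyze, propose.” A minimal sketch of that pattern follows, assuming a generic CSV export rather than any real vendor API; the telemetry values and the drift heuristic are invented for illustration:

```python
import csv
import statistics
from io import StringIO

# Hypothetical hourly export of one chiller's telemetry. A real
# BMS/DCIM/SCADA integration would use the vendor's own interface.
EXPORT = """timestamp,supply_temp_c,power_kw
2025-09-01T00:00,7.0,310
2025-09-01T01:00,7.1,312
2025-09-01T02:00,6.2,345
2025-09-01T03:00,6.1,348
"""

rows = list(csv.DictReader(StringIO(EXPORT)))
temps = [float(r["supply_temp_c"]) for r in rows]
power = [float(r["power_kw"]) for r in rows]

# Naive heuristic: flag supply temperature drifting below the design
# setpoint, since colder water generally means higher compressor power.
DESIGN_SETPOINT_C = 7.0
if min(temps) < DESIGN_SETPOINT_C - 0.5:
    print(f"Review: supply temp fell to {min(temps)} °C "
          f"(design {DESIGN_SETPOINT_C} °C); "
          f"mean power {statistics.mean(power):.0f} kW")
```

The point of the sketch is the shape of the workflow, not the heuristic: no new hardware is involved, and the output is a reviewable prompt for an operator rather than an automated control action.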
In the press announcement, Lucend ties the platform’s benefit to facility outcomes rather than model metrics. It claims that to date, customers have achieved approximately 40% reduction in PUE, about 25% reduction in power use, about 30% reduction in water use, and about 40% improvement in team efficiency. The company also includes an anecdote of a facility manager using the product like a “daily newspaper” and reporting $4.3 million in savings over one year alongside a 40% PUE reduction. While every facility will differ, these claims illustrate what Lucend wants buyers to believe: the platform can identify efficiency and reliability opportunities that are present in the data but hard to see in day-to-day operations.
At the ai world organisation, we treat these kinds of claims as prompts for deeper operator conversations: what baseline was used, how long the measurement window was, what interventions drove the gains, and what tradeoffs were avoided. That’s also why the story fits into the ai world summit format—because the most valuable insights often come from the operational “how,” not just the investment headline. For attendees exploring solutions through ai conferences by ai world, Lucend’s approach highlights a broader trend: decision-support AI that prioritizes explainability and implementation pathways, not just anomaly detection.
Why the U.S. is the strategic prize (and why regulation is part of the story)
Lucend’s announcement argues that the U.S. is the key market because more than half of all global data centers are located there. Regardless of the exact share, the strategic logic is clear: if a company wants to become a meaningful player in data centre optimisation, it must operate where enterprise capacity and cloud concentration are deepest, and where facility operators are under intense pressure to expand while controlling cost and reliability risk.
The timing is also being shaped by public policy signals. In July 2025, the White House issued an executive order titled “Accelerating Federal Permitting of Data Center Infrastructure,” explicitly linking national policy objectives to the rapid buildout of AI data centers and the energy infrastructure that powers them. The order defines a “Data Center Project” as a facility requiring more than 100 megawatts of new load dedicated to AI inference, training, simulation, or synthetic data generation. It also introduces the concept of “Qualifying Projects” and lays out pathways to encourage them, including financial support mechanisms such as loans, loan guarantees, grants, tax incentives, and offtake agreements.
Beyond incentives, the executive order pushes for permitting speed by directing agencies to identify categorical exclusions under NEPA and to establish new ones where appropriate for actions related to qualifying projects. It further emphasizes transparency and efficiency via FAST-41, including publishing qualifying projects on the Permitting Dashboard with expedited review schedules and moving eligible projects toward FAST-41 “covered project” status. The order also calls for using federally owned land and resources for the development of data centers, while remaining consistent with the land’s intended purpose.
For operators and vendors, this policy context changes the conversation. If the U.S. is working to accelerate permitting and buildout, demand for capacity can rise faster—yet the practical constraints of power availability, local siting realities, and water usage scrutiny do not disappear. That tension is exactly where operational efficiency tools are marketed as “the fastest capacity you can add”: the capacity you unlock by running existing infrastructure better.
Lucend leans into this narrative by repeatedly stating that its platform increases efficiency and uptime “without new CapEx” and “without disrupting existing operations.” It positions transparent recommendations as a way to improve reliability while conserving resources such as energy and water, which increasingly show up in both sustainability reporting and local community discussions around data center growth. In other words, it’s not just “do more with less”—it’s “prove you’re doing more responsibly,” with traceability that can stand up to internal audits and external scrutiny.
From the ai world organisation perspective, this is an essential storyline for ai world organisation events and the ai world summit. The AI boom is not abstract; it is physical, power-bound, and operationally constrained, and the winners will often be those who can pair performance with verifiable efficiency. For ai world summit 2025 and ai world summit 2026 programming, this is also a strong example of how “AI infrastructure” includes software layers that make the existing stack smarter—not only new chips, new sites, or new megawatts.
What this means for the AI infrastructure ecosystem (and why we’re watching it)
Lucend’s story is not just a funding update; it’s a signal about where data-centre operations software is heading. The company’s emphasis on transparency, human oversight, and verifiable trails suggests that in high-stakes environments, “black box optimization” is increasingly a non-starter—especially if the buyer is accountable for uptime and compliance. If Lucend can translate its early deployments into repeatable success in the U.S., it may help define a broader category: daily operational intelligence that sits between raw telemetry and change execution.
The most interesting part of the model is arguably the packaging of AI as a daily workflow rather than a one-time transformation. In many facilities, efficiency initiatives start strong and then lose momentum because they rely on a handful of experts, manual analysis, or quarterly review cycles; “daily clarity” is a claim that optimization can become routine. When that routine includes impact predictions and post-implementation verification, it also becomes easier to defend internally—especially when teams need to justify changes to leadership or to customers.
There is also a broader market logic behind “software-led operational efficiency” versus “new hardware or capacity buildout,” particularly for mature facilities. Hardware upgrades are expensive, slow, and sometimes constrained by supply chains or downtime windows, whereas software that reinterprets existing sensor data is marketed as a lower-friction path to measurable gains. That doesn’t mean software is effortless—data quality, integration complexity, and organizational adoption are real barriers—but it does mean the ROI conversation can start sooner.
At the ai world organisation, our role is to connect stories like this to practical learning for builders, operators, and decision-makers. That’s why the ai world summit consistently tracks the infrastructure layer, not only because data centres enable AI, but because data-centre constraints increasingly shape what AI products can cost, how fast they can scale, and how responsibly they can run. When we curate sessions and conversations across ai conferences by ai world, we look for examples where the technology is grounded in measurable operational outcomes and where adoption paths respect the reality of enterprise change control.
For organizations evaluating Lucend or similar platforms, the right next questions are operational: what sensors and systems are required, how recommendations are validated, what governance model exists for approving changes, and how the platform separates correlation from actionable causality. These are also the questions that make for strong panel discussions, case studies, and workshops at ai world organisation events—because the market is moving from “AI can optimize” to “AI can optimize safely, explainably, and at scale.”
If you’re following the buildout of AI infrastructure into 2026 and beyond, Lucend’s expansion is worth tracking for a simple reason: it focuses on efficiency as a capability you can deploy now, not a promise that arrives after the next construction cycle. And as U.S. permitting and capacity expansion accelerate under new federal priorities for AI data centers and related energy infrastructure, the pressure to run facilities more efficiently—while documenting that efficiency—will only rise.