
Google backs Sakana AI to grow Gemini in Japan
Google invests in Japan’s Sakana AI to strengthen Gemini in enterprises, while Sakana taps Alphabet language models to build reliable AI tools.
TL;DR
Google has invested in Japanese startup Sakana AI (amount undisclosed) to strengthen Gemini's reach in Japan's enterprise market. Sakana, fresh off a $135M Series B valuing it around $2.6B, says access to Google's foundation models will help it build more reliable AI services, an edge as corporate Japan weighs AI adoption and the race with ChatGPT remains wide open.
Google has made an investment in Japanese AI startup Sakana AI, a move aimed at strengthening Gemini's footprint in Japan's fast-moving enterprise AI market. Sakana AI is coming off a $135 million Series B round that valued the company at about $2.6 billion; the size of Google's investment has not been disclosed.
As the AI race accelerates across APAC, this is exactly the kind of partnership leaders will be dissecting inside the ai world organisation ecosystem: in ai world summit conversations, at ai world organisation events, and across the wider calendar of ai conferences run by ai world ahead of ai world summit 2025 and 2026.
Google’s bet on Sakana AI
Google’s investment places it among Sakana AI’s backers and signals a focused push to raise Gemini’s relevance in Japan, where companies are eager to adopt AI but also want solutions that work smoothly with local language, workflows, and compliance expectations. The funding comes after Sakana’s $135 million Series B round, which valued the Tokyo-based startup at roughly $2.6 billion, making it one of Japan’s most valuable AI startups in the current cycle.
While Sakana did not reveal the investment amount, the strategic intent is clearer than the number: Google gets a credible local ally to help position Gemini inside large Japanese enterprises, and Sakana gains closer access to Alphabet’s foundation model capabilities to speed product development and raise reliability for high-stakes use cases. In practical terms, that means a two-way collaboration where Google benefits from Sakana’s momentum and enterprise relationships in Japan, while Sakana benefits from deeper model optionality and infrastructure-level support that can strengthen performance in production environments.
The competitive backdrop matters because Gemini is still chasing OpenAI’s ChatGPT mindshare in many global markets, and Japan’s corporate sector is large, well-capitalized, and increasingly open to AI-driven workflow redesign—if solutions can be implemented without disrupting decades-old operating models. This “reluctance to change” dynamic in corporate Japan is a recurring theme: leaders may see the upside, but they also want risk controls, governance clarity, and proof that tools will integrate into existing systems rather than forcing a full reinvention overnight.
For the ai world organisation community, this story is less about a single investment and more about a repeatable playbook: global model providers increasingly need local champions, local data understanding, and local enterprise credibility to win regulated or culturally specific markets. That same pattern will continue to surface across ai world organisation events, particularly in APAC, where the ai world summit agenda naturally intersects with questions around “sovereign AI,” enterprise adoption barriers, and the operational reality of deploying LLM-powered systems at scale.
Why this partnership helps Gemini in Japan
One reason this investment stands out is that Sakana AI is not simply another app-layer chatbot vendor—it is positioned as a Japan-optimized AI company building models and solutions designed to work well with Japanese language and business context. That positioning aligns with what many Japanese enterprises want right now: AI that is useful inside the company, not just impressive in demos, and that respects local norms, data sensitivity, and sector-specific constraints.
From Google’s perspective, partnering with a highly valued domestic AI startup can accelerate distribution and trust in a market where local relationships and reputation often shape adoption speed, especially in finance and government-adjacent areas. Sakana’s CEO David Ha has emphasized that having access to more foundation models—especially from Google—can raise product performance, which frames the partnership as an engineering advantage as much as a go-to-market move.
This also reflects a broader enterprise AI reality: many companies do not want to bet on one model forever, and they increasingly prefer multi-model strategies that let them balance cost, latency, accuracy, security posture, and vendor risk. If Sakana can combine its local delivery strength with stronger access to Alphabet’s model ecosystem, it can offer Japanese clients more robust options for real deployments, which in turn can create more “Gemini-adjacent” wins even when customers are still evaluating multiple providers.
Sakana’s prior fundraising context reinforces why Google would want to be involved: TechCrunch reported the Series B as approximately ¥20 billion (about $135 million) at a $2.65 billion post-money valuation, highlighting how quickly Sakana became a major name in Japan’s AI landscape. The same report described Sakana as founded in 2023 by former Google researchers—including David Ha—adding a talent lineage that often matters when global firms choose partners for deep technical collaboration.
In many ways, this partnership is also a signal to Japan’s enterprise buyers: Gemini is not only a global product but is being supported through local ecosystem collaboration, which can reduce perceived adoption risk for conservative organizations. That “risk reduction” effect is often underestimated in AI conversations, yet it is frequently decisive—especially when buyers are weighing an incumbent leader like ChatGPT against a challenger product that needs stronger local proof points.
This is why the ai world organisation treats these shifts as more than headlines: they shape what enterprise leaders will demand from vendors in 2026—clear deployment playbooks, dependable integrations, and measurable ROI rather than just novelty. Those themes are central to how sessions at the ai world summit are framed, because the goal is to translate AI momentum into business outcomes across regions and industries, not only to celebrate technology progress.
Sakana’s enterprise traction in finance
Sakana AI has already made meaningful inroads in Japan’s finance sector, supported by major backers such as Mitsubishi UFJ Financial Group (MUFG), and it has landed service contracts with MUFG Bank and Daiwa Securities to build AI tools. That finance footprint is significant because banks and securities firms typically represent some of the strictest environments for AI adoption, requiring careful attention to compliance, auditability, and operational continuity.
Sakana’s relationship with MUFG is not just a vague partnership claim; Sakana AI has publicly described signing a multi-year partnership agreement with MUFG Bank and working on bank-specific AI systems and AI-enabled internal workflows aligned with business objectives. MUFG has also communicated that strengthening AI capabilities and data infrastructure is part of its medium-term plan, and that leveraging Sakana’s technical capabilities is intended to elevate MUFG’s AI strategy.
When a startup proves it can deliver in financial services, it tends to unlock a broader enterprise pipeline, because other sectors treat finance adoption as a high bar for credibility. In Japan, that credibility is especially valuable: many corporate business models have been built over decades and change is often incremental, which makes trusted partners a key accelerator.
From Google’s perspective, Sakana’s enterprise traction creates a practical distribution channel for Gemini-era tooling, because any ecosystem that touches finance—risk, compliance, customer operations, internal productivity—can become an entry point for broader enterprise expansion. For Sakana, access to Alphabet’s language models can help it build more dependable services in “critical” environments, where errors, downtime, or hallucinations are not merely inconvenient but potentially costly.
This is where the conversation becomes more nuanced than “Gemini vs ChatGPT,” because enterprise deployments usually involve layered architectures: internal knowledge bases, retrieval systems, fine-tuned components, monitoring and guardrails, and domain-specific agents. If Sakana is building bank-specific AI systems as part of a multi-year transformation program, then reliability, governance, and integration quality become the differentiators, and model access becomes one lever among many to improve end-to-end results.
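To make the "layered architecture" point concrete, here is a minimal, purely illustrative Python sketch of how such a stack is often wired together: a retrieval step over an internal knowledge base, a routing step that chooses between two hypothetical model backends, and a guardrail check before anything reaches a user. All names here (query_knowledge_base, call_gemini, call_local_model, the keyword-based policy) are assumptions for illustration only, not Sakana's or Google's actual systems or APIs.

```python
from dataclasses import dataclass

# Hypothetical model backends; in a real deployment these would wrap
# vendor SDK calls (e.g. a hosted Gemini endpoint or a locally hosted model).
def call_gemini(prompt: str) -> str:
    return f"[hosted-model answer to] {prompt}"

def call_local_model(prompt: str) -> str:
    return f"[local-model answer to] {prompt}"

def query_knowledge_base(question: str) -> list[str]:
    # Stand-in for a retrieval system over internal documents.
    return ["Internal policy snippet relevant to: " + question]

@dataclass
class RoutingPolicy:
    # Toy policy: route sensitive or residency-constrained requests to the
    # local model, everything else to the hosted model.
    sensitive_keywords: tuple = ("account", "customer", "trade")

    def choose_backend(self, question: str):
        if any(k in question.lower() for k in self.sensitive_keywords):
            return call_local_model
        return call_gemini

def passes_guardrails(answer: str) -> bool:
    # Placeholder guardrail: block empty or suspiciously short answers.
    return len(answer.strip()) > 20

def answer_question(question: str, policy: RoutingPolicy) -> str:
    context = query_knowledge_base(question)   # retrieval layer
    backend = policy.choose_backend(question)  # multi-model routing layer
    prompt = "\n".join(context) + "\nQuestion: " + question
    answer = backend(prompt)                   # model layer
    if not passes_guardrails(answer):          # guardrail layer
        return "Escalated to a human reviewer."
    return answer

if __name__ == "__main__":
    policy = RoutingPolicy()
    print(answer_question("Summarize our customer onboarding checklist", policy))
```

The sketch is only meant to show where model access sits in the stack: which backend the routing policy returns is one layer among several, while the retrieval, routing, and guardrail layers are where most of the reliability and governance work lives, which matches the framing above of model access as one lever among many.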
For the ai world organisation audience, this finance story resonates because financial institutions across APAC are often among the first movers in applied AI, and their implementation lessons tend to translate directly to other regulated sectors like healthcare, telecom, and public services. It also aligns with why the ai world summit ecosystem supports practical tracks and sector-led learning, including finance-related programming within the broader summit framework.
Japan’s sovereign AI push, defense interest, and what’s next
Sakana AI’s rise is also tied to Japan’s national interest in developing AI that is built on the local language and not fully dependent on US companies, a point Bloomberg’s report underscores by noting Sakana’s early success in winning a government-backed grant. TechCrunch similarly framed Sakana’s mission around Japan-optimized models and described the company’s view that there is growing global demand for “sovereign AI” solutions that reflect national cultures and values.
Looking ahead, Sakana is aiming to work more closely with Japan’s Defense Ministry and other government agencies, while also expanding its enterprise client base and growing overseas, though it has noted that some projects may restrict the use of foreign models. That detail is important because it explains why Sakana would value “more access to more foundation models” overall—different clients, sectors, or geographies may require different model choices depending on data residency, procurement rules, or security standards.
The same Bloomberg coverage suggested that if Sakana sees progress in both defense and enterprise initiatives, the company may need more capital, which implies that this Google relationship could be one part of a broader strategic financing and partnership roadmap. TechCrunch also reported that Sakana planned to expand its enterprise business beyond finance into industrial, manufacturing, and government sectors in 2026, reinforcing a multi-sector expansion narrative rather than a single vertical bet.
For corporate Japan, the key question is not whether AI will be adopted, but how quickly organizations can modernize workflows without breaking what already works—especially when legacy processes, risk policies, and internal approvals have been refined for decades. That is why partnerships like this can matter: they reduce friction by combining global AI scale with local implementation credibility, which can move adoption from experimentation to contracts and production.
This is also precisely where the ai world organisation plays a convening role, because the hardest part of AI transformation is rarely a model demo—it is leadership alignment, governance, skill readiness, vendor selection, and execution discipline across teams. The ai world summit and broader ai world organisation events are designed around those on-the-ground realities, bringing together leaders to share what works and what fails when AI moves from pilots into critical services.