Mirai Secures $10M to Optimize On-Device AI Amid Cloud Cost Surge

TECH
By Simar Singh
Overview

London-based Mirai has secured $10 million in seed funding, led by Uncork Capital, to advance its mission of optimizing artificial intelligence models for on-device execution. Co-founded by Reface and Prisma alumni, the startup leverages its proprietary Rust-based inference engine, claiming up to 37% speed increases on Apple Silicon without compromising output quality. This move addresses the escalating costs and latency issues associated with cloud-based AI, positioning Mirai as a key enabler of more efficient, private, and responsive AI applications on consumer hardware. The broader market for on-device AI is rapidly expanding, with projections indicating significant growth driven by demand for real-time processing and enhanced data privacy.


Mirai Targets On-Device AI Efficiency with $10M Seed Round

The burgeoning field of artificial intelligence continues its rapid expansion, yet the conversation often centers on the immense cloud infrastructure required to power advanced models. Mirai, a London-based startup founded by Dima Shvets and Alexey Moiseenkov—veterans of successful consumer AI applications like Reface and Prisma—is deliberately charting a different course. The company announced a $10 million seed funding round, spearheaded by Uncork Capital, to bolster its efforts in making AI models perform more effectively and efficiently directly on user devices, such as smartphones and laptops.

This funding injection directly addresses a critical market gap: the economic and performance limitations of relying exclusively on cloud-based AI inference. As generative AI adoption accelerates, the substantial costs associated with cloud compute, data transfer, and energy consumption are becoming increasingly apparent. Mirai's strategy leverages the growing power of consumer hardware, offering a compelling alternative to expensive, latency-prone cloud solutions.

The Economic Imperative for Edge AI

The current AI paradigm is characterized by enormous cloud computing expenditures. For example, running AI inference on cloud servers, particularly for tasks like retail video analytics, can incur millions of dollars annually for large organizations. A single NVIDIA A100 GPU instance can cost upwards of $40,000 per year for continuous operation, and data transfer costs alone can add significant financial burdens. This economic pressure is driving a shift towards edge computing solutions. Studies indicate that moving AI inference to consumer devices like smartphones can drastically reduce energy consumption by up to 95% and carbon footprints by as much as 88% compared to cloud processing. The on-device AI market is projected to grow exponentially, reaching an estimated $124 billion by 2032, signaling a strong market appetite for localized AI. Andy McLoughlin of Uncork Capital highlights this trend, noting that while current venture capital flows into 'rocketship' cloud companies, the focus will inevitably shift to underlying business economics, creating a fertile ground for edge AI specialists like Mirai [Source A].
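The scale of the savings described above can be made concrete with a back-of-envelope calculation. The sketch below is purely illustrative: the fleet size is a hypothetical assumption, and the per-instance cost and energy-reduction figure are simply the numbers cited in this section, not measured data.

```python
# Illustrative cloud-vs-edge cost arithmetic using the figures cited above.
# GPU_INSTANCES is a hypothetical fleet size; the other constants come from
# the article's claims and are assumptions, not benchmarks.

CLOUD_GPU_COST_PER_YEAR = 40_000  # one continuously running A100 instance (USD/year)
GPU_INSTANCES = 25                # hypothetical fleet for a large deployment
ENERGY_REDUCTION = 0.95           # claimed energy saving from on-device inference

def annual_cloud_cost(instances: int, cost_per_instance: float) -> float:
    """Annual compute spend for a fleet of always-on cloud GPU instances."""
    return instances * cost_per_instance

cloud_cost = annual_cloud_cost(GPU_INSTANCES, CLOUD_GPU_COST_PER_YEAR)
print(f"Annual cloud compute: ${cloud_cost:,.0f}")                      # $1,000,000
print(f"Energy footprint retained on-device: {1 - ENERGY_REDUCTION:.0%}")  # 5%
```

Even at this modest hypothetical scale, always-on cloud inference reaches seven figures annually, which is the economic pressure the article argues is pushing workloads toward consumer hardware.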

Mirai's Performance Edge: Rust and Apple Silicon

Mirai's core innovation lies in its proprietary inference engine, engineered in the Rust programming language. Rust is renowned for its performance, memory safety, and concurrency guarantees, making it well suited to demanding AI workloads. The company claims its engine can boost AI model generation speed by up to 37% on Apple Silicon while preserving output quality—a crucial distinction from many optimization techniques that trade accuracy for speed. Unlike competitors whose engines are general-purpose abstractions adapted downward for broad hardware compatibility, Mirai's stack was designed specifically for Apple Silicon, allowing deep hardware-aware optimization of execution, memory management, and scheduling. This approach aims to deliver near-zero latency and robust privacy guarantees, essential for sensitive consumer applications. The forthcoming SDK targets a 'Stripe-like, eight lines of code' integration experience, simplifying the adoption of on-device AI functionality in existing apps.

Competitive Landscape and Foundational Strengths

Mirai enters a competitive but rapidly expanding market. Companies like Quadric are also developing inference engines for on-device AI chips, having recently secured $30 million in Series C funding to accelerate their efforts in edge LLMs and automotive applications. Modal Labs, another inference optimization startup, is reportedly nearing a $2.5 billion valuation. While tech giants like Apple and Qualcomm integrate AI capabilities into their hardware, startups like Mirai are creating specialized software layers that unlock the full potential of these devices. The founders' track record is a significant asset; Dima Shvets co-founded Reface, which boasts over 300 million global downloads and recently secured $18 million for user acquisition. Alexey Moiseenkov co-founded Prisma, a widely recognized AI filter application [Source A]. Their deep understanding of consumer application development and the nuances of AI deployment on mobile platforms provides Mirai with a strong foundation.

The Forensic Bear Case

Despite the promising technological advancements and market tailwinds, Mirai faces significant hurdles. The on-device AI market, while growing, is fiercely competitive, with major hardware manufacturers like Apple and Qualcomm integrating their own AI acceleration and frameworks like Core ML and Snapdragon AI Engine. Mirai's initial focus on Apple Silicon, while strategic for performance, presents a dependency risk; expanding to Android and other platforms will require substantial engineering effort and adaptation. Furthermore, convincing developers to adopt a new SDK, even with simplified integration, requires demonstrating tangible benefits over existing solutions and established cloud-based AI services. The claim of a 37% speed increase, while notable, needs to be validated across a broad range of models and use cases under real-world conditions, not just synthetic benchmarks. The inherent complexity of AI model optimization and deployment means that achieving Mirai's ambitious integration targets without compromising model quality or introducing new performance bottlenecks will be an ongoing challenge. The success of Mirai's Rust-based engine will also hinge on the continued maturity and adoption of Rust within the broader AI development community, which is still largely Python-centric for prototyping and research.

Future Outlook

Mirai aims to create an "LLM OS" that integrates optimized inference, models, and deployment into a reliable on-device foundation. The company plans to support text and voice modalities initially, with future expansion into vision. They are also developing an orchestration layer for hybrid cloud-device operations, acknowledging that not all AI tasks can or should reside solely on the edge. This forward-looking approach, coupled with its experienced founding team and a clear focus on addressing the economic and technical inefficiencies of cloud AI, positions Mirai to capture a significant share of the rapidly evolving on-device AI market.


Disclaimer: This content is for educational and informational purposes only and does not constitute investment, financial, or trading advice, nor a recommendation to buy or sell any securities. Readers should consult a SEBI-registered advisor before making investment decisions, as markets involve risk and past performance does not guarantee future results. The publisher and authors accept no liability for any losses. Some content may be AI-generated and may contain errors; accuracy and completeness are not guaranteed. Views expressed do not reflect the publication’s editorial stance.