Multiverse Computing Releases Free, Compressed AI Model: A Game Changer for Enterprise Deployment

Phucthinh

Large language models (LLMs) are transforming industries, but their immense size is a significant barrier to widespread adoption. The computational resources and costs of deploying and running these models are prohibitive for many organizations. Spanish startup Multiverse Computing is tackling this challenge head-on with its CompactifAI technology, offering compressed AI models that bridge the gap between cutting-edge capability and practical affordability. The move is particularly significant as businesses increasingly seek to leverage the power of AI without breaking the bank. As of today, developers can download a new version of Multiverse’s HyperNova 60B model for free on Hugging Face, signaling a commitment to democratizing access to powerful AI tools. The company also plans to open source more compressed models in 2026, further expanding its reach.

The Problem with Large Language Models: Size and Cost

The current landscape of LLMs is dominated by behemoths like OpenAI’s GPT-4 and Google’s Gemini. While these models demonstrate impressive performance, their sheer size – often exceeding hundreds of billions of parameters – translates into substantial infrastructure requirements: expensive GPUs, large memory capacity, and significant energy consumption. For many companies, especially small and medium-sized enterprises (SMEs), these costs are simply unsustainable. Furthermore, the latency of running such large models can hinder real-time applications.

CompactifAI: A Quantum-Inspired Solution

Multiverse Computing’s CompactifAI technology offers a compelling solution. Inspired by principles from quantum computing, this compression technique reduces the size of LLMs without significantly sacrificing performance or accuracy. The core innovation lies in a novel approach to model pruning and quantization, allowing Multiverse to create models that are smaller, faster, and more efficient. This isn’t simply about shrinking the model; it’s about intelligently reducing its complexity while preserving its core intelligence.
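Multiverse has not published CompactifAI’s internals, so as a rough illustration of the generic pruning and quantization ideas mentioned above, here is a minimal NumPy sketch. The functions, matrix sizes, and thresholds are illustrative assumptions, not Multiverse’s actual method:

```python
import numpy as np

# Toy weight matrix standing in for one layer of an LLM.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8)).astype(np.float32)

def prune(weights, sparsity):
    """Magnitude pruning: zero out the lowest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize(weights):
    """Symmetric int8 quantization: returns (int8 weights, scale),
    chosen so that weights ≈ q * scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

W_pruned = prune(W, sparsity=0.5)          # half the weights dropped
q, scale = quantize(W_pruned)              # 4x smaller storage than float32
W_restored = q.astype(np.float32) * scale  # approximate reconstruction

print("sparsity:", np.mean(W_pruned == 0))
print("max reconstruction error:", np.abs(W_restored - W_pruned).max())
```

The intuition the paragraph describes is visible even in this toy: storage shrinks substantially while the reconstructed weights stay close to the originals, and the engineering challenge is keeping that approximation error from degrading model quality.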

HyperNova 60B: Performance and Accessibility

The HyperNova 60B model is a prime example of CompactifAI in action. At 32GB, it’s roughly half the size of OpenAI’s gpt-oss-120b, the model it’s derived from. This reduction in size translates to lower memory usage and reduced latency, making it more suitable for deployment on a wider range of hardware. The updated version, HyperNova 60B 2602, further enhances functionality with improved support for tool calling and agentic coding – crucial capabilities for complex AI applications where inference costs can quickly escalate.

Key benefits of HyperNova 60B include:

  • Reduced Size: 32GB, roughly half the footprint of its gpt-oss-120b base model.
  • Lower Latency: Faster response times for real-time applications.
  • Improved Efficiency: Reduced computational costs.
  • Enhanced Functionality: Better support for tool calling and agentic coding.

Multiverse vs. Mistral AI: A European AI Rivalry

Multiverse isn’t alone in its pursuit of accessible AI. French AI company Mistral AI, with its Mistral Large 3 model, is a direct competitor. Multiverse claims that HyperNova 60B outperforms Mistral Large 3 in certain benchmarks, demonstrating the effectiveness of its compression technology. However, beyond the technological competition, both companies share a common ground.

Both Multiverse and Mistral have expanded beyond their home countries, establishing offices in the United States, Canada, and across Europe. They both cater to enterprise customers, offering tailored AI solutions for specific business needs. Multiverse’s client roster includes prominent organizations like Iberdrola, Bosch, and the Bank of Canada, highlighting its growing credibility in the enterprise space.

Funding and Growth: A Potential Unicorn in the Making

While not yet officially a unicorn (a privately held startup valued at over $1 billion), Multiverse Computing is reportedly on the cusp of achieving this milestone. Rumors suggest the company is currently raising a €500 million funding round at a valuation exceeding €1.5 billion. In a statement to GearTech, Multiverse confirmed ongoing discussions with potential investors but refrained from commenting on specific valuation or funding details. The company also declined to confirm reports of reaching €100 million in annual recurring revenue (ARR) in January.

These figures are still far smaller than OpenAI’s reported $20 billion ARR, but they are closing in on Mistral AI’s ARR of over $400 million. This growth is fueled by increasing demand for alternatives to U.S.-dominated AI technologies. Multiverse strategically positions itself as a provider of “sovereign solutions across the AI stack,” appealing to organizations seeking greater control and independence over their AI infrastructure.

Geopolitical Significance and Government Support

The geopolitical implications of AI are becoming increasingly apparent. Multiverse’s focus on sovereign AI solutions has resonated with governments seeking to reduce reliance on foreign technology. This has led to collaborations like the one with the regional government of Aragón in northeastern Spain. The Spanish Agency for Technological Transformation (SETT) also participated in Multiverse’s $215 million Series B funding round last year. Furthermore, the company has consistently benefited from support from the Basque region, potentially paving the way for the region’s first unicorn.

The Rise of European AI

The success of companies like Multiverse and Mistral AI underscores the growing strength of the European AI ecosystem. Driven by a combination of government support, innovative research, and a commitment to ethical AI principles, Europe is emerging as a significant player in the global AI landscape. This is particularly important in a world where AI is increasingly seen as a strategic asset.

Looking Ahead: Open Source and the Future of Compressed AI

Multiverse Computing’s commitment to accessibility extends beyond offering a free version of HyperNova 60B. The company plans to open source more compressed models in 2026, further empowering developers and researchers. This move will likely accelerate innovation in the field of compressed AI and drive down the cost of deploying LLMs. The future of AI is not just about building bigger and more powerful models; it’s about making those models accessible to everyone. Multiverse Computing, with its innovative CompactifAI technology and commitment to open source, is playing a crucial role in shaping that future.

Key trends to watch in the compressed AI space:

  • Continued advancements in compression techniques: Expect further improvements in model pruning, quantization, and knowledge distillation.
  • Increased adoption of edge AI: Compressed models will enable more AI applications to run directly on devices, reducing latency and improving privacy.
  • Growing demand for sovereign AI solutions: Organizations will increasingly seek AI solutions that offer greater control and independence.
  • The rise of specialized compressed models: Models tailored to specific tasks and industries will become more prevalent.
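One technique on that list, knowledge distillation, is worth a concrete sketch: a small "student" model is trained to match the temperature-softened output distribution of a large "teacher." Below is a minimal NumPy version of the classic distillation loss (the logits and temperature are made-up illustrative values):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return (temperature ** 2) * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()

teacher = np.array([[4.0, 1.0, -2.0]])  # confident large model
student = np.array([[3.5, 1.2, -1.0]])  # smaller model, slightly off
print(distillation_loss(student, teacher))  # small positive loss to minimize
```

Minimizing this loss pushes the student toward the teacher’s full probability distribution rather than just its top answer, which is part of why distilled models retain more capability than their size suggests.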

The release of the free, compressed HyperNova 60B model is a significant step forward in making powerful AI technology more accessible. Multiverse Computing’s innovative approach, coupled with its strategic vision, positions the company as a leader in the rapidly evolving world of artificial intelligence.
