Liquid AI to Unveil First Products Built on Liquid Foundation Models (LFMs) at Exclusive MIT Event

For more information or to schedule an interview with Dr. Ramin Hasani, please contact:

Greg Matusky
CEO, Gregory FCA
greg@gregoryfca.com
267-226-9083

Mike Lizun
mike@gregoryfca.com
215-313-0441

Liquid AI, an MIT spin-off and foundation model company, will unveil its first products at an exclusive event held at MIT's Kresge Auditorium on Wednesday, October 23, 2024, from 10 am to 1 pm ET. Register now to attend in person or watch via live stream.

The event will showcase AI products for financial services, biotech, and consumer electronics, built on Liquid AI’s pioneering Liquid Foundation Models (LFMs), a new generation of generative AI models that achieve state-of-the-art performance at every scale while maintaining a significantly smaller memory footprint during both training and inference than was previously possible. This makes them especially well suited to on-device and private enterprise use cases.

More than 1,000 AI leaders, scientists, and executives are expected to attend this highly anticipated event. Liquid AI’s LFMs promise to expand the landscape of AI applications, providing powerful solutions for industries that demand greater quality, control, efficiency, and explainability in their AI systems.

Ramin Hasani, CEO and co-founder of Liquid AI, emphasized the significance of the product launch: "Our Liquid Foundation Models elevate the scaling laws for general-purpose AI systems at every scale and for any data modality. Our first series of language LFMs achieves state-of-the-art performance at every scale while maintaining a small on-device memory footprint. This opens new possibilities for real-time, local AI applications, allowing our enterprise customers to harness AI without the limitations of heavy cloud dependence or extensive memory requirements."

Liquid AI’s LFMs are designed to handle complex tasks, including multi-step reasoning and long-context recall, while remaining computationally efficient. The first series of language LFMs, available in 1B, 3B, and 40B configurations, delivers robust performance and broad knowledge capacity across domains, enabling the models to handle tasks such as question answering, translation, composition, and summarization, among others.

Key features of Liquid Foundation Models include:

  • Increased Quality for Reliable Decision-Making: LFMs offer advanced knowledge capacity, enabling them to excel in knowledge-based tasks.
  • Sustainability Through Efficiency: With reduced memory usage and near-constant inference speeds, LFMs are highly efficient for both training and deployment. Their on-device computing capabilities minimize reliance on cloud services, reducing costs and energy consumption.
  • Enhanced Explainability: Built from first principles, LFMs provide more white-box explainability than transformer-based architectures, allowing users to better understand and manage the decision-making processes of the models.

Liquid AI’s models are versatile and can be applied across industries, offering high-performance solutions for natural language processing, audio analysis, video recognition, and any sequential multimodal data.

About Liquid AI

Liquid AI is at the forefront of artificial intelligence innovation, developing foundation models that set new standards for performance and efficiency. With the mission to build capable and efficient general-purpose AI systems at every scale, Liquid AI continues to push the boundaries of what's possible in AI technology.

