Summary

  • Liquid AI, a startup spun out of the Massachusetts Institute of Technology (MIT), has created a new model based on convolution algorithms that could make artificial intelligence (AI) faster and less power-hungry, the company announced ahead of the International Conference on Learning Representations (ICLR) in Vienna, Austria.
  • Liquid AI’s new multi-hybrid model, Hyena Edge, designed to run on smartphones and other edge devices, has been benchmarked as roughly 30% faster, with a smaller memory footprint, than comparable Transformer-based models.
  • The Transformer architecture that dominates large language models (LLMs), such as Google’s Gemini and OpenAI’s GPT, has traditionally struggled to balance computational efficiency and response time against model quality.
  • The new model is particularly efficient, matching or exceeding its Transformer counterparts on a series of standard language benchmarks with no loss of predictive quality, the company said.

By Carl Franzen