(Reuters) – Meta Platforms on Saturday released the latest versions of its large language model (LLM) Llama, called Llama 4 Scout and Llama 4 Maverick.
Meta said Llama 4 is a multimodal AI system. Multimodal systems can process and integrate various types of data, including text, video, images and audio, and can convert content across these formats.
Meta said in a statement that the Llama 4 Scout and Llama 4 Maverick are its “most advanced models yet” and “the best in their class for multimodality.”
Meta added that Llama 4 Maverick and Llama 4 Scout will be open source software. It also said it was previewing Llama 4 Behemoth, which it called “one of the smartest LLMs in the world and our most powerful yet to serve as a teacher for our new models.”
Big technology firms have been investing aggressively in artificial intelligence (AI) infrastructure following the success of OpenAI’s ChatGPT, which altered the tech landscape and drove investment into machine learning.
The Information reported on Friday that Meta had delayed the launch of the latest version of its LLM because, during development, Llama 4 did not meet Meta’s expectations on technical benchmarks, particularly in reasoning and math tasks.
The company was also concerned that Llama 4 was less capable than OpenAI’s models in conducting humanlike voice conversations, the report added.
Meta plans to spend as much as $65 billion this year to expand its AI infrastructure, amid investor pressure on big tech firms to show returns on their investments.
(Reporting by Rishabh Jaiswal in Bengaluru; Editing by David Gregorio)