
Meta releases Llama 4, a new crop of AI models


Meta has released a new collection of AI models, Llama 4, in its Llama family — on a Saturday, no less.

The new models are Llama 4 Scout, Llama 4 Maverick, and Llama 4 Behemoth. Meta says all of them were trained on “large amounts” of unlabeled image, video, and text data to give them “broad visual comprehension.”

The success of DeepSeek’s open models, which perform on par with Meta’s previous flagship Llama models, has reportedly kicked Llama development into overdrive. Meta reportedly scrambled war rooms to figure out how DeepSeek reduced the cost of running and deploying models like R1 and V3.

Scout and Maverick are available on Llama.com and from Meta’s partners, including Hugging Face, while Behemoth remains in training. Meta says that Meta AI, its AI-powered assistant across apps like WhatsApp, Messenger, and Instagram, has been updated to use Llama 4 in 40 countries. For now, multimodal features are limited to English-language users in the U.S.

Llama 4’s license may be a problem for some developers.

Users and companies “domiciled” or with a “principal place of business” in the EU are prohibited from using or distributing the models. This is likely due to the governance requirements imposed by the region’s AI and data privacy laws, which Meta has in the past criticized as overly burdensome. As with previous Llama releases, companies with more than 700 million monthly active users must request a special license, which Meta can grant or deny at its discretion. Meta wrote in its blog post that “These Llama 4 models mark the beginning of a new era for the Llama ecosystem.”

“This is only the beginning of the Llama 4 Collection.”

Image credits: Meta.

Meta says the Llama 4 models are its first cohort to use a mixture-of-experts (MoE) architecture, which is more computationally efficient for training and answering queries. MoE architectures break data-processing tasks down into subtasks and then delegate them to smaller, specialized “expert” models. Maverick has 400 billion total parameters, but only 17 billion active parameters spread across 128 “experts.” Parameters roughly correspond to a model’s problem-solving ability. Scout has 17 billion active parameters, 16 experts, and 109 billion total parameters.
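To make the mixture-of-experts idea concrete, here is a minimal, hypothetical sketch in PyTorch of token-level routing to a handful of small experts. The layer sizes, expert count, and top-k routing choice are illustrative assumptions, not Meta’s actual Llama 4 implementation.

```python
# Minimal mixture-of-experts sketch (illustrative; not Meta's Llama 4 code).
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)   # scores every expert for each token
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        self.top_k = top_k

    def forward(self, x):                            # x: (num_tokens, dim)
        scores = self.router(x)                      # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)            # normalize the chosen experts' weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e             # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

# Only the routed experts run for each token, which is why a model's "active"
# parameter count can be far smaller than its total parameter count.
layer = ToyMoELayer()
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```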

According to Meta’s internal tests, Maverick beats models such as OpenAI’s GPT-4o and Google’s Gemini 2.0 on certain coding, reasoning, and multilingual benchmarks. However, Maverick is not as capable as more recent models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and OpenAI’s GPT-4.5. Scout’s strengths lie in tasks such as document summarization and reasoning over large codebases, and it has a very large context window of 10 million tokens. (Tokens represent bits of raw text, e.g. the word “fantastic” split into “fan,” “tas,” and “tic.”) In plain terms, Scout can take in and work with extremely long documents. According to Meta’s calculations, Scout can run on a single Nvidia H100 GPU, whereas Maverick needs an Nvidia DGX H100 system or equivalent.
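For a rough sense of what tokens are and how a context window limit applies, here is a small sketch using the open-source tiktoken library as a stand-in tokenizer; Llama 4 uses its own tokenizer, so the exact splits and counts here are assumptions for illustration only.

```python
# Illustrative tokenization sketch using tiktoken as a stand-in tokenizer;
# Llama 4's real tokenizer will produce different splits and counts.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# A tokenizer breaks raw text into subword pieces ("tokens").
pieces = [enc.decode([t]) for t in enc.encode("fantastic")]
print(pieces)

# A 10-million-token context window caps how much text fits in one prompt.
CONTEXT_WINDOW = 10_000_000

def fits_in_context(text: str) -> bool:
    """Rough check: does this document fit in the model's context window?"""
    return len(enc.encode(text)) <= CONTEXT_WINDOW

print(fits_in_context("An extremely long document... " * 1000))
```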

Meta’s unreleased Behemoth requires even more powerful hardware. According to the company, Behemoth has 288 billion active parameters, 16 experts, and nearly two trillion total parameters. Meta’s internal benchmarking shows Behemoth outperforming GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Pro (but not 2.5 Pro) on several evaluations measuring STEM skills, such as math problem solving.

It is worth noting that none of the Llama 4 models is a “reasoning” model in the same way as OpenAI’s o1 and o3-mini. Reasoning models fact-check their answers and tend to respond more reliably, but they take longer than traditional models to deliver answers.

(Image credits: Meta)

Meta claims that all of its Llama 4 models refuse to answer “contentious” questions less often. Llama 4 responds to “debated” political and social topics that previous Llama models would not. The company also claims that Llama 4 is “dramatically more balanced” in which prompts it declines to engage with.

A Meta spokesperson told TechCrunch that Llama 4 will provide factual, helpful responses without judgment. “[W]e’re continuing to improve Llama so that it can answer more questions, respond to a variety […] of different viewpoints and doesn’t prefer some views over others.” David Sacks, the White House’s AI czar, has in the past singled out OpenAI’s ChatGPT for being “programmed” to be “woke” and untruthful on political subject matter.

Bias in AI remains an intractable technical problem. Elon Musk’s own AI company, xAI, has struggled to develop a chatbot that does not endorse certain political views over others.

This hasn’t prevented companies like OpenAI from adjusting AI models to answer a greater number of questions, especially questions related to controversial topics.
