Meta released a new collection of artificial intelligence models in the Llama family on Saturday.
There are four new models in total: Llama 4 Scout, Llama 4 Maverick, and Llama 4 Behemoth. Meta says all of them were trained on “large amounts of unlabeled text, image, and video data” to give them “broad visual understanding.”
DeepSeek accelerated the competition
The success of open models from DeepSeek, the Chinese AI lab, which reportedly perform on par with or better than Meta’s previous flagship models, is said to have accelerated Llama’s development. Meta reportedly scrambled war rooms to figure out how DeepSeek lowered the cost of running and deploying models like R1 and V3.
Scout and Maverick are available now, Behemoth is on the way
Scout and Maverick are openly available from Meta’s partners, including the AI dev platform Hugging Face, while Behemoth is still in training. Meta says its AI-powered assistant, Meta AI, has been updated to use Llama 4 in apps such as WhatsApp, Messenger, and Instagram across 40 countries. Multimodal features are limited to the U.S., in English, for now.
Some developers may take issue with the Llama 4 license.
Restrictions and license issues in the EU
Users and companies residing or with a principal place of business in the EU are prohibited from using or distributing the models, likely a result of the governance requirements imposed by the region’s AI and data privacy laws. (In the past, Meta has criticized these laws as overly burdensome.) In addition, as with previous Llama releases, companies with more than 700 million monthly active users must request a special license from Meta, which Meta can grant or deny at its sole discretion.
“These Llama 4 models mark the beginning of a new era for the Llama ecosystem,” Meta wrote in a blog post. “This is just the beginning for the Llama 4 collection.”
Meta says that Llama 4 is its first group of models to use a mixture of experts (MoE) architecture, which is more computationally efficient for training and for answering queries. MoE architectures essentially break data processing tasks into subtasks and then delegate them to smaller, specialized “expert” models.
For example, Maverick has 400 billion total parameters, but only 17 billion active parameters spread across 128 “experts.” (Parameters roughly correspond to a model’s problem-solving ability.) Scout has 17 billion active parameters, 16 experts, and 109 billion total parameters.
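To make the MoE idea concrete, the toy Python sketch below routes each token to a single small “expert” feed-forward block chosen by a gating network, so only a fraction of the total parameters does any work for a given token. This is an illustrative sketch under assumed sizes and a simple top-1 routing rule; it is not Meta’s implementation, and the dimensions and expert count here are hypothetical.

# Toy mixture-of-experts (MoE) layer: one small "expert" MLP runs per token,
# so active parameters per token are far fewer than total parameters.
# Illustrative only; sizes and top-1 routing are hypothetical, not Llama 4's.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden, n_experts = 64, 256, 8

# Gating network: scores each token against every expert.
W_gate = rng.normal(scale=0.02, size=(d_model, n_experts))

# Each expert is an independent two-layer feed-forward network.
experts = [
    (rng.normal(scale=0.02, size=(d_model, d_hidden)),
     rng.normal(scale=0.02, size=(d_hidden, d_model)))
    for _ in range(n_experts)
]

def moe_layer(x):
    """x: (n_tokens, d_model) -> (n_tokens, d_model), one expert per token."""
    scores = x @ W_gate                      # (n_tokens, n_experts)
    chosen = scores.argmax(axis=-1)          # top-1 expert index per token
    out = np.empty_like(x)
    for e, (W1, W2) in enumerate(experts):
        mask = chosen == e
        if mask.any():
            h = np.maximum(x[mask] @ W1, 0)  # ReLU feed-forward for this expert
            out[mask] = h @ W2
    return out

tokens = rng.normal(size=(10, d_model))
print(moe_layer(tokens).shape)               # (10, 64)

Scaled up, this routing pattern is why a model can carry hundreds of billions of parameters in total while activating only a small subset per token.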
According to Meta’s internal tests, Maverick, which the company says is best suited for “general assistant and chat” uses such as creative writing, outperforms OpenAI’s GPT-4o and Google’s Gemini 2.0. However, Maverick does not quite measure up to more capable recent models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and OpenAI’s GPT-4.5.
Scout’s strengths lie in tasks such as document summarization and reasoning over large codebases. Uniquely, it has a very large context window: 10 million tokens. (“Tokens” represent bits of raw text, e.g., the word “fantastic” split into “fan,” “tas,” and “tic.”)
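For readers unfamiliar with tokens, the short Python sketch below shows how text gets broken into tokens and why a context window is really a token budget rather than a character or page count. It uses the open-source tiktoken library purely as a stand-in tokenizer; Llama 4’s actual tokenizer and vocabulary are not described in this article and will differ.

# Rough illustration of tokenization and context windows.
# tiktoken is used here as a stand-in tokenizer; Llama 4's real tokenizer differs.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Scout has a context window of ten million tokens."
token_ids = enc.encode(text)

print(f"{len(token_ids)} tokens for {len(text)} characters")
for tid in token_ids:
    # Each id maps back to a small piece of raw text (often a word fragment).
    print(tid, repr(enc.decode([tid])))

# A 10-million-token context window means the prompt plus any attached documents
# must fit within that token budget, not a character or page count.
CONTEXT_WINDOW = 10_000_000
print("Fits in a Scout-sized window:", len(token_ids) <= CONTEXT_WINDOW)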
According to Meta’s calculations, Scout can run on a single Nvidia H100 GPU, while Maverick requires an Nvidia H100 DGX system or equivalent.
Meta’s unreleased Behemoth will need even beefier hardware. According to the company, Behemoth has 288 billion active parameters, 16 experts, and nearly two trillion total parameters. Meta’s internal benchmarking shows Behemoth outperforming GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Pro (but not 2.5 Pro) on several evaluations that measure STEM skills such as math problem solving.
Notably, none of the Llama 4 models is a proper “reasoning” model along the lines of OpenAI’s o1 and o3-mini. Reasoning models check their answers and often respond to questions more reliably, but as a result they take longer than traditional, non-reasoning models to deliver answers.
Interestingly, Meta says it tuned all of the Llama 4 models to refuse to answer “contentious” questions less often. According to the company, Llama 4 responds to debated political and social topics that previous Llama models would not answer. In addition, the company says Llama 4 is “dramatically more balanced” about which prompts it flat-out refuses to entertain.
“You can trust Llama 4 to provide factual answers without judgment,” a Meta spokesperson told TechCrunch. “We are continuing to make Llama more responsive so that it answers more questions, can respond to a variety of different viewpoints (…) and doesn’t favor some views over others.”
These changes come after some White House allies accused AI chatbots of being too “woke.”
Many of President Donald Trump’s close confidants, including billionaire Elon Musk and crypto and AI “czar” David Sacks, have claimed that popular AI chatbots censor conservative views. Sacks has previously described OpenAI’s ChatGPT as “programmed to be woke” and untruthful on political issues.
In reality, bias in AI is a technical problem that is difficult to solve. Musk’s own AI company, xAI, has struggled to create a chatbot that does not endorse some political views over others.
That has not stopped companies such as OpenAI from adjusting their AI models to answer more questions, particularly questions about controversial subjects.