Meta AI Just Released Llama 4 Scout and Llama 4 Maverick: The Llama 4 Models

Today, Meta AI announced the release of its latest generation of multimodal models, Llama 4, in two variants: Llama 4 Scout and Llama 4 Maverick. These models represent a significant technological advance in multimodal AI, offering improved capabilities for both text and image understanding.
Llama 4 Scout is a 17-billion-active-parameter model built on a mixture-of-experts architecture with 16 experts. It introduces an extensive context window capable of holding up to 10 million tokens. This substantial contextual capacity lets the model manage and interpret very long inputs effectively, benefiting long-document processing, complex codebases, and extended dialogue tasks. In comparative evaluations, Llama 4 Scout outperformed contemporary models such as Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 on widely reported benchmarks.
Alongside Scout, Llama 4 Maverick is also built on 17 billion active parameters, but with 128 experts, and is explicitly designed for strong visual grounding. This design enables precise alignment between text prompts and the relevant visual elements, so responses can be accurately anchored to specific regions of an image. Maverick shows strong results in comparative evaluations, surpassing GPT-4o and Gemini 2.0 Flash, especially on multimodal reasoning tasks. It also achieves results comparable to DeepSeek v3 on reasoning and coding benchmarks while using fewer than half the active parameters.
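Meta has not published Llama 4's routing code, so the following is only a minimal sketch of the general mixture-of-experts idea behind both models: a router scores all experts per token, but only the top-k experts actually run, which is why total parameters can be large while active parameters stay at 17 billion. All names, shapes, and the toy expert functions are hypothetical.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=1):
    """Route one token's hidden state through the top-k of many experts.

    x:       (d,) token hidden state
    gate_w:  (d, n_experts) router weights
    experts: list of callables, each mapping (d,) -> (d,)
    Only the k selected experts execute, keeping active compute small.
    """
    logits = x @ gate_w                       # router score per expert
    top = np.argsort(logits)[-k:]             # indices of the top-k experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                  # softmax over selected experts
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy demo: 4 experts over 8-dim states, routing through the single best expert.
rng = np.random.default_rng(0)
d, n = 8, 4
experts = [(lambda W: (lambda x: x @ W))(rng.normal(size=(d, d))) for _ in range(n)]
gate_w = rng.normal(size=(d, n))
y = moe_forward(rng.normal(size=d), gate_w, experts, k=1)
print(y.shape)  # (8,)
```

With k=1 only one of the four expert matrices is multiplied per token, mirroring how Scout (16 experts) and Maverick (128 experts) keep the same 17B active-parameter budget despite very different total sizes.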
A key feature of Maverick is its notable performance-to-cost efficiency. Benchmarking on the LMArena platform recorded an ELO rating of 1417 for the chat-optimized version of Maverick, indicating its computational efficiency and practical applicability in conversational and multimodal settings.
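For context on the 1417 figure: arena-style leaderboards aggregate many pairwise human preference votes into Elo-style ratings. LMArena's actual pipeline is more involved than classic Elo, but the core update after a single head-to-head comparison can be sketched as follows (the k-factor and function name are illustrative, not LMArena's):

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One classic Elo update after a head-to-head comparison.

    score_a is 1.0 if model A wins, 0.5 for a tie, 0.0 if it loses.
    A's expected score follows the logistic curve on the rating gap.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    new_a = r_a + k * (score_a - expected_a)
    new_b = r_b - k * (score_a - expected_a)  # zero-sum: B loses what A gains
    return new_a, new_b

# Two equally rated models; A wins, so A gains exactly k/2 = 16 points.
a, b = elo_update(1500.0, 1500.0, 1.0)
print(a, b)  # 1516.0 1484.0
```

A higher rating like 1417 therefore reflects a sustained win rate against other ranked models across many such votes, not a single benchmark score.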

The development of Scout and Maverick draws on distillation from Meta's more powerful model, Llama 4 Behemoth, which is still in active training. Early results indicate that Behemoth holds significant advantages over established models such as GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro. Insights and methods from Behemoth's training played a role in refining the technical capabilities of Scout and Maverick.
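Meta has not disclosed the exact distillation objective used to transfer Behemoth's capabilities into Scout and Maverick. A common formulation, sketched here purely as an assumption, is the temperature-scaled KL-divergence loss of Hinton et al., where the student is trained to match the teacher's softened output distribution:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature T > 1 softens the distribution, exposing the teacher's
    # relative preferences among wrong answers ("dark knowledge").
    z = z / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    The T*T factor keeps gradient magnitudes comparable across temperatures.
    This is the classic objective, not Meta's published recipe.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

loss = distillation_loss(np.array([1.0, 0.5, -0.2]),
                         np.array([2.0, 0.1, -1.0]))
print(loss > 0.0)  # True: student and teacher distributions differ
```

The loss is zero when the student reproduces the teacher's distribution exactly and grows as the two diverge, which is what lets a smaller active-parameter model inherit behavior from a much larger one.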
With the introduction of Llama 4, Meta AI advances multimodal AI through highly refined and technologically sophisticated models offering deep semantic understanding and precise cross-modal alignment. The release further illustrates Meta AI's ongoing commitment to fostering innovation and maintaining open access for researchers, developers, and enterprise applications.
Once Llama 4 Behemoth is finalized and publicly released, further progress in multimodal AI is expected. Initial results suggest Behemoth has the potential to set new standards in multimodal performance, especially in STEM applications and computational reasoning tasks. Meta AI plans to disclose detailed technical specifications and performance figures once the Behemoth model is complete.
The announcement underscores Meta AI's commitment to pushing the technological frontier of multimodal modeling, supporting the evolution of practical and research-oriented AI applications across sectors including scientific research, education, and complex dialogue systems. As Meta AI continues on this trajectory, the advances embodied in Llama 4 Scout and Maverick are expected to drive substantial progress in both the computational and practical capabilities of multimodal AI.
Check out the benchmarks and download Llama 4. All credit for this research goes to the researchers on the project.
The post "Meta AI Just Released Llama 4 Scout and Llama 4 Maverick: The Llama 4 Models" first appeared on Marktechpost.