Available in two variants, Scout and Maverick, the model uses a mixture-of-experts (MoE) architecture that activates only a subset of its expert subnetworks for each token, making it cheaper to run than a dense model of comparable size. Here’s how to start using it.
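To make the routing idea concrete, here is a toy top-k MoE layer in PyTorch. The dimensions, expert count, and top_k value are illustrative only, not Llama 4’s actual configuration; the point is simply that the router sends each token to a few experts, so most parameters sit idle on any given forward pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy mixture-of-experts layer: a router picks top-k experts per token."""
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.router = nn.Linear(dim, num_experts)  # scores each expert per token
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, dim)
        scores = self.router(x)                          # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # best k experts per token
        weights = F.softmax(weights, dim=-1)             # normalize the k weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

tokens = torch.randn(10, 64)
print(MoELayer()(tokens).shape)  # torch.Size([10, 64]); only 2 of 8 experts ran per token
```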
Where to get it
Meta has made Llama 4 available via its GitHub repository, with access gated behind a request form intended to promote responsible usage.
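Once access is granted, one common way to run the model locally is through Hugging Face’s transformers library. The sketch below assumes that path: it presumes you have accepted Meta’s license, authenticated with a Hugging Face token (for example via `huggingface-cli login`), and that the repo ID shown matches the published checkpoint; treat both the repo ID and the hardware settings as assumptions and follow whatever instructions accompany the official release.

```python
# Sketch: loading a Llama 4 checkpoint with Hugging Face transformers.
# Assumptions: access already granted, HF token configured, and the
# repo ID below (illustrative) matches the published Scout checkpoint.
from transformers import pipeline

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed repo ID

generator = pipeline(
    "text-generation",
    model=model_id,
    device_map="auto",   # spread the weights across available GPUs
    torch_dtype="auto",  # keep the dtype the checkpoint was saved in
)

messages = [{"role": "user", "content": "Summarize mixture-of-experts in one sentence."}]
print(generator(messages, max_new_tokens=100)[0]["generated_text"])
```

Even with only a fraction of experts active per token, all of the weights must fit in memory, so multi-GPU or aggressively quantized setups are the realistic option for local use.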