llama3.2-1b

MAX Model

2 versions

Meta's Llama 3.2 goes small with 1B and 3B models.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines so you can run this model:

    magic global install max-pipelines

  3. Start a local endpoint for llama3.2/1b:

    max-pipelines serve --huggingface-repo-id meta-llama/Llama-3.2-1B

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "llama3.2/1b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'

  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
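
The same request can be driven from Python. The server in step 3 speaks the OpenAI-compatible chat-completions protocol, so a small helper that parses the `data:` lines of the streamed response is all that's needed. The sketch below is an assumption-laden example, not an official client: it uses only the standard library, the endpoint URL from step 3, and the streamed-chunk shape of the OpenAI-style API.

```python
import json
import urllib.request

# Endpoint from step 3 (assumed running locally).
ENDPOINT = "http://0.0.0.0:8000/v1/chat/completions"

def extract_stream_content(sse_line: str) -> str:
    """Pull the text delta out of one 'data: {...}' line of a streamed
    chat-completions response. Returns '' for blank lines and [DONE]."""
    line = sse_line.strip()
    if not line.startswith("data:"):
        return ""
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return ""
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content", "")

def stream_chat(prompt: str) -> str:
    """Send one user message and accumulate the streamed reply."""
    body = json.dumps({
        "model": "llama3.2/1b",
        "stream": True,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    reply = []
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # the server streams one SSE line at a time
            reply.append(extract_stream_content(raw.decode("utf-8")))
    return "".join(reply)
```

With the endpoint running, `stream_chat("Who won the World Series in 2020?")` returns the assembled answer as a single string, which is more robust than the grep/sed pipeline above when the reply itself contains quotes.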

About

The Meta Llama 3.2 collection comprises state-of-the-art multilingual large language models (LLMs) available in 1B and 3B parameter sizes. These models are pretrained and instruction-tuned, specifically designed for high-performance generative tasks in text-based dialogue, including agentic retrieval and summarization. Llama 3.2 models deliver superior results on industry benchmarks, outperforming many open-source and proprietary alternatives.

Sizes

3B Parameters (Default)

The 3B model consistently outperforms competitors like Gemma 2 (2.6B) and Phi 3.5-mini in areas such as:

  • Instruction following
  • Summarization
  • Prompt rewriting
  • Tool use

1B Parameters

The 1B model offers robust performance for localized and edge-device use cases, excelling in:

  • Personal information management
  • Multilingual knowledge retrieval
  • Rewriting tasks

Benchmarks

Llama 3.2 instruction-tuned benchmarks

Supported Languages: Officially supports English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai, with broader multilingual capabilities drawn from its training dataset.

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for a given model on both CPU and GPU. Many of them are the fastest available versions of their models.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

meta-llama

MODEL

meta-llama/Llama-3.2-1B

TAGS

arxiv:2204.05149
arxiv:2405.16406
autotrain_compatible
de
en
endpoints_compatible
es
facebook
fr
hi
it
license:llama3.2
llama
llama-3
meta
pt
pytorch
region:us
safetensors
text-generation
text-generation-inference
th
transformers

© Copyright - Modular Inc - 2024