llama3.2-vision-11b

MAX Model

1 version

Llama 3.2 Vision is a collection of instruction-tuned image reasoning generative models in 11B and 90B sizes.

Run this model

  1. Install our Magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for llama3.2-vision/11b:

    max-pipelines serve --huggingface-repo-id meta-llama/Llama-3.2-11B-Vision-Instruct

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl (a Python version of the same request is sketched after these steps):

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "llama3.2-vision/11b",
        "stream": true,
        "messages": [
            {
              "role": "user",
              "content": [
                {
                  "type": "text",
                  "text": "What is in this image?"
                },
                {
                  "type": "image_url",
                  "image_url": {
                    "url": "https://upload.wikimedia.org/wikipedia/commons/1/13/Tunnel_View%2C_Yosemite_Valley%2C_Yosemite_NP_-_Diliff.jpg"
                  }
                }
              ]
            }
        ]
    
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
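
If you prefer to consume the stream from code rather than shell pipes, here is a minimal Python sketch. It assumes the openai client package is installed (pip install openai) and leans on the endpoint speaking the OpenAI chat-completions protocol, as the curl example above suggests; the api_key value is a placeholder, on the assumption that a local endpoint does not validate it.

    # Minimal sketch, assuming `pip install openai` and an OpenAI-compatible
    # local endpoint. The api_key is a dummy value (assumed to be ignored).
    from openai import OpenAI

    client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

    stream = client.chat.completions.create(
        model="llama3.2-vision/11b",
        stream=True,
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is in this image?"},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": "https://upload.wikimedia.org/wikipedia/commons/1/13/Tunnel_View%2C_Yosemite_Valley%2C_Yosemite_NP_-_Diliff.jpg"
                        },
                    },
                ],
            }
        ],
    )

    # Print tokens as they arrive, replacing the grep/sed pipeline above.
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()

Setting stream=True and printing each delta reproduces the incremental output of the curl pipeline; with stream=False you would instead read response.choices[0].message.content in one piece.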

About

Llama 3.2-Vision

The Llama 3.2-Vision collection of multimodal large language models (LLMs) comes in 11B and 90B parameter sizes and is instruction-tuned for image reasoning, captioning, and answering questions about images. These models take combined text and image inputs, generate text outputs, and achieve state-of-the-art results on common benchmarks, surpassing many open-source and proprietary multimodal models.
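
For local images, OpenAI-style chat endpoints commonly accept a base64 data URL in the same image_url slot; whether this particular endpoint does is an assumption here, but the message shape is worth sketching (photo.jpg is a placeholder path):

    # Hypothetical sketch: passing a local image as a base64 data URL,
    # assuming the endpoint handles data: URLs the way the OpenAI API does.
    import base64

    with open("photo.jpg", "rb") as f:  # placeholder path
        encoded = base64.b64encode(f.read()).decode("utf-8")

    message = {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
            },
        ],
    }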

Supported Languages: For text-only tasks, the officially supported languages are English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai, though training included additional languages. For tasks involving both text and images, only English is supported.

References

GitHub

HuggingFace

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for a given model on both CPU and GPU. Many of them are the fastest version of that model in the world.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

meta-llama

MODEL

meta-llama/Llama-3.2-11B-Vision-Instruct

TAGS

arxiv:2204.05149
conversational
de
en
endpoints_compatible
es
facebook
fr
hi
image-text-to-text
it
license:llama3.2
llama
llama-3
meta
mllama
pt
pytorch
region:us
safetensors
text-generation-inference
th
transformers

© Copyright - Modular Inc - 2024