llama3.1-8b

MAX Model

1 version

Llama 3.1 is a new state-of-the-art model from Meta available in 8B, 70B and 405B parameter sizes.

Run this model

  1. Install our Magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for llama3.1/8b:

    max-serve serve --huggingface-repo-id meta-llama/Llama-3.1-8B-Instruct

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "llama3.1/8b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
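The same request can be made from Python. This is a minimal sketch, assuming the local endpoint speaks the OpenAI-compatible server-sent-events streaming format shown in step 4; the `extract_content` helper name is ours, and it plays the same role as the grep/sed pipeline above, pulling each text delta out of a streamed chunk.

```python
import json
import urllib.request

# Local MAX endpoint started in step 3.
API_URL = "http://0.0.0.0:8000/v1/chat/completions"

def extract_content(sse_line: str) -> str:
    """Pull the text delta out of one server-sent-events line.

    Streamed chunks arrive as lines like:
        data: {"choices": [{"delta": {"content": "Hello"}}]}
    The stream ends with the sentinel line "data: [DONE]".
    """
    line = sse_line.strip()
    if not line.startswith("data:"):
        return ""
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return ""
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content", "")

def chat(prompt: str) -> None:
    """Send a streaming chat request and print tokens as they arrive."""
    body = json.dumps({
        "model": "llama3.1/8b",
        "stream": True,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }).encode()
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            print(extract_content(raw.decode()), end="", flush=True)
    print()

if __name__ == "__main__":
    chat("Who won the World Series in 2020?")
```

Setting `"stream": false` instead returns a single JSON body whose answer lives at `choices[0].message.content`.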

About

Meta Llama 3.1

Llama 3.1 family of models available:

  • 8B
  • 70B
  • 405B

Llama 3.1 405B is the first openly available AI model with state-of-the-art capabilities in general knowledge, steerability, mathematics, tool use, and multilingual translation. The enhanced 8B and 70B models are multilingual, boast a longer 128K context length, and exhibit advanced reasoning skills, enabling use cases like long-form summarization, multilingual conversational agents, and coding assistance.

Meta has updated its licensing to allow developers to use Llama model outputs, including those of 405B, to improve other models. The 3.1 release was evaluated on over 150 benchmark datasets across many languages, supplemented by human assessments in real-world scenarios. Results show Llama 3.1 is competitive with top-tier models like GPT-4, GPT-4o, and Claude 3.5 Sonnet, while its smaller models are competitive with closed and open models of similar size.


DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for their model on both CPU and GPU. Many are among the fastest implementations of their model available anywhere.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

meta-llama

MODEL

meta-llama/Llama-3.1-8B-Instruct

TAGS

arxiv:2204.05149
autotrain_compatible
base_model:finetune:meta-llama/Llama-3.1-8B
base_model:meta-llama/Llama-3.1-8B
conversational
de
en
endpoints_compatible
es
facebook
fr
hi
it
license:llama3.1
llama
llama-3
meta
pt
pytorch
region:us
safetensors
text-generation
text-generation-inference
th
transformers

© Copyright - Modular Inc - 2024