llama3-8b

MAX Model

1 version

Meta Llama 3: The most capable openly available LLM to date

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for llama3/8b:

    max-pipelines serve --huggingface-repo-id meta-llama/Meta-Llama-3-8B-Instruct

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "llama3/8b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
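The grep/sed pipeline in step 4 simply pulls the "content" fragments out of the streamed response and glues them together. As a minimal sketch of the same idea in Python, here is how you could parse OpenAI-style server-sent events ("data: {...}" lines, which is the streaming format this endpoint emits); the sample chunks below are hand-written for illustration, not real server output:

```python
import json

def extract_content(sse_lines):
    """Concatenate the assistant's text from streamed SSE chunks."""
    parts = []
    for line in sse_lines:
        line = line.strip()
        # Skip non-data lines and the end-of-stream sentinel.
        if not line.startswith("data: ") or line == "data: [DONE]":
            continue
        chunk = json.loads(line[len("data: "):])
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

# Two hand-written chunks in the streaming format:
stream = [
    'data: {"choices": [{"delta": {"content": "The Dodgers"}}]}',
    'data: {"choices": [{"delta": {"content": " won in 2020."}}]}',
    "data: [DONE]",
]
print(extract_content(stream))  # The Dodgers won in 2020.
```

In a real client you would iterate over the response body line by line (for example with `requests` and `stream=True`) instead of a hard-coded list.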

About

Llama 3

The most capable openly available LLM to date.

Meta Llama 3, developed by Meta Inc., introduces a new family of state-of-the-art large language models available in 8B and 70B parameter sizes, designed for both pre-trained and instruction-tuned use cases. These models set a new benchmark in openness and capability for large language models.

Llama 3's instruction-tuned variants are optimized for dialogues and chat applications, offering significant advancements over many other open-source models across standard benchmarks. This makes them particularly well-suited for conversational AI tasks and interactive applications.

Model Variants

Instruct models are fine-tuned for chat and dialogue applications.
Pre-trained models serve as the foundational models for various use cases.

For further details, visit Introducing Meta Llama 3: The most capable openly available LLM to date.

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for a given model on both CPU and GPU. Many of them are among the fastest available implementations of that model.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

meta-llama

MODEL

meta-llama/Meta-Llama-3-8B-Instruct

TAGS

autotrain_compatible
conversational
en
endpoints_compatible
facebook
license:llama3
llama
llama-3
meta
pytorch
region:us
safetensors
text-generation
text-generation-inference
transformers

© Copyright - Modular Inc - 2024