llama3-gradient-8b

MAX Model

1 version

This model extends Llama-3 8B's context length from 8k to over 1M tokens.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for llama3-gradient/8b:

    max-serve serve --huggingface-repo-id gradientai/Llama-3-8B-Instruct-Gradient-1048k

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl (a Python alternative is sketched after these steps):

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "llama3-gradient/8b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
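
As an alternative to the curl command in step 4, the same endpoint can be called with the OpenAI Python client, since the server exposes the OpenAI chat-completions API. A minimal sketch, assuming the openai package is installed; the base URL and model name follow the steps above, and the API key is a dummy value because the curl example sends no credentials:

    # Sketch: stream a chat completion from the local MAX endpoint.
    # Assumes `pip install openai`; the key is a placeholder since the
    # curl example above sends no authentication.
    from openai import OpenAI

    client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

    stream = client.chat.completions.create(
        model="llama3-gradient/8b",
        stream=True,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"},
        ],
    )

    # Print tokens as they arrive.
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()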

About

This model builds upon Llama-3 8B, extending the context length from 8k to over 1040k tokens. Developed by Gradient with computational support from Crusoe Energy, it demonstrates that state-of-the-art large language models can handle extended context lengths with minimal additional training, achieved by adjusting the RoPE theta parameter. The model was trained on 830M tokens in this stage and 1.4B tokens in total across all stages, representing less than 0.01% of Llama-3's original pre-training data.
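
The RoPE theta adjustment raises the base frequency of the rotary position embeddings, slowing the rotation of each frequency band so positions remain distinguishable over much longer sequences. A minimal sketch of that effect, assuming the commonly cited Llama-3 default theta of 500,000 and a hypothetical larger theta chosen only for illustration (neither value is taken from this page):

    # Sketch: how a larger RoPE theta stretches positional frequencies.
    # The theta values are an assumed default and an illustrative larger
    # value, not the exact parameters used by this model.
    def rope_inv_freq(theta, head_dim=128):
        # Inverse frequency for each rotary dimension pair.
        return [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]

    base = rope_inv_freq(theta=500_000.0)        # assumed Llama-3 default
    extended = rope_inv_freq(theta=8_000_000.0)  # hypothetical larger theta

    # The slowest frequency roughly bounds how far apart two positions can
    # be before their rotary phases wrap; a larger theta pushes that out.
    print(f"slowest base frequency:     {base[-1]:.3e}")
    print(f"slowest extended frequency: {extended[-1]:.3e}")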

Large Context Window

Using extended context windows (e.g., 256k tokens) requires significant memory: at least 64 GB for a 256k window and upwards of 100 GB for windows exceeding 1M tokens.
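
Most of that memory goes to the KV cache, which grows linearly with context length. A rough back-of-envelope sketch, assuming fp16 and the usual Llama-3 8B attention shape (32 layers, 8 KV heads, head dimension 128); these architectural figures are assumptions, and weights, activations, and runtime overhead come on top of the cache:

    # Rough KV-cache estimate; model weights and overhead are extra.
    def kv_cache_gib(context_tokens, layers=32, kv_heads=8, head_dim=128,
                     bytes_per_value=2):  # fp16
        per_token = 2 * layers * kv_heads * head_dim * bytes_per_value  # K and V
        return context_tokens * per_token / 1024**3

    for ctx in (8_192, 262_144, 1_048_576):
        print(f"{ctx:>9} tokens -> ~{kv_cache_gib(ctx):6.1f} GiB KV cache")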

References

Website

Hugging Face

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for a given model on both CPU and GPU. For many of these models, the MAX version is the fastest implementation available.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

gradientai

MODEL

gradientai/Llama-3-8B-Instruct-Gradient-1048k

TAGS

arxiv:2305.14233
arxiv:2309.00071
arxiv:2402.08268
autotrain_compatible
conversational
doi:10.57967/hf/3372
en
endpoints_compatible
license:llama3
llama
llama-3
meta
region:us
safetensors
text-generation
text-generation-inference
transformers

© Copyright - Modular Inc - 2024