yarn-llama2-7b

MAX Model

2 versions

An extension of Llama 2 that supports a context window of up to 128K tokens.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for yarn-llama2/7b:

    max-pipelines serve --huggingface-repo-id NousResearch/Yarn-Llama-2-7b-64k

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "NousResearch/Yarn-Llama-2-7b-64k",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You're running Generative AI. Our goal is to make this as easy as possible. If you'd rather call the endpoint from Python, see the sketch below.
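
Because the endpoint is OpenAI-compatible, you can also send the same request from Python instead of curl. This is a minimal sketch, not part of the official steps above: it assumes the `openai` Python package is installed, the server from step 3 is still running at http://0.0.0.0:8000, and the model name matches the Hugging Face repo ID passed to the serve command.

    from openai import OpenAI

    # Point the OpenAI-compatible client at the local MAX endpoint from step 3.
    # The API key is not checked locally, but the client requires some value.
    client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

    # Stream a chat completion from the served Yarn Llama 2 model.
    stream = client.chat.completions.create(
        model="NousResearch/Yarn-Llama-2-7b-64k",
        stream=True,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"},
        ],
    )

    # Print tokens as they arrive.
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()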

About

Yarn Llama 2 is a language model based on Llama 2, designed to support extended context sizes of up to 128K tokens. Developed by Nous Research, it uses the YaRN method to extend the model's rotary position embeddings so it can process much larger context windows (Llama 2's native window is 4,096 tokens, so the 64K and 128K variants extend it by 16x and 32x). This makes it particularly suitable for tasks that require extensive context, such as long-form writing or detailed document analysis.

The model is available in configurations for 64K and 128K context sizes and can be accessed via an OpenAI-compatible API as shown above. Developers use the model by submitting a prompt, which may include very long input text, and receiving generated text in response.
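
As a hypothetical illustration of the extended context window, the sketch below loads a long plain-text document and asks the model to summarize it in a single request instead of chunking it. It assumes the local endpoint and `openai` client from the steps above; `long_report.txt` is a placeholder file name.

    from openai import OpenAI

    client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

    # Load a long document. With a 64K-token context window, the entire text
    # can go into one prompt instead of being split into chunks.
    # "long_report.txt" is a placeholder for your own file.
    with open("long_report.txt") as f:
        document = f.read()

    response = client.chat.completions.create(
        model="NousResearch/Yarn-Llama-2-7b-64k",
        messages=[
            {"role": "system", "content": "You summarize documents accurately and concisely."},
            {"role": "user", "content": "Summarize the key findings of this report:\n\n" + document},
        ],
    )

    print(response.choices[0].message.content)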

References

Hugging Face

YaRN: Efficient Context Window Extension of Large Language Models

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for a given model on both CPU and GPU. For many of these models, they are the fastest implementations available.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

NousResearch

MODEL

NousResearch/Yarn-Llama-2-7b-64k

TAGS

arxiv:2309.00071
autotrain_compatible
custom_code
dataset:pg19
endpoints_compatible
llama
pytorch
region:us
text-generation
text-generation-inference
transformers

© Copyright - Modular Inc - 2024