vicuna-7b

MAX Model

3 versions

General use chat model based on Llama and Llama 2 with 2K to 16K context sizes.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for vicuna/7b:

    max-pipelines serve --huggingface-repo-id lmsys/vicuna-7b-v1.5

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "vicuna/7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You're running Generative AI. Our goal is to make this as easy as possible.
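
    The grep/sed pipeline above is a quick-and-dirty way to pull the streamed text out of the response. A small Python sketch of the same extraction, assuming the endpoint streams OpenAI-style `chat.completion.chunk` events over SSE (the sample lines below are illustrative, not actual model output):

    ```python
    import json

    def extract_stream_text(sse_lines):
        """Concatenate the "content" deltas from OpenAI-style SSE chunk lines."""
        text = []
        for line in sse_lines:
            if not line.startswith("data: "):
                continue
            payload = line[len("data: "):]
            if payload.strip() == "[DONE]":
                break
            chunk = json.loads(payload)
            delta = chunk["choices"][0]["delta"]
            # Role-only chunks carry no "content" key; treat them as empty.
            text.append(delta.get("content", ""))
        return "".join(text)

    # Illustrative sample of the shape the endpoint might stream back:
    sample = [
        'data: {"choices": [{"delta": {"role": "assistant"}}]}',
        'data: {"choices": [{"delta": {"content": "The Los Angeles"}}]}',
        'data: {"choices": [{"delta": {"content": " Dodgers."}}]}',
        "data: [DONE]",
    ]
    print(extract_stream_text(sample))  # The Los Angeles Dodgers.
    ```

    Unlike the sed pipeline, this handles quotes inside the generated text correctly, since it parses each chunk as JSON rather than pattern-matching on the raw bytes.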

About

Vicuna is an advanced chat assistant model available in three distinct variants, each designed for specific use cases. The v1.3 variant is fine-tuned from the Llama model and supports a maximum context size of 2,048 tokens. The v1.5 variant is fine-tuned from Llama 2 with the same 2,048-token context size. For applications requiring a larger working memory, the v1.5-16k variant, also fine-tuned from Llama 2, expands the context size substantially to 16,000 tokens.

All three versions are trained on conversational data derived from ShareGPT, ensuring a foundation built on diverse, user-interactive dialogues. These models target tasks ranging from casual question answering to deeper engagements, such as complex explanations or multi-paragraph elaborations.
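
The variant lineup described above can be captured as a small lookup table; a minimal sketch (variant names follow the lmsys Hugging Face repo suffixes, and the helper function is hypothetical):

```python
# Vicuna variants described above: base model and maximum context size in tokens.
VICUNA_VARIANTS = {
    "v1.3":     {"base": "Llama",   "context": 2048},
    "v1.5":     {"base": "Llama 2", "context": 2048},
    "v1.5-16k": {"base": "Llama 2", "context": 16000},
}

def max_context(variant: str) -> int:
    """Return the maximum context size (tokens) for a given Vicuna variant."""
    return VICUNA_VARIANTS[variant]["context"]

print(max_context("v1.5-16k"))  # 16000
```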

Example prompts

What is the meaning of life? Explain it in 5 paragraphs.

References

HuggingFace

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance on both CPU and GPU. For many of these models, they are the fastest implementations available.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

lmsys

MODEL

lmsys/vicuna-7b-v1.5

TAGS

arxiv:2306.05685
arxiv:2307.09288
autotrain_compatible
license:llama2
llama
pytorch
region:us
text-generation
text-generation-inference
transformers

© Copyright - Modular Inc - 2024