zephyr-7b

MAX Model

1 version

Zephyr is a series of fine-tuned Mistral and Mixtral models trained to act as helpful assistants.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for zephyr/7b:

    max-pipelines serve --huggingface-repo-id HuggingFaceH4/zephyr-7b-beta

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "zephyr/7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
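The grep/sed pipeline in step 4 simply pulls the generated text out of the streamed server-sent-event (SSE) chunks. The same parsing can be done more robustly in a few lines of Python; this is a minimal sketch (the function name and sample chunk are illustrative, not captured from a real run):

```python
import json

def extract_content(sse_line: str) -> str:
    """Pull the incremental token text out of one 'data:' line
    from a streaming /v1/chat/completions response."""
    payload = sse_line.removeprefix("data: ").strip()
    if not payload or payload == "[DONE]":
        return ""  # keep-alive line or end-of-stream marker
    chunk = json.loads(payload)
    # Streaming chunks carry the next token(s) in choices[0].delta.content.
    return chunk["choices"][0]["delta"].get("content", "")

# Illustrative chunk in the OpenAI streaming format (not a captured response).
sample = 'data: {"choices": [{"delta": {"content": "The Dodgers"}}]}'
print(extract_content(sample))
```

Because the endpoint speaks the OpenAI chat-completions wire format, any OpenAI-compatible client can also be pointed at http://0.0.0.0:8000/v1 instead of hand-rolling the parsing.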

About

Zephyr is a series of language models designed as helpful AI assistants. The latest model, Zephyr 141B-A35B, is a fine-tuned version of Mixtral 8x22b.

Sizes

  • zephyr-141b: A Mixture of Experts (MoE) model with a total of 141 billion parameters and 35 billion active parameters.
  • zephyr-7b: The original Zephyr model.


DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art (SOTA) performance on both CPU and GPU. Many are among the fastest available implementations of their respective models.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

HuggingFaceH4

MODEL

HuggingFaceH4/zephyr-7b-beta

TAGS

arxiv:2305.14233
arxiv:2305.18290
arxiv:2310.01377
arxiv:2310.16944
autotrain_compatible
base_model:finetune:mistralai/Mistral-7B-v0.1
base_model:mistralai/Mistral-7B-v0.1
conversational
dataset:HuggingFaceH4/ultrachat_200k
dataset:HuggingFaceH4/ultrafeedback_binarized
en
endpoints_compatible
generated_from_trainer
license:mit
mistral
model-index
pytorch
region:us
safetensors
text-generation
text-generation-inference
transformers

© Copyright Modular Inc 2024