dolphin-mistral-7b

MAX Model

1 version

The uncensored Dolphin model based on Mistral that excels at coding tasks. Updated to version 2.8.

Run this model

  1. Install Magic, our package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model.

    magic global install max-pipelines
  3. Start a local endpoint for dolphin-mistral/7b:

    max-pipelines serve --huggingface-repo-id cognitivecomputations/dolphin-2.1-mistral-7b

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal and send a request using curl (a Python alternative is sketched after these steps):

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "dolphin-mistral/7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
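
The same endpoint also accepts OpenAI-style requests from code. The snippet below is a minimal sketch using the openai Python package (installed separately with pip install openai); the base URL, placeholder API key, and model name mirror the curl example above and are assumptions about your local setup rather than part of the MAX tooling itself.

    # Minimal sketch: stream a chat completion from the local MAX endpoint.
    # Assumes the server from step 3 is listening on http://0.0.0.0:8000 and
    # accepts any placeholder API key.
    from openai import OpenAI

    client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

    stream = client.chat.completions.create(
        model="dolphin-mistral/7b",
        stream=True,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"},
        ],
    )

    # Print tokens as they arrive, like the curl | grep | sed pipeline above.
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()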

About

The Dolphin model, developed by Eric Hartford, is based on Mistral 7B v0.2 and was released in March 2024. This uncensored language model is available for both commercial and non-commercial use and is particularly effective at coding tasks. It supports an extended context window of up to 32,000 tokens, making it suitable for long prompts and complex applications.

Versions

Tag             Date        Notes
v2.8 (latest)   03/31/2024  Based on Mistral 7B v0.2 with support for a 32K-token context window.
v2.6            12/27/2023  Fixed a training configuration issue that improved quality; improved the training dataset for empathy.
v2.2.1          10/30/2023  Checkpoint release to fix overfitting in training.
v2.2            10/29/2023  Added conversation and empathy data.
v2.1            10/11/2023  Enhanced with the airoboros dataset.
v2.0            10/02/2023  Initial release of the model.

References

HuggingFace

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for a given model on both CPU and GPU. For many of these models, they are the fastest versions available anywhere.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

cognitivecomputations

MODEL

cognitivecomputations/dolphin-2.1-mistral-7b

TAGS

autotrain_compatible
conversational
dataset:ehartford/dolphin
dataset:jondurbin/airoboros-2.2.1
en
endpoints_compatible
license:apache-2.0
mistral
pytorch
region:us
safetensors
text-generation
text-generation-inference
transformers

© Modular Inc 2024