orca-mini-3b

MAX Model

2 versions

A general-purpose model family ranging from 3 billion to 70 billion parameters; the 3B variant shown here is suitable for entry-level hardware.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for orca-mini/3b:

    max-pipelines serve --huggingface-repo-id pankajmathur/orca_mini_v9_7_3B-Instruct

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "orca-mini/3b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
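The grep/sed pipeline above simply extracts the streamed `content` fields from the response. As an illustration, here is a minimal Python sketch that does the same thing, assuming the endpoint's OpenAI-compatible streaming format (`data: {...}` lines ending with a `data: [DONE]` sentinel); the sample chunks below are synthetic, not real server output:

```python
import json

def extract_stream_content(lines):
    """Collect the assistant's text from OpenAI-style streaming chunks.

    Each non-empty line looks like 'data: {...json...}'; the stream ends
    with the sentinel 'data: [DONE]'.
    """
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        content = chunk["choices"][0]["delta"].get("content")
        if content:
            parts.append(content)
    return "".join(parts)

# Synthetic example chunks:
sample = [
    'data: {"choices": [{"delta": {"content": "The Dodgers"}}]}',
    'data: {"choices": [{"delta": {"content": " won."}}]}',
    "data: [DONE]",
]
print(extract_stream_content(sample))  # The Dodgers won.
```

In practice you would iterate over the lines of the HTTP response body instead of a hard-coded list.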

About

Orca Mini is a family of Llama and Llama 2 models trained on datasets inspired by the methodology of the paper Orca: Progressive Learning from Complex Explanation Traces of GPT-4. These models are designed for efficient fine-tuning and use, and come in two variations: the original Orca Mini, based on Llama, with 3, 7, and 13 billion parameters; and Orca Mini v3, based on Llama 2, with 7, 13, and 70 billion parameters.

Memory Requirements

  • 7B models require at least 8GB of RAM.
  • 13B models require at least 16GB of RAM.
  • 70B models require at least 64GB of RAM.
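The guidelines above include runtime overhead and assume quantized weights; actual usage varies with precision and context length. As a rough, illustrative sizing rule (weights only, no KV cache or runtime overhead), memory is parameter count times bytes per parameter:

```python
def weight_memory_gib(params_billions, bits_per_param):
    """Approximate memory for model weights alone, in GiB.

    Excludes KV cache, activations, and runtime overhead, so real
    requirements are higher.
    """
    bytes_per_param = bits_per_param / 8
    return params_billions * 1e9 * bytes_per_param / 2**30

# Weights for a 7B model at common precisions:
for bits in (16, 8, 4):
    print(f"7B @ {bits}-bit: ~{weight_memory_gib(7, bits):.1f} GiB")
```

For example, a 7B model needs roughly 13 GiB at 16-bit precision but only about 3.3 GiB at 4-bit, which is why quantized 7B models fit comfortably within the 8GB guideline above.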

Reference

3B model: Pankaj Mathur
7B model: Pankaj Mathur
13B model: Pankaj Mathur
13B v3: Pankaj Mathur
70B v3: Pankaj Mathur

Orca: Progressive Learning from Complex Explanation Traces of GPT-4

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance on both CPU and GPU. Many of them are among the fastest available implementations of their respective models.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

pankajmathur

MODEL

pankajmathur/orca_mini_v9_7_3B-Instruct

TAGS

autotrain_compatible
base_model:finetune:meta-llama/Llama-3.2-3B-Instruct
base_model:meta-llama/Llama-3.2-3B-Instruct
conversational
dataset:pankajmathur/orca_mini_v1_dataset
dataset:pankajmathur/orca_mini_v8_sharegpt_format
en
endpoints_compatible
license:llama3.2
llama
region:us
safetensors
text-generation
text-generation-inference
transformers

© Copyright Modular Inc 2024