solar-10.7b

MAX Model

1 version

A compact, yet powerful 10.7B large language model designed for single-turn conversation.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for solar/10.7b:

    max-pipelines serve --huggingface-repo-id upstage/SOLAR-10.7B-Instruct-v1.0

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "solar/10.7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
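
The grep/sed pipeline in step 4 can be brittle when a reply contains quotes or escape sequences. As an alternative, the streamed chunks can be parsed in Python. This is a minimal sketch that assumes the endpoint emits OpenAI-style server-sent events (`data: {...}` lines terminated by `data: [DONE]`); it runs against a captured sample, so it does not contact the server:

```python
import json

def extract_deltas(sse_body: str) -> str:
    """Collect the incremental "content" deltas from an OpenAI-style
    server-sent-events stream into one string."""
    parts = []
    for line in sse_body.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":  # stream terminator
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

# A small sample in the wire format a streaming chat-completions
# endpoint sends back (two content chunks, then the terminator):
sample = (
    'data: {"choices":[{"delta":{"content":"Hello"}}]}\n'
    'data: {"choices":[{"delta":{"content":" world"}}]}\n'
    'data: [DONE]\n'
)
print(extract_deltas(sample))  # -> Hello world
```

In a real client you would feed the response body from `curl -N` (or an HTTP library reading the stream line by line) into `extract_deltas` instead of the sample string.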

About

Solar is the first open-source language model with 10.7 billion parameters, designed to be both compact and highly capable. Despite its relatively small size, it achieves state-of-the-art performance among models under 30 billion parameters, showcasing remarkable efficiency.

Built on the Llama 2 architecture, Solar employs the innovative Depth Up-Scaling technique, which includes integrating Mistral 7B weights into upscaled layers. This cutting-edge approach allows the model to deliver superior results while maintaining its compact form.

On the challenging H6 benchmark, Solar surpasses the performance of models with up to 30 billion parameters, including the prominent Mixtral 8x7B model. This achievement highlights its advanced design and optimization, setting a new standard for efficiency and power in open-source AI.

References

HuggingFace

Upstage AI

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for a given model on both CPU and GPU. Many of them are among the fastest implementations of that model available anywhere.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

upstage

MODEL

upstage/SOLAR-10.7B-Instruct-v1.0

TAGS

arxiv:2312.15166
arxiv:2403.19270
autotrain_compatible
base_model:finetune:upstage/SOLAR-10.7B-v1.0
base_model:upstage/SOLAR-10.7B-v1.0
conversational
dataset:Intel/orca_dpo_pairs
dataset:Open-Orca/OpenOrca
dataset:allenai/ultrafeedback_binarized_cleaned
dataset:c-s-ale/alpaca-gpt4-data
en
endpoints_compatible
license:cc-by-nc-4.0
llama
region:us
safetensors
text-generation
text-generation-inference
transformers

© Copyright Modular Inc. 2024