starling-lm-7b

MAX Model

1 version

Starling is a large language model trained with reinforcement learning from AI feedback (RLAIF), focused on improving chatbot helpfulness.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines

  3. Start a local endpoint for starling-lm/7b:

    max-pipelines serve --huggingface-repo-id berkeley-nest/Starling-LM-7B-alpha

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "starling-lm/7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
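If you'd rather consume the stream from Python than through the grep/sed pipeline above, the endpoint emits OpenAI-style server-sent events (`data: {...}` lines carrying `choices[0].delta.content`, terminated by `data: [DONE]`). A minimal sketch of parsing such a stream — the helper name and the sample lines are ours, not part of the MAX tooling:

```python
import json

def extract_stream_content(sse_lines):
    """Collect the "content" deltas from an OpenAI-style SSE stream."""
    parts = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

# Hypothetical sample of what the endpoint streams back:
sample = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"The Dodgers"}}]}',
    'data: {"choices":[{"delta":{"content":" won in 2020."}}]}',
    'data: [DONE]',
]
print(extract_stream_content(sample))  # The Dodgers won in 2020.
```

In a real client you would read these lines from the HTTP response body instead of a list.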

About

Starling-7B is an open, non-commercial large language model (LLM) developed using reinforcement learning from AI feedback (RLAIF). This model leverages the new GPT-4 labeled ranking dataset, Nectar, as well as an advanced reward training and policy tuning pipeline.
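To give a flavor of the reward-training step: Starling's pipeline learns a reward model from AI-generated preference rankings over responses. A common objective for this (here a pairwise Bradley-Terry loss; Starling's actual pipeline trains on K-wise rankings from Nectar, so this is only an illustrative sketch with names of our choosing):

```python
import math

def pairwise_reward_loss(score_preferred, score_rejected):
    """Bradley-Terry style loss: -log(sigmoid(r_w - r_l)).

    Minimizing this pushes the reward model to score the
    AI-preferred response above the rejected one.
    """
    diff = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))
```

When the reward model already ranks the preferred response higher, the loss is small; when it ranks them equal, the loss is `log 2`; when it prefers the rejected response, the loss grows. The tuned reward model then guides policy optimization of the chat model.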

The model achieves an impressive score of 8.09 on MT Bench when evaluated with GPT-4 as a judge, making it the highest-performing model on MT-Bench apart from OpenAI’s GPT-4 and GPT-4 Turbo.

Based on MT Bench evaluations, using GPT-4 scoring. Further human evaluation is needed.

Authors: Banghua Zhu, Evan Frick, Tianhao Wu, Hanlin Zhu, and Jiantao Jiao.

For correspondence, please contact Banghua Zhu (banghua@berkeley.edu).

Reference

Starling-7B: Increasing LLM Helpfulness & Harmlessness with RLAIF

HuggingFace

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance on both CPU and GPU. Many of them are among the fastest available versions of their respective models.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

berkeley-nest

MODEL

berkeley-nest/Starling-LM-7B-alpha

TAGS

RLAIF
RLHF
arxiv:2306.02231
autotrain_compatible
conversational
dataset:berkeley-nest/Nectar
en
endpoints_compatible
license:apache-2.0
mistral
region:us
reward model
safetensors
text-generation
text-generation-inference
transformers

© Copyright - Modular Inc - 2024