xwinlm-7b

MAX Model

2 versions

Conversational model based on Llama 2 that performs competitively on various benchmarks.

Run this model

  1. Install our Magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model.

    magic global install max-pipelines
  3. Start a local endpoint for xwinlm/7b:

    max-serve serve --huggingface-repo-id Xwin-LM/Xwin-LM-7B-V0.2

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl (a Python alternative is sketched after these steps):

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "xwinlm/7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You're running Generative AI. Our goal is to make this as easy as possible.
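
If you prefer Python over curl, the same OpenAI-compatible endpoint can be called with the openai client. The following is a minimal sketch, not an official example: it assumes the openai Python package is installed (for example with pip install openai) and that the server from step 3 is still running; the API key value is a placeholder, since a local endpoint typically ignores it.

    # Minimal streaming chat request against the local MAX endpoint (sketch).
    from openai import OpenAI

    client = OpenAI(
        base_url="http://0.0.0.0:8000/v1",  # endpoint started in step 3
        api_key="EMPTY",                     # placeholder; not checked by the local server
    )

    stream = client.chat.completions.create(
        model="xwinlm/7b",
        stream=True,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"},
        ],
    )

    # Print tokens as they arrive instead of post-processing with grep/sed.
    for chunk in stream:
        delta = chunk.choices[0].delta
        if delta.content:
            print(delta.content, end="", flush=True)
    print()

This produces the same streamed answer as the curl pipeline above, with the response text assembled in Python rather than with grep and sed.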

About

Xwin-LM is a language model built on the foundation of Llama 2. It applies alignment techniques on top of the base model to improve its quality in natural language understanding and generation tasks, and is designed to be adaptable, accurate, and linguistically capable, making it a versatile tool for a range of AI applications.

Reference

Hugging Face

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for a given model on both CPU and GPU. For many of these models, they are the fastest implementations available.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

Xwin-LM

MODEL

Xwin-LM/Xwin-LM-7B-V0.2

TAGS

autotrain_compatible
endpoints_compatible
license:llama2
llama
pytorch
region:us
text-generation
text-generation-inference
transformers
