wizardlm2-7b

MAX Model

2 versions

State-of-the-art large language model from Microsoft AI, with improved performance on complex chat, multilingual, reasoning, and agent use cases.

Run this model

  1. Install our magic package manager:

    curl -sSL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for wizardlm2/7b:

    max-pipelines serve --huggingface-repo-id cognitivecomputations/WizardLM-7B-Uncensored

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "wizardlm2/7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
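The grep/sed pipeline in step 4 is fragile for anything beyond a quick smoke test. The streamed response can instead be parsed programmatically. Below is a minimal Python sketch of parsing OpenAI-style chat-completion SSE lines; the network call is replaced with a hardcoded sample so the parsing logic is self-contained, and the sample field values are illustrative, not actual server output:

```python
import json

def extract_deltas(sse_lines):
    """Collect content deltas from OpenAI-style chat-completion SSE lines."""
    parts = []
    for line in sse_lines:
        line = line.strip()
        # Each event line is prefixed with "data:"; skip anything else.
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":  # sentinel marking end of stream
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta is not None:
            parts.append(delta)
    return "".join(parts)

# Illustrative stand-in for lines read from the HTTP response body.
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "The Dodgers"}}]}',
    'data: {"choices": [{"delta": {"content": " won in 2020."}}]}',
    'data: [DONE]',
]

print(extract_deltas(sample))  # The Dodgers won in 2020.
```

In practice you would iterate over the lines of the live HTTP response (for example with `requests` and `stream=True`, or the `openai` client pointed at `http://0.0.0.0:8000/v1`) instead of a hardcoded list.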

About

WizardLM-2: Next-Generation State-of-the-Art LLM

WizardLM-2 represents a groundbreaking leap in large language models, excelling in complex chat interactions, multilingual capabilities, advanced reasoning, and agent-driven applications. This model family includes three distinct tiers, each designed to address varied performance and scale needs:

  • wizardlm2-7b: The fastest model in the family, achieving performance comparable to leading open-source models up to 10x its size.
  • wizardlm2-8x22b: The most sophisticated model in the family, setting the standard for the best open-source LLM. It achieved top performance in Microsoft’s internal evaluations on highly complex tasks.
  • wizardlm2-70b: Designed with elite reasoning capabilities relative to its size, this model is anticipated to further raise the bar upon release (coming soon).

WizardLM-2 is poised to redefine the boundaries of open-access AI, delivering transformative performance across diverse and challenging use cases.

References

Blog Post
HuggingFace

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance on both CPU and GPU. Many of them are the fastest available implementations of their respective models.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

cognitivecomputations

MODEL

cognitivecomputations/WizardLM-7B-Uncensored

TAGS

autotrain_compatible
dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
endpoints_compatible
license:other
llama
pytorch
region:us
text-generation
text-generation-inference
transformers
uncensored

© Copyright - Modular Inc - 2024