wizard-vicuna-uncensored-7b

MAX Model

3 versions

Wizard Vicuna Uncensored is a series of 7B, 13B, and 30B parameter models based on Llama 2, uncensored by Eric Hartford.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines so you can run this model:

    magic global install max-pipelines
  3. Start a local endpoint for wizard-vicuna-uncensored/7b:

    max-pipelines serve --huggingface-repo-id cognitivecomputations/Wizard-Vicuna-7B-Uncensored

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl (a non-streaming variant is sketched after these steps):

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "wizard-vicuna-uncensored/7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
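
If you'd rather get one complete response instead of a stream, a minimal variant of the same request sets "stream" to false and extracts the reply from the JSON body. This sketch assumes jq is installed for parsing; any JSON tool works:

    curl http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "wizard-vicuna-uncensored/7b",
        "stream": false,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | jq -r '.choices[0].message.content'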

About

Wizard Vicuna Uncensored is a series of open-source models with 7B, 13B, and 30B parameters, created by Eric Hartford. Based on LLaMA 2, the models were trained with datasets that excluded responses containing alignment or moralizing language. These models aim to provide a general-purpose AI without the limitations of heavily aligned content.

Memory Requirements

  • 7B models: Require at least 8GB of RAM.
  • 13B models: Require at least 16GB of RAM.
  • 30B models: Require at least 32GB of RAM.

For systems with less memory, a q4 quantized version is recommended. Higher-bit quantization levels offer increased accuracy but demand more memory and processing power.
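
As a back-of-the-envelope check, weight memory is roughly parameters × bits per weight ÷ 8: a 7B model at 4-bit needs about 3.5 GB for the weights alone, and the KV cache plus runtime overhead account for the extra headroom in the figures above. A small sketch of that arithmetic (the bit widths shown are illustrative):

    # Rough weight-only memory estimate; runtime overhead is not included.
    awk 'BEGIN {
        n = split("7 13 30", params, " ")
        for (i = 1; i <= n; i++)
            for (bits = 4; bits <= 16; bits *= 2)
                printf "%2dB @ %2d-bit ~ %5.1f GB weights\n", params[i], bits, params[i] * bits / 8
    }'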

Model Variants

By default, the models use 4-bit quantization (q4_0). Higher-bit quantization options are available, trading efficiency for increased precision.

Model Variant    Quantization Tags
7B               latest, 7b, 7b-q4_0
13B              13b, 13b-q4_0
30B              30b, 30b-q4_0
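
To serve a larger variant, point the endpoint at the corresponding repository. For example, the 13B weights are published at cognitivecomputations/Wizard-Vicuna-13B-Uncensored (the matching request model name below follows the 7b pattern above and is an assumption):

    # Serve the 13B variant instead of 7B.
    max-pipelines serve --huggingface-repo-id cognitivecomputations/Wizard-Vicuna-13B-Uncensored

Requests would then set "model": "wizard-vicuna-uncensored/13b" in the JSON body.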


DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for a given model on both CPU and GPU. Many of them are the fastest available implementations of their model.


MODULAR GITHUB

Modular

CREATED BY

cognitivecomputations

MODEL

cognitivecomputations/Wizard-Vicuna-7B-Uncensored

TAGS

autotrain_compatible
dataset:ehartford/wizard_vicuna_70k_unfiltered
en
endpoints_compatible
license:other
llama
model-index
pytorch
region:us
text-generation
text-generation-inference
transformers
uncensored

© Copyright - Modular Inc - 2024