3 versions
Wizard Vicuna Uncensored is a family of 7B, 13B, and 30B parameter models based on Llama 2, uncensored by Eric Hartford.
Install our magic package manager:
curl -ssL https://magic.modular.com/ | bash
Then run the source command that's printed in your terminal.
Install MAX Pipelines to run this model:
magic global install max-pipelines
Start a local endpoint for wizard-vicuna-uncensored/7b:
max-serve serve --huggingface-repo-id cognitivecomputations/Wizard-Vicuna-7B-Uncensored
The endpoint is ready when you see the URI printed in your terminal:
Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Now open another terminal to send a request using curl:
curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "wizard-vicuna-uncensored/7b",
"stream": true,
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who won the World Series in 2020?"}
]
}' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
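The endpoint implements the OpenAI-compatible chat completions API used above, so you can also call it from code. Here is a minimal sketch using the openai Python package; the base URL, model name, and messages are taken from the curl example, while the placeholder API key and the choice of client are assumptions, on the basis that any OpenAI-compatible client should work:

from openai import OpenAI

# Point the client at the local MAX endpoint; the api_key is a placeholder
# since the local server does not require authentication (assumption).
client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

# Stream a chat completion, mirroring the curl request above.
stream = client.chat.completions.create(
    model="wizard-vicuna-uncensored/7b",
    stream=True,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the World Series in 2020?"},
    ],
)

# Print tokens as they arrive.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()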
🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
Wizard Vicuna Uncensored is a series of open-source models with 7B, 13B, and 30B parameters, created by Eric Hartford. Based on LLaMA 2, the models were trained with datasets that excluded responses containing alignment or moralizing language. These models aim to provide a general-purpose AI without the limitations of heavily aligned content.
By default, the models use 4-bit quantization (q4), which is recommended for systems with less memory. Higher-bit quantization options are available and offer increased precision, but they demand more memory and processing power; see the rough memory estimate after the table below.
| Model Variant | Quantization Tags |
|---|---|
| 7B | latest, 7b, 7b-q4_0 |
| 13B | 13b, 13b-q4_0 |
| 30B | 30b, 30b-q4_0 |
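As a rough guide to which variant and quantization will fit your hardware, weight storage alone is roughly parameters × bits per weight ÷ 8 bytes. The sketch below is a back-of-the-envelope estimate only; it ignores the KV cache, activations, and runtime overhead, which all add to the real footprint:

# Rough lower-bound estimate of weight memory: parameters × bits-per-weight / 8 bytes.
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * bits_per_weight / 8  # e.g. 7B at 4-bit ≈ 3.5 GB

for params in (7, 13, 30):
    for bits in (4, 8, 16):
        print(f"{params}B @ {bits}-bit ≈ {weight_memory_gb(params, bits):.1f} GB")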
DETAILS
MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance on both CPU and GPU. Many of them are among the fastest available implementations of the models they serve.
Browse 18+ MAX Models
CREATED BY
cognitivecomputations
MODEL
cognitivecomputations/Wizard-Vicuna-7B-Uncensored