Mistral OpenOrca is a 7 billion parameter model, fine-tuned on top of the Mistral 7B model using the OpenOrca dataset.
Install our magic package manager:
curl -ssL https://magic.modular.com/ | bash
Then run the source command that's printed in your terminal.
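The exact command depends on your shell; it is typically something like the following (the path here is only illustrative, so use whatever the installer actually prints):
# Illustrative only -- run the command printed by the installer
source ~/.bashrc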
Install MAX Pipelines to run this model:
magic global install max-pipelines
Start a local endpoint for mistral-openorca/7b:
max-pipelines serve --huggingface-repo-id Open-Orca/Mistral-7B-OpenOrca
The endpoint is ready when you see the URI printed in your terminal:
Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Now open another terminal to send a request using curl:
curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "mistral-openorca/7b",
"stream": true,
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who won the World Series in 2020?"}
]
}' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
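The grep/sed/tr pipeline above only strips the streamed chunks down to readable text. If you prefer to inspect the raw JSON response instead, you can send the same request without streaming (a minimal variation of the command above; with "stream" set to false the standard OpenAI-compatible behavior is to return a single JSON object whose choices[0].message.content holds the reply):
curl http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "mistral-openorca/7b",
"stream": false,
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who won the World Series in 2020?"}
]
}'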
Hooray! You're running Generative AI. Our goal is to make this as easy as possible.
Mistral OpenOrca is a 7-billion-parameter language model built on the Mistral 7B foundation and fine-tuned with the OpenOrca dataset. At the time of its release, it was positioned as the leading model for its size, outperforming all other 7B and 13B parameter models, and evaluations on the Hugging Face Leaderboard placed it as the best model under 30B parameters.
DETAILS
MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for each model on both CPU and GPU. For many of these models, they are the fastest implementations available.
Browse 18+ MAX Models
MODULAR GITHUB: Modular
CREATED BY: Open-Orca
MODEL: Open-Orca/Mistral-7B-OpenOrca