General-use models based on Llama and Llama 2 from Nous Research.
Install our magic package manager:
curl -ssL https://magic.modular.com/ | bash
Then run the source command that's printed in your terminal.
Install MAX Pipelines to run this model:
magic global install max-pipelines
Start a local endpoint for nous-hermes/7b:
max-pipelines serve --huggingface-repo-id NousResearch/Nous-Hermes-llama-2-7b
The endpoint is ready when you see the URI printed in your terminal:
Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
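To confirm the server is up, you can query it from another shell. This assumes the server exposes the standard OpenAI-compatible routes (the /v1/chat/completions path below suggests it does):

# List the models the server is hosting (assumes the standard /v1/models route)
curl http://0.0.0.0:8000/v1/models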
Now open another terminal to send a request using curl:
curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "NousResearch/Nous-Hermes-llama-2-7b",
  "stream": true,
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the World Series in 2020?"}
  ]
}' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
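If you prefer the complete JSON response over a token stream, a non-streaming request avoids the parsing pipeline entirely. This sketch assumes the server honors the standard OpenAI "stream": false behavior:

# Request a single JSON response instead of a server-sent event stream
curl http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "NousResearch/Nous-Hermes-llama-2-7b",
  "stream": false,
  "messages": [
    {"role": "user", "content": "Who won the World Series in 2020?"}
  ]
}'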
🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
Nous Hermes, released by Nous Research, includes general-use AI models with 7B and 13B parameters in two main variants: one based on Llama and another based on Llama 2. Both variants are trained on the same datasets.
For systems with less available memory, consider 4-bit quantization (q4), which balances quality and memory usage. Higher-precision encodings improve accuracy but demand more memory.
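For a rough sense of scale, weight memory is approximately parameter count × bytes per weight (this estimate ignores activations and the KV cache): a 7B model needs about 13 GiB at 16-bit precision but only about 3.3 GiB at roughly 4 bits per weight, which is why q4 variants fit on much smaller machines.

# Back-of-the-envelope weight-memory estimate for a 7B model
awk 'BEGIN {
  p = 7e9                                      # parameter count
  printf "fp16: %.1f GiB\n", p * 2.0 / 2^30    # 2 bytes per weight
  printf "q4:   %.1f GiB\n", p * 0.5 / 2^30    # ~4 bits per weight
}'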
Llama 2-based models: include 7B and 13B parameter variants. Example tags: 7b-llama2, 13b-llama2-q4_0
Llama 1-based models: include a 13B parameter variant. Example tag: 13b-llama1-q4_0
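To serve one of the quantized variants, you would point the serve command at the matching encoding. The flag below is an assumption based on how MAX pipelines have exposed quantization options; verify the exact name with max-pipelines serve --help before relying on it:

# Hypothetical: serve the 4-bit variant (--quantization-encoding is an assumption)
max-pipelines serve --huggingface-repo-id NousResearch/Nous-Hermes-llama-2-7b --quantization-encoding q4_0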
DETAILS
MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for a given model on both CPU and GPU. Many are the fastest versions of these models available anywhere.
Browse 18+ MAX Models
MODULAR GITHUB
Modular
CREATED BY
NousResearch
MODEL
NousResearch/Nous-Hermes-llama-2-7b