nous-hermes-7b

MAX Model

2 versions

General-use models based on Llama and Llama 2 from Nous Research.

Run this model

  1. Install our Magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for nous-hermes/7b:

    max-serve serve --huggingface-repo-id NousResearch/Nous-Hermes-llama-2-7b

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "nous-hermes/7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'

    A simpler, non-streaming variant of this request is shown after these steps.
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
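
If you'd rather receive the whole reply as one JSON response instead of a stream, set "stream" to false. This is a minimal sketch assuming the endpoint follows the standard OpenAI chat-completions schema, as the streaming example above suggests:

    curl http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "nous-hermes/7b",
        "stream": false,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }'

In that schema the answer text is in the choices[0].message.content field of the returned JSON, so no stream-parsing pipeline is needed.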

About

Nous Hermes, released by Nous Research, includes general-use AI models with 7B and 13B parameters in two main variants: one based on Llama and another based on Llama 2. Both variants are trained on the same datasets.

Memory Requirements

  • 7B models need at least 8GB of RAM.
  • 13B models require at least 16GB of RAM.

If you have less memory available, consider 4-bit quantization (q4), which balances output quality and memory usage. Higher-precision quantization levels improve accuracy but demand more memory.
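
As a back-of-the-envelope sketch (an approximation, not an official formula), weight memory is roughly parameter count × bits per weight ÷ 8; the KV cache and runtime overhead come on top, which is why the recommendations above exceed the raw weight size:

    # Weight-only memory estimate: params * bits_per_weight / 8 bytes.
    # 7B at 4-bit (q4): prints "3.3 GiB of weights"; KV cache and overhead are extra.
    awk 'BEGIN { params = 7e9; bits = 4; printf "%.1f GiB of weights\n", params * bits / 8 / 1024^3 }'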

Model Variants

  • Llama 2-based models:
    Available in 7B and 13B parameter variants.
    Example tags: 7b-llama2, 13b-llama2-q4_0

  • Llama 1-based models:
    Available in a 13B parameter variant.
    Example tag: 13b-llama1-q4_0
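
To run one of these variants, point the server at the matching Hugging Face repository. A hedged example for the Llama 2-based 13B model follows; the repository id is assumed from Nous Research's naming and should be verified on Hugging Face before use:

    max-serve serve --huggingface-repo-id NousResearch/Nous-Hermes-Llama2-13b

Per the memory guidance above, plan on at least 16GB of RAM for the 13B variant.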

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for a given model on both CPU and GPU. For many of these models, the MAX version is the fastest implementation available.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

NousResearch

MODEL

NousResearch/Nous-Hermes-llama-2-7b

TAGS

autotrain_compatible
distillation
en
endpoints_compatible
license:mit
llama
llama-2
pytorch
region:us
safetensors
self-instruct
synthetic instruction
text-generation
text-generation-inference
transformers

© Copyright - Modular Inc - 2024