hermes3-8b

MAX Model

2 versions

Hermes 3 is the latest version of the flagship Hermes series of LLMs by Nous Research

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for hermes3/8b:

    max-pipelines serve --huggingface-repo-id NousResearch/Hermes-3-Llama-3.1-8B

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "hermes3/8b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
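
Because the endpoint is OpenAI-compatible, you can also query it from Python. Below is a minimal sketch, assuming the `openai` Python package is installed and the server from step 3 is still running; the `api_key` value is a placeholder, since the local endpoint does not require one.

    from openai import OpenAI

    # Point the OpenAI client at the local MAX endpoint started in step 3.
    client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

    # Stream a chat completion, mirroring the curl request above.
    stream = client.chat.completions.create(
        model="hermes3/8b",
        stream=True,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"},
        ],
    )

    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()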

About

Hermes 3 is a generalist language model offering significant advancements over Hermes 2. Key improvements include advanced agentic capabilities, enhanced roleplaying, better reasoning, long-context coherence, and multi-turn conversational abilities. These upgrades make Hermes 3 versatile and powerful across a broad range of applications.

The Hermes series focuses on aligning language models with user intent, prioritizing user control and customization. Hermes 3 enhances this ethos by offering robust steering capabilities and expanded function calling for generating structured outputs. It also excels in generalist assistant roles and delivers notable progress in code generation.
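
As a sketch of what function calling can look like against the local endpoint above (assuming the server passes through the OpenAI-compatible `tools` parameter; the `get_weather` function here is purely hypothetical):

    from openai import OpenAI

    client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

    # Describe a hypothetical function the model may choose to call.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="hermes3/8b",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
    )

    # If the model chose to call the function, the arguments arrive as structured JSON.
    print(response.choices[0].message.tool_calls)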

The Hermes 3 series includes four different models: 3B, 8B, 70B, and 405B, accommodating varying requirements and scales of deployment.

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance on both CPU and GPU. For many of these models, the MAX version is the fastest implementation available.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

NousResearch

MODEL

NousResearch/Hermes-3-Llama-3.1-8B

TAGS

Llama-3
arxiv:2408.11857
autotrain_compatible
axolotl
base_model:finetune:meta-llama/Llama-3.1-8B
base_model:meta-llama/Llama-3.1-8B
chat
chatml
conversational
distillation
en
endpoints_compatible
finetune
function calling
gpt4
instruct
json mode
license:llama3
llama
region:us
roleplaying
safetensors
synthetic data
text-generation
text-generation-inference
transformers

© Copyright - Modular Inc - 2024