nous-hermes2-10.7b

MAX Model

2 versions

A powerful family of models from Nous Research that excels at scientific discussion and coding tasks.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for nous-hermes2/10.7b:

    max-pipelines serve --huggingface-repo-id NousResearch/Nous-Hermes-2-Yi-34B

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "nous-hermes2/10.7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
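The grep/sed pipeline in step 4 scrapes the `content` fields out of the streamed response. The same extraction can be done more robustly in a few lines of Python. This is a sketch, not part of the MAX tooling: the helper name `extract_stream_content` is hypothetical, and it assumes only the standard `data: {...}` / `data: [DONE]` framing used by OpenAI-compatible streaming endpoints.

```python
import json

def extract_stream_content(sse_body: str) -> str:
    """Reassemble assistant text from an OpenAI-compatible SSE stream body.

    Each event line looks like `data: {...json chunk...}`, and the
    stream terminates with `data: [DONE]`.
    """
    pieces = []
    for line in sse_body.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        # Each chunk carries an incremental "delta" that may hold "content".
        delta = chunk["choices"][0].get("delta", {})
        pieces.append(delta.get("content") or "")
    return "".join(pieces)

# Example with two chunks followed by the terminator:
body = (
    'data: {"choices":[{"delta":{"content":"The Dodgers"}}]}\n'
    'data: {"choices":[{"delta":{"content":" won."}}]}\n'
    'data: [DONE]\n'
)
print(extract_stream_content(body))  # The Dodgers won.
```

Feeding the raw output of the `curl -N` request (without the grep/sed chain) into this function yields the full assistant reply in one string.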

About

Nous Hermes 2 is the latest version of the Nous Hermes model and a significant advance in performance. Trained on 1,000,000 entries of predominantly GPT-4-generated data combined with other high-quality datasets from across the AI field, it achieves exceptional results on benchmarks such as GPT4All, AGIEval, and BigBench, solidifying its reputation as a leading model.

Versions

Type   Date        Description
10.7b  01/01/2024  A 10.7B model based on Solar. A major improvement across the board on benchmarks compared to the base Solar 10.7B, approaching the performance of the 34B Yi model.
34b    12/25/2023  The original Nous Hermes 2 34B model, based on Yi.

References

HuggingFace

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for each model on both CPU and GPU. Many of them are among the fastest versions of these models available anywhere.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

NousResearch

MODEL

NousResearch/Nous-Hermes-2-Yi-34B

TAGS

autotrain_compatible
base_model:01-ai/Yi-34B
base_model:finetune:01-ai/Yi-34B
chatml
conversational
dataset:teknium/OpenHermes-2.5
distillation
en
endpoints_compatible
finetune
gpt4
instruct
license:apache-2.0
llama
region:us
safetensors
synthetic data
text-generation
text-generation-inference
transformers
yi

© Copyright - Modular Inc - 2024