neural-chat-7b

MAX Model

1 version

A fine-tuned model based on Mistral with good domain and language coverage.

Run this model

  1. Install our Magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for neural-chat/7b:

    max-pipelines serve --huggingface-repo-id Intel/neural-chat-7b-v3-1

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "neural-chat/7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
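The grep/sed pipeline in step 4 strips the streamed response down to plain text at the shell. The same parsing can be done in a few lines of Python — a minimal sketch that assumes the endpoint emits OpenAI-style server-sent events (`data:` lines carrying JSON chunks with `choices[0].delta.content`, ending with a `[DONE]` sentinel), which matches the `/v1/chat/completions` route shown above:

```python
import json

def extract_stream_text(sse_lines):
    """Collect the "content" deltas from OpenAI-style streaming chunks."""
    pieces = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content"):
            pieces.append(delta["content"])
    return "".join(pieces)

# Two synthetic chunks shaped like the endpoint's streaming output:
sample = [
    'data: {"choices": [{"delta": {"content": "The Dodgers"}}]}',
    'data: {"choices": [{"delta": {"content": " won."}}]}',
    "data: [DONE]",
]
print(extract_stream_text(sample))  # → The Dodgers won.
```

In a real client you would iterate over the HTTP response line by line instead of a list, but the chunk format is the same.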

About

NeuralChat is a high-performance chatbot model released by Intel. It is a fine-tuned version of the Mistral model, optimized for conversational AI applications. NeuralChat is tailored to deliver efficient and high-quality natural language understanding and generation, making it well-suited for tasks that require interactive and dynamic chatbot functionalities. By leveraging advancements in natural language processing and fine-tuning techniques, NeuralChat provides an enhanced user experience with faster response times and improved accuracy.

The model is available on HuggingFace, making it accessible to developers for integration into various applications. NeuralChat’s design emphasizes scalability and performance, catering to both small and large-scale deployments. It stands out as a robust choice for industries seeking reliable AI solutions for customer service, virtual assistants, and other conversational interfaces.

References

HuggingFace

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for each model on both CPUs and GPUs. Many of them are among the fastest versions of these models available anywhere.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

Intel

MODEL

Intel/neural-chat-7b-v3-1

TAGS

Intel
LLMs
arxiv:2306.02707
autotrain_compatible
base_model:finetune:mistralai/Mistral-7B-v0.1
base_model:mistralai/Mistral-7B-v0.1
conversational
dataset:Open-Orca/SlimOrca
en
endpoints_compatible
license:apache-2.0
mistral
model-index
pytorch
region:us
safetensors
text-generation
text-generation-inference
transformers

© Copyright - Modular Inc - 2024