Dolphin3.0-Llama3.1-8B-Q4_K_M

  1. Install our Magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines && magic global update

  3. Start a local endpoint for Dolphin3.0-Llama3.1-8B-Q4_K_M:

    max-pipelines serve --huggingface-repo-id=cognitivecomputations/Dolphin3.0-Llama3.1-8B \
    --weight-path=bartowski/Dolphin3.0-Llama3.1-8B-GGUF/Dolphin3.0-Llama3.1-8B-Q4_K_M.gguf

    The endpoint is ready when you see the URI printed in your terminal:

    Server ready on http://0.0.0.0:8000 (Press CTRL+C to quit)

  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "cognitivecomputations/Dolphin3.0-Llama3.1-8B",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n//g'

  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
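The grep/sed pipeline in step 4 extracts the generated text with string matching, which breaks if a chunk contains escaped quotes. As an alternative sketch, assuming the endpoint streams OpenAI-style `data:` chunks with `choices[].delta.content` fields (as the curl output above suggests), the same extraction can be done with a real JSON parser. The sample chunks below are hypothetical, for illustration only:

```python
import json

def extract_content(sse_lines):
    """Collect the "content" deltas from OpenAI-style streaming chunks.

    Each chunk line looks like:  data: {"choices":[{"delta":{"content":"..."}}]}
    The stream terminates with:  data: [DONE]
    """
    parts = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank separator lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            content = choice.get("delta", {}).get("content")
            if content:
                parts.append(content)
    return "".join(parts)

# Hypothetical chunks shaped like the endpoint's streaming output:
sample = [
    'data: {"choices":[{"delta":{"content":"The Los Angeles"}}]}',
    'data: {"choices":[{"delta":{"content":" Dodgers"}}]}',
    "data: [DONE]",
]
print(extract_content(sample))  # prints: The Los Angeles Dodgers
```

Feeding the curl output through a parser like this keeps quotes and embedded newlines in the model's reply intact instead of stripping them.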

Deploy this model to the cloud

DETAILS

MODEL CLASS
Chat

MAX Model

MAX Models are popular open-source models converted to MAX’s native graph format. Anything with this label is either state of the art (SOTA) or actively being improved. Learn more about MAX Models.

Browse all MAX Models

HARDWARE
CPU
QUANTIZATION
Q4_K_M
ARCHITECTURE
MAX Model

MAX GITHUB

Modular / MAX

BASE MODEL

cognitivecomputations

cognitivecomputations/Dolphin3.0-Llama3.1-8B

QUANTIZED BY

cognitivecomputations

cognitivecomputations/Dolphin3.0-Llama3.1-8B-GGUF

QUESTIONS ABOUT THIS MODEL?

Leave a comment

PROBLEMS WITH THE CODE?

File an Issue

TAGS

safetensors / llama / en / dataset:OpenCoder-LLM/opc-sft-stage1 / dataset:OpenCoder-LLM/opc-sft-stage2 / dataset:microsoft/orca-agentinstruct-1M-v1 / dataset:microsoft/orca-math-word-problems-200k / dataset:NousResearch/hermes-function-calling-v1 / dataset:AI-MO/NuminaMath-CoT / dataset:AI-MO/NuminaMath-TIR / dataset:allenai/tulu-3-sft-mixture / dataset:cognitivecomputations/dolphin-coder / dataset:HuggingFaceTB/smoltalk / dataset:cognitivecomputations/samantha-data / dataset:m-a-p/CodeFeedback-Filtered-Instruction / dataset:m-a-p/Code-Feedback / base_model:meta-llama/Llama-3.1-8B / base_model:finetune:meta-llama/Llama-3.1-8B / license:llama3.1 / region:us

Resources & support for
running Dolphin3.0-Llama3.1-8B

© Copyright Modular Inc 2025