DeepSeek's first-generation reasoning models, with performance comparable to OpenAI-o1, including six dense models distilled from DeepSeek-R1 based on Llama and Qwen.
Install our Magic package manager:
curl -ssL https://magic.modular.com/ | bash
Then run the source command that's printed in your terminal.
Install MAX Pipelines to run this model:
magic global install max-pipelines
Start a local endpoint for deepseek-r1/8b:
max-pipelines serve --huggingface-repo-id deepseek-ai/DeepSeek-R1-Distill-Llama-8B
The endpoint is ready when you see the URI printed in your terminal:
Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Now open another terminal to send a request using curl:
curl -N http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1/8b",
    "stream": true,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the World Series in 2020?"}
    ]
}' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
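The trailing grep/sed/tr pipeline strips the streamed response down to plain text. For readers who prefer doing the same extraction in code, here is a minimal Python sketch; the sample chunks are hand-written assumptions based on the OpenAI-compatible streaming schema that `/v1/chat/completions` uses, not captured from a live server.

```python
import json

def extract_stream_text(sse_lines):
    """Join the "content" pieces from OpenAI-style streaming (SSE) chunks."""
    parts = []
    for line in sse_lines:
        # Each chunk arrives as a "data: {...}" line; "[DONE]" ends the stream.
        if not line.startswith("data: ") or line.strip() == "data: [DONE]":
            continue
        chunk = json.loads(line[len("data: "):])
        delta = chunk["choices"][0].get("delta", {})
        parts.append(delta.get("content") or "")
    return "".join(parts)

# Hand-written sample chunks (format assumed, not a captured response).
sample = [
    'data: {"choices":[{"delta":{"content":"The"}}]}',
    'data: {"choices":[{"delta":{"content":" Dodgers"}}]}',
    'data: [DONE]',
]
print(extract_stream_text(sample))  # -> The Dodgers
```

Unlike the shell pipeline, this keeps quotes and escape sequences intact because it parses the JSON rather than pattern-matching on it.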
Hooray! You're running Generative AI. Our goal is to make this as easy as possible.
DeepSeek's first-generation reasoning models rival OpenAI-o1 in math, code, and reasoning tasks.
DeepSeek-R1
DeepSeek-R1 is the primary model in this series, showcasing advanced reasoning capabilities.
DeepSeek has successfully distilled reasoning patterns from larger models into smaller ones, achieving superior performance compared to small models trained using reinforcement learning. Fine-tuned using reasoning data generated by DeepSeek-R1, these distilled models excel on benchmarks.
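Distillation here means supervised fine-tuning on teacher-generated outputs rather than matching logits. A toy sketch of the data-preparation step is below; `teacher_generate` is a hypothetical stand-in (in practice the reasoning traces come from DeepSeek-R1 itself), and the trace format is illustrative only.

```python
# Toy sketch of building a distillation SFT dataset.
def teacher_generate(prompt):
    # Hypothetical placeholder: a real teacher model would return a full
    # chain-of-thought reasoning trace followed by the final answer.
    return f"<think>reasoning about: {prompt}</think> answer for: {prompt}"

def build_sft_dataset(prompts):
    """Pair each prompt with the teacher's output; the student model is then
    fine-tuned on these (prompt, target) pairs with an ordinary LM loss."""
    return [{"prompt": p, "target": teacher_generate(p)} for p in prompts]

dataset = build_sft_dataset(["What is 2+2?", "Name a prime above 10."])
```

The point of the sketch is only the shape of the data: the student never sees a reward signal, just supervised targets that happen to contain reasoning.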
The model weights are under the MIT License, supporting commercial use, modifications, and derivative works. Qwen distilled models are based on Qwen-2.5 (Apache 2.0 License), while Llama models derive from Llama3.x series, following corresponding licenses.
DETAILS
MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for each model on both CPU and GPU. Many of them are among the fastest versions of these models available anywhere.
Browse 18+ MAX Models
MODULAR GITHUB
Modular
CREATED BY
deepseek-ai
MODEL
deepseek-ai/DeepSeek-R1-Distill-Llama-8B
© Copyright - Modular Inc - 2024