An advanced language model crafted with 2 trillion bilingual tokens.
Install our Magic package manager:

curl -ssL https://magic.modular.com/ | bash

Then run the source command that's printed in your terminal.
Install MAX Pipelines to run this model:

magic global install max-pipelines
Start a local endpoint for deepseek-llm/7b:
max-serve serve --huggingface-repo-id deepseek-ai/deepseek-llm-7b-chat
The endpoint is ready when you see the URI printed in your terminal:
Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Now open another terminal to send a request using curl:
curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
    "model": "deepseek-llm/7b",
    "stream": true,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the World Series in 2020?"}
    ]
}' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
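To see what the grep/sed chain at the end of that command does, here is a minimal offline sketch run against two sample streaming lines. The JSON shape is assumed from the OpenAI-compatible streaming format, and the sample content is made up for illustration:

```shell
# Two hypothetical streaming chunks, shaped like OpenAI-compatible SSE lines:
sample='data: {"choices":[{"delta":{"content":"Hello"}}]}
data: {"choices":[{"delta":{"content":" world"}}]}'

# Same extraction used above: grab each "content" value, strip the key
# and the quotes, then join the chunks into a single line.
extracted=$(echo "$sample" \
  | grep -o '"content":"[^"]*"' \
  | sed 's/"content":"//g' \
  | sed 's/"//g' \
  | tr -d '\n')
echo "$extracted"   # → Hello world
```

The final `sed 's/\\n/\n/g'` in the full command simply turns any literal `\n` escape sequences inside the generated text back into real line breaks for readable terminal output.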
🎉 Hooray! You're running Generative AI. Our goal is to make this as easy as possible.
DeepSeek LLM is a cutting-edge language model offered in both 7 billion and 67 billion parameter variants, each available in chat and base configurations.
Superior General Capabilities: The DeepSeek LLM 67B Base demonstrates remarkable proficiency, surpassing Llama2 70B Base in key areas, including reasoning, coding, mathematics, and Chinese comprehension.
Proficient in Coding and Math: The 67B Chat model delivers exceptional performance in coding tasks, evaluated using the HumanEval benchmark, and excels in mathematics, tested with the GSM8K benchmark.
DETAILS
MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for each model on both CPU and GPU. Many of them are among the fastest implementations of these models available anywhere.
Browse 18+ MAX Models
CREATED BY: deepseek-ai
MODEL: deepseek-ai/deepseek-llm-7b-chat

© Copyright Modular Inc 2024