Qwen2.5-Instruct-0.5B

Version: 0.5B GPU BF16

You can quickly deploy Qwen2.5-Instruct-0.5B to an endpoint using our MAX container, which includes the latest version of MAX with GPU support and MAX Serve, our Python-based inference server.

With the following Docker command, you’ll get an OpenAI-compatible endpoint running Qwen2.5-Instruct-0.5B:

docker run --gpus 1 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_HUB_ENABLE_HF_TRANSFER=1" \
    --env "HF_TOKEN=" \
    -p 8000:8000 \
    docker.modular.com/modular/max-openai-api:nightly \
    --huggingface-repo-id Qwen/Qwen2.5-0.5B-Instruct

To download the model from Hugging Face, fill in the HF_TOKEN value with your access token. No token is needed if the model is hosted at https://huggingface.co/modularai.
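
Once the container reports that the server is ready, you can smoke-test the endpoint with a standard OpenAI-style chat completion request. The sketch below assumes the server exposes the usual /v1/chat/completions route on the mapped port 8000 and accepts the Hugging Face repo ID as the model name; adjust these if your deployment differs:

curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Qwen/Qwen2.5-0.5B-Instruct",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'

The response is a JSON chat completion object, so any OpenAI client library pointed at http://localhost:8000/v1 should work the same way.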

Learn more

For more information about the container image, see the MAX container documentation.

To learn more about how to deploy MAX to the cloud, check out our MAX Serve tutorials.

DETAILS

MODEL CLASS: MAX Model (Chat)

MAX Models are popular open-source models converted to MAX's native graph format. Models carrying the MAX Model label are either SOTA or under active development. Learn more about MAX Models.

Browse all MAX Models

HARDWARE: GPU
QUANTIZATION: BF16
ARCHITECTURE: MAX Model

MAX GITHUB: Modular / MAX

MODEL: Qwen/Qwen2.5-0.5B-Instruct

QUESTIONS ABOUT THIS MODEL? Leave a comment

PROBLEMS WITH THE CODE? File an Issue

TAGS

transformers / safetensors / qwen2 / text-generation / chat / conversational / en / arxiv:2407.10671 / base_model:Qwen/Qwen2.5-0.5B / base_model:finetune:Qwen/Qwen2.5-0.5B / license:apache-2.0 / autotrain_compatible / text-generation-inference / endpoints_compatible / region:us


© 2025 Modular Inc