smollm-135m

MAX Model


🪐 A family of small models with 135M, 360M, and 1.7B parameters, trained on a new high-quality dataset.

Run this model

  1. Install our Magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for smollm/135m:

    max-pipelines serve --huggingface-repo-id HuggingFaceTB/SmolLM-135M

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal and send a request using curl (a Python alternative is sketched after these steps):

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "smollm/135m",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
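
If you prefer to call the endpoint from code rather than shell pipes, here is a minimal Python sketch. It assumes the openai package is installed (pip install openai); any OpenAI-compatible client should work, since the endpoint speaks the same chat-completions protocol used in step 4:

    # Sketch: stream a chat completion from the local MAX endpoint.
    # Assumes `pip install openai` and that the server from step 3 is
    # running on http://0.0.0.0:8000.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://0.0.0.0:8000/v1",
        api_key="EMPTY",  # assumption: the local server ignores the key
    )

    stream = client.chat.completions.create(
        model="smollm/135m",
        stream=True,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"},
        ],
    )

    # Print tokens as they arrive; the client parses the SSE stream,
    # so none of the grep/sed post-processing above is needed.
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()

The model name and messages mirror the curl request in step 4; only the transport differs.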

About

SmolLM is a series of compact, computationally efficient language models designed for developers and researchers who need smaller-scale generative AI. The models come in three parameter sizes: 135M, 360M, and 1.7B. Despite their small size, SmolLM models maintain strong natural language understanding and generation capabilities, making them suitable for tasks where reduced computational overhead is critical.

Distributed through Hugging Face, SmolLM's architecture balances performance and resource efficiency, making it easier to integrate into projects and systems with limited hardware. These models exemplify the move toward democratizing AI by providing effective tools to a broader range of users, fostering innovation and experimentation across diverse applications.
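
Because the checkpoint served above is the standard HuggingFaceTB/SmolLM-135M repository (see the tags below), you can also load it directly with the Hugging Face transformers library. This is a minimal sketch, independent of the MAX endpoint, assuming transformers and torch are installed:

    # Sketch: run SmolLM-135M locally via transformers (assumes
    # `pip install transformers torch`); not how MAX serves the model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "HuggingFaceTB/SmolLM-135M"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint)

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=30)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))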

References

Blog post

Hugging Face

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance on both CPU and GPU. For many of these models, the MAX version is the fastest implementation in the world.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

HuggingFaceTB

MODEL

HuggingFaceTB/SmolLM-135M

TAGS

autotrain_compatible
dataset:HuggingFaceTB/smollm-corpus
en
endpoints_compatible
license:apache-2.0
llama
onnx
region:us
safetensors
text-generation
text-generation-inference
transformers

© Copyright - Modular Inc - 2024