smollm2-135m

MAX Model


SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for smollm2/135m:

    max-pipelines serve --huggingface-repo-id HuggingFaceTB/SmolLM2-135M-Instruct

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "smollm2/135m",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You're running Generative AI. Our goal is to make this as easy as possible. To call the endpoint from code instead of curl, see the Python sketch below.
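
Because the server exposes an OpenAI-compatible /v1/chat/completions endpoint, you can also query it programmatically. Below is a minimal sketch using the openai Python client (pip install openai); it assumes the base URL and model name from the serve command in step 3, and the API key value is a placeholder since the local server doesn't check it:

    from openai import OpenAI

    # Point the client at the local MAX endpoint started in step 3.
    # The key is not validated by the local server, but the client requires one.
    client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

    # Stream a chat completion, mirroring the curl request in step 4.
    stream = client.chat.completions.create(
        model="smollm2/135m",
        stream=True,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"},
        ],
    )

    # Print tokens as they arrive, instead of parsing raw SSE chunks by hand.
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()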

About


SmolLM2 is a collection of lightweight language models designed for efficiency and versatility. Available in three sizes—135M, 360M, and 1.7B parameters—these models are capable of performing a broad range of natural language processing tasks. Despite their compact size, SmolLM2 models achieve impressive performance, making them an ideal solution for on-device applications where computational resources are limited. Their balance of accuracy and efficiency allows developers to integrate advanced AI functionalities into edge devices without sacrificing user experience.

(Benchmark results figure)

References

HuggingFace

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance on both CPU and GPU. Many of them are the fastest available versions of their respective models.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

HuggingFaceTB

MODEL

HuggingFaceTB/SmolLM2-135M-Instruct

TAGS

autotrain_compatible
base_model:HuggingFaceTB/SmolLM2-135M
base_model:quantized:HuggingFaceTB/SmolLM2-135M
conversational
en
endpoints_compatible
license:apache-2.0
llama
onnx
region:us
safetensors
tensorboard
text-generation
text-generation-inference
transformers
transformers.js

© Copyright - Modular Inc - 2024