tinyllama-1.1b

MAX Model

1 version

The TinyLlama project is an open endeavor to train a compact 1.1B Llama model on 3 trillion tokens.

Run this model

  1. Install Magic, our package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for tinyllama/1.1b:

    max-pipelines serve --huggingface-repo-id TinyLlama/TinyLlama-1.1B-Chat-v0.6

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl (a Python alternative is sketched after these steps):

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "tinyllama/1.1b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You're running Generative AI. Our goal is to make this as easy as possible.
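
If you prefer Python over curl, the same /v1/chat/completions route can be called with the openai client package, since the endpoint follows the OpenAI chat completions format used in step 4. This is a minimal sketch, assuming the openai package is installed separately (for example with pip install openai); the model name and URL are taken from the steps above.

    # Minimal sketch: stream a chat completion from the local endpoint started in step 3.
    # Assumes the server is running on http://0.0.0.0:8000 and `openai` is installed.
    from openai import OpenAI

    # A local server needs no API key; the client only requires a placeholder value.
    client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

    stream = client.chat.completions.create(
        model="tinyllama/1.1b",
        stream=True,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"},
        ],
    )

    # Print tokens as they arrive instead of post-processing the stream with grep/sed.
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()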

About

TinyLlama is an exceptionally compact language model boasting just 1.1 billion parameters. Despite its small size, this model has been engineered to address diverse scenarios where computational resources and memory are limited. Its lightweight architecture makes it particularly suited for applications requiring efficient processing without compromising on functionality or adaptability.

This streamlined design allows TinyLlama to serve environments where larger models may be impractical due to hardware constraints. It embodies an optimal balance between capability and resource efficiency, illustrating how scaled-down models can still deliver substantial performance across a variety of tasks.

References

Hugging Face

GitHub

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance on both CPU and GPU. Many of them are the fastest available implementations of their respective models.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

TinyLlama

MODEL

TinyLlama/TinyLlama-1.1B-Chat-v0.6

TAGS

autotrain_compatible
conversational
dataset:OpenAssistant/oasst_top1_2023-08-25
dataset:bigcode/starcoderdata
dataset:cerebras/SlimPajama-627B
en
endpoints_compatible
gguf
license:apache-2.0
llama
region:us
safetensors
text-generation
text-generation-inference
transformers

© Copyright - Modular Inc - 2024