tinydolphin-1.1b

MAX Model

1 version

An experimental 1.1B parameter model trained on the new Dolphin 2.8 dataset by Eric Hartford and based on TinyLlama.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for tinydolphin/1.1b:

    max-serve serve --huggingface-repo-id cognitivecomputations/TinyDolphin-2.8-1.1b

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a streaming request using curl (a quick endpoint check and a non-streaming variant are sketched after this list):

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "tinydolphin/1.1b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
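
Before sending requests, you can sanity-check the server from another terminal. This is a minimal sketch that assumes the server also exposes the standard OpenAI-compatible model listing route, /v1/models, which is not shown on this page; if it doesn't, the chat completion request in step 4 is the definitive check.

    # Assumption: the OpenAI-compatible server also serves GET /v1/models.
    # If it does, this returns a JSON list of the model(s) being served.
    curl http://0.0.0.0:8000/v1/models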
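
If you'd rather get one complete reply instead of a token stream, the sketch below is a non-streaming variant of the step 4 request. It assumes the response uses the usual OpenAI-compatible chat completion shape (the reply text lives at choices[0].message.content) and uses python3 only to extract that field.

    # Non-streaming variant: set "stream": false and read the full JSON response.
    curl http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "tinydolphin/1.1b",
        "stream": false,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | python3 -c 'import json, sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])'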

About

TinyDolphin is an experimental AI model created by training the TinyLlama model on the Dolphin dataset developed by Eric Hartford. The combination of TinyLlama's lightweight architecture and the Dolphin dataset's depth lets TinyDolphin deliver strong efficiency and natural language understanding despite its small size, demonstrating that robust language capabilities can be obtained even with limited computational resources.

Reference

Hugging Face

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance on both CPU and GPU. For many of these models, they are the fastest implementations available.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

cognitivecomputations

MODEL

cognitivecomputations/TinyDolphin-2.8-1.1b

TAGS

autotrain_compatible
dataset:bigcode/starcoderdata
dataset:cerebras/SlimPajama-627B
dataset:teknium/openhermes
en
endpoints_compatible
license:apache-2.0
llama
region:us
safetensors
text-generation
text-generation-inference
transformers

© Copyright - Modular Inc - 2024