yi-6b

MAX Model

3 versions

Yi 1.5 is a high-performing, bilingual language model.

Run this model

  1. Install our Magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for yi/6b:

    max-pipelines serve --huggingface-repo-id 01-ai/Yi-1.5-34B-Chat

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "yi/6b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
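The grep/sed chain in step 4 simply pulls the `content` fragments out of the server-sent event stream and stitches them back together. If you would rather do that in code, here is a minimal Python sketch of the same extraction logic; the sample stream below is illustrative, not actual server output:

```python
import json

def extract_content(sse_stream: str) -> str:
    """Concatenate the "content" fragments from an OpenAI-style SSE stream.

    Each data line looks like:  data: {"choices":[{"delta":{"content":"..."}}]}
    The final sentinel line is: data: [DONE]
    """
    parts = []
    for line in sse_stream.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end of stream
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            content = choice.get("delta", {}).get("content")
            if content:
                parts.append(content)
    return "".join(parts)

# Illustrative stream (not real model output):
stream = """\
data: {"choices":[{"delta":{"content":"The Los Angeles"}}]}
data: {"choices":[{"delta":{"content":" Dodgers"}}]}
data: {"choices":[{"delta":{"content":" won in 2020."}}]}
data: [DONE]
"""
print(extract_content(stream))
```

Because the endpoint speaks the OpenAI chat-completions format, the same parsing works for any client that reads the raw stream line by line.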

About

yi

Yi is an advanced series of large language models designed to proficiently operate in both English and Chinese. Trained on an extensive high-quality corpus of 3 trillion tokens, Yi is engineered to ensure linguistic and contextual fluency across these two major languages, making it particularly well-suited for tasks requiring bilingual or multilingual capabilities.

These models represent a significant step forward in NLP by leveraging cutting-edge training techniques and diverse datasets that capture a wide array of linguistic nuances. Yi's ability to handle complex tasks in both languages demonstrates its potential in real-world applications spanning translation, summarization, and knowledge extraction.

References

HuggingFace

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance on both CPU and GPU. Many of them are among the fastest implementations of their respective models available anywhere.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

01-ai

MODEL

01-ai/Yi-1.5-34B-Chat

TAGS

arxiv:2403.04652
autotrain_compatible
conversational
endpoints_compatible
license:apache-2.0
llama
region:us
safetensors
text-generation
text-generation-inference
transformers

© Copyright Modular Inc 2024