codestral-22b

MAX Model

1 version

Codestral is Mistral AI's first code model, purpose-built for code generation tasks.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for codestral/22b:

    max-pipelines serve --huggingface-repo-id mistralai/Codestral-22B-v0.1

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl (a Python alternative follows this list):

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "codestral/22b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You're running Generative AI. Our goal is to make this as easy as possible.
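
The endpoint implements the OpenAI-compatible chat completions API, so you can also query it from Python. Below is a minimal sketch, assuming the openai package is installed (pip install openai) and the server from step 3 is running; the api_key value is a dummy, since the local server does not validate it:

    from openai import OpenAI

    # Point the client at the local MAX endpoint started in step 3.
    # The API key is a dummy value; the local server does not check it.
    client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

    # Stream a chat completion and print tokens as they arrive.
    stream = client.chat.completions.create(
        model="codestral/22b",
        stream=True,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Write a Python function that checks if a string is a palindrome."},
        ],
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()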

About

Codestral

Codestral is Mistral AI's groundbreaking 22B code generation model, optimized for complex programming tasks.

Fluent in 80+ Programming Languages

Trained on a diverse dataset spanning over 80 programming languages, such as Python, Java, C, C++, JavaScript, Swift, Fortran, and Bash, Codestral excels at code completion, writing unit tests, and filling in partial code snippets through its fill-in-the-middle (FIM) mechanism.
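
To illustrate FIM: the model is given the code before and after a gap and generates the missing middle. Here is a minimal sketch of the raw prompt layout, based on the [SUFFIX] and [PREFIX] control tokens defined in Codestral's tokenizer; whether a given serving endpoint accepts raw FIM prompts on a completions route is an assumption to verify:

    def build_fim_prompt(prefix: str, suffix: str) -> str:
        """Assemble a raw Codestral fill-in-the-middle prompt.

        The suffix comes first, then the prefix; the model generates the
        missing middle. Layout follows the [SUFFIX]/[PREFIX] control tokens
        in Codestral's tokenizer (an assumption to verify against
        Mistral's tokenization library).
        """
        return f"[SUFFIX]{suffix}[PREFIX]{prefix}"

    # Hypothetical example: ask the model to complete a function body.
    prefix = "def add(a: int, b: int) -> int:\n    "
    suffix = "\n\nprint(add(2, 3))"
    print(build_fim_prompt(prefix, suffix))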

Benchmarks

Codestral outperforms all competing models in RepoBench, a benchmark for long-range code generation tasks, benefiting from its larger context window of 32k tokens (compared to 4k, 8k, or 16k in rival models). Fill-in-the-middle (FIM) benchmark results appear in Mistral AI's announcement (see Reference below).

Reference

Mistral AI, "Codestral: Hello, World!" (https://mistral.ai/news/codestral/)

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for a given model on both CPU and GPU. Many are the fastest available implementations of their respective models.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

mistralai

MODEL

mistralai/Codestral-22B-v0.1

TAGS

autotrain_compatible
code
conversational
license:other
mistral
region:us
safetensors
text-generation
text-generation-inference
transformers

© Copyright - Modular Inc - 2024