codellama-7b

MAX Model

3 versions

A large language model that can use text prompts to generate and discuss code.

Run this model

  1. Install our magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model:

    magic global install max-pipelines
  3. Start a local endpoint for codellama/7b:

    max-pipelines serve --huggingface-repo-id meta-llama/CodeLlama-7b-Instruct-hf

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl:

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "codellama/7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
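
The grep/sed pipeline in step 4 just stitches the streamed "content" fragments back together. The same post-processing can be sketched in Python, runnable offline against a sample of the OpenAI-style SSE stream the endpoint emits (the sample lines below are illustrative, not captured from a live server):

```python
import json

def extract_content(sse_text: str) -> str:
    """Concatenate the "content" deltas from an OpenAI-style SSE stream."""
    parts = []
    for line in sse_text.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and comments between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

# Illustrative stream in the OpenAI chat-completions streaming shape.
sample = (
    'data: {"choices":[{"delta":{"content":"The Los Angeles"}}]}\n'
    'data: {"choices":[{"delta":{"content":" Dodgers."}}]}\n'
    'data: [DONE]\n'
)
print(extract_content(sample))  # The Los Angeles Dodgers.
```

Unlike the sed version, this keeps the literal JSON `\n` escapes intact inside `json.loads`, so multi-line code in responses survives unmangled.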

About

Code Llama is a model built on top of Llama 2, designed for generating and discussing code. It aims to make coding workflows faster and more efficient for developers while simplifying the process of learning to code. Code Llama can generate both code and natural language explanations about code. It supports a wide array of popular programming languages, including Python, C++, Java, PHP, TypeScript (JavaScript), C#, Bash, and more.

Parameter Counts

Parameter Count   Model Name
7 billion         codellama-7b
13 billion        codellama-13b
34 billion        codellama-34b
70 billion        codellama-70b

Variations

Type Description
instruct Fine-tuned to generate helpful and safe answers in natural language
python Specialized variation further fine-tuned on 100B tokens of Python code
code Base model for code completion
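
Each variation maps to its own Hugging Face repository. A small sketch of the 7B repo IDs, assuming they follow Meta's usual naming convention (verify against the meta-llama organization before serving):

```python
# 7B Code Llama variations and their Hugging Face repo IDs
# (names assumed from Meta's naming convention; confirm before use).
VARIANT_REPOS = {
    "code": "meta-llama/CodeLlama-7b-hf",               # base code completion
    "python": "meta-llama/CodeLlama-7b-Python-hf",      # Python-specialized
    "instruct": "meta-llama/CodeLlama-7b-Instruct-hf",  # chat / instructions
}

def repo_for(variant: str) -> str:
    """Return the repo ID to pass to --huggingface-repo-id."""
    return VARIANT_REPOS[variant]

print(repo_for("instruct"))  # meta-llama/CodeLlama-7b-Instruct-hf
```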

Example Use Cases

  • Ask Questions: Generate concise code explanations or solutions.
  • Fill-in-the-Middle (FIM): Complete partial code snippets between written blocks.
  • Code Review: Identify bugs in given code.
  • Writing Tests: Create unit tests for provided functions.
  • Code Completion: Auto-complete parts of code.

For more details, see the Code Llama paper (arXiv:2308.12950) or the Code Llama GitHub repository.
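
For the fill-in-the-middle use case, the base code models accept an infilling prompt built from special tokens (`<PRE>`, `<SUF>`, `<MID>`), as described in the Code Llama paper; the instruct variant is not intended for this format. A sketch of building such a prompt (treat the exact token spacing as an assumption to verify against the reference code):

```python
def fim_prompt(prefix: str, suffix: str) -> str:
    """Build a fill-in-the-middle prompt: the model generates the
    code that belongs between `prefix` and `suffix`."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# The model is asked to fill in the body of `add`.
prompt = fim_prompt(
    prefix="def add(a, b):\n    ",
    suffix="\n    return result",
)
print(prompt)
```

The string returned here would be sent as a plain completion prompt (not a chat message) to a server running the base `code` or `python` variation.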

DETAILS

MODEL CLASS
MAX Model

MAX Models are highly optimized inference pipelines that deliver state-of-the-art performance for each model on both CPU and GPU. Many of them are among the fastest available versions of these models.

Browse 18+ MAX Models

MODULAR GITHUB

Modular

CREATED BY

meta-llama

MODEL

meta-llama/CodeLlama-7b-Instruct-hf

TAGS

arxiv:2308.12950
autotrain_compatible
code
conversational
endpoints_compatible
facebook
license:llama2
llama
llama-2
meta
pytorch
region:us
safetensors
text-generation
text-generation-inference
transformers

© 2024 Modular Inc