dolphincoder-7b

PyTorch

2 versions

An uncensored variant of the Dolphin model family that excels at coding, based on StarCoder2 and available in 7B and 15B sizes.

Run this model

  1. Install our Magic package manager:

    curl -ssL https://magic.modular.com/ | bash

    Then run the source command that's printed in your terminal.

  2. Install MAX Pipelines to run this model.

    magic global install max-pipelines
  3. Start a local endpoint for dolphincoder/7b:

    max-pipelines serve --huggingface-repo-id cognitivecomputations/dolphincoder-starcoder2-7b

    The endpoint is ready when you see the URI printed in your terminal:

    Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  4. Now open another terminal to send a request using curl (a Python alternative is sketched after these steps):

    curl -N http://0.0.0.0:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "dolphincoder/7b",
        "stream": true,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the World Series in 2020?"}
        ]
    }' | grep -o '"content":"[^"]*"' | sed 's/"content":"//g' | sed 's/"//g' | tr -d '\n' | sed 's/\\n/\n/g'
  5. 🎉 Hooray! You’re running Generative AI. Our goal is to make this as easy as possible.
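Since the endpoint speaks the same OpenAI-compatible chat completions protocol the curl request uses, you can also call it from Python. Below is a minimal sketch using the official openai client pointed at the local server; the placeholder API key and the coding prompt are assumptions, and the model name mirrors the curl example above.

    # Minimal sketch: query the local endpoint with the openai client.
    # Assumes the server from step 3 is running on http://0.0.0.0:8000;
    # the API key is a placeholder, since the server runs locally.
    from openai import OpenAI

    client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

    # Stream the response, mirroring "stream": true in the curl example.
    stream = client.chat.completions.create(
        model="dolphincoder/7b",
        stream=True,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Write a Python function that reverses a string."},
        ],
    )

    # Each chunk carries an incremental piece of the assistant's reply.
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()

This extracts the streamed content the same way the grep/sed pipeline in step 4 does, without the shell plumbing.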

About

Based on the StarCoder2 7B and 15B models, this Dolphin fine-tune demonstrates exceptional capabilities in coding tasks. StarCoder2, an advanced large language model optimized for programming and language understanding, serves as the foundation for Dolphin's specialization. Fine-tuning on top of this foundation sharpens Dolphin's problem-solving ability, especially for generating, completing, and debugging code.

The fine-tuning process focuses on high-quality, code-specific datasets, refining the model’s understanding of domain-relevant patterns and syntax. This enables Dolphin to achieve remarkable accuracy and efficiency in a variety of coding environments and programming languages. Designed to be a powerful tool for developers, Dolphin exemplifies the potential of targeted fine-tuning to elevate large language models beyond their generalist origins.

Reference

HuggingFace

DETAILS

MODEL CLASS
PyTorch

MODULAR GITHUB

Modular

CREATED BY

cognitivecomputations

MODEL

cognitivecomputations/dolphincoder-starcoder2-7b

TAGS

autotrain_compatible
conversational
dataset:cognitivecomputations/dolphin
dataset:cognitivecomputations/dolphin-coder
dataset:ise-uiuc/Magicoder-Evol-Instruct-110K
dataset:ise-uiuc/Magicoder-OSS-Instruct-75K
dataset:jondurbin/airoboros-2.2.1
dataset:m-a-p/Code-Feedback
dataset:m-a-p/CodeFeedback-Filtered-Instruction
dataset:microsoft/orca-math-word-problems-200k
dataset:teknium/openhermes
en
endpoints_compatible
license:bigcode-openrail-m
pytorch
region:us
starcoder2
text-generation
text-generation-inference
transformers

© 2024 Modular Inc