We are excited to announce Stable Code 3B, our latest large language model for code completion and the successor to Stable Code Alpha 3B. Despite being 60% smaller than CodeLLaMA 7b, Stable Code 3B remains competitive across a range of programming languages, and it is compact enough to run in real time on ordinary laptops, including those without a dedicated GPU.

Stable Code 3B is built on our Stable LM 3B foundational model, which was pre-trained on 4 trillion tokens of natural language data and then supplemented with software engineering-specific data, including code. The model is trained on 18 programming languages and outperforms models of similar size on the MultiPL-E benchmark. This release also adds new capabilities, notably Fill in the Middle (FIM) support and an expanded context size.

Our training process follows multiple stages, similar to CodeLLaMA: pre-training on natural language data followed by fine-tuning on code-related datasets. The model also supports Flash Attention 2 for faster inference.

Further details and the model card are available on our website, and we plan to release a full technical report for greater transparency.
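To make the announcement concrete, here is a minimal sketch of running Stable Code 3B for plain code completion with the Hugging Face transformers library. It assumes the checkpoint is published on the Hub as "stabilityai/stable-code-3b" (per the model card); depending on your transformers version, the trust_remote_code flag may or may not be required.

```python
# Minimal code-completion sketch for Stable Code 3B (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "stabilityai/stable-code-3b", trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-3b",
    torch_dtype=torch.bfloat16,   # half precision keeps the 3B model small
    trust_remote_code=True,
)
device = "cuda" if torch.cuda.is_available() else "cpu"  # CPU works, just slower
model = model.to(device)

prompt = "def fibonacci(n):\n"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```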
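Fill in the Middle lets the model complete code given both the text before and after the cursor, rather than only a left-to-right prefix. Continuing from the loading snippet above, the sketch below assumes StarCoder-style FIM sentinel tokens (<fim_prefix>, <fim_suffix>, <fim_middle>) as shown on the model card; verify them against the tokenizer before relying on this.

```python
# FIM sketch: infill the body between a function header (prefix) and its
# tail (suffix). Sentinel tokens are an assumption from the model card.
prefix = "def fib(n):\n"
suffix = "    else:\n        return fib(n - 2) + fib(n - 1)\n"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(fim_prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=False)
# The text generated after <fim_middle> is the infilled span.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```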
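Flash Attention 2 can be enabled at load time. A sketch, assuming a recent transformers release that accepts the attn_implementation argument and a machine with the flash-attn package installed on a supported GPU:

```python
# Loading with Flash Attention 2 enabled (assumptions noted above).
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-3b",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
)
```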
https://stability.ai/news/stable-code-2024-llm-code-completion-release