Meta releases coding-support AI 'Code Llama' for free under the same license as Llama 2, allowing commercial use



Meta, the company behind Facebook and Instagram, has announced the release of 'Code Llama', an AI that generates program code from text input. The model is released under the same 'Llama 2 Community License' as Llama 2 and can be used commercially for free as long as the service has no more than 700 million monthly active users.

Introducing Code Llama, a state-of-the-art large language model for coding
https://ai.meta.com/blog/code-llama-large-language-model-coding/


Introducing Code Llama, an AI Tool for Coding | Meta
https://about.fb.com/news/2023/08/code-llama-ai-for-coding/



Code Llama is a version of Llama 2, the large language model Meta released in July 2023, further trained on a code-specific dataset. In addition to continuing existing code, it can generate code from natural-language input and produce explanations of code. It supports Python, C++, Java, PHP, TypeScript/JavaScript, C#, and Bash, and is expected to be used for tasks such as code completion and debugging.

The release comprises the following three models:

・Code Llama
The basic coding model.

・Code Llama - Python
A model specialized for Python.

・Code Llama - Instruct
A model fine-tuned to follow natural-language instructions. Meta recommends this Instruct variant for code generation tasks.
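Code Llama - Instruct follows the same instruction format as the Llama 2 chat models, wrapping the user's request in `[INST]` ... `[/INST]` tags with an optional `<<SYS>>` system block. The helper below is a hypothetical sketch of that template; the exact format should be confirmed against Meta's published inference code before use.

```python
def build_instruct_prompt(instruction: str, system: str = "") -> str:
    """Wrap a natural-language request in the Llama-2-style [INST] ... [/INST]
    template that Code Llama - Instruct is reported to expect.
    The <<SYS>> block is only included when a system message is given."""
    sys_block = f"<<SYS>>\n{system}\n<</SYS>>\n\n" if system else ""
    return f"[INST] {sys_block}{instruction} [/INST]"

# Example: asking the Instruct model for a small function.
prompt = build_instruct_prompt("Write a Python function that reverses a string.")
```

The resulting string would then be tokenized and passed to the model as-is; the model's reply follows the closing `[/INST]` tag.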

Each model comes in 7 billion, 13 billion, and 34 billion parameter versions, and Meta says each was trained on 500 billion tokens of data. The 7 billion and 13 billion parameter models are additionally trained for fill-in-the-middle (FIM), enabling code-completion tasks that insert new code into existing code.
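A FIM request gives the model the code before and after a gap and asks it to generate the missing middle. The sketch below assumes the `<PRE>`/`<SUF>`/`<MID>` sentinel tokens described for Code Llama's infilling mode; treat the exact token strings and spacing as an assumption to verify against the released tokenizer.

```python
def build_infill_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt. The <PRE>/<SUF>/<MID> sentinels
    are assumed here; check the model's tokenizer for the actual tokens."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# The gap sits between the function signature and the call below it;
# the model is expected to emit the missing body after <MID>.
code_before = "def add(a, b):\n    "
code_after = "\n\nprint(add(2, 3))"
prompt = build_infill_prompt(code_before, code_after)
```

Because the suffix is supplied explicitly, the model can generate an insertion that stays consistent with the code that follows it, which is what distinguishes FIM from plain left-to-right completion.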

The 34 billion parameter model returns the best results, but it is also the slowest, so the 7 billion and 13 billion parameter models are better suited to latency-sensitive tasks such as real-time code completion. The 7 billion parameter model also has the advantage of running on a single GPU.

Code Llama supports input contexts of up to 100,000 tokens, making it practical to debug large codebases and, in some cases, to pass in an entire codebase at once.



A performance comparison with other models is shown in the figure below; Code Llama achieves the top scores among models that are free for commercial use.



The models can be downloaded by applying to Meta, and the inference code is published on GitHub.

in Software, Posted by log1d_ts