We’re releasing a series of decoder-only Granite code models for generative coding tasks, trained on code written in 116 programming languages. The Granite code model family spans 3 to 34 billion parameters and includes both base and instruction-following variants. These models support a range of uses, from complex application modernization to memory-constrained, on-device scenarios.
Image 1: Granite-8B-Code (Base/Instruct) compared with other similarly sized open-source code LLMs on HumanEvalPack, covering three coding tasks and six programming languages.