IBM introduces high-performing AI models built for business


At IBM’s annual TechXchange event, the company announced the release of its most advanced family of AI models to date, Granite 3.0. IBM’s third-generation Granite flagship language models match or outperform similarly sized models from leading providers on many academic and industry benchmarks, demonstrating strong performance, transparency and safety.

Consistent with the company’s commitment to open-source AI, the Granite models are released under the permissive Apache 2.0 license, making them unique in the combination of performance, flexibility and autonomy they provide to enterprise clients and the community at large.

The new Granite 3.0 8B and 2B language models are designed as ‘workhorse’ models for enterprise AI, delivering strong performance for tasks such as Retrieval Augmented Generation (RAG), classification, summarization, entity extraction, and tool use. These compact, versatile models are designed to be fine-tuned with enterprise data and seamlessly integrated across diverse business environments or workflows.
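In practice, the RAG use case mentioned above amounts to packing retrieved enterprise documents into the model’s prompt so answers stay grounded in that data. The following is a minimal sketch, assuming the Hugging Face `transformers` library; the model ID and the hard-coded "retrieved" passage are illustrative assumptions, and any retriever (BM25, a vector store, etc.) could supply the passages.

```python
def build_rag_prompt(question: str, passages: list[str]) -> list[dict]:
    """Pack retrieved passages and the user question into a chat-style
    message list so the model answers from the supplied documents."""
    context = "\n\n".join(
        f"[Document {i + 1}]\n{p}" for i, p in enumerate(passages)
    )
    return [
        {
            "role": "system",
            "content": "Answer using only the provided documents. "
                       "If the answer is not in them, say so.",
        },
        {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
    ]


def generate_answer(messages: list[dict],
                    model_id: str = "ibm-granite/granite-3.0-8b-instruct"):
    """Run the packed prompt through a Granite Instruct model.
    The Hub ID above is an assumption for illustration."""
    # Lazy import keeps the prompt-building logic above dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:],
                            skip_special_tokens=True)


# Example: ground the answer in a hypothetical policy document.
messages = build_rag_prompt(
    "What is our refund window?",
    ["Policy 4.2: Customers may request a refund within 30 days."],
)
# answer = generate_answer(messages)  # requires GPU/model download
```

Swapping in the 2B variant or a fine-tuned checkpoint changes only `model_id`, which is what makes these compact models easy to slot into existing pipelines.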

While many large language models (LLMs) are trained on publicly available data, the vast majority of enterprise data remains untapped. By combining a small Granite model with enterprise data, especially using the InstructLab alignment technique introduced by IBM and Red Hat in May, IBM believes businesses can achieve task-specific performance that rivals larger models at a fraction of the cost (an observed 3x–23x lower cost than large frontier models across several early proofs-of-concept).

The Granite 3.0 language models also demonstrate strong raw performance. On the standard academic benchmarks of Hugging Face’s OpenLLM Leaderboard, the Granite 3.0 8B Instruct model leads on average against state-of-the-art open-source models of similar size from Meta and Mistral. On IBM’s AttaQ safety benchmark, the Granite 3.0 8B Instruct model leads across all measured safety dimensions compared with models from Meta and Mistral.

The Granite 3.0 models were trained on over 12 trillion tokens of data spanning 12 natural languages and 116 programming languages, using a novel two-stage training method informed by several thousand experiments designed to optimize data quality, data selection, and training parameters. By the end of the year, the 8B and 2B language models are expected to add support for an extended 128K-token context window and multimodal document understanding capabilities.
