Speaking at IBM’s AI for Business media event, Dr Nicola Hodson, CEO of IBM UK & Ireland, underscored the game-changing role that small language models (SLMs) are set to play in the deployment of AI across industries. With the limitations of traditional large language models (LLMs) becoming more apparent, such as high costs, energy consumption, and complex data requirements, Hodson stressed the advantages of SLMs as a leaner, more focused alternative for enterprises.
Small language models (SLMs) represent a significant shift from the traditional large language models (LLMs) that dominate many AI applications. LLMs are built by training on massive datasets, often incorporating large swaths of internet content, and are designed for broad, generalised language tasks. They can process and generate human-like text and deliver impressive results across a wide range of queries, but they come with drawbacks, including high computational costs, data privacy concerns, and significant energy demands.
SLMs, by contrast, are smaller and more efficient, trained on targeted datasets to address specific business needs. This specialisation brings several advantages. First, SLMs are far less resource-intensive, making them more economical to deploy and operate. This is a critical consideration for businesses, as the lower power requirements reduce both operational costs and environmental impact. Additionally, the smaller size of these models makes them ideal for focused applications, such as customer service, finance, and healthcare, where they can be trained on carefully selected, trustworthy data.
Security and transparency also improve with SLMs. With a clear understanding of the dataset used in training, businesses can ensure that data is reliable, accurate, and relevant, reducing the risk of inaccurate outputs. Moreover, SLMs provide faster response times and are easier to integrate into existing workflows. The strategic use of SLMs allows companies to adopt AI solutions that are not only cost-effective but also scalable and secure, maximising AI's potential within business environments.
“The AI landscape is evolving fast,” Dr Nicola Hodson, CEO, IBM UK & Ireland, said. “Internet-scale language models have their place, but we believe in the power of small, specialised models trained on trustworthy data for specific tasks. When an organisation understands how a model was trained and trusts the data it has been trained on, they can confidently use their own data with the model, knowing it is reliable. Smaller models are also more cost-effective and energy-efficient, requiring significantly less power.
“What excites us about these models is that they can deliver the same performance as much larger models but at three to 23 times lower cost. This includes IBM’s Granite models, our most advanced models to date, launched just a few weeks ago. By overcoming the key challenges of trust, cost, secure data usage, and business integration, organisations can truly unlock AI’s potential. This has far-reaching implications for both business and public services, boosting productivity and driving innovation.
“For an NHS patient, for example, it could mean seeing a doctor more quickly. For banking customers, it might mean tasks that once took hours are completed in minutes. We are already seeing exciting breakthroughs in how organisations gain value from AI, especially in this new era of smaller, task-specific models.”
In this evolving AI landscape, IBM’s commitment to SLMs, exemplified by its new Granite models, reflects a shift towards AI solutions that balance performance with security and sustainability, marking a new era of scalable, accessible AI applications across sectors.