Curiosity, control and customisation are the cornerstones of enterprise AI success

Enterprises are facing a perfect storm of demands as AI adoption accelerates. To navigate infrastructure gaps, compliance pressures and performance trade-offs, they must embrace modularity, build hybrid architectures and rethink what makes an AI deployment truly sustainable.

Enterprise AI has moved well beyond the novelty phase. The gold rush of last year has given way to a more complex and demanding reality: models must prove their worth, infrastructure needs are multiplying, regulatory scrutiny is escalating, and the pressure to innovate responsibly is unrelenting. At the centre of all of this is the question every executive must answer: How do you deploy AI at scale without locking your business into a model, vendor, or strategy that will fail to adapt?

That dilemma has pushed modularity and flexibility to the top of the enterprise AI agenda. As Ian Quackenbos, AI Lead at SUSE, explains, the ability to mix, match and adapt infrastructure and models is not just a technical convenience. It is a foundational requirement for ethical, scalable, and future-ready AI.

Avoiding the ROI trap

The first mistake many enterprises make with generative AI is chasing outcomes before strategy. Models are deployed without clarity on where value will be delivered or how performance will be measured. Without that foundation, expectations spiral while returns remain elusive.

“AI is only as valid as the context in which it is applied,” Quackenbos says. “The biggest misconception is that you can measure ROI without a concrete strategic framework. It becomes difficult to justify the investment if you have not anchored it to a defined business outcome.”

For many, that means reassessing which models are used for which jobs. Small language models (SLMs) are often overlooked in favour of their more powerful, resource-hungry cousins. But the smarter route is not always the largest model available. “You can think of it as choosing between a PhD and a university student,” Quackenbos says. “Not every answer requires a PhD-level response. Many user questions can be handled by smaller, faster models. They are less resource-intensive and easier to maintain, which contributes directly to sustainability.”
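The routing idea behind that analogy can be sketched in a few lines. This is purely illustrative and not SUSE's implementation: the complexity heuristic, threshold, and model labels are assumptions made up for the example.

```python
# Hypothetical sketch: send routine queries to a cheaper small language
# model (SLM) and reserve the large model for genuinely hard questions.
# The heuristic, threshold and model names are illustrative assumptions.

def estimate_complexity(query: str) -> int:
    """Crude proxy for difficulty: longer, analytical questions score higher."""
    q = query.lower()
    score = len(q.split())  # more words, more context to reason over
    # Analytical keywords suggest the query needs deeper reasoning.
    score += 5 * sum(q.count(word) for word in ("why", "compare", "analyse", "explain"))
    return score

def pick_model(query: str, threshold: int = 25) -> str:
    """Route to the resource-hungry model only when the heuristic demands it."""
    return "large-llm" if estimate_complexity(query) >= threshold else "small-slm"
```

In practice the heuristic would be replaced by a trained classifier or a confidence signal from the small model itself, but the principle is the same: most traffic never needs the PhD.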

For all the talk of innovation, most enterprises are still struggling to get the basics in place. Infrastructure remains a key bottleneck, particularly when scaling successful pilots. “GPU access is a major issue,” Quackenbos explains. “It is easy to underestimate the infrastructure needed. A small use case may only require limited GPU capacity, but when it proves valuable, the demand to scale hits hard. Suddenly, latency, stability and performance become real challenges, and many companies are caught unprepared.”

These challenges are further compounded by the deployment strategy. Few organisations can operate entirely on-premise, but equally, few can afford to place all their sensitive data in the cloud. The result is a growing preference for hybrid deployments that offer flexibility without sacrificing sovereignty.

“If every enterprise could deploy sovereign AI on-premise, they would,” Quackenbos says. “But the reality is that GPU availability, budget constraints and workload priorities all push companies toward hybrid setups. The key is to match the data sensitivity to the appropriate environment. Proprietary or regulated data stays local, and customer support or less sensitive tasks can run in the cloud. That balance is becoming essential.”
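That placement rule is simple enough to express as a lookup. The sketch below is an illustration of the principle Quackenbos describes, not a real policy engine; the sensitivity classes and environment labels are assumptions chosen for the example.

```python
# Illustrative hybrid-placement rule: regulated and proprietary data stays
# on-premise, less sensitive workloads run in the cloud. The classification
# scheme here is a made-up example, not an actual SUSE policy.

SENSITIVITY_TO_ENVIRONMENT = {
    "regulated": "on-premise",    # e.g. patient records, financial data
    "proprietary": "on-premise",  # trade secrets, internal IP
    "internal": "cloud",          # everyday business workloads
    "public": "cloud",            # customer support, marketing content
}

def place_workload(data_class: str) -> str:
    """Return the target environment, defaulting to on-premise when unsure."""
    return SENSITIVITY_TO_ENVIRONMENT.get(data_class, "on-premise")
```

Note the default: when a workload's classification is unknown, it falls back to the local environment, which keeps the failure mode on the side of sovereignty rather than exposure.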

Modularity is a strategic imperative

Underpinning this hybrid evolution is a broader philosophical shift toward modularity. The days of buying into a monolithic platform and building everything on top of it are rapidly disappearing. Enterprises want control, and that means the freedom to change direction.

“No one likes being locked into a vendor,” Quackenbos adds. “Over the last decade, companies have learned that lesson the hard way. With AI, the stakes are even higher. You cannot future-proof your strategy if it cannot evolve. Modularity enables that evolution: it gives enterprises the freedom to experiment, to adapt, to bring in best-in-class components, and to exit when something no longer fits.”

This mindset is not simply about technical architecture. It reflects a broader cultural demand for agility and customisation. Every enterprise has unique needs, and those needs are often more diverse than vendors expect. “When we launched our early access programme, we were surprised by just how varied the deployments were,” Quackenbos recalls. “No two customers wanted the same configuration. That forced us to lean even further into modularity, to ensure that flexibility was not just an option, but a fundamental principle.”

That principle also aligns with SUSE’s open-source heritage, which continues to shape its approach to ethical AI. Transparency, auditability and community collaboration are not optional extras; they are structural safeguards against the ethical failures that continue to plague the sector.

“You cannot determine whether a model is ethical or scalable if you cannot see how it was built,” Quackenbos continues. “Being able to inspect the training data, the weights, the code: that visibility is what ensures accountability. Open systems make it harder to hide bias or poor practices.” The lesson is clear. Ethical AI must be visible AI. Proprietary black boxes may be commercially appealing, but they invite reputational and compliance risks that many enterprises are no longer willing to tolerate.

Compliance cannot be an afterthought

That risk is growing in line with the regulatory environment. From the EU AI Act to evolving national mandates, enterprises must now design with compliance in mind. Treating regulation as a bolt-on or a box-ticking exercise is not sustainable.

“If you want to deploy AI in a regulated market, you must be prepared to meet those requirements up front,” Quackenbos warns. “Europe is raising the bar. The days of releasing a model first and figuring out the compliance later are over. Enterprises must start with the legal obligations, then build the deployment strategy around them.”

That approach demands strong collaboration between legal, technical and executive teams, and a clear understanding that compliance is not a barrier to innovation, but a condition for its survival.

One of the more persistent myths surrounding enterprise AI is that performance and sustainability are in conflict. In reality, efficiency is the most effective route to environmental responsibility. “Performance is sustainability,” Quackenbos says. “The better your model performs for the same workload, the less energy it uses. If you can take a smaller model, fine-tune it with just enough of your data, and skip a retraining cycle, you are saving resources and improving outcomes. You do not need to reinvent the wheel; you just need to use what works.”

This pragmatic approach extends to model selection. Enterprises can and should benefit from the work others have already done. Adopting proven, efficient models developed by major players can be a shortcut to both impact and sustainability. “There is no shame in building on top of what is already working,” Quackenbos says. “Use models that have been battle-tested. Adapt them to your data. That is how you stay competitive without burning through your GPU budget or your carbon allowance.”

From bolt-on to backbone

What emerges from all of this is a shift in AI’s role within the enterprise stack. It is no longer something that can be tacked on at the end. It must be embedded, operational, contextual, and proactive. “A year ago, AI was a novelty,” Quackenbos continues. “Companies bolted it onto their applications to answer questions. Now, users expect it to interact intelligently based on what they are doing. It is the difference between a helpdesk chatbot and an assistant that can anticipate your next action. AI is not just a layer; it is becoming the foundation.”

The implications are significant. AI must be treated as a core capability, not a peripheral tool. That means aligning development, operations, data management and security from the outset. Anything less risks turning a transformative opportunity into a costly liability.

For those building the future, that shift is already underway. Developers, DevOps engineers and even non-technical users are adopting AI at pace. No-code platforms, AI-assisted coding and embedded intelligence are changing how software is designed and who gets to design it. “Every engineer is becoming an AI engineer, whether they like it or not,” Quackenbos adds. “Even people without a development background can now build applications. That changes the entire landscape.”

There are risks, of course. Unvetted applications, particularly in critical systems, must be carefully managed. But in most cases, the rise of AI-enabled development is improving productivity, reducing friction and accelerating innovation. “Most of these apps are not mission-critical; they are productivity tools, customer support, internal systems,” Quackenbos notes. “Still, the debate around critical software is ongoing. There are real concerns about how LLMs can be used to create malicious code. But there are also tools being developed to counter that threat. It is a constant evolution.”

The principle that matters most

Asked to distil a single guiding principle for enterprise leaders navigating this landscape, Quackenbos does not hesitate. “Curiosity,” he says. “Stay curious. What works today may not work next month. The market is shifting fast. If you want to stay ahead, you must keep learning, testing, and adapting. That mindset, more than any technology, is what will separate success from stagnation.”

It is a timely reminder. In an era of accelerating expectations and proliferating tools, it is not the loudest models or the most rigid platforms that will win. It is the enterprises that stay curious, build flexibly, and remember that intelligence, artificial or otherwise, only matters if it serves the humans who use it.
