Enterprises are scaling AI without understanding it

AI is being adopted faster than the organisations meant to govern it, and the gap is beginning to show in procurement decisions, data exposure, and stalled rollouts. Shadow AI is not a fringe security problem; it is the default behaviour of decentralised enterprises.

Illuminaire normally looks at AI from the perspective of technology and infrastructure, but the deeper story often sits inside how organisations absorb it: in ordinary businesses where AI arrives not as a strategic programme, but as a browser tab. A director buys a tool because it looks useful, a department tries it because a competitor mentioned it, and the rest of the organisation discovers the consequences afterwards.

Harry Mason, Head of Client Services at Mason Infotech, works inside that reality every day. His clients are typically SMEs that are large enough to have governance obligations, but small enough to be vulnerable to improvised adoption. Their world is where enthusiasm meets identity management, where procurement meets data access, and where a simple SaaS decision can quietly change the risk posture of the entire organisation.

“We are a managed service provider, full service, so we help with everything from internet right up to now being at the point of AI,” says Mason. “We have got our service desk, internet provision, telephony, security, and all the usual pieces you would expect. We are a relatively classic case of a telephony business that migrated into an IT managed service provider over time as clients kept coming to us with new requirements.”

Who becomes the IT manager

AI adoption exposes a truth that many small and mid-sized organisations prefer not to confront. The IT manager is sometimes a person, sometimes a supplier, and sometimes a role that only materialises when something breaks. AI increases the cost of that ambiguity because it touches sensitive systems by default, and it spreads through organisations faster than formal governance can follow.

“For some businesses, we are the IT manager as well,” Mason says. “We take on that IT manager or IT director role where we help make tech decisions before they impact anybody. If the business has an internal IT team, we normally take a step back and work consultatively, but we can fill both roles depending on what is needed.”

The key problem is not that businesses buy AI; it is that they buy it outside the structures that make technology safe and scalable. Procurement often happens at departmental level, while consequences land at organisational level. Networks, identity, permissions, and policies suddenly matter in ways that the buyer never had to think about, and the implementation burden falls on whoever can make the decision survivable.

“The sooner we can get involved before someone who is not IT literate starts buying software as a service, the better,” Mason says. “There are AI programmes sold as SaaS that do one job, where actually if you implemented something like Microsoft Copilot properly it does that job and sixteen other jobs. SaaS fails every single time in implementation, and where it fails is directors buying something and then just telling people to use it with no adoption strategy.”

Why projects fail early

The industry now trades in failure rates as casually as it trades in model benchmarks. Mason does not treat those numbers as surprising, and his explanation is almost boring in its simplicity. Most AI programmes fail because organisations try to move too quickly, then confuse resistance with technical weakness.

“The biggest thing is that the AI tools themselves are not failing,” Mason says. “Firms are trying to adopt them too quickly and without structure, without an adoption strategy. Directors get kind of magpie like about the cool new shiny thing, they try to overhaul a full process overnight, and then you guarantee resentment and resistance, so staff keep doing things the old way.”

Training is the recurring missing layer in his account, and he frames it as the foundation for any credible rollout. Businesses accept that enterprise software requires structured onboarding, documentation, and hours of training. Many then abandon that discipline the moment the word AI enters the conversation, as if novelty eliminates the need for basics.

“Very few businesses are delivering the necessary training on AI tools,” Mason says. “A good training programme for any SaaS purchase probably involves five to seven hours worth of training on that tool. Most businesses are not delivering a single hour worth of training on AI adoption, and SaaS adoption lives or dies by user adoption, so it cannot be successful if nobody is using it.”

He also points to a cultural distortion that did not exist with earlier waves of enterprise technology. Consumers have had direct access to AI tools, formed strong opinions, and carried those opinions into workplaces where the requirements and risks are entirely different. That creates a cynical starting point that leaders often underestimate, and then try to counter with mandates rather than education.

“It is very rare you see business technology placed in the hands of consumers who are allowed to decide if it is effective,” Mason says. “With AI there is huge consumer sentiment already, and a lot of it is negative. People use examples like it cannot count letters in strawberry, and they do not understand the last thing you would use a large language model for is maths, you would use a calculator.”

Copilot is not lightweight

Few terms are used as casually as Copilot, and few are misunderstood as thoroughly. Mason sees organisations claim they are using Copilot when they mean they have access to a generic chatbot embedded in an interface, not the enterprise capability that sits inside identity controls and reaches into organisational data. That confusion leads directly to underwhelming outcomes, duplicated spending, and false conclusions about value.

“Microsoft have wrapped their web based Copilot chat bot into Teams and made it available on Windows,” says Mason. “That is not Copilot for Microsoft 365. Copilot for 365 is a different licence you plug into your estate that then has access to whatever data you want it to have access to, and its agentic capabilities are incredibly powerful within that 365 use case.”

The distinction matters because the difference between casual chat and enterprise integration is the difference between novelty and capability. The browser-based experience may help with summarising a meeting or drafting an email, but it does not change how the organisation works. The integrated version, correctly governed, can become a genuine productivity layer across information work, and Mason argues that many organisations never experience that because they never make the licensing and adoption decisions properly.

“We have seen businesses who adopt Copilot for 365 save time by factors of weeks out of years, hours out of days, by implementing it effectively,” Mason says. “The confusing thing is Microsoft attach the Copilot name to basically everything that uses AI. If you are talking about the one included in inverted commas, that is a chatbot, and unless you get a proper handle on how it works it will never be more than maybe useful, but the Copilot for 365 licence specifically is incredibly powerful.”

He also returns to the procurement problem that appears again and again in SME AI adoption. A director buys a niche AI SaaS tool because it looks specific and tangible, then discovers it duplicates capability already available in the estate. The organisation ends up paying twice, then adoption stalls because implementation was never treated as a programme.

“We find the sooner we get involved in that AI purchase process, the less likely there is to be a bunch of SaaS tools kicking around where no one really knows what each one does,” Mason says. “Quite often you did not need to buy that one thing in the first place. If you implement a platform tool properly, you avoid that sprawl, but only if you treat adoption and training as part of the purchase.”

Governance and shadow AI

AI adoption discussions often treat data governance as a back-office concern, something that can be addressed once tools are deployed and teams are enthusiastic. Mason frames that as the fastest route to an incident. AI changes the consequences of weak permissions because it turns access into output, and it makes information easier to surface at speed.

“You cannot adopt any AI tools without being very clear on data security,” Mason says. “There are legal considerations about what data is going into the system and where it is stored and processed. Then there is internal governance, because if you give an AI tool access to HR data, can members of the team who should not see that data see it, can operational teams see payroll, can they see mental health information when they are not line managers.”

The workforce knowledge gap makes this harder. Many people still treat AI as a harmless chatbot rather than a system that ingests information and can re-present it in unexpected ways. Mason says it is common for employees to be shocked when they realise that careless inputs can become exposure, and that shock is itself a sign of how underprepared many organisations are.

“It is very rare I come across an individual who is not shocked when I say, if you put customer data into the large language model you cannot assume it stays private,” Mason says. “People see it as a chat bot. Actually it is a system that predicts and learns from the data you input, so if you give an enterprise tool access to a folder and you do not secure who else has access, you have created a serious problem.”
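The folder scenario Mason describes can be reduced to a single check worth running before any AI tool is pointed at shared data: who else can already read it? The sketch below is purely illustrative; the folder name, the group model, and the idea that permissions are plain sets are assumptions for the example, not any real product's API.

```python
# Hypothetical sketch: audit who can read a folder before exposing it to an AI tool.
# Folder and group names are invented for illustration.

# Groups cleared to read each sensitive folder, per internal governance.
SENSITIVE_GROUPS = {"hr-data": {"hr", "directors"}}

def over_exposed(folder: str, readers: set) -> set:
    """Return the groups that can read `folder` but are not cleared for it."""
    cleared = SENSITIVE_GROUPS.get(folder, set())
    return readers - cleared

# If operations staff can already read the HR folder, granting an AI assistant
# access to that folder turns a quiet permissions gap into searchable output.
leak = over_exposed("hr-data", {"hr", "directors", "operations"})  # -> {"operations"}
```

The point of the exercise is the one Mason makes: the AI tool does not create the exposure, it surfaces whatever the existing permissions already allow, at speed.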

Shadow AI is where all these weaknesses converge. It is not necessarily malicious, and it is not limited to advanced tools. It is the ordinary behaviour of departments acting at the speed of convenience, in a world where AI is a browser tab, remote work is normal, and organisational controls are uneven.

“Shadow AI is departments or individuals deciding we are going to use this tool now without running it past leadership or IT,” Mason says. “It is probably the biggest problem you have with AI security wise and adoption wise. It is difficult to block because it is browser based, but you also want employees to be innovative and efficient, so it is a fine balance between allowing room to experiment and preventing data mistakes that will hurt you.”

Agentic value and sequencing

When Mason talks about agentic AI, he does not lead with spectacle. He leads with operational reality, where the cost of delay is tangible and the cost of non-compliance can be catastrophic. His most compelling example is safety monitoring in the built environment, where attention is expensive and errors create liability.

“The best example I have seen is health and safety reporting,” Mason says. “An AI agent is plugged into cameras on site, scanning photos and live feeds, scanning reports, and flagging health and safety violations versus the risk assessment. It flags missing hard hats or missing barriers, and a manager gets a notification immediately and can fix it, which solves a lot of liability problems and could reduce accidents.”

Crucially, he does not present this as the removal of humans, but as the automation of monitoring that humans cannot practically do. The agent becomes the eyes, while the human remains the accountable decision-maker, and in many environments that human oversight is not optional. The point is not autonomy as ideology; it is autonomy as controlled efficiency.

“The best results we see is human in the loop,” Mason says. “The agent does the grunt work, humans validate outputs, feedback, and improve the systems over time. There is no reason you could not generate a report in a template too, but many businesses like the final check to stay human, and sometimes there are legislative reasons for that.”
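The human-in-the-loop pattern he describes can be sketched in a few lines. Everything here is a stand-in rather than a vendor API: the `SiteCheck` structure, the detection sets, and the review step are assumptions made to show the shape of the workflow, in which the agent compares what a vision model reports against what the risk assessment requires, and a person keeps the final sign-off.

```python
# Illustrative sketch of the human-in-the-loop pattern: agent flags, human decides.
# The data model and function names are hypothetical, not a real product's API.
from dataclasses import dataclass

@dataclass
class SiteCheck:
    required: set            # items the risk assessment mandates, e.g. {"hard_hat"}
    detected: set            # items the agent's vision model reported seeing
    confirmed: bool = False  # stays False until a human signs off

def flag_violations(check: SiteCheck) -> list:
    """Agent does the grunt work: anything required but not detected is flagged."""
    return sorted(check.required - check.detected)

def human_review(check: SiteCheck, approve: bool) -> SiteCheck:
    """The accountable decision stays with a person, not the agent."""
    check.confirmed = approve
    return check

check = SiteCheck(required={"hard_hat", "barrier"}, detected={"barrier"})
alerts = flag_violations(check)  # -> ["hard_hat"]: manager gets notified, then confirms
```

The design choice worth noting is that `confirmed` defaults to false: nothing the agent produces counts as a decision until a human validates it, which mirrors the legislative constraint Mason mentions.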

Sequencing sits behind everything he describes, and it is where executive impatience does the most damage. He argues that the safest path is not to start small once, but to start small repeatedly, treating adoption as a series of controlled renovations rather than a single transformation event. This avoids the pattern where one successful pilot becomes a rush, the rush becomes sprawl, and sprawl becomes governance debt.

“AI adoption is more incremental change,” Mason says. “If you think of it like a house renovation, it is room by room, not we did one room and now we do the whole house. You do incremental adoption, and after a year you look back and you have made meaningful efficiency change, but you did not try to fix everything in the space of a quarter.”

His closing argument is also the simplest. Models will improve regardless of what any one enterprise does, and vendors will continue to sell the idea of easy transformation. The differentiator will be whether leaders treat adoption as a people programme with governance, training, and clear guardrails, or whether they treat AI as a tool that can be thrown into the organisation and left to find its own purpose.

“Businesses that implement AI using a people first approach, making sure the tools are being adopted and making sure users are given the correct amount of training, will be the ones that get transformational outcomes,” Mason says. “You could buy the best tool in the world tomorrow, but if you just chuck it at your workforce, you may as well not have bothered. Leadership must get involved, otherwise you get shadow AI, stalled adoption, and a mess of tools nobody owns.”
