Organisations are investing heavily in AI, but many are doing so with a limited understanding of the skills required to succeed. The belief that low-code platforms and productivity tools can somehow replace foundational expertise is not only naive but also actively derailing transformation efforts.
The modern workplace is inundated with tools promising to simplify AI implementation. Executives hear familiar phrases such as "plug-and-play", "democratised access", and "no technical knowledge required", and assume that scaling AI is just a matter of throwing software at the problem. But the gap between proof-of-concept and production deployment is wider than many anticipate, and simplicity is often a façade.
Ash Gawthorp, Co-Founder and Chief Academy Officer at Ten10, describes the mismatch between expectations and reality. He splits AI deployment into three domains: machine learning, automation, and productivity tools. Each requires entirely different skill sets, yet they are frequently conflated into a single AI strategy.
The danger, he argues, is assuming that anyone can pick up an AI tool and start delivering value. “This narrative of simplification has been around for decades,” he says. “But every time, we end up discovering that these tools still require expertise. The idea that you can automate complex decisions without understanding the implications is where problems start.”
That complexity often emerges only at scale. A prototype built using low-code automation may function under ideal conditions, but when extended to 10,000 users, flaws in security, architecture, and performance become apparent. By then, poor decisions have been embedded into infrastructure that is expensive to unwind.
Why domain knowledge still matters
One of the most overlooked elements in AI readiness is the importance of domain-specific understanding. Automation tools, including GenAI applications, depend on knowing not just what a process is but how and why it functions within a particular organisational context.
Gawthorp draws a sharp distinction between technical skills and business fluency. It is not enough to understand models and coding. For workflow automation to succeed, those deploying AI must understand how processes operate today and identify any existing constraints or edge cases that may hinder its effectiveness. “You do not need machine learning expertise to automate a process,” he adds. “But you do need to understand the domain and speak to the people doing the work.”
This requirement makes cross-functional collaboration essential. Yet many training strategies focus only on hard skills, leaving a significant gap in communication, influence, and change management. That creates friction later when AI solutions fail to embed within teams or face quiet resistance from those expected to use them.
The need for domain knowledge becomes even more pronounced when the tools themselves begin to hallucinate or generate plausible but incorrect outputs. Without a baseline understanding of the business process, users are unable to recognise when the machine is confidently wrong. This is not a marginal issue; it is a systemic one, affecting everything from customer support to compliance. Unless staff know how to interpret, challenge and verify AI outputs, the business ends up trusting tools that neither understand its goals nor share its accountability.
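The verification habit described above can be made concrete in code. The sketch below is illustrative only, assuming a hypothetical workflow where a model extracts an invoice total; the function name, fields, and tolerance are all invented for the example. The point is the pattern: a domain-level cross-check catches the model when it is confidently wrong.

```python
# Hedged sketch: never accept a model-extracted value without a domain check.
# All field names and thresholds here are hypothetical.

def verify_extracted_total(model_output: dict, line_items: list[float]) -> float:
    """Cross-check a model-extracted invoice total against the source data."""
    claimed = model_output.get("total")
    if claimed is None:
        raise ValueError("Model returned no total; route to a human")
    expected = round(sum(line_items), 2)
    # A plausible but incorrect output fails this check instead of being trusted.
    if abs(claimed - expected) > 0.01:
        raise ValueError(
            f"Model total {claimed} disagrees with line items ({expected})"
        )
    return claimed

verify_extracted_total({"total": 150.00}, [100.00, 50.00])  # passes
```

The check encodes exactly the domain knowledge the article calls for: someone had to know that an invoice total must equal the sum of its line items before the rule could be written down.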
Transformation cannot be thrown over the wall
The most persistent failure in AI training strategies is the assumption that formal learning alone is sufficient to create capability. Too often, organisations pick a cohort, put them through certification training, and assume they are ready to drive change. Gawthorp is unequivocal about why this fails. “If people are not interested or engaged, or if the training is not immediately applied, it will be forgotten,” he says. “Worse, if you certify someone who has not practised the skill, you risk creating a false sense of competence.”
Instead, he advocates for reversing the model. Candidates should demonstrate their interest and aptitude before being accepted into training programmes. Once there, they should be taught through doing, not just instruction. The training environment must mirror the reality of production systems, complete with broken configurations, edge cases, and imperfect datasets, because this is how skills are truly learned.
The same logic applies to organisational adoption. Throwing AI tools at a workforce and expecting transformation is not a strategy. “Too many organisations are layering AI on top of broken processes,” Gawthorp says. “If you have not mapped your workflows, cleaned your data, and set technical guardrails, then the best-case scenario is that AI delivers inconsistent value. The worst case is a compliance or security breach.”
Innovation needs freedom, not chaos
The need for structure does not mean limiting innovation. Gawthorp emphasises that the most effective AI deployments are often bottom-up, driven by individuals who discover value through personal experimentation. But that innovation must happen within clear boundaries.
Organisations should enforce guardrails through technology, not just policy. This involves defining what data can and cannot be used, ensuring local model deployment where necessary, and establishing clear rules for verifying outputs. Without these structures, it is only a matter of time before sensitive data leaks into public models or hallucinated outputs are mistaken for truth.
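One way to picture "guardrails through technology, not just policy" is a pre-flight check that blocks restricted data before a prompt ever reaches an external model. The sketch below is a minimal illustration, not a production filter: the patterns, function name, and blocking rules are assumptions for the example, and a real deployment would use a vetted data-loss-prevention layer.

```python
import re

# Illustrative patterns for data that must never leave the organisation.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-style identifiers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),       # card-number-like digit runs
]

def guard_prompt(prompt: str) -> str:
    """Reject prompts containing data barred from external models."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt blocked: contains restricted data")
    return prompt

guard_prompt("Summarise our Q3 process map")  # passes through unchanged
```

Because the rule lives in code rather than in a policy document, it holds even when an enthusiastic experimenter forgets the policy exists.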
At the same time, rigid top-down strategies are unlikely to deliver meaningful productivity gains. “Much of the value we are seeing is not coming from McKinsey reports or C-suite mandates,” Gawthorp continues. “It is coming from people on the ground finding ways to improve their own tasks. But they need infrastructure that supports that exploration without putting the business at risk.”
This is where many current strategies fall short. Organisations encourage innovation but fail to provide safe forums for experimentation. Teams build useful automation in isolation, but there is no mechanism to share success or failure across the enterprise. Innovation is happening, but it is fragmented, undocumented, and vulnerable to being lost as soon as staff move on or priorities shift.
Rethinking ROI and retraining the workforce
One of the most challenging conversations in AI deployment is how to measure success. Cost savings are often the default metric, but they fail to capture long-term value, particularly when AI augments rather than replaces work. There are clear productivity gains when individuals use AI to generate code, create content, or research unfamiliar topics. But the most significant impact may lie in rethinking how organisations define work itself.
Gawthorp shares a telling example: the number of hours still lost to formatting documents and fixing bullet points across enterprise software. “If AI can eliminate those micro-frustrations that consume thousands of hours, the cumulative ROI is massive,” he says. “But we rarely talk about that.”
AI training, then, must not only focus on the tools but also on building the environments that support continuous learning. This includes encouraging habits, fostering psychological safety for experimentation, and integrating training into real-world work. Certifications have a place, but only when they follow practical experience, not precede it.
Building AI literacy for everyone
The most provocative insight is that AI training should not be reserved for technical departments. Gawthorp identifies the most significant opportunities for AI adoption in industries that have traditionally avoided technology, including legal, conveyancing, and surveying, where manual processes are still prevalent and digital disruption is overdue.
But this requires a cultural shift. AI should be introduced not as a mystery to be feared or a miracle to be trusted but as a tool whose limitations are as crucial as its strengths. People must understand how it works, where it breaks, and why guardrails are in place. That starts with AI literacy, embedded across departments in the same way cybersecurity awareness is now a standard expectation. Not every team member will build models, but every team member should understand how models influence decisions, the risks they introduce, and how to critically evaluate their outputs.
Ultimately, the most significant risk is treating AI as a bolt-on, an optional extra to be added after the real work is done. If businesses continue to see AI as something separate from culture, process, recruitment, and leadership, they will fail to realise its potential. Worse, they may expose themselves to avoidable risks. “There is a phrase we use,” Gawthorp concludes. “It is not magic; it is just maths. The moment you think it is magic, you stop asking how it works. That is when the trouble starts.”
The future of AI in the enterprise will not be defined by hype, headlines, or hallucinated roadmaps. It will be defined by the hard work of building real skills, creating responsible systems, and empowering people to question the tools they use. That is not simple. But it is the only path to scale.