The push toward ethical and responsible AI in universities is accelerating, and the sector now stands at a decisive crossroads. Digital maturity has emerged as the new competitive frontier, reshaping how learning is designed, delivered, and assessed across global higher education.
Universities are facing one of the most layered transitions in their modern history as artificial intelligence settles into the fabric of academic life. The sector has moved beyond the early skirmishes about whether AI belongs in higher education. The real question is how institutions build the intellectual, ethical, and operational foundations for a world in which human judgement must coexist with algorithmic support. The challenge is not simply technical, but cultural, structural, and strategic.
Cristi Ford, Chief Learning Officer at D2L, sees this moment not as a threat but as a long-awaited correction. She argues that the sector has misunderstood the nature of the shift. “AI is not replacing educators. It is releasing them to focus on what is important. Educators want to transform the lives of their learners, and that requires time, presence, and deep engagement.” The argument is not about preservation but about restoring academic work to its core purpose.
Much of the current anxiety is rooted in the visible strain across the workforce. Administrative load has ballooned over more than a decade, stretching staff to breaking point and stifling the reflective and relational dimensions of teaching. Ford describes how automation clears space rather than closes it. She cites the UK white paper Rewiring Higher Education, which identifies the potential for AI to handle predictable tasks such as marking, accessibility preparation, and repetitive content generation.
A sector stepping out of the policing mindset
Attempts to control generative AI by tightening restrictions, limiting usage, or relying on detection tools have proved short-lived. Ford believes the sector must leave behind the defensive posture that has dominated the past two years. “The future of assessment is not catching the misuse. It is about cultivating judgment. We must teach students how to use AI ethically and responsibly.” Her view reflects the argument in The Wicked Problem of AI and Assessment, which frames assessment design as a strategic challenge rather than a compliance issue.
Many universities still imagine that integrity can be safeguarded through barriers, when students already live in a world of instant access and constant augmentation. The policing approach creates a widening gap between institutional expectations and the realities of both study and work. Ford highlights the implications for employability, noting that global companies are already restructuring roles. Amazon’s recent workforce reductions illustrate how rapidly AI is reshaping expectations for entry-level roles. Students who are prevented from using generative tools will emerge unprepared for a labour market in which AI fluency is foundational.
This is the pivot point where capability must replace control. The instinct to tighten rules often indicates an institution struggling to articulate a coherent AI strategy. Without a credible path forward, avoidance becomes a temporary substitute for governance. Digital maturity is emerging as the more accurate indicator of institutional strength and resilience.
Digital maturity becomes a competitive frontier
Digital maturity is now shaping reputation, strategic positioning, and global competitiveness. It is less about technology procurement and more about institutional posture. Ford references the cloud-based architecture at the University of Manchester and the digital library reconfiguration at the University of Leeds as early indicators of how universities are rethinking their foundations. These investments reflect a deeper acceptance that future learning environments will be adaptive, data-rich, and increasingly personalised.
Ford notes that institutions must approach this not as a rapid compliance exercise, but as a recalibration of their long-term operating model. “The next competitive frontier for UK universities is digital maturity. Institutions need programmes that create digital resiliency and prepare students for the workforce.” She references YouGov findings showing that more than half of universities expect AI-infused learning to become standard. The expectation of quality is shifting, and institutions that cannot demonstrate digital depth may find their appeal diminished.
Maturity in this context is as much cultural as technological. It reflects an institution’s capacity to adapt, to experiment responsibly, and to treat technology as an evolving partner in academic design. The universities that move decisively will set the pace of the sector’s evolution.
Automation repositions academic work
The fear that automation will deskill teaching persists across the sector, yet Ford challenges the assumption that AI diminishes academic identity. She describes a model that reinforces human authority rather than replacing it. “Automation must be led by the educator. There must be safeguards. Educators must confirm and approve any automated feedback before it goes live.” That insistence on human oversight creates a layered model in which automation carries the weight of routine labour, leaving educators to exercise professional judgement where it matters most.
One example is the development of AI-assisted study tools that interpret course content, generate revision materials, and guide learners toward clarity. These tools do not scour the open internet but work from the curriculum created by the academic. The intent is to expand the educator’s reach, not to dilute it. Ford stresses that pedagogical values must remain the anchor, and that automation must be wired to serve those values rather than steer them.
Personalisation intensifies this argument. Institutions have tried to personalise learning for decades, yet the scale required was never achievable. Generative AI changes that, but only if universities design systems that honour transparency, fairness, and inclusion. Ford highlights the need for ethical boundaries. Personalisation must adapt to learners without slipping into monitoring or bias. The objective is improved learning, not surveillance disguised as support.
Assessment faces a structural reckoning
Assessment is the area undergoing the most intense pressure. Generative AI forces a reconsideration of what universities are trying to measure. Ford notes that some academics are returning to handwritten exams but sees this as a retreat rather than a solution. She believes the sector has reached a point where assessment must be redesigned rather than preserved.
She describes the growing movement toward co-created learning, in which students submit not only their final work but also their AI interaction threads. This helps academics analyse the cognitive process behind the output. “If we want to understand how students think, we must see the steps they take,” she explains. Faculty experiments range from AI-supported peer review to multi-layered reflective assessments that capture how students evaluate, compare, and challenge information.
Ford argues that universities will need to embrace dual-track approaches in some cases, with multiple assessment routes that achieve the same learning outcome. Flexibility, iteration, and context-specific design become essential as institutions balance workload, quality, and inclusion. The Wicked Problem paper warns against the illusion of a universal solution, and Ford echoes that stance with clarity. The goal is to measure judgement, not just output.
Infrastructure, culture, and the speed of change
Universities stand at a point where infrastructure and culture must be rebuilt together. Curriculum design, IT architecture, staff development, student services, and policy governance all require alignment. Ford points to institutions that use AI agents in residence halls to handle routine queries and free up staff time. Others have restructured procurement cycles to one-year timelines because the pace of change has outstripped traditional planning models.
This is where leaders must reconsider the assumptions that shaped the last twenty years. The concept of learning debt is becoming more visible. Students may learn content that ages faster than their degree cycle. Ford argues that the antidote lies not in more content but in stronger cognitive capabilities. Problem-solving, collaboration, moral courage, and intellectual curiosity become the skills that protect students from over-reliance on systems that sound certain even when they hallucinate. These are traits that cannot be automated or replaced.
Bridging the widening education-industry divide
Employers now assume graduates will bring a baseline of AI literacy. Universities remain central to shaping this expectation, yet the gap between academic programmes and industry needs is expanding. Ford notes that universities are uniquely positioned to bridge this divide. They can teach the intellectual discipline that industry cannot, while designing shorter, modular learning pathways that help students acquire targeted skills with speed.
She argues that universities must move past the historical silo that has separated academic learning from workplace reality. Industry expects adaptable, AI-fluent graduates, and higher education is responsible for preparing them without compromising academic depth.
The impact of AI on inequality remains uncertain, but Ford sees potential for a positive trajectory if institutions act with care. She recalls interviews with students, including those with ADHD, who found that AI tools helped them manage learning challenges that traditional classrooms could not accommodate. Adaptive pathways, captioning, multilingual support, and structured study tools create opportunities that were not previously accessible.
She cautions that this potential could be lost if institutions drift toward efficiency-driven strategies that treat inclusion as optional. AI can widen participation, but only if the sector resists the temptation to reduce education to optimisation metrics. Inclusivity must remain a strategic anchor, not an operational afterthought.
Ford’s final reflection carries weight. “It is time to get in the game. AI is here to stay. Leaders cannot rest on old assumptions, as infrastructure is changing. In six months, it will be a different landscape.” This is not a warning, but a call to engagement. The sector cannot afford hesitation or superficial strategies.
The rewiring of higher education has already begun. The institutions that will lead the next chapter are those willing to confront complexity, embrace experimentation, and build systems that honour both human judgement and technological possibility.