AI is accelerating at an unprecedented pace, but the digital infrastructure supporting it must evolve just as rapidly to meet increasing demands for bandwidth, low latency, and scalability. As Mark Venables explains, as businesses push AI into more data-intensive applications, the shift towards Network-as-a-Service and private connectivity is becoming critical to ensuring performance, security, and futureproofing.
AI is reshaping industries at an unprecedented pace, but behind every large language model, autonomous system, or real-time analytics platform is a digital infrastructure struggling to keep up. As AI adoption grows, the question is how to scale AI and build a network that can support its insatiable demand for data, bandwidth, and low-latency processing.
Peter Coppens, Vice President of Product at Colt Technology Services, sees the growing need for AI-ready infrastructure as an opportunity to redefine how networks function. “Many companies are retrenching from Europe, and some of the major US operators are focusing more on their domestic markets,” he says. “This creates a significant opportunity for Colt to become the network access aggregator for Europe, EMEA, and parts of Asia. One of the key pillars of our strategy is to act as the primary network access provider for business customers, offering a range of connectivity solutions.”
The role of a network aggregator is to link enterprises to critical digital infrastructure such as cloud providers, data centres, and AI computing hubs. “Colt has more cloud on-ramps in Europe than any other provider, offering direct connections to AWS, Microsoft Azure, and other cloud platforms,” he continues. “These providers publicly list their interconnectivity points, and we have established more connections than anyone else in the region.”
Building scalable infrastructure for AI’s heavy demands
AI’s vast data requirements mean that traditional network architectures are increasingly unsuitable. Enterprises looking to move data-intensive workloads into the cloud quickly realise that relying solely on public internet connectivity is inefficient, costly, and unpredictable.
“Cloud providers do not include connectivity as part of their services,” he adds. “Organisations must arrange their own network access. The most common approach is to use existing internet access, but this has latency, reliability, and cost limitations. Internet routes are unpredictable, and cloud providers charge enterprises for every byte of data leaving their cloud environments. These charges can become significant for organisations with high data transfer volumes.”
A more effective approach is private connectivity, which ensures low-latency, high-reliability data transfer without the congestion of public networks. A dedicated, point-to-point private network ensures guaranteed latency, stability, and security. It is unaffected by public internet congestion or external network conditions. Additionally, cloud providers offer significantly lower data egress fees for traffic moving through private connections rather than public internet, which can result in substantial cost savings for enterprises with high data transfer needs.
The right connectivity solution is essential for AI applications involving model training, inference, and real-time decision-making. “If an enterprise is regularly transferring terabytes of data, such as moving large language models between locations, doing so over the public internet is impractical due to speed limitations and potential latency fluctuations,” Coppens explains. “Private connectivity ensures the necessary performance and stability for these critical workloads.”
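To make the egress-fee argument concrete, here is a rough sketch of the potential saving. All per-gigabyte rates and the workload size are hypothetical placeholders for illustration, not any provider's actual pricing:

```python
# Illustrative comparison of monthly cloud egress costs over the public
# internet vs. a private interconnect. All rates below are hypothetical
# assumptions, not actual provider pricing.

def egress_cost(tb_per_month: float, rate_per_gb: float) -> float:
    """Monthly egress cost in USD for a given per-GB rate."""
    return tb_per_month * 1024 * rate_per_gb

PUBLIC_RATE = 0.09   # $/GB over public internet (assumption)
PRIVATE_RATE = 0.02  # $/GB over a private link (assumption)

workload_tb = 50  # e.g. regularly moving model checkpoints and datasets

public = egress_cost(workload_tb, PUBLIC_RATE)
private = egress_cost(workload_tb, PRIVATE_RATE)
print(f"Public internet: ${public:,.2f}/month")
print(f"Private link:    ${private:,.2f}/month")
print(f"Saving:          ${public - private:,.2f}/month")
```

Even with modest assumed rates, the gap grows linearly with transfer volume, which is why high-throughput AI workloads tip the balance towards private connectivity.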
Network-as-a-Service: The future of AI networking
One of the most significant shifts in networking is the adoption of Network-as-a-Service (NaaS), which allows enterprises to treat network resources with the same flexibility as cloud computing. Telecoms have traditionally operated on a rigid model, with long-term contracts, fixed capacities, and little flexibility. With NaaS, customers can provision and adjust their network capacity in real time, just as they would scale computing resources in the cloud.
Colt has embraced this model, allowing businesses to adjust their bandwidth dynamically. “Customers can interface directly with our network through a portal or API, eliminating the need for lengthy order processes or manual intervention,” Coppens continues. “If two sites are already connected to our fibre network, a customer can instantly activate a 10Gbps connection between London and Frankfurt. This was unheard of 20 years ago when lead times were measured in weeks.”
This approach aligns well with AI workloads, which are dynamic and can fluctuate dramatically, making the ability to flex network capacity in real time crucial. Developers can opt for a traditional flat fee or a pay-as-you-go model where they are charged by the hour. For instance, a company might need to increase bandwidth from 1Gbps to 3Gbps for four hours to handle a data-intensive task and then scale back down. That level of control is now at their fingertips.
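As an illustration of what hourly, pay-as-you-go billing for that kind of flex might look like, the following sketch models the 1Gbps-to-3Gbps example above. The class, field names, and hourly rate are hypothetical, not Colt's actual NaaS API or pricing:

```python
# A minimal sketch of pay-as-you-go bandwidth flexing. The model and
# the hourly rate are illustrative assumptions, not a real NaaS API.

from dataclasses import dataclass

@dataclass
class BandwidthFlex:
    base_gbps: float           # committed baseline capacity
    burst_gbps: float          # temporarily provisioned capacity
    hours: float               # duration of the burst window
    rate_per_gbps_hour: float  # hourly price per extra Gbps (assumption)

    def burst_cost(self) -> float:
        """Charge for capacity above baseline, billed by the hour."""
        extra = self.burst_gbps - self.base_gbps
        return extra * self.hours * self.rate_per_gbps_hour

# The article's example: flex 1 Gbps up to 3 Gbps for four hours.
flex = BandwidthFlex(base_gbps=1, burst_gbps=3, hours=4,
                     rate_per_gbps_hour=5.0)
print(f"Burst charge: ${flex.burst_cost():.2f}")
```

The point of the model is that the customer pays only for the two extra gigabits for four hours, rather than committing to a fixed higher capacity for a contract term.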
The rise of GPU-as-a-Service and hybrid AI infrastructure
Beyond networking, another key trend is GPU-as-a-Service, where companies rent GPU capacity instead of investing in costly hardware. “Providers like CoreWeave are rapidly expanding their presence in Europe, and we are actively working to integrate them into our network,” Coppens adds. “The ability to seamlessly connect to these new AI compute hubs will be a key factor in future-proofing AI strategies.”
This need for flexibility extends to hybrid and multi-cloud environments. Many enterprises do not want to rely on a single cloud provider: they might use Microsoft Azure but require access to Oracle Cloud, OVH in Europe, or Alibaba Cloud in Asia. “We act as a broker between these cloud providers, ensuring seamless interconnection,” Coppens says. “Instead of treating AWS, Google, and others as separate entities, we provide the infrastructure to bring them together in a flexible and scalable way.”
Sustainability and the carbon footprint of AI networks
AI’s growing energy demands have prompted governments to reconsider large-scale data centre expansion. AI consumes vast amounts of power, and some governments are already pushing back against new data centre developments. “We have taken ESG seriously at Colt, driven from the top by our CEO, Keri Gilder,” Coppens says. “We have received top ratings from EcoVadis for our sustainability initiatives, which are backed by science-based targets.”
Reducing AI’s environmental footprint means eliminating inefficiencies in network hardware. Traditional telco infrastructure can resemble a museum of outdated technology, with racks of equipment still running after 20 years. Colt has systematically retired older systems like SDH, once the backbone of telecom networks. “In our data centres, we have replaced entire rows of power-hungry legacy hardware with modern, energy-efficient systems, achieving massive power savings,” Coppens says. “By mid-next year, customers will see green ratings for the data centres and cloud providers they connect to, much like hotel booking sites display quality ratings alongside prices. Currently, telco decisions are almost entirely price-driven, but sustainability should also be a factor.”
Colt is also working on providing real-time carbon footprint estimates for network usage. For example, when a customer provisions a 1Gbps connection, Colt can show the associated carbon impact. This level of transparency will help businesses make informed choices about their infrastructure and sustainability goals.
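A back-of-envelope version of such a per-connection estimate might look like the following sketch. The power draw and grid carbon-intensity figures are assumptions for illustration, not Colt's actual methodology:

```python
# A simple sketch of how a per-connection carbon estimate could be
# derived. Power draw and grid intensity are illustrative assumptions.

def connection_co2_kg(power_watts: float, hours: float,
                      grid_g_co2_per_kwh: float) -> float:
    """CO2 in kg for a link consuming `power_watts` over `hours`."""
    energy_kwh = power_watts * hours / 1000
    return energy_kwh * grid_g_co2_per_kwh / 1000

# Assume a 1 Gbps link's share of network equipment draws ~15 W,
# on a grid averaging 250 gCO2/kWh, over a 30-day month.
monthly = connection_co2_kg(15, 30 * 24, 250)
print(f"≈ {monthly:.2f} kg CO2 per month")
```

Exposing even a rough figure like this alongside the price would let customers weigh sustainability against cost, as the article suggests.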
Latency and the changing role of the network edge
Real-time AI applications demand low-latency connectivity, but defining what ‘edge computing’ means remains an industry challenge. In Europe, Amsterdam and Brussels are less than two milliseconds apart over the optical fibre network. For most applications, that latency is more than sufficient. Some niche applications, such as high-frequency trading in finance, require lower latency, but those are exceptions.
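The sub-two-millisecond figure is easy to sanity-check with a back-of-envelope calculation, assuming a rough fibre route length between the two cities and a typical refractive index for single-mode fibre:

```python
# Back-of-envelope check of the sub-2 ms figure. The route length and
# refractive index are rough assumptions for illustration.

C_KM_PER_MS = 299_792.458 / 1000  # speed of light, km per millisecond
REFRACTIVE_INDEX = 1.468          # typical for single-mode fibre

def fibre_rtt_ms(route_km: float) -> float:
    """Round-trip time over fibre, ignoring equipment delay."""
    one_way = route_km / (C_KM_PER_MS / REFRACTIVE_INDEX)
    return 2 * one_way

# Assume roughly 180 km of fibre between Amsterdam and Brussels.
print(f"RTT ≈ {fibre_rtt_ms(180):.2f} ms")
```

Light travels through fibre at roughly two thirds of its vacuum speed, so even with routing overhead the round trip between the two cities comfortably stays under two milliseconds.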
While edge computing has been widely discussed, its exact role remains fluid. “Some providers even position a single data centre in Belgium as an ‘edge’ facility, despite it having been there for years,” Coppens explains. “The industry still lacks a clear definition of what edge computing truly means.”
A dynamic future for AI networking
AI is pushing digital infrastructure to its limits, demanding more bandwidth, lower latency, and greater flexibility than ever before. The traditional approach to networking, with rigid contracts and slow provisioning, is no longer viable.
“The telecoms industry has underpinned the digital transformation of so many other sectors, yet it has been slow to transform itself,” Coppens concludes. “How we hail a taxi or listen to music has changed dramatically, but telco interactions are still largely manual and outdated. That is no longer sustainable. We must fully embrace automation, and network-as-a-service is the key to enabling flexible, scalable, real-time connectivity that meets the demands of modern AI-driven infrastructure.”