As artificial intelligence (AI) use grows, training AI models is placing huge strains on networks, and nowhere more so than in datacentres. Research from technology company Ciena finds that the massive and rapid growth of AI will drive a significant increase in interconnect bandwidth needs over the next five years.
The survey, conducted in January 2025 in partnership with Censuswide, queried more than 1,300 datacentre decision-makers responsible for planning or purchasing datacentre infrastructure across 13 countries: the US, UK, Germany, Saudi Arabia, Norway, Sweden, Australia, Korea, India, the Philippines, Indonesia, Brazil and Mexico.
The survey found that 53% of respondents expect AI workloads to place the biggest demand on datacentre interconnect (DCI) infrastructure over the next two to three years, surpassing cloud computing (51%) and big data analytics (44%).
To meet such surging AI demands, 43% of new datacentre facilities are expected to be dedicated to AI workloads. Moreover, with AI model training and inference requiring unprecedented data movement, the datacentre experts predicted a “massive” leap in bandwidth needs. When asked about the fibre optic capacity needed for DCI, 87% of participants said they will need 800 Gbps or higher per wavelength.
The responses also revealed a growing opportunity for pluggable optics to support bandwidth demands and address power and space challenges. According to the survey, 98% of datacentre experts believe pluggable optics are important for reducing power consumption and the physical footprint of their network infrastructure.
The survey found that, as requirements for AI compute continue to increase, the training of large language models (LLMs) will become more distributed across different AI datacentres. Some 81% of respondents believe LLM training will take place over some level of distributed datacentre facilities, which will require DCI solutions to link those facilities together.
Rather than deploying dark fibre, the majority (67%) of respondents expect to use managed optical fibre networks (MOFN), which utilise carrier-operated high-capacity networks for long-haul datacentre connectivity.
When asked about the key factors shaping where AI inference will be deployed, the respondents ranked the following priorities: AI resource utilisation over time (63%), reducing latency by placing inference compute closer to users at the edge (56%), data sovereignty requirements (54%) and offering strategic locations for key customers (54%).
“AI workloads are reshaping the entire datacentre landscape, from infrastructure builds to bandwidth demand. Historically, network traffic has grown at a rate of 20-30% per year. AI is set to accelerate this growth significantly, meaning operators are rethinking their architectures and planning for how they can meet this demand sustainably,” said Ciena’s international chief technology officer, Jürgen Hatheier.
“The AI revolution is not just about compute – it’s about connectivity. Without the right network foundation, AI’s full potential can’t be realised. Operators must ensure their DCI infrastructure is ready for a future where AI-driven traffic dominates.”