Today’s Distributed Hybrid AI requires investments in AI Fabrics, DCI/WAN, and Security

AI Networking to Approach $100B a Year by the End of the Decade

By Alan Weckel, Founder and Technology Analyst at 650 Group

As the market moves from foundational models to widespread adoption of inference, AI networking fabrics will increasingly span multiple facilities. For a hyperscaler, this could mean connecting multiple buildings across municipalities to deliver the power and capacity required for clusters with up to a million GPUs/XPUs. For an enterprise, this could mean bridging the on-premises facility to public clouds via colocation facilities. The result is a significant tailwind for Ethernet buildouts within the data center and for DCI (Data Center Interconnect) expansions, driving demand for WAN networking.

Data Gravity, Privacy, and Regulation to Drive AI Hybrid Cloud

As AI adoption skyrockets across various sectors, enterprises are rapidly embracing hybrid cloud deployments, driven by data gravity, privacy concerns, regulatory demands, cost efficiency, and low-latency requirements. In parallel, hyperscalers, Tier-2 cloud providers, and neoclouds (offering GPU-as-a-Service and AI-as-a-Service) are making massive investments in global AI infrastructure, scaling up to millions of GPUs. Notable examples of neoclouds include CoreWeave, X.AI, and regional providers in the Middle East, such as HUMAIN and G42. To ensure reliable power and continuous uptime, AI data centers are strategically clustered regionally, necessitating high-performance AI Fabrics (intra-data center), DCI, and WAN connectivity.

Foundational Models to Give Way to the Enterprise Market

Simultaneously, enterprises are deploying AI infrastructure in colocation facilities and on-premises to complement cloud-based AI workloads, driven by needs for enhanced privacy, regulatory compliance, and cost optimization. For example, a financial institution may maintain sensitive customer data on-premises to comply with strict data sovereignty regulations. At the same time, a healthcare provider could utilize colocation sites to process real-time patient diagnostics with minimal latency. Across all environments—edge, colocation, intra-DC, and DCI/WAN—security remains paramount, as organizations prioritize responsible and secure AI scaling. This requires multiple layers of security and robust multi-tenancy frameworks.

Ethernet for Inside the Data Center

Bandwidth demands within the data center are growing at over 100% CAGR, driven by the scale-out of Ethernet networks to support GPU/XPU clusters. While early AI deployments used 400 Gbps links, most designs for the remainder of 2025 and 2026 call for 800 Gbps or higher. To meet these demands, customers are deploying a mix of shallow-buffer and deep-buffer switches to support the scale-out fabric and to connect to the rest of their data center and application storage via the front-end network.
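To make the growth rate concrete, a 100% CAGR means bandwidth demand doubles every year. The sketch below compounds a hypothetical starting figure (the 10 Tbps value and the horizon are illustrative assumptions, not 650 Group data):

```python
# Illustrative only: project fabric bandwidth under a given CAGR.
# A 100% CAGR (cagr=1.0) doubles the figure each year.
def project_bandwidth(start_tbps: float, cagr: float, years: int) -> float:
    """Compound a starting bandwidth figure at an annual growth rate."""
    return start_tbps * (1 + cagr) ** years

# A fabric carrying a hypothetical 10 Tbps today, doubling annually:
for year in range(4):
    print(f"year {year}: {project_bandwidth(10, 1.0, year):.0f} Tbps")
# year 0: 10 Tbps ... year 3: 80 Tbps
```

The same doubling logic explains why link speeds jump in powers of two (400G to 800G to 1.6T) rather than incrementally.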

Ethernet for Connecting Data Centers

The Ethernet DCI market is surging to support the AI facilities discussed above. High-density L3 switches and routers, along with ZR/ZR+ optical modules, form the backbone of these interconnects. While Ethernet DCI is not new, next-generation 800G ZR and ZR+ modules now let customers reach distances of 1,000 km at significantly higher bandwidth. In practical terms, this allows Ethernet to address cost-effective enterprise connectivity across most long-haul, country-level links.

AIOps and Automation for Managing Data Centers

Managing a complex, distributed infrastructure presents significant operational challenges. This is precisely where Artificial Intelligence for IT Operations (AIOps) and automation become essential, serving as critical enablers for maintaining performance, reliability, and cost efficiency. AIOps enables proactive anomaly detection and predictive maintenance, intelligent incident management, and automated resource optimization. AIOps and automation transform IT operations from a reactive, labor-intensive process into a proactive, intelligent, and self-managing system.
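The anomaly-detection idea at the core of AIOps can be sketched very simply: flag telemetry samples that deviate sharply from the baseline. The snippet below is a minimal illustration using a z-score over link-latency samples; the metric name, numbers, and threshold are hypothetical, and production AIOps platforms use far richer models:

```python
import statistics

def detect_anomalies(samples, threshold=2.0):
    """Return indices of samples more than `threshold` standard
    deviations from the mean of the series (a simple z-score test)."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # flat series: nothing to flag
    return [i for i, x in enumerate(samples) if abs(x - mean) / stdev > threshold]

# Hypothetical link-latency samples in microseconds, with one spike:
latency_us = [102, 99, 101, 100, 98, 250, 101, 100]
print(detect_anomalies(latency_us))  # → [5], the spike
```

In practice the payoff comes from acting on such signals automatically, e.g. rerouting traffic or opening an incident ticket before users notice degradation.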

HPE Juniper Benefits from HPE's Server Experience

Juniper Networks already has a strong product presence and market leadership in AI through its secure, AI-native networking platform. The company is a recognized leader in DCI/WAN solutions for AI, leveraging a mix of custom and merchant silicon. We note that its PTX and MX routers are widely deployed for DCI and colocation use cases. Inside the data center, Juniper's portfolio is already utilized by several neoclouds and hyperscalers to power AI workloads. Notably, Juniper was the first branded vendor to ship 800G data center switching, in 2024. With its acquisition by HPE, Juniper will gain significant exposure to enterprise opportunities on the compute side as well as synergistic knowledge of rack integration. The combined company now holds a unique position as the only vendor with a multi-billion-dollar compute and networking portfolio.