Capturing the AI Wave: New Revenue Models for Service Providers and Colocation Operators

Introduction

Service Providers (SPs) and Colocation (Colo) providers play a pivotal role in enabling and scaling the AI economy. AI workloads, whether training, inference, agentic, consumer, or enterprise, have fundamentally different compute, storage, networking, and power requirements than traditional applications. Organizations must process massive volumes of sensitive, mission-critical data, redefining where computing occurs and how data moves securely.

Beyond connectivity and hosting, AI opens entirely new revenue streams in compliance, edge, and interconnect services as customers look to trusted providers to navigate this complexity so they can focus on their own businesses. This blog explores how SPs and Colos can capitalize on these trends to create differentiated, resilient business models for data center build-outs.

Data Sovereignty: Turning Compliance Into a Competitive Advantage

In an era of expanding regulation – from the General Data Protection Regulation (GDPR) to the EU’s AI Act, HIPAA, and emerging national frameworks such as those of the Saudi Arabian Monetary Authority (SAMA) – data sovereignty is non-negotiable. Storing and processing data within national or regional borders has become the new normal.

AI makes compliance far more complex: training global models can risk privacy violations, while inference often requires localized data processing. For example, healthcare inference for a patient in Italy can’t be performed in Germany. But what if the patient is a German citizen who wants that information returned to Germany? And where, and on what data, was the model trained in the first place?

SPs and Colos are uniquely positioned to monetize compliance by offering:

Sovereign AI fabrics that route data through compliant backbones, paired with encryption overlays and sovereignty audits.

Localized Colocation zones built in key EU data regions or regulated U.S. states, offering tiered services for government, finance, and healthcare clients.

Service Providers embracing sovereign clouds gain new premium revenue tiers while mitigating compliance risk for enterprises seeking trusted infrastructure partners.
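The routing logic behind a sovereign AI fabric can be sketched as a simple placement policy. The site names, jurisdictions, and policy table below are illustrative assumptions for this sketch, not any provider’s actual catalog:

```python
# Hypothetical sketch: sovereignty-aware workload placement.
# All sites, jurisdictions, and rules below are illustrative assumptions.

SITES = {
    "fra-1": {"jurisdiction": "EU", "country": "DE"},
    "mil-1": {"jurisdiction": "EU", "country": "IT"},
    "ash-1": {"jurisdiction": "US", "country": "US"},
}

# Residency policy: which jurisdictions may process a given data class,
# and whether the data must stay pinned to its home country.
POLICY = {
    "eu-health": {"allowed_jurisdictions": {"EU"}, "pin_country": True},
    "generic": {"allowed_jurisdictions": {"EU", "US"}, "pin_country": False},
}

def eligible_sites(data_class: str, data_country: str) -> list:
    """Return the sites where this data class may legally be processed."""
    rule = POLICY[data_class]
    out = []
    for name, site in SITES.items():
        if site["jurisdiction"] not in rule["allowed_jurisdictions"]:
            continue
        # Country pinning: e.g. health data must stay in its home country.
        if rule["pin_country"] and site["country"] != data_country:
            continue
        out.append(name)
    return out

# Italian health data may only land in the Milan site; generic data anywhere.
print(eligible_sites("eu-health", "IT"))  # ['mil-1']
```

In practice this policy layer would sit on top of the encryption overlays and sovereignty audits described above, so that routing decisions are enforceable and auditable rather than advisory.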

Managed Optical Fiber Networks (MOFN): Service providers are ideally positioned to offer managed optical fiber networks that deliver the scalable connectivity AI applications require. In some regions, regulatory constraints prevent hyperscalers from owning or managing fiber; there, hyperscalers rely on local service providers for MOFN solutions to connect their data centers and to manage connectivity between hyperscale and enterprise data centers.

Data Center Interconnect (DCI) technologies will provide the fabrics for sovereign networks. A mix of existing fiber connections, new links, existing routes, and new locations will require significant capacity increases, management, and automation. Scale-across is an emerging form of DCI for scale-out networks, demanding roughly 10X the capacity the market required just a few years ago. ZR/ZR+ coherent optics drive efficiency and are ideal for linking high-data-rate sovereign edge sites over long distances. Cisco’s Routed Optical Networking (RON) strategy addresses DCI from the optics, systems, and software perspectives, and takes a customer-first approach of the right offering for the right customer.
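As a rough illustration of the capacity planning such DCI growth implies, the sketch below estimates how many coherent pluggable links a given inter-site demand requires. The link rate and utilization target are planning assumptions for the example, not vendor specifications:

```python
import math

def zr_links_needed(demand_gbps: float, link_rate_gbps: int = 800,
                    utilization: float = 0.7) -> int:
    """Number of coherent pluggable links needed to carry `demand_gbps`
    while keeping each link at or below the target fill.
    Illustrative planning math only."""
    usable_per_link = link_rate_gbps * utilization
    return math.ceil(demand_gbps / usable_per_link)

# e.g. 10 Tbps of inter-site demand over 800G ZR/ZR+ at 70% fill:
print(zr_links_needed(10_000))  # 18
```

The same arithmetic at last decade’s typical 100G link rates would require more than 100 links, which is one way to see where the “10X the capacity” pressure on management and automation comes from.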

Edge Computing: Bringing Data and Token Factories Closer

Edge computing isn’t new, but AI multiplies its importance. As data volumes surge across industries, including manufacturing, retail, transportation, and local government/smart cities, proximity to the device becomes essential for real-time AI inference. Some latency requirements will require data to stay on-premises, but most new use cases can tolerate metro-distance latency. SPs and Colos can use their existing edge footprint to capture these new edge workloads.

Hybrid Edge Ecosystems: SPs and Colos can partner with IoT and Industrial automation vendors to create secure, low-latency edge zones that capture new AI workloads.

Managed Edge Services: Turnkey, vendor-validated AI platforms – integrated with 5G/6G network slicing and intent-based routing – allow customers to consume AI capacity as a subscription or per-token service.

We anticipate a mix of AI and non-AI workloads in hybrid setups, creating bidirectional connectivity between the core and the edge for both workload types. As AI advances, we expect processing to be split between edge and core rather than an either/or choice: AI application processing will occur fluidly depending on latency, compute requirements, and cost. In healthcare, for example, basic image recognition may occur at the edge, while disease analysis happens in the core on larger models, with a large data center fabric such as the Nexus 9000 supporting it.
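That fluid edge-versus-core decision can be sketched as a toy placement heuristic. The thresholds below (model-size limit at the edge, round-trip times) are illustrative assumptions, not measured values:

```python
def place_inference(latency_budget_ms: float, model_gb: float,
                    edge_model_limit_gb: float = 20.0,
                    edge_rtt_ms: float = 2.0,
                    core_rtt_ms: float = 18.0) -> str:
    """Toy edge-vs-core placement by latency and model size.
    All thresholds are illustrative assumptions."""
    # Small models within the latency budget can run at the edge.
    if model_gb <= edge_model_limit_gb and edge_rtt_ms <= latency_budget_ms:
        return "edge"
    # Larger models go to the core, if the budget tolerates the extra RTT.
    if core_rtt_ms <= latency_budget_ms:
        return "core"
    return "reject"

# Basic image recognition (small model, tight budget) lands at the edge;
# disease analysis (large model, relaxed budget) lands in the core.
print(place_inference(5, 10), place_inference(50, 100))  # edge core
```

A production scheduler would also weigh cost, data residency, and accelerator availability, but the shape of the decision is the same.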

Modern edge data centers are built with prefabricated modular units, accelerator racks, and liquid cooling, supporting up to 1 MW per rack and minimizing latency to core sites. Edge-to-core and edge-to-cloud links are best served by dense router platforms and ZR/ZR+ coherent pluggable optics spanning 100G to 800G to support a diversity of use cases. For example, just before OCP, Cisco launched a new 8000 router built on new P200 silicon, delivering 64 ports of 800 Gbps. For customers seeking choice and openness in their network operating system, Cisco offers robust support for SONiC on the 8000 platform alongside its own OS. Within managed offerings, vertically integrated partnerships speed deployment and ensure seamless network-compute expansion.
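To see why metro-distance latency is tolerable for most inference, a quick propagation-delay estimate using the common ~5 µs/km rule of thumb for light in fiber (switching and queuing delay ignored in this sketch):

```python
def fiber_rtt_ms(km: float, us_per_km: float = 5.0) -> float:
    """Round-trip fiber propagation delay. ~5 microseconds per km one-way
    is the standard rule of thumb for light in glass; switching and
    queuing delay are deliberately ignored in this sketch."""
    return 2 * km * us_per_km / 1000.0

# An 80 km edge-to-core metro span costs well under a millisecond:
print(fiber_rtt_ms(80))  # 0.8
```

Even a few hundred kilometers of fiber adds only low single-digit milliseconds, which is why only the tightest control loops truly need on-premises processing.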

Partnering with Hyperscalers Unlocks Additional Value

Rather than compete with hyperscalers, SPs and Colos can amplify their value by collaborating: providing last-mile density, sovereign coverage, and specialized facilities that complement hyperscaler regions. Meta’s 1 GW Prometheus campus in Ohio, for example, uses a significant amount of Colo space and SP connectivity to connect its data halls.

Access to Accelerators: Offering shared GPU/XPU capacity within Colos allows enterprises to burst workloads to the cloud while keeping sensitive data local. A “right-size compute” model helps balance control and flexibility. Because sovereign compliance requirements limit the flexibility to chase power and space across regions, Colos that meet those requirements relieve the compliance bottleneck.

High-Power Engineering: With AI rack rows approaching 1 MW, Colos with advanced cooling and power engineering are best positioned to host hyperscaler extensions. As inter-operator peering and session management grow, smart switches and intent-based fabrics enhance security, visibility, and time-to-cloud performance.

Main Technologies

The following technologies represent the key enablers that align technical advancement with revenue creation for SPs and Colos.

| Technology | AI Impact | Revenue Unlock | Cisco Offerings |
| --- | --- | --- | --- |
| Advanced Routing | AI-aware traffic engineering, secure session management, and dynamic policy control | Premium QoS tiers, sovereign-data routing, and bandwidth-on-demand services | Cisco 8000 SONiC |
| Coherent optics (400G/800G ZR/ZR+) | 400-800 Gbps reach up to 1,000 km for metro/region interconnect | Enables new regional and sovereign markets through lower-cost, energy-efficient links | 400G/800G Pluggable Modules |
| Full DCI Stack | Scalable optical and switching fabric connecting distributed AI clusters | Interconnect as a service and rapid provisioning for smaller enterprises | Routed Optical Network Portfolio |
| Data Center Ethernet Switching | Secure, repeatable token-factory fabrics linking CPU/GPU/XPU resources | Allows incremental adoption and secure scaling without forklift upgrades | Nexus 9000 |
Conclusion

AI represents a transformational growth opportunity, not a threat, for SPs and Colos. By combining compliance-ready infrastructure, edge proximity, and hyperscaler partnerships, service providers can unlock premium services and new recurring revenue while building stronger customer trust.

Starting small – building sovereign and edge ecosystems, integrating smart fabrics, and investing in sustainability – positions SPs and Colos to ride AI as a long-term growth tailwind.

###