Navigating the Explosion in Data Center Networking Demand

New Business Models Emerge

Introduction
The data center networking landscape is more important than ever, propelled by the growth of AI and the formation of new sovereign clouds and Neoclouds. Rack-scale systems are the new unit of compute, driving scale-out, scale-up, and scale-across as the next growth areas in the market. Data center switching is the one technology keeping pace with GPU/XPU revenue growth. Over the next decade, data center networking should surpass $200B in annual spend with an installed base of more than one billion ports, the majority of both being Ethernet.

Scale-Out: Horizontal Expansion for Distributed Workloads
Scale-out grows capacity by adding independent servers linked via Ethernet or InfiniBand. Ideal for distributed AI training and cloud workloads, it prioritizes scalability and fault tolerance. Ethernet is overtaking InfiniBand, and co-packaged optics (CPO) innovations are slashing power use further. Disaggregation enables best-of-breed integration, cutting costs by more than 40%.

Scale-Up: Vertical Boosts for Super GPUs
Scale-up enhances performance within a single system or rack, allowing GPUs to share memory for tightly coupled jobs. Today it requires ultra-low-latency links such as NVLink or UALink, but Ethernet is expected to gain share quickly by the end of the decade. Longer term, scale-up becomes the larger CPO market opportunity.

Scale-Across: Inter-Facility Connectivity
Scale-across links data centers for multi-site AI, carrying cluster interconnect traffic over a dedicated AI network and significantly increasing the data center interconnect (DCI) opportunity. Built on DCI Ethernet, it is vital for distributed inference. Disaggregation ensures dynamic, policy-aware scaling in pursuit of 90%+ GPU utilization.

Market Opportunities Surge as Ethernet Switch Market Gets Larger
The market breaks down into core segments, starting with hardware, where high-speed Ethernet switches are shifting from 400G and 800G toward 800G and 1.6T and will capture the lion’s share of revenue over the next few years. The ability to squeeze maximum performance out of the lowest power budget, in both air- and liquid-cooled systems, lets customers deploy more GPU/XPUs. Software is growing even faster amid rising network complexity, spanning orchestration, security, and a blend of vendor-specific network operating systems alongside open-source options such as SONiC. Then there is the vital role of LPO and LRO transceivers in reducing power before CPO emerges as a game-changer by decade’s end, further transforming density and efficiency. A larger market, multiple technology shifts, and relentless demand for bandwidth create many multi-billion-dollar opportunities that did not exist just a few years ago.

Hyperscalers, including AWS, Google, Meta, and Microsoft, are the primary engines, joined by enterprises transitioning to hybrid clouds. Still, AI workloads stand out as the biggest driver, demanding ultra-low-latency and high-throughput fabrics for massive GPU clusters. Our analysis reveals AI-related networking investments will swiftly dominate data center spend, outstripping traditional segments.

In 2026, the market hovers well above $50B, propelled by these AI trends and surging public cloud adoption that bolsters back-end infrastructure. The rise of Neoclouds and sovereign clouds, a new, deep-pocketed set of players that start at hyperscale levels, is diversifying the customer base beyond anything seen in previous cloud waves.

Projecting forward, we see the data center networking market climbing to $200B by 2032, a CAGR exceeding 20% that outpaces nearly every tech sector except AI accelerators themselves, while the Ethernet switch segment alone could approach $200B annually by the mid-2030s. But here’s the critical takeaway: true scaling requires network disaggregation, separating hardware from software to enable flexible, mix-and-match ecosystems built from simpler building blocks. This approach avoids proprietary lock-in, slashes total cost of ownership (TCO), and accelerates innovation in AI-optimized fabrics.
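As a sanity check on the growth math, here is a minimal sketch of the compound annual growth rate implied by these figures, assuming roughly $50B in 2026 and $200B in 2032 as the endpoints (the function name and exact endpoints are illustrative, not from the report):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Assumed endpoints: ~$50B in 2026 growing to ~$200B in 2032 (6 years).
implied = cagr(50.0, 200.0, 2032 - 2026)
print(f"Implied CAGR: {implied:.1%}")  # roughly 26%, consistent with "exceeding 20%"
```

A quadrupling over six years works out to about 26% per year, comfortably above the 20% figure cited.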

Startup Example: Nexthop AI Starts Shipping Multiple Products
Nexthop AI included three different switches in its launch: a low-power Tomahawk 5 system, the first 2RU air-cooled Tomahawk 6, and a Qumran-3D scale-across spine. At the same time, the company has been a significant contributor to the SONiC operating system (OS). By taking an engineering-heavy joint design manufacturing (JDM) approach, Nexthop AI is rapidly developing products that match hyperscale customers’ need for integrated solutions and a faster pace of networking innovation.

Nexthop AI just raised a $500 million Series B funding round, with participation from Lightspeed Venture Partners, Andreessen Horowitz, and Altimeter; we note the round was oversubscribed. With data center networking continuing to expand significantly, we expect continued investor interest.

As the Ethernet switch market expands, these new opportunities are measured in the billions of dollars and are incremental to current data center networking spend. By taking a unique approach, such as tuning and qualifying optics during product development and working closely with customers, Nexthop AI is focused on driving down the cost per token and creating a pace of network innovation that matches the yearly cadence of GPU/XPU releases.

Conclusion
The data center networking market is in hyper-growth mode, expanding from more than $50B today toward $200B on the near-term horizon and offering massive opportunities. The magnitude of growth and the absolute size of the market allow incumbents to keep growing through scale and innovation while startups bring vitality and breakthroughs. We believe this dynamic ecosystem will accelerate progress in AI and represent a massive opportunity for startups and incumbent vendors alike.

###