Arrcus Highlights Strong AI/ML Platform for Ethernet Switches

Ethernet Switches Deployed into AI/ML Use Cases Expected to Exceed $10B by 2028, Growing at Over 100% CAGR

2023 saw robust AI/ML networking growth, with both InfiniBand and Ethernet surging. AI/ML deployments increased rapidly throughout the year as operators built out massive training clusters and began deploying inference engines to monetize those investments. AI/ML will be a significant catalyst for Data Center switching growth for the rest of the decade.

Arrcus’s next-generation AI/ML networking platform, ACE-AI, is a key component of 2024 AI/ML growth. Arrcus provides a consistent networking fabric from the Edge to the DC Core to Multi-cloud. ACE-AI offers extensive telemetry options, a key feature for AI/ML performance and for keeping Job Completion Time (JCT) low. Real-time streaming telemetry and end-to-end network visibility help operators lower JCT and keep expensive GPUs from sitting idle.
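The link between telemetry and JCT can be sketched in a few lines. This is an illustration only, not Arrcus's ACE-AI API: it shows how streamed per-link counters (utilization and PFC pause frames, thresholds assumed) can flag the congested links that stall collective operations and leave GPUs idle.

```python
# Illustrative sketch -- not Arrcus's ACE-AI API. It shows why real-time
# telemetry matters for Job Completion Time: one congested or back-pressured
# link stalls collective operations across the whole GPU cluster.

from dataclasses import dataclass

@dataclass
class LinkSample:
    """One streamed telemetry sample for a fabric link (fields assumed)."""
    link: str
    utilization: float     # 0.0 .. 1.0 over the sample interval
    pfc_pause_frames: int  # priority-flow-control pauses seen this interval

def flag_hotspots(samples, util_threshold=0.9, pause_threshold=100):
    """Return links that risk inflating JCT this interval."""
    return [
        s.link
        for s in samples
        if s.utilization >= util_threshold
        or s.pfc_pause_frames >= pause_threshold
    ]

samples = [
    LinkSample("leaf1:eth1", 0.55, 0),
    LinkSample("leaf1:eth2", 0.97, 12),    # near saturation
    LinkSample("spine2:eth7", 0.40, 450),  # heavy PFC back-pressure
]
print(flag_hotspots(samples))  # ['leaf1:eth2', 'spine2:eth7']
```

In a real deployment the samples would arrive as a continuous stream (for example over gNMI), and the flagged links would feed rerouting or load-balancing decisions before GPUs start idling.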

The Arrcus architecture works with shallow-buffer (L2) silicon, like Broadcom’s Tomahawk family, and deep-buffer (L3) silicon, like Broadcom’s Jericho family. It delivers low-latency networks with both traditional CLOS and distributed routing architectures. AI/ML networks will use a combination of deep- and shallow-buffer ASICs, and as AI/ML matures and expands into inference workloads, we expect a broad mix of silicon in AI/ML fabrics. Arrcus’s ArcOS provides a robust network operating system on white-box switches. Arrcus already partners with Celestica, Delta, Edge-Core, QCT, and UfiSpace for hardware, allowing operators to stay multi-vendor and mitigate supply-chain risk.
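The CLOS fabrics mentioned above follow simple sizing arithmetic. The sketch below is generic two-tier leaf-spine math, not an Arrcus sizing tool, and the port counts are assumptions chosen for illustration.

```python
# Generic two-tier CLOS (leaf-spine) sizing arithmetic -- an illustration,
# not an Arrcus tool. Port counts below are assumed example values.

def leaf_spine(leaf_ports: int, uplinks_per_leaf: int, spine_ports: int):
    """Size one leaf-spine tier and report its oversubscription ratio."""
    server_ports = leaf_ports - uplinks_per_leaf  # ports left for GPUs/servers
    max_leaves = spine_ports                      # one uplink per leaf per spine
    oversub = server_ports / uplinks_per_leaf     # downlink:uplink ratio
    return server_ports, max_leaves, oversub

# Example: 64-port leaves with 8 uplinks each, into 64-port spines.
servers, leaves, ratio = leaf_spine(64, 8, 64)
print(servers, leaves, ratio)  # 56 64 7.0
```

Training fabrics typically push the ratio toward 1:1 (non-blocking) because collective traffic saturates uplinks; inference fabrics can often tolerate higher oversubscription, which is one reason a mix of silicon and topologies is expected.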

As operators trial and deploy ACE-AI, we expect test results that give those operators confidence in Ethernet and demonstrate significant savings. We expect savings from Ethernet’s economies of scale and from day-2 operations: automation, high availability, and real-time telemetry.

Arrcus also supports traditional Telco SPs with Edge routing, distributed routing, and routing security. AI/ML networks benefit from this institutional knowledge, as Telco SPs operate some of the most extensive networks in the world. For example, distributed routing uses multiple fixed switches instead of large modular chassis to reduce cost and complexity. It began as a Telco SP implementation but is now the preferred architecture for many cloud providers, and the majority of deployments today are at cloud operators. With its Multicloud networking solution, Arrcus enables AI/ML workloads to be accessed seamlessly wherever they reside. Finally, with SRv6 Mobile User Plane technology, Arrcus delivers automated 5G network slicing, extending operators’ ability to connect AI/ML from the Edge to multi-cloud.
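The SRv6 idea behind that slicing can be shown in miniature: a path through the network is expressed as an ordered list of IPv6 segment identifiers (SIDs) that the packet must visit. This is a conceptual sketch using Python's standard `ipaddress` module; the addresses and the slice itself are made-up examples, not Arrcus configuration.

```python
# Conceptual sketch of SRv6 segment lists -- not Arrcus configuration.
# An SRv6 path is an ordered list of IPv6 segment identifiers (SIDs);
# steering a slice means handing traffic a different SID list.

import ipaddress

def build_sid_list(*sids: str):
    """Validate SIDs and return them as an ordered segment list."""
    return [ipaddress.IPv6Address(s) for s in sids]

# A hypothetical low-latency slice: edge -> regional core -> cloud on-ramp.
slice_path = build_sid_list(
    "fc00:a::1",  # edge router SID (assumed address)
    "fc00:b::1",  # regional core SID (assumed address)
    "fc00:c::1",  # cloud gateway SID (assumed address)
)
print([str(s) for s in slice_path])  # ['fc00:a::1', 'fc00:b::1', 'fc00:c::1']
```

Because the slice is just a list of segments carried in the packet, an operator can automate slice creation per application class without touching per-hop state, which is what makes end-to-end Edge-to-multi-cloud slicing practical.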

2024 is a pivotal year for AI/ML as operators evaluate significant new hardware, including networking, GPUs, compute, and NICs. Operators will begin moving from proofs of concept to production, and users at the consumer and business level will see significant enhancements to their daily applications. The transition from trials to production will drive significant AI/ML growth in 2024 and set the stage for a breakout in 2025.