Mobile World Congress 2026 crystallized the AI-RAN strategic split. Nokia and Samsung are betting on merchant silicon (CPUs and/or GPUs), while Ericsson, Huawei, and ZTE continue relying on purpose-built ASICs. The merchant camp believes general-purpose chips are now powerful and available enough to run real-time RAN functions plus AI workloads on the same platform — offering flexibility, faster innovation, and edge-AI revenue. The ASIC camp prefers custom chips that embed proprietary intellectual property for proven efficiency.
Nokia: Full NVIDIA GPU Commitment
Nokia demonstrated anyRAN running on NVIDIA Grace Hopper servers, with live over-the-air trials at T-Mobile, Indosat, and SoftBank proving that concurrent 5G traffic and AI tasks can run on one server (Nokia MWC 2026 Press Release). Nokia positions base stations as distributed AI factories, with commercial trials starting in 2026 and full deployments targeted for 2027.
Samsung: Multi-CPU Merchant Strategy
Samsung is executing a multi-CPU merchant approach announced at the show: Intel Xeon today, AMD EPYC next, and NVIDIA Grace CPUs later. It completed the first European commercial call with Vodafone using the Intel Xeon 6 SoC for 2G/4G/5G plus AI workloads (Samsung-Vodafone MWC Demo). Samsung is also evaluating NVIDIA GPUs for potential future AI acceleration, including AI MIMO beamforming. This multi-vendor strategy avoids lock-in while enabling scalable vRAN.
Ericsson: Custom Silicon + Neural Accelerators
Ericsson launched AI-ready Massive MIMO radios built on Ericsson Silicon with integrated neural accelerators for on-site AI inference (Ericsson MWC 2026 Product Launch). Ericsson maintains that custom ASICs deliver superior power efficiency and TCO, partnering with Intel only for Cloud RAN while keeping core RAN compute in-house.
Huawei: In-House ASIC + NPU Stack
Huawei highlighted AI-Centric networks using custom RAN ASICs combined with Ascend NPUs, showcasing energy-efficiency gains and agentic AI for intelligent operations (Huawei AI-Centric Showcase).
ZTE: Heterogeneous ASIC AI in AIR MAX
ZTE unveiled AIR MAX, built on an ASIC AI plus xPU architecture, claiming 35–40% lower energy use and higher spectral efficiency in support of L4 autonomous networks (ZTE AIR MAX Launch).
The Operator Takeaway
Operators moving toward merchant silicon see a future where RAN workloads could run on shared or third-party infrastructure, and where operators could even rent out spare GPU/CPU capacity during off-peak hours. Those siding with purpose-built ASICs are sticking with proven, single-purpose equipment refined over five generations of cellular technology. Both approaches have clear merit; the coming years will show which delivers better economics and flexibility at scale.
