How large is the gap between network performance and firewall performance?
The gap between network performance and firewall performance has been closing somewhat: one could argue that virtual firewall instances can scale to whatever throughput is needed by leveraging, for instance, DPUs or racks and racks of standard servers.
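The scale-out argument is simple arithmetic: divide the target link rate by per-instance throughput to get the number of nodes required. A minimal sketch, using hypothetical example figures (the per-node rate is an assumption for illustration, not a vendor specification):

```python
import math

def nodes_needed(target_gbps: float, per_node_gbps: float) -> int:
    """Number of scale-out firewall nodes needed to match a target link rate.

    Both inputs are illustrative; real sizing depends on packet sizes,
    enabled inspection features, and session counts.
    """
    return math.ceil(target_gbps / per_node_gbps)

# Matching a 400 Gbps link with hypothetical 25 Gbps-per-node virtual firewalls:
print(nodes_needed(400, 25))  # 16 nodes
```

In practice the divisor varies widely with traffic mix and which security services are turned on, which is exactly why DPU offload changes the economics of this calculation.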
We’ve looked into this question recently because Palo Alto Networks announced NVIDIA DPU acceleration for its VM-Series last month. In the acceleration game, DPUs are more power-efficient than FPGAs, at least in part because they use leading-edge process geometries from fabs like TSMC.
For context, network performance, as measured at an Internet Exchange Point, might be 100 Gbps Ethernet or perhaps 400 Gbps, which today compares to the throughput of an ultra-high-end firewall such as the Palo Alto Networks PA-7080. But architectures are shifting because firewalls are being deployed elsewhere, such as in co-location facilities, internet exchange points, and SaaS delivery nodes, not just at the WAN link, as was traditional.
Should security vendors invest in a “competitive edge” by building their own chips so their firewalls can run “at scale,” keeping pace with the fastest-moving traffic coming through the cloud, data centers, or branch offices over the WAN?
Special-purpose chips have their place and, we expect, always will. However, the leading-edge process geometries associated with high-volume, high-performance chips like GPUs may well give them an edge at the very high end over purpose-built ASICs.
The GPU performance advantage may exist simply because ASICs ship in lower volume than more universally adopted chips, such as a GPU or a multi-core ARM processor, that serve many uses.
What edge does a security vendor have in developing its own hardware chips/FPGAs to accomplish wire-rate speeds that keep pace with the data firehose coming from the network?
One well-known security vendor, Fortinet, makes its own ASICs. On the other hand, Marvell offers a hedge in the form of semi-custom multi-core ARM processors that it dubs “custom.” In Marvell’s case, these are considered DPUs. With DPUs, perhaps 90-100% of the semiconductor intellectual property is developed by the DPU maker, which in this example is Marvell (leveraging ARM CPU designs as well).
And, as mentioned earlier, NVIDIA has begun using its GPU-enhanced DPU systems to accelerate Palo Alto Networks’ VM-Series. If the chip is a standard part, like a DPU, then the system vendor will differentiate by developing unique software to run on top of it. Over time, we think security vendors can also differentiate through system-level design advantages and by developing specific systems targeted at certain markets.