
Next-Generation Chip Wars Heat Up as AMD Eyes Xilinx Acquisition

Integrated Circuit Design

Data Points

  • Chip manufacturer AMD is in talks to acquire Xilinx by the end of 2020.
  • With the acquisition, AMD would join Intel and nVidia as a heterogeneous computing giant.
  • Xilinx would add FPGAs to the AMD portfolio, chips capable of being reconfigured after production.
  • While not AI chips themselves, FPGAs can work in conjunction with GPUs to create intelligent routers.

Recent reports indicate that chip maker AMD, second only to Intel in x86 processor sales, is in negotiations with Xilinx, one of the largest suppliers of Field Programmable Gate Array (FPGA) chips, to acquire the company outright before the end of 2020.

This puts an interesting spin on the world of AI-centric chips in particular, as it places AMD in a strong competitive position with respect to both its long-time rival Intel and the rapidly growing, GPU-focused nVidia. Each of these companies has been busy with acquisitions: Intel acquired FPGA manufacturer Altera, while nVidia has announced a deal to acquire ARM to add CPU capabilities of its own.

While GPUs still carry the lion's share of machine learning workloads, owing both to their highly parallel nature and to the graph analytics that GPU pipelines can perform, FPGAs target a somewhat different slice of the AI market: they can be dynamically reconfigured after they are manufactured.
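To make the first half of that contrast concrete, consider why ML workloads map so naturally onto GPUs: in a dense layer, every output element is an independent dot product, so thousands of them can be computed at the same time. The short sketch below uses plain Python and NumPy purely to illustrate that data-parallel structure; it is not GPU code, and the layer sizes are arbitrary.

```python
import numpy as np

# A single dense (fully connected) layer: y = W @ x
# Each output element y[i] is an independent dot product, so a GPU can
# assign its own thread (or group of threads) to each row and compute
# thousands of them simultaneously.
W = np.random.rand(4096, 1024)   # layer weights (arbitrary sizes)
x = np.random.rand(1024)         # input activations

# Sequential view of the same computation -- every iteration is
# independent of the others, which is exactly the property GPUs exploit.
y = np.empty(4096)
for i in range(4096):
    y[i] = W[i] @ x

# Vectorized form; on a GPU-backed library this single call fans out
# across thousands of cores.
assert np.allclose(y, W @ x)
```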

This adaptability makes FPGAs attractive in the IoT space as well as in network routers and certain mobile devices. Thus, while they won't necessarily have a strong direct impact on AI itself, they work well in conjunction with AI models that reconfigure routers in response to changes in data flows.
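As a rough illustration of that pairing, the Python sketch below shows a control loop in which an AI model classifies the current traffic mix and, when the dominant workload changes, loads a different precompiled FPGA configuration. The function names (sample_traffic, classify_traffic, load_bitstream) and the bitstream files are hypothetical placeholders rather than any vendor's actual API.

```python
import time

# Hypothetical mapping from a predicted traffic class to a precompiled
# FPGA configuration (bitstream). In a real deployment these would be
# built ahead of time with the vendor's FPGA toolchain.
BITSTREAMS = {
    "bulk_transfer": "offload_dma.bit",
    "video_stream":  "packet_pacing.bit",
    "iot_telemetry": "tiny_packet_agg.bit",
}

def sample_traffic():
    """Placeholder: collect recent flow statistics from the router."""
    ...

def classify_traffic(stats):
    """Placeholder: an AI model (e.g. running on a GPU) predicts the
    dominant traffic class from the flow statistics."""
    ...

def load_bitstream(path):
    """Placeholder: push a new configuration into the FPGA fabric."""
    ...

def control_loop(poll_seconds=30):
    active = None
    while True:
        stats = sample_traffic()
        traffic_class = classify_traffic(stats)
        target = BITSTREAMS.get(traffic_class)
        # Only reconfigure when the workload actually changes; swapping a
        # bitstream takes time, so thrashing would hurt throughput.
        if target and target != active:
            load_bitstream(target)
            active = target
        time.sleep(poll_seconds)
```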

For AMD, such FPGAs would also complement its existing investment in GPU technologies, including the Radeon RX line and the "Big Navi" chips, which were originally produced for the gaming market but are increasingly being repurposed for machine learning applications as well.

The upshot is that, going into the 2020s, there are three primary chip manufacturers betting the (server) farm on heterogeneous computing, with differing architectures increasingly devoted both to just-in-time configuration of hardware through FPGAs and to the massive parallelism, central to AI, that lies at the core of GPUs.

This in turn relegates CPUs primarily to a traffic-cop role, managing an increasingly dizzying array of edge computing devices. It may be too early to say which of the three will become dominant, but there is little doubt that the space is becoming very rich, whatever the individual stacks look like.
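In code terms, that traffic-cop role amounts to little more than the dispatcher sketched below: the CPU inspects each job and hands it to whichever device suits it, keeping only orchestration and odd jobs for itself. The job kinds and helper functions here are purely illustrative assumptions, not a real runtime API.

```python
from dataclasses import dataclass

@dataclass
class Job:
    kind: str      # e.g. "dense_math", "stream_processing", "control"
    payload: object

def run_on_gpu(job):
    """Placeholder: submit massively parallel work (training, inference)."""
    ...

def run_on_fpga(job):
    """Placeholder: submit latency-sensitive streaming or packet work to
    whatever configuration the FPGA currently holds."""
    ...

def run_on_cpu(job):
    """Placeholder: handle orchestration and anything that doesn't fit
    the accelerators."""
    ...

def dispatch(job: Job):
    # In a heterogeneous system the CPU's role is mostly routing:
    # pick the device whose architecture matches the workload.
    if job.kind == "dense_math":
        return run_on_gpu(job)      # massive data parallelism
    if job.kind == "stream_processing":
        return run_on_fpga(job)     # reconfigurable pipeline
    return run_on_cpu(job)          # general-purpose fallback
```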