Digital transformation is driving all kinds of changes in enterprises, including the growing use of AI. Though AI and data centers have existed for decades, graphics processing units (GPUs) in data centers are a fairly recent development.
“GPUs have high levels of parallelism and can apply math operations to highly parallel datasets. CPUs can perform the same task but do not have the parallelism of GPUs so they’re not as efficient at these tasks,” said Alan Priestley, vice president analyst of emerging technologies and trends at Gartner.
He believes GPUs are best considered workload accelerators, optimized for specific sets of operations to complement CPUs. However, CPUs are still necessary to manage the GPUs and to run the operating system and core business logic that applications require.
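To make that contrast concrete, here is a minimal sketch of the kind of data-parallel math Priestley describes, assuming a CUDA-capable GPU and the CuPy library (a NumPy-compatible GPU array package); the array size and arithmetic are illustrative only.

```python
# A minimal sketch of GPU data parallelism, assuming a CUDA-capable GPU
# and CuPy; the array size and the math applied are illustrative only.
import numpy as np
import cupy as cp

n = 50_000_000  # 50M elements, one logical operation per element

# CPU: NumPy spreads the work across CPU cores, but far fewer of them
# than a GPU offers.
x_cpu = np.random.rand(n).astype(np.float32)
y_cpu = np.sqrt(x_cpu) * 2.0 + 1.0

# GPU: the identical expression runs across thousands of CUDA cores.
x_gpu = cp.asarray(x_cpu)           # copy the data into GPU memory
y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0  # same math, massively parallel
cp.cuda.Stream.null.synchronize()   # wait for the GPU to finish

# The results match; only the hardware doing the arithmetic differs.
assert np.allclose(y_cpu, cp.asnumpy(y_gpu), atol=1e-5)
```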
NVIDIA leads the way
NVIDIA isn’t the only chip company to offer GPUs, but it is the category leader. According to Eric Pounds, senior director of enterprise data science at NVIDIA, many use cases associated with digital transformation are driving the need for GPUs, including deep neural networks. Other use cases include AR/VR, analytics, visualization, recommendation engines, work-from-home scenarios, and digital twins.
The company offers different GPUs for different purposes, such as the A100, its flagship data center GPU, and the A30 for mainstream enterprise workloads such as AI inference and high-performance computing (HPC).
“The infrastructure has been fairly siloed for most projects in the data center that use GPUs, so you’ll have infrastructure that runs your legacy workloads to drive core parts of your business, and then you’ll have these new projects,” said Pounds. “There’s obviously a cost to that because it’s not just separate infrastructure. It’s often separate software, tools, processes, and even teams managing that infrastructure.”
Recognizing that challenge, NVIDIA and VMware have partnered to deliver solutions that essentially allow GPUs to be first-class citizens inside the infrastructure that runs existing and new enterprise workloads. This approach allows enterprises to leverage their existing investments in the virtualization platform and the processes used to secure and protect data at scale.
“Traditional database, email, and web serving applications have been very focused on getting as much out of the CPU as possible, and this is why CPUs evolved to include higher clock speeds and more cores,” said Pounds. “These applications are evolving and demanding different types of faster processing. GPUs will provide that, but we’re not at the point where [they] are in every modern server.”
NVIDIA has reference architectures available so customers can accomplish tasks such as inferencing or scaling out Apache Spark analytics more easily. The company also has a data processing unit (DPU) that resides on a network interface card to offload data processing tasks so more CPU cycles are available to run an application.
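Spark 3.x can schedule GPUs natively, and NVIDIA’s RAPIDS Accelerator plugs into that machinery. The sketch below shows roughly how a GPU-enabled Spark session might be configured; it assumes the RAPIDS Accelerator jar is already on the cluster classpath, and the resource amounts are placeholders that depend on the cluster.

```python
# A hedged sketch of GPU-accelerated Spark analytics using NVIDIA's
# RAPIDS Accelerator plugin and Spark 3.x GPU scheduling. Assumes the
# rapids-4-spark jar is on the classpath; resource amounts are
# placeholders that vary by cluster.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gpu-analytics-sketch")
    # Load NVIDIA's SQL plugin so supported query plans run on the GPU.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    # Spark 3.x native GPU scheduling: one GPU per executor, shared
    # across that executor's tasks.
    .config("spark.executor.resource.gpu.amount", "1")
    .config("spark.task.resource.gpu.amount", "0.25")
    .getOrCreate()
)

# Ordinary DataFrame code is unchanged; the plugin swaps in GPU
# implementations of supported operators behind the scenes.
df = spark.range(100_000_000).selectExpr("id % 1000 AS key", "id AS value")
df.groupBy("key").sum("value").show(5)
```

The appeal of this approach is that the analytics code itself stays the same; acceleration is a deployment decision rather than a rewrite.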
GPUs are gaining ground
Deloitte used to staff its clients’ engagements with smart people, but now the company’s clients expect hybrid offerings that combine humans with assets, projects, and intelligent systems.
“We essentially have to demonstrate to our clients how to best use GPUs,” said Nitin Mittal, U.S. artificial intelligence strategic growth consulting leader at Deloitte. “When we’re talking about punching through billions of records, applying deep neural algorithms, and uncovering insights, a general-purpose CPU chip will not cut it. Insights have to be generated in real time to make decisions on the spot.”
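As a rough illustration of the pattern Mittal describes, the hypothetical PyTorch sketch below scores a large batch of records on a GPU in a single parallel pass; the model architecture and feature width are invented for the example.

```python
# A hypothetical sketch of batch scoring on a GPU with PyTorch; the
# model and the 128-feature records are invented for illustration.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small feed-forward scorer standing in for a real trained model.
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid()
).to(device).eval()

# Score a million records in one parallel pass on the GPU.
records = torch.rand(1_000_000, 128, device=device)
with torch.no_grad():
    scores = model(records).squeeze(1)

print(scores[:5].cpu())
```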
GPUs were originally applied to gaming, and a similar approach can be used for simulation, such as testing a new manufacturing plant, production line, or set of engineered products.
“Today, it’s about customer experience and digital interactions you enable with various stakeholders so you are integrated as part of the fabric of society,” said Mittal. “It’s not about the system you’ve actually implemented, but how open you are to systems outside your control. We’re seeing the confluence of cloud, AI, cyber, 5G, and IoT with quantum computing in the next four or five years. The infrastructure will change and hardware has to be adapted.”