How NVIDIA, VMware Are Bringing GPUs to the Masses

09.09.2019 Nhanh Admin
Last week virtualization giant VMware held its VMworld 2019 user conference in San Francisco. The 23,000 or so attendees were treated to notable virtualization innovation from the host company and its many partners. Among the more interesting announcements, and one I believe flew under the radar, was the joint NVIDIA-VMware initiative to bring virtual graphics processing unit (vGPU) technology to VMware's vSphere and to VMware Cloud on Amazon Web Services (AWS). Virtual GPUs have been in use for some time but were not previously available on virtualized servers. Now businesses can run workloads such as artificial intelligence (AI) and machine learning (ML) using GPUs on VMware's vSphere.

IT ought to step up and own GPU-accelerated servers

Historically, workloads that required GPUs had to run on bare-metal servers. This meant each data science group in a company needed to buy its own hardware and absorb that cost. Also, because these servers were used only for GPU-accelerated workloads, they were often procured, deployed and managed outside of IT's control. Now that AI, machine learning and GPUs are going mainstream, it's time for IT to step up and take ownership. The challenge is that IT doesn't want to take on the burden of running dozens or hundreds of bare-metal servers.

GPU sharing is the top use case for vGPUs

The most obvious use case for vComputeServer is GPU sharing, in which several virtual machines share a single GPU, similar to what server virtualization did for CPUs. This should let businesses accelerate their data science, AI and ML initiatives, because GPU-enabled virtual servers can spin up, spin down or migrate like any other workload. This should drive up utilization, improve agility and help companies save money. This innovation should also let businesses run GPU-accelerated workloads in hybrid cloud environments.
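To make the sharing idea concrete, here is a minimal toy model in Python of how fixed-size vGPU slices let several VMs share one physical card. This is an illustration of the concept only, not NVIDIA's or VMware's actual API; the class, method and VM names are invented for the example, and real vGPU profiles are assigned through vSphere rather than code like this.

```python
# Toy model of GPU sharing via fixed vGPU slices (illustrative only;
# the real vComputeServer mechanism is configured in vSphere, not Python).
from dataclasses import dataclass, field

@dataclass
class PhysicalGPU:
    name: str
    framebuffer_gb: int
    # Maps a VM name to the framebuffer size of its vGPU slice.
    allocations: dict = field(default_factory=dict)

    def free_gb(self) -> int:
        """Framebuffer not yet carved out for any VM."""
        return self.framebuffer_gb - sum(self.allocations.values())

    def attach_vgpu(self, vm_name: str, profile_gb: int) -> None:
        """Carve a fixed-size slice of the card out for one VM."""
        if profile_gb > self.free_gb():
            raise RuntimeError(f"{self.name}: not enough framebuffer for {vm_name}")
        self.allocations[vm_name] = profile_gb

    def detach_vgpu(self, vm_name: str) -> None:
        """Spin the VM down (or migrate it away) and reclaim its slice."""
        self.allocations.pop(vm_name, None)

# Example: a hypothetical 16 GB card shared by two data-science VMs.
gpu = PhysicalGPU("card0", framebuffer_gb=16)
gpu.attach_vgpu("ml-train-vm", 8)
gpu.attach_vgpu("inference-vm", 4)
print(gpu.free_gb())  # 4 GB left for another workload
```

The point of the sketch is the economics the article describes: slices can be attached and detached as workloads come and go, so one card serves several teams instead of each team buying its own bare-metal server.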
The virtualization capabilities, combined with VMware's vSAN, VeloCloud SD-WAN and NSX network virtualization, create a solid foundation for a migration to running virtual GPUs in a true hybrid cloud.

Customers can continue to leverage vCenter

It's important to understand that vComputeServer works together with other VMware products such as vMotion, VMware Cloud and vCenter. The broad VMware support matters because it lets enterprises take GPU workloads into containerized environments. Additionally, VMware's vCenter has become the de facto standard for data center management. At one point I thought Microsoft might mount a challenge here, but VMware has won that war, so it makes sense for NVIDIA to let its customers manage vGPUs through vCenter.

NVIDIA vComputeServer also enables GPU aggregation

GPU sharing should be a game changer for businesses pursuing AI and ML, which today ought to be nearly every company. But vComputeServer also supports GPU aggregation, which lets a VM access more than one GPU, often a necessity for compute-intensive workloads. vComputeServer supports both multi-vGPU and peer-to-peer computing. The difference between the two is that with multi-vGPU the GPUs can be distributed and are not directly connected, while with peer-to-peer the GPUs are connected over NVIDIA's NVLink, making multiple GPUs look like a single, more powerful GPU.

A few years ago, the use of GPUs was restricted to a handful of niche workloads run by specialized teams. The more data-driven businesses become, the bigger the role GPU-accelerated processing will play, not only in artificial intelligence but also in day-to-day operational intelligence. Together, VMware and NVIDIA have created a way for organizations to get started with AI, data science and machine learning without breaking the bank.

Zeus Kerravala is a frequent eWEEK contributor and the founder and principal analyst of ZK Research.
He spent 10 years at Yankee Group and prior to that held a number of corporate IT positions.
