NVIDIA DGX Station

NVIDIA DGX Station is the world’s fastest workstation for leading-edge AI development. This fully integrated and optimized system enables your team to get started faster and effortlessly experiment with the power of a data center in your office. DGX Station is also the only workstation with four NVIDIA Tesla V100 Tensor Core GPUs connected by a fully-connected four-way NVLink architecture. In addition to powerful hardware, it comes with a complete software stack that includes the most commonly used machine-learning environments (TensorFlow, Caffe, Torch, Theano, …). Thanks to integrated water cooling and low noise, it is also suitable for office environments.


Hardware

Let’s take a look at the NVIDIA DGX Station in more detail, first from a hardware standpoint.

Parameter                 DGX Station
GPUs                      4× NVIDIA Tesla V100 32 GB
Performance (GPU FP16)    0.5 petaFLOPS
GPU memory                128 GB total
CPU                       Intel Xeon E5-2698 v4, 2.2 GHz (20 cores)
NVIDIA CUDA cores         20,480
NVIDIA Tensor Cores       2,560
System RAM                256 GB
Storage                   4× 1.92 TB SSD
Network                   2× 10 GbE
Maximum input power       1,500 W
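
As a quick sanity check, the headline figures follow directly from the commonly quoted per-GPU specifications of the Tesla V100 (roughly 125 TFLOPS FP16 Tensor Core performance, 5,120 CUDA cores, 640 Tensor Cores, 32 GB of memory):

    4 × 125 TFLOPS ≈ 500 TFLOPS = 0.5 petaFLOPS FP16
    4 × 5,120 = 20,480 CUDA cores
    4 × 640 = 2,560 Tensor Cores
    4 × 32 GB = 128 GB of GPU memory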

All NVIDIA DGX systems feature the latest and fastest accelerator available today, the NVIDIA Tesla V100 32GB. DGX Station contains four cards, DGX-1 eight, and DGX-2 as many as sixteen! The main benefits of the NVIDIA Tesla cards are the specialized Tensor Cores that accelerate machine-learning workloads and the large memory (32 GB per card) protected by ECC. NVIDIA Tesla cards are also equipped with NVLink, an interface for high-bandwidth card-to-card communication that reaches speeds of up to 300 GB/s. NVIDIA DGX-2 additionally offers the powerful NVSwitch, which connects its sixteen NVIDIA Tesla V100 GPUs with 2.4 TB/s of bisection bandwidth in a non-blocking architecture.
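
As a simple illustration (a minimal sketch, assuming PyTorch is available, for example from an NGC container), a few lines of Python can confirm that all four accelerators are visible and that each pair of GPUs can access the other’s memory directly via peer-to-peer transfers over NVLink or PCIe:

    # Minimal sketch, assuming PyTorch is installed (e.g. from an NGC container).
    # Lists the GPUs in the system and checks whether each pair of devices can
    # access each other's memory directly (peer-to-peer over NVLink/PCIe).
    import torch

    num_gpus = torch.cuda.device_count()
    print(f"Visible GPUs: {num_gpus}")

    for i in range(num_gpus):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")

    for i in range(num_gpus):
        for j in range(num_gpus):
            if i != j:
                p2p = torch.cuda.can_device_access_peer(i, j)
                print(f"GPU {i} -> GPU {j} peer access: {p2p}")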

[Image: NVIDIA Tesla V100 PCIe]

Software stack

What is even more interesting, however, is the software package mentioned above that comes with the NVIDIA machines. They all offer pre-installed and performance-tuned environments for machine learning (e.g. Caffe and Caffe 2, Theano, TensorFlow, Torch, or MXNet) as well as an intuitive environment for data analysts (NVIDIA DIGITS). All of this is elegantly packaged in Docker containers. Such a tuned environment delivers up to 30% more performance for machine-learning applications compared with the same applications deployed on plain NVIDIA hardware. The main advantage of the pre-installed environment is deployment speed, which is a matter of hours.
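
For illustration only (a minimal sketch, assuming a recent TensorFlow 2.x container pulled from NVIDIA GPU Cloud, nvcr.io): once the container is running, a few lines of Python are enough to verify that the framework sees all four accelerators:

    # Minimal sketch, assuming a TensorFlow 2.x container from NVIDIA GPU Cloud (NGC).
    # Verifies that TensorFlow can see all four Tesla V100 GPUs in the DGX Station.
    import tensorflow as tf

    gpus = tf.config.list_physical_devices('GPU')
    print(f"GPUs visible to TensorFlow: {len(gpus)}")
    for gpu in gpus:
        print(gpu)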

[Image: NVIDIA DGX systems software stack and NVIDIA GPU Cloud]

Support

A key strength of the NVIDIA solution is support for the entire system. Hardware support (in case any component fails) is a matter of course, and software support for the whole environment is critical when something does not work. Customers have hundreds of NVIDIA developers ready to help. Support is included with the purchase of an NVIDIA DGX system, is available for one or three years, and can be extended after that period.

With this combination of tuned hardware, software, and NVIDIA support, NVIDIA DGX systems deliver significantly higher performance and acceleration in the training phase:

[Image: DGX Station performance]

The difference between the tuned DGX system solution for fast and powerful machine learning and a DIY (Do It Yourself) variant is evident from the following video:

The NVIDIA DGX Station personal supercomputer is also unique in its low noise level, thanks to the internal water cooling of its accelerators, which allows it to be operated in an office environment.