Researchers from the University of Ostrava train AI on the NVIDIA system
The Institute for Research and Applications of Fuzzy Modeling at the University of Ostrava was the first in the Czech Republic to acquire the latest supercomputer for artificial intelligence – the NVIDIA DGX Station A100. While the whole system isn’t much bigger than a regular desktop computer, don’t be fooled by its size: beneath the golden case lies an incredible 2.5 petaFLOPS of AI performance.
We delivered the NVIDIA DGX Station A100 to a team of academic researchers at the Institute for Research and Applications of Fuzzy Modeling who work on problems in computer vision and 3D computer graphics. Their work is split between purely academic research and designing solutions for the industrial sector. The industrial solutions focus mainly on defectoscopy of products, especially automotive products, and rely on deep neural networks. In their academic research, they study data and its role, pre- and post-processing, anonymization, and the development of new loss functions. You can find their work on the project page.
The Institute for Research and Applications of Fuzzy Modeling (IRAFM) is a research institute of the University of Ostrava in Ostrava, Czech Republic. It focuses on theoretical research and practical development of various methods of fuzzy modeling, i.e. special mathematical methods that make it possible to build models that cope with imprecise information.
The scientific team is equipped with standard office desktops used for prototyping and testing applications. These applications, which mainly consist of deep neural networks, are later migrated to the NVIDIA DGX Station A100 with its four NVIDIA A100 40 GB GPUs. Every team member has remote access to the Station and uses it for long-running computations. The DGX Station also acts as fast storage for exchanging data.
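With four GPUs available to every team member, long-running jobs lend themselves to data-parallel processing, where each GPU handles its own shard of the dataset. A minimal, standard-library-only sketch of the sharding step (the function name is illustrative, not part of the team's codebase):

```python
def shard_indices(num_samples: int, num_gpus: int = 4) -> list[list[int]]:
    """Split sample indices into near-equal contiguous shards, one per GPU.

    The first `num_samples % num_gpus` shards get one extra sample,
    so no GPU ever holds more than one sample above the others.
    """
    base, extra = divmod(num_samples, num_gpus)
    shards, start = [], 0
    for gpu in range(num_gpus):
        size = base + (1 if gpu < extra else 0)
        shards.append(list(range(start, start + size)))
        start += size
    return shards

shards = shard_indices(10)  # → [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
```

In practice a framework-level mechanism such as a distributed sampler would do this automatically; the sketch only shows the balancing idea.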
“Due to DGX Station’s high computation capability, we are able to select proper network parameters from a much larger search space, which directly translates to a better quality solution. This is amplified by training networks on bigger datasets in more iterations than would be possible on a standard desktop GPU in the same amount of time. As a result, the customer receives a better solution in a shorter time,” says Petr Hurtík, the team leader.
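Searching a larger parameter space can be as simple as enumerating a grid of candidate configurations and distributing them across the four A100s. A hypothetical sketch (the grid values and parameter names are invented for illustration, not the team's actual settings):

```python
from itertools import product

def build_search_space(grid: dict) -> list[dict]:
    """Enumerate every combination of hyperparameter values in the grid."""
    keys = list(grid)
    return [dict(zip(keys, values)) for values in product(*(grid[k] for k in keys))]

def assign_to_gpus(configs: list[dict], num_gpus: int = 4) -> dict[int, list[dict]]:
    """Round-robin each candidate configuration onto one of the GPUs."""
    return {gpu: configs[gpu::num_gpus] for gpu in range(num_gpus)}

# Hypothetical grid; real networks and value ranges are project-specific.
grid = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [32, 64],
    "depth": [18, 34],
}
configs = build_search_space(grid)  # 12 candidates (3 × 2 × 2)
schedule = assign_to_gpus(configs)  # 3 candidates per A100
```

Each GPU can then train its assigned candidates independently, which is what turns the Station's four cards into a directly shorter time-to-solution.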
What the team gains:
✓ The ability to train large deep learning models thanks to the GPUs’ memory capacity.
✓ Faster processing of large datasets thanks to multiple GPUs.
✓ The capability to work on multiple projects simultaneously.
Why NVIDIA DGX Station A100:
✓ Excellent CUDA support
✓ Fast interconnect between GPUs
✓ Optimized for deep neural network training
Projects running on the Station:
✓ Laser welding and guiding of an industrial robot
✓ Simulation of a goniophotometer device
✓ Software for detecting neurodegenerative diseases
Software stack:
✓ TensorFlow / Keras
✓ PyTorch / PyTorch Lightning
✓ CUDA, cuDNN
✓ NVIDIA Docker + NVIDIA containers, TensorRT
✓ Detectron2, MMDetection
✓ NVIDIA Kaolin