Bow-2000
The Bow-2000 IPU machine is the building block of our Bow Pod systems. A single Bow-2000 with its host server delivers a real performance advantage over the A100 GPU in the DGX platform.
See what other benefits IPU systems bring, such as lower cost.
The Bow-2000 is the basic compute unit for machine intelligence, built around the IPU processor designed specifically for AI. Each 1U blade carries 4 Bow IPU processors delivering 1.4 petaFLOPS of AI compute, with 3.6GB of In-Processor-Memory™ and up to 256GB of Streaming Memory™.
The Bow-2000 has a flexible, modular design, so you can start with a single unit and scale out to many in our Bow Pod platforms.
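To see how the per-blade numbers scale up to the pod-level figures in the comparison table below, here is a short illustrative Python sketch of our own. The per-blade values come from this page; the multiplication is just a sanity check, and datasheet rounding may differ slightly.

```python
# Aggregate Bow Pod figures from the per-Bow-2000 specification on this page.
# Illustrative arithmetic only; the official datasheets remain authoritative.

BLADE_IPUS = 4                 # Bow IPU processors per Bow-2000
BLADE_PETAFLOPS_FP16 = 1.4     # AI compute per Bow-2000
BLADE_IN_PROCESSOR_GB = 3.6    # In-Processor-Memory per Bow-2000
BLADE_STREAMING_GB = 256       # Streaming Memory per Bow-2000 (up to)

PODS = {"Bow POD16": 4, "Bow POD64": 16, "Bow POD256": 64}  # Bow-2000 machines per pod

for name, blades in PODS.items():
    print(
        f"{name}: {blades * BLADE_IPUS} IPUs, "
        f"{blades * BLADE_PETAFLOPS_FP16:.1f} petaFLOPS FP16, "
        f"{blades * BLADE_IN_PROCESSOR_GB:.1f}GB In-Processor-Memory, "
        f"up to {blades * BLADE_STREAMING_GB}GB Streaming Memory"
    )
```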
BOW POD16
Bow POD16 provides the power and flexibility you need to create better, more innovative AI solutions. With its 4 interconnected Bow-2000 units, Bow POD16 opens up a new world of innovation in machine intelligence.
Also available as a Bow Pod32 system
Compact 5U form factor
BOW POD64
Bow POD64 brings the flexibility to make the most of the space and performance in your data center, however it is provisioned. Bow POD64 offers 22.4 petaFLOPS of AI compute for both training and inference, so you can develop and deploy on the same powerful system.
Top performance for vision and language models
Also available as a Bow Pod128 system
Watch Graphcore co-founder and CEO Nigel Toon introduce IPU systems
BOW POD256
Bow POD256 brings AI compute at supercomputer scale. It is designed to accelerate large, demanding machine learning models and to serve AI resources across a whole organization. With support for Slurm and Kubernetes, you can easily automate deployment, scaling, and management of Bow Pod applications. Virtual-IPU™ technology provides secure multi-tenancy. With this many IPUs available, you can run replicated copies of a model across Bow PODs for data-parallel training, or dedicate the IPUs of several Bow PODs to a single very large model (see the sketch below).
IPU for supercomputers
Also available as Bow Pod512 and Bow Pod1024
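As a rough illustration of the replicated-model (data-parallel) workflow mentioned above, the following PopTorch sketch replicates a small PyTorch model across 4 IPUs. It is a minimal outline of our own, not Graphcore's reference code: the model, data, and replication factor are placeholders, and it assumes the Poplar SDK with the poptorch package installed.

```python
# Minimal data-parallel training sketch with PopTorch (Graphcore's PyTorch wrapper).
# Assumes the Poplar SDK is installed; model, data and replication factor are illustrative.
import torch
import poptorch


class ModelWithLoss(torch.nn.Module):
    """Wraps a model so the loss is computed on the IPU, as PopTorch expects."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(128, 10)
        self.loss = torch.nn.CrossEntropyLoss()

    def forward(self, x, target):
        out = self.net(x)
        return out, self.loss(out, target)


model = ModelWithLoss()

# Replicate the model over 4 IPUs (e.g. one Bow-2000);
# gradients are reduced across the replicas automatically.
opts = poptorch.Options()
opts.replicationFactor(4)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
training_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)

# One illustrative training step with random data; the batch is split across replicas.
x = torch.randn(16, 128)
target = torch.randint(0, 10, (16,))
output, loss = training_model(x, target)
print(loss)
```

The same options object can simply be given a larger replication factor as more Bow-2000 machines become available to the job.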
Comparison of IPU systems
Parameter | Bow POD16 | Bow POD64 | Bow POD256 |
---|---|---|---|
Bow IPU processors | 16 | 64 | 256 |
Bow-2000 machines | 4 | 16 | 64 |
Memory | 14.4GB In-Processor-Memory™, up to 1TB Streaming Memory™ | 57.6GB In-Processor-Memory™, up to 4.1TB Streaming Memory™ | 230.4GB In-Processor-Memory™, up to 16.3TB Streaming Memory™ |
Performance | 5.6 petaFLOPS FP16.16, 1.4 petaFLOPS FP32 | 22.4 petaFLOPS FP16.16, 5.6 petaFLOPS FP32 | 89.6 petaFLOPS FP16.16, 22.4 petaFLOPS FP32 |
IPU cores | 23 552 | 94 208 | 376 832 |
IPU threads | 141 312 | 565 248 | 2 260 992 |
Form factor | 4U + host servers and switches | 16U + host servers and switches | 64U + host servers and switches |
Supported AI libraries | TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO | TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO | TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO |
Supported software | Slurm, Kubernetes, OpenStack, VMware ESXi | Slurm, Kubernetes, OpenStack, VMware ESXi | Slurm, Kubernetes, OpenStack, VMware ESXi |
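As an example of the supported AI libraries listed above, here is a minimal Keras sketch targeting an IPU through Graphcore's TensorFlow 2 port. It is a simplified outline of our own, assuming the Poplar SDK's TensorFlow wheel is installed; the model, dataset, and IPU count are placeholders.

```python
# Minimal Keras-on-IPU sketch using Graphcore's TensorFlow 2 port (Poplar SDK).
# Model, data, and IPU count are illustrative placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.python import ipu

# Attach a single IPU; larger counts are used when replicating or pipelining a model.
config = ipu.config.IPUConfig()
config.auto_select_ipus = 1
config.configure_ipu_system()

# Simple in-memory dataset; drop_remainder keeps batch shapes static for the IPU.
x = np.random.rand(1024, 128).astype(np.float32)
y = np.random.randint(0, 10, size=(1024,))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(32, drop_remainder=True)

# Build, compile and train the Keras model inside the IPU strategy scope.
strategy = ipu.ipu_strategy.IPUStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
    model.fit(dataset, epochs=2, steps_per_epoch=32)
```

PyTorch users can follow the same pattern with PopTorch, as sketched earlier on this page.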
Contact us
Interested in Graphcore solutions, more details, or our reference installations? Feel free to contact us!
Martin Petr
M: 603 290 619
martin.petr@mcomputers.cz