IBM Power Systems Upgrades High-Performance Systems for Cognitive and AI Workloads

May 13th, 2018

Ron Gordon
Director – Power Systems

IBM Power Systems has introduced two new high-performance computing servers with unprecedented capabilities in areas such as deep learning, machine learning, and artificial intelligence (all of which are based on open-source frameworks), and at a new industry cost point. In December of 2017, IBM shipped a POWER9-based server with 4 V100 GPUs from NVIDIA that was code-named "Newell," after Allen Newell of Carnegie Mellon University, a pioneer in cognitive psychology. It also carried the typical IBM name of AC922 and model number 8335-GTG. While this high-performance computing (HPC) server was awesome, IBM has made it even better with two new models. Like the original Newell, each of these two new servers connects its embedded GPU accelerators via NVLink, a direct, coherent connection from the processor (and cache) to the GPU, now called NVLink 2.0, with an aggregate transfer speed of 300 GB/sec.
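
To make the data-movement point concrete, here is a minimal sketch in plain C against the standard CUDA runtime API that times a host-to-GPU copy, the kind of transfer that rides the coherent NVLink 2.0 path between the POWER9 processor and the V100s. The 1 GiB buffer size, pinned-memory choice, and build command are illustrative assumptions, not details from the announcement:

    /* Illustrative sketch: time a host-to-device copy with the CUDA runtime API.
     * Assumed build command: nvcc bandwidth.c -o bandwidth
     * The 1 GiB buffer size is an arbitrary choice for illustration. */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        const size_t bytes = 1ULL << 30;        /* 1 GiB test buffer */
        float *host = NULL, *dev = NULL;

        cudaMallocHost((void **)&host, bytes);  /* pinned host memory */
        cudaMalloc((void **)&dev, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("Host-to-GPU copy: %.2f GB/sec\n", (bytes / 1e9) / (ms / 1e3));

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(dev);
        cudaFreeHost(host);
        return 0;
    }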

The first is the new AC922 8335-GTH. This system has an upgraded POWER9 processor that operates at 2.7 to 3.3 GHz for the 32-core option and between 2.4 and 3.0 GHz for the 40-core option. The system supports from 128 GB to 2 TB of RAM, has 4 PCIe slots, and supports 2 disk bays for up to 7.6 TB of SSD. It can be configured with 0, 2, or 4 NVIDIA Tesla V100 GPUs, each with up to 32 GB of memory. This system requires Red Hat Enterprise Linux 7.5 or Ubuntu 18.04. If required for a high-performance cluster, Mellanox InfiniBand connectivity is available.
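
As a quick, hedged illustration (generic CUDA runtime calls, not IBM-supplied code), a small program can enumerate the V100s a given configuration exposes and report the memory on each:

    /* Illustrative sketch: enumerate the GPUs visible to the CUDA runtime
     * and report the memory on each (e.g., 16 GB or 32 GB Tesla V100s). */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            printf("No CUDA-capable devices found.\n");
            return 1;
        }
        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s, %.1f GB memory\n",
                   i, prop.name, prop.totalGlobalMem / 1e9);
        }
        return 0;
    }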

The second system is the AC922 8335-GTX. This system is water cooled, and it contains 4 or 6 NVIDIA Tesla V100 GPUs, again connected to the processors via NVLink 2.0. The system has either 36 cores operating at 3.0 to 3.5 GHz or 44 cores operating between 2.8 and 3.1 GHz. Similar to the GTH, the system has 4 PCIe slots and 2 disk bays, and it is a 19-inch rack design with two hot-swap power supplies. This system supports Red Hat Enterprise Linux 7.5 only at this time.

Both systems will ship May 25, 2018.

These systems enable standard GPU accelerator-based computational programs to execute with minimal data-movement overhead, based on NVLink 2.0. This processor-to-GPU link, available only on IBM POWER9 systems, should NOT be confused with the GPU-to-GPU NVLink bus that exists on all system architectures supporting NVIDIA GPUs. The benefit scales with the amount of data that must be downloaded to the GPUs for parallel processing, and transitioning to these models can be as easy as recompiling existing computational programs. Programming with standard OpenACC or OpenMP is available, as sketched below. Application frameworks supporting ML/DL and AI (Caffe, Torch, Theano, etc.) are available and optimized by IBM, so the entry into these data analytics-oriented application areas is expedited.
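
As one example of the directive-based approach, the sketch below offloads an existing loop to the GPUs with a single OpenACC pragma. It is generic OpenACC C, not IBM-specific code, and the PGI build command shown in the comment is an assumption:

    /* Illustrative OpenACC sketch: offload a vector addition to the GPU.
     * Assumed build command (PGI compiler targeting Volta):
     *   pgcc -acc -ta=tesla:cc70 vecadd.c -o vecadd */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const int n = 1 << 24;
        float *a = malloc(n * sizeof(float));
        float *b = malloc(n * sizeof(float));
        float *c = malloc(n * sizeof(float));

        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        /* The pragma copies a and b to GPU memory, runs the loop in
         * parallel on the accelerator, and copies c back to the host. */
        #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
        for (int i = 0; i < n; i++) {
            c[i] = a[i] + b[i];
        }

        printf("c[0] = %.1f\n", c[0]);   /* expect 3.0 */
        free(a); free(b); free(c);
        return 0;
    }

Compiling the same source without the -acc flag simply runs the loop on the POWER9 cores, which is why moving existing computational programs to these systems is largely a matter of recompiling.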

Please contact your Mainline Account Executive directly, or click here to contact us with any questions.
