The Grace processor, named for computer technology pioneer Grace Hopper, will make working with some of the most processing-intensive, data-heavy applications easier and more efficient.
At its 2021 GTC keynote, NVIDIA founder and CEO Jensen Huang announced the company’s first data center CPU, designed to deliver ten times the performance of the world’s fastest servers. Known as “Grace,” the Arm-based processor is built to handle the most processing-intensive, big-data workloads on the market.
Powering the world’s fastest supercomputers
In addition to multiplying compute power, NVIDIA worked to reduce the power required to run Grace: it pairs an energy-efficient Arm CPU with a low-power memory subsystem. The new processor lets adopters push the boundaries of artificial intelligence and data processing by leveraging Arm’s data center architecture, offering more choice to the AI and HPC community.
Grace is named for Grace Hopper, the mathematician and U.S. Navy rear admiral who was a pioneer in developing computer technology. The processor is NVIDIA’s response to giant AI models, which are distinguished by billions of parameters and only growing larger. Grace features:
- Fourth-generation NVIDIA NVLink interconnect technology, providing a 900 GB/s connection between Grace and coupled NVIDIA GPUs
- An LPDDR5x memory subsystem with ten times the energy efficiency and twice the bandwidth of DDR4 memory
- Unified cache coherence with a single memory address space, simplifying programmability (see the sketch after this list)
- Support from the NVIDIA HPC software development kit
- The full suite of CUDA and CUDA-X libraries
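The single-address-space item above is, in effect, a hardware-backed take on the unified memory model that CUDA already exposes in software. As a rough illustration only (a standard CUDA toolkit is assumed, and nothing here is Grace-specific), one managed allocation can be read and written by both the CPU and the GPU without explicit copies:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel that scales an array in place on the GPU.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // One allocation visible to both CPU and GPU; no explicit cudaMemcpy calls.
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;  // CPU initializes the buffer

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);  // GPU updates the same buffer
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);  // CPU reads the GPU's result directly
    cudaFree(data);
    return 0;
}
```

On a Grace-class system, the idea is that this kind of single shared address space is kept coherent by the CPU–GPU interconnect rather than managed purely in software, which is what the “simplifying programmability” claim refers to.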
Grace’s first adopters
The Swiss National Supercomputing Center (CSCS) will be among the first to build Grace-powered supercomputers, furthering scientific research in Switzerland. The U.S. Department of Energy’s Los Alamos National Laboratory is also planning to build one of the first Grace-powered supercomputers.
The new processor will make working with some of the most processing-intensive, data-heavy applications, such as natural language processing, recommender systems, and AI supercomputing, easier and more efficient. Although this remains a niche of computing, it enables researchers and scientists to tackle some of the universe’s biggest questions. NVIDIA expects Grace to be available at the beginning of 2023.