Mainstream use of continuous intelligence (CI) and artificial intelligence (AI) by businesses is only possible thanks to the availability of high-performance computing (HPC) capabilities. Many of the core HPC technologies in everyday use today were originally developed, deployed, and proven out in Department of Energy (DOE) labs. Anyone looking for a crystal ball about what capabilities are on the horizon should take note of a DOE report released last week titled AI for Science.
Advances in many essential elements (powerful CPUs, high-speed interconnects, high-performance storage, cloud services, distributed computing, etc.) were pioneered on DOE systems, including the supercomputers at Argonne National Laboratory, Oak Ridge National Laboratory, and Lawrence Berkeley National Laboratory. Much of the technology now used for CI and AI started in these labs and has since been commercialized.
The report is based on input gathered in four town hall meetings attended by more than 1,000 U.S. scientists and engineers. The goal of the town hall series was to examine scientific opportunities in the areas of AI, Big Data, and HPC in the next decade, and to capture the big ideas, grand challenges, and next steps to realizing these opportunities.
One issue identified in the report that is highly relevant to CI and AI is the need to analyze large volumes of event data. Specifically, the report noted that in the scientific research arena, the velocity of data is increasingly beyond the capabilities of existing instrument data transmission and storage technologies. “Consequently, real-time hardware is needed to detect events and anomalies in order to reduce the raw instrument data rates to manageable levels.” CI and AI applications in business face similar problems with high data rates and must analyze streaming data in real time to make decisions and take actions.
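The same pattern appears in business CI pipelines: rather than shipping and storing every raw reading, a lightweight detector runs close to the data source and forwards only the events worth keeping. The sketch below is a minimal, standard-library Python illustration of that idea, not anything prescribed by the report; the synthetic reading stream, window size, and threshold are all hypothetical stand-ins.

```python
import random
from collections import deque
from statistics import mean, stdev

# Hypothetical stream of raw instrument or sensor readings.
def raw_readings(n=10_000):
    for _ in range(n):
        value = random.gauss(0.0, 1.0)
        if random.random() < 0.001:          # occasionally inject an outlier
            value += random.choice([-8.0, 8.0])
        yield value

def detect_events(stream, window=500, threshold=5.0):
    """Forward only readings that deviate sharply from the recent baseline."""
    recent = deque(maxlen=window)
    for value in stream:
        if len(recent) >= 30:                # wait for a minimal baseline
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield value                  # event: pass downstream
        recent.append(value)

events = list(detect_events(raw_readings()))
print(f"kept {len(events)} events out of 10,000 raw readings")
```

In practice the detector would run on streaming infrastructure or edge hardware, but the effect is the same: the raw data rate is reduced to a manageable stream of events.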
Key Action Areas
The report explored several core technology aspects of using AI on a grand scale. For each area, the report defined the state of the art, grand challenges, and advances to be made in the next decade. Major areas examined include:
AI foundations and open platforms
Grand challenges are how to:
- Incorporate domain knowledge in machine learning (ML) and AI (a toy sketch of this idea follows the list)
- Achieve efficient learning for AI systems
- Establish assurance for AI, addressing the question of whether an AI model has been constructed, trained, and deployed so that it is appropriate for its intended use
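To make the first of these challenges more concrete: incorporating domain knowledge often amounts to blending a data-misfit term with a penalty that encodes what theory already says. The toy Python sketch below fits a hypothetical decay rate and nudges it toward a theory-supplied value; every name and number here is an assumption for illustration, not something taken from the report.

```python
import math
import random

# Hypothetical task: estimate a decay rate k for y(t) = exp(-k * t) from noisy
# measurements, while folding in domain knowledge that theory puts k near 0.5.
random.seed(0)
true_k = 0.45
times = [0.5 * i for i in range(1, 11)]
data = [math.exp(-true_k * t) + random.gauss(0, 0.05) for t in times]

K_PRIOR, PRIOR_WEIGHT = 0.5, 2.0   # assumed theoretical value and its weight

def loss(k):
    # Data misfit (mean squared error against the measurements)...
    mse = sum((math.exp(-k * t) - y) ** 2 for t, y in zip(times, data)) / len(data)
    # ...plus a penalty encoding the domain knowledge about k.
    return mse + PRIOR_WEIGHT * (k - K_PRIOR) ** 2

# A coarse grid search stands in for a real optimizer.
best_k = min((k / 1000 for k in range(1, 2001)), key=loss)
print(f"estimated decay rate: {best_k:.3f}")
```

The same structure scales up to physics-informed neural networks, where the penalty term enforces governing equations rather than a single prior value.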
Software environments
Grand challenges are how to:
- Develop software for seamless integration of simulations and AI (a simplified sketch of this pattern follows the list)
- Develop software for knowledge extraction and hypothesis generation
- Enable self-driving experiments with AI integration and controls
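One common pattern behind the first and third of these challenges couples an expensive simulation with a cheap learned surrogate and lets the loop itself decide which run to launch next. The Python sketch below is a deliberately simplified illustration of that loop; the simulate function, candidate grid, and distance-based uncertainty heuristic are all hypothetical choices, not the report's.

```python
import math

# Hypothetical stand-in for an expensive simulation (an HPC job in practice).
def simulate(x):
    return math.sin(3 * x) + 0.5 * x

# Crude surrogate: predict with the nearest evaluated point and treat the
# distance to that point as an uncertainty signal.
def predict(x, evaluated):
    nearest = min(evaluated, key=lambda p: abs(p - x))
    return evaluated[nearest], abs(nearest - x)

candidates = [i / 50 for i in range(151)]             # possible inputs in [0, 3]
evaluated = {0.0: simulate(0.0), 3.0: simulate(3.0)}  # seed runs

# Self-driving loop: always run the simulation where the surrogate is least certain.
for step in range(10):
    next_x = max(candidates, key=lambda x: predict(x, evaluated)[1])
    evaluated[next_x] = simulate(next_x)
    print(f"step {step}: queried x = {next_x:.2f}")
```

In a real self-driving experiment, the surrogate would be a trained model, the acquisition rule would weigh predicted value against uncertainty, and the "simulation" might be an instrument or beamline run.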
Hardware architectures
Grand challenges are how to:
- Create predictive architecture design tools to enable rapid evolution of AI accelerators
- Create integrated AI workflows and use them to evaluate emerging AI architectures, from edge systems on a chip (SoCs) to HPC data centers
- Meet the rapidly growing memory, storage, and I/O requirements of emerging AI-enabled applications
AI at the edge
Grand challenges are how to:
- Improve productivity with high-speed data through AI at the edge
- Enable smart scientific infrastructures through AI at the edge
- Enhance discovery through the integration of multiple data sources (a small data-fusion sketch follows the list)
- Integrate systems of systems using AI at the edge
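As a small illustration of that data-integration challenge, fusing sources at the edge usually starts with aligning streams on a common timeline before any model consumes them. The Python sketch below does this for two hypothetical sensors; the stream names, timestamps, and nearest-reading alignment rule are all assumptions made for illustration.

```python
from bisect import bisect_left

# Hypothetical readings from two edge sensors, as (timestamp, value) pairs.
temperature = [(0.0, 21.5), (1.0, 21.7), (2.1, 22.0), (3.0, 22.4)]
vibration = [(0.2, 0.01), (1.1, 0.02), (1.9, 0.35), (3.2, 0.02)]

def nearest(readings, t):
    """Return the reading whose timestamp is closest to t."""
    times = [ts for ts, _ in readings]
    i = bisect_left(times, t)
    choices = readings[max(i - 1, 0):i + 1]
    return min(choices, key=lambda r: abs(r[0] - t))

# Fuse the streams on the temperature timestamps before any model sees them.
fused = [(t, temp, nearest(vibration, t)[1]) for t, temp in temperature]
for record in fused:
    print(record)
```

Once the streams are aligned, the combined records can feed the same kind of edge-resident detection sketched earlier.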
What to Expect
Given the track record of the national labs in developing HPC technology that was quickly commercialized, it is a safe bet that businesses will reap the benefits of the DOE’s work in AI. Nor is such progress limited to the DOE.
A recent announcement by the Department of Commerce noted that the National Oceanic and Atmospheric Administration (NOAA), an arm of the department, will upgrade the supercomputers it relies on to generate weather forecasts and enhance weather analytics. The new systems will triple the capacity and double the storage and interconnect speed NOAA can access, allowing the agency to create higher-resolution models and build more comprehensive global models.