China’s Sunway Supercomputer Scales Neural Networks for Quantum Chemistry

(Vector Fusion Art/Shutterstock)

Chinese researchers have demonstrated how artificial intelligence can extend the reach of classical supercomputing into the domain of quantum chemistry, using China’s Sunway OceanLight system to model molecular behavior at extraordinary scale.

The team used OceanLight, also known as the “New Sunway,” to train a neural network capable of simulating the quantum states of molecules, a task traditionally reserved for quantum computers or heavily simplified models. Their work used a method called neural network quantum states (NNQS), which applies machine learning to approximate how electrons move and interact within atoms and molecules.

Running on 37 million processor cores, the Sunway system achieved 92% strong-scaling and 98% weak-scaling efficiency, meaning performance held steady as processor counts and problem sizes increased. Efficiency this high at this scale is rarely achieved and points to close alignment between software and hardware, according to an analysis by former HPCwire Managing Editor Nicole Hemsoth Prickett.
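To make the quoted figures concrete, here is a minimal sketch of how strong- and weak-scaling efficiency are conventionally computed; the function names and example numbers are illustrative, not from the paper.

```python
def strong_scaling_efficiency(t_base, t_scaled, p_base, p_scaled):
    """Strong scaling: fixed problem size, more cores.
    Efficiency = measured speedup / ideal linear speedup."""
    speedup = t_base / t_scaled
    ideal = p_scaled / p_base
    return speedup / ideal

def weak_scaling_efficiency(t_base, t_scaled):
    """Weak scaling: problem size grows with core count,
    so the ideal runtime stays constant."""
    return t_base / t_scaled
```

For example, a run that is only 8x faster on 10x the cores has 80% strong-scaling efficiency, while a runtime that grows from 98 to 100 seconds as the problem and machine grow together corresponds to 98% weak-scaling efficiency.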

Quantum chemistry simulations require representing every possible configuration of electrons, an increasingly complex problem. Conventional methods can only model small systems because the number of possible electron configurations scales exponentially with system size. NNQS attempts to overcome this limitation by training a neural network to approximate a molecule’s wave function, which mathematically represents how its electrons are distributed among quantum states.
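The core idea can be sketched in a few lines: a small neural network maps each electron configuration (encoded as a ±1 occupation string) to a wave-function amplitude. This toy ansatz is purely illustrative; the actual study used a much larger Transformer-based network, and the layer sizes and weights here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nnqs_amplitudes(configs, W1, b1, w2):
    """Toy neural-network ansatz: maps each configuration to an
    unnormalized amplitude, then normalizes over the given basis."""
    h = np.tanh(configs @ W1 + b1)      # hidden layer
    log_psi = h @ w2                    # log-amplitude per configuration
    psi = np.exp(log_psi)
    return psi / np.linalg.norm(psi)

# All 2^4 configurations of a 4-spin-orbital toy system
n = 4
configs = np.array([[1.0 if (i >> k) & 1 else -1.0 for k in range(n)]
                    for i in range(2 ** n)])
W1 = rng.normal(scale=0.1, size=(n, 8))
b1 = np.zeros(8)
w2 = rng.normal(scale=0.1, size=8)

psi = nnqs_amplitudes(configs, W1, b1, w2)
probs = psi ** 2   # Born-rule probabilities over the full basis
```

Enumerating all configurations is only possible for toy systems; at 120 spin orbitals the network must instead be evaluated on sampled configurations, which is exactly what makes massive parallelism valuable.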

Sunway OceanLight System

In this study, the researchers modeled systems containing 120 spin orbitals, extending neural-network quantum simulation beyond previous scales. The network was trained to predict local energies for sampled electron configurations, then refined until its output matched the molecule’s actual energy distribution.
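The local-energy quantity mentioned above can be illustrated on a toy system where the full basis fits in memory. This sketch assumes an explicit Hamiltonian matrix, which is only feasible at toy scale; the function names are hypothetical, but the formula E_loc(x) = Σ_x' H[x, x'] ψ(x') / ψ(x) is the standard variational Monte Carlo definition used in NNQS methods.

```python
import numpy as np

def local_energies(H, psi):
    """Local energy per basis state: E_loc(x) = (H @ psi)[x] / psi[x]."""
    return (H @ psi) / psi

def variational_energy(H, psi):
    """Born-rule-weighted average of local energies. For an exact
    eigenstate, every local energy equals the eigenvalue."""
    probs = psi ** 2 / np.sum(psi ** 2)
    return np.sum(probs * local_energies(H, psi))

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                       # toy 2-level Hamiltonian
ground = np.array([1.0, -1.0]) / np.sqrt(2.0)    # exact ground state, E = -1
```

Training drives the network’s amplitudes toward a state whose variational energy is minimal; by the variational principle, any trial state’s energy is an upper bound on the true ground-state energy.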

The Sunway OceanLight system, the successor to the Sunway TaihuLight supercomputer, is powered by SW26010-Pro processors, each built from clusters of lightweight compute cores that use local scratchpad memory instead of cache, allowing precise control over data movement. Tens of thousands of these processors are connected to form a system with more than forty million cores, capable of exascale performance, according to earlier reporting. While the architecture is well suited to regular tasks such as deep learning training, the researchers adapted it to handle the irregular workloads of quantum simulation.

That adaptation involved developing a data-parallel NNQS-Transformer framework built on a layered machine design. Management cores coordinated communication between nodes, while the lightweight compute elements performed calculations within local memory. A dynamic load-balancing algorithm distributed the uneven workload, ensuring that no core sat idle.
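One simple way to balance uneven work, sketched below, is greedy longest-task-first assignment: sort tasks by cost and always hand the next one to the least-loaded worker. This is a generic illustration of the idea, not the paper’s actual algorithm, and the task costs are made up.

```python
import heapq

def balance(costs, n_workers):
    """Greedy load balancing: assign each task (e.g., a batch of sampled
    configurations with uneven cost) to the least-loaded worker,
    processing the most expensive tasks first."""
    heap = [(0.0, w) for w in range(n_workers)]   # (current load, worker id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_workers)]
    for task, cost in sorted(enumerate(costs), key=lambda t: -t[1]):
        load, w = heapq.heappop(heap)
        assignment[w].append(task)
        heapq.heappush(heap, (load + cost, w))
    return assignment
```

For six tasks with costs [5, 3, 3, 2, 2, 1] on two workers, this yields two perfectly balanced loads of 8 each; a truly dynamic scheme would additionally re-steal work at runtime as cost estimates prove wrong.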

The project demonstrates that machine learning can accurately model quantum systems for practical chemistry and materials research using existing exascale hardware. The Sunway study expands on earlier NNQS efforts, showing that classical supercomputers can now handle molecular problems once thought to require quantum hardware. The results also highlight a potential bridge between classical and quantum computing: using neural networks on conventional machines to study the same physical systems that future quantum computers will tackle directly.

Although full performance details are not public, the research is another step in China’s development of large-scale, AI-enabled scientific computing. It also suggests that supercomputers can serve as powerful platforms for quantum-inspired simulation, bringing the discovery of new materials within reach before practical quantum processors are available. Along with recent work by SandboxAQ and NVIDIA, which used AI accelerators to perform quantum chemistry simulations on GPUs, these studies point to a growing convergence between AI hardware, HPC architectures, and scientific modeling.
