Can AI chips handle complex science? Sandbox and Nvidia show what’s possible

When researchers talk about “AI for science,” they usually mean applying machine learning to accelerate discovery. But a new collaboration between SandboxAQ and NVIDIA inverts the idea, showing how AI-optimized hardware can be adapted for high-performance computing. The project, which also involves researchers from Pacific Northwest National Laboratory (PNNL) and Hungary’s Wigner Research Centre for Physics, uses mixed-precision methods to achieve the accuracy required for scientific simulations.

In a paper released this month, the scientists demonstrated that quantum chemistry simulations, which traditionally demand extreme numerical precision, can run accurately on NVIDIA’s Blackwell GPUs. The work achieves a new level of accuracy for double-precision arithmetic emulated on AI-optimized GPUs, bringing chemical accuracy to challenging quantum chemistry tasks.

The research shows that the same GPU architectures that run modern AI workloads can also accelerate physics-based simulations, provided their low-precision hardware is used cleverly. At the heart of this work is mixed-precision arithmetic, which combines fast, low-precision computations with selective high-precision steps to maintain scientific accuracy.

“What this paper demonstrates is how to use the same techniques to get the exact results you need for quantum chemistry while efficiently exploiting machine learning hardware,” said Adam Lewis, a physicist and Head of Innovation for AI Simulation at SandboxAQ.

Adapting AI hardware for science

At the heart of this study is NVIDIA’s Blackwell GPU architecture, the same platform built for training and serving large language models. Designed for throughput in low-precision math, the chips can perform an enormous number of calculations per second, but until now that speed has come at the expense of accuracy. The researchers found a way to close that gap.

NVIDIA’s Blackwell Ultra chip (Source: NVIDIA)

Their approach relies on a technique called FP64 emulation, building on a scheme first proposed by Japanese computer scientist Katsuhisa Ozaki, which reconstructs double-precision results from many low-precision operations. In practice, this means representing 64-bit numbers as a set of narrow, fixed-point slices that Blackwell’s tensor cores can process quickly. The partial results are then recombined to recover accuracy matching native FP64 calculations.
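The slicing idea can be sketched in a few lines of numpy. This is a toy illustration of the principle, not the paper’s implementation: the real Ozaki scheme uses shared per-row exponents so that slice products are exact integer tensor-core GEMMs, whereas here each matrix entry is simply rounded to a limited number of significant bits and the products are computed in ordinary float64. The function names and slice parameters are made up for the example.

```python
import numpy as np

def split_slices(A, num_slices=4, bits=11):
    """Split an FP64 matrix into `num_slices` narrow slices, each
    holding roughly `bits` significant bits, so that slice-by-slice
    products would fit in low-precision hardware."""
    slices = []
    rest = A.astype(np.float64).copy()
    for _ in range(num_slices):
        # Round each remaining entry to `bits` significant bits.
        exp = np.floor(np.log2(np.abs(rest) + np.finfo(float).tiny))
        scale = 2.0 ** (exp - bits + 1)
        top = np.round(rest / scale) * scale
        slices.append(top)
        rest = rest - top          # carry the unrepresented low bits forward
    return slices

def emulated_matmul(A, B, num_slices=4):
    """Approximate an FP64 matmul by summing products of narrow slices."""
    As = split_slices(A, num_slices)
    Bs = split_slices(B, num_slices)
    C = np.zeros((A.shape[0], B.shape[1]))
    for i, Ai in enumerate(As):
        for j, Bj in enumerate(Bs):
            if i + j < num_slices:   # drop negligibly small cross terms
                C += Ai @ Bj         # on real hardware: a tensor-core GEMM
    return C
```

Using more slices recovers more mantissa bits of the true FP64 product, which is the trade-off the paper exploits: many cheap low-precision multiplies in exchange for one expensive double-precision unit.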

The researchers tested the approach on molecular systems that push the limits of current scientific computing methods, including FeMoco, the iron-molybdenum cofactor central to biological nitrogen fixation, and cytochrome P450, a key enzyme in drug metabolism. Both are large, multi-electron systems with heavy metal centers that make them notoriously difficult to model. Still, using FP64 emulation on a Blackwell GPU, the team reproduced results that closely matched those from traditional high-precision calculations.

The underlying algorithm is a tensor network method called the density matrix renormalization group (DMRG), which represents quantum wavefunctions as networks of correlated tensors. Its structure makes it well suited to GPUs, where similar tensor operations drive modern neural networks. By combining this structure with emulated FP64 arithmetic, the team achieved levels of hardware efficiency rarely seen in scientific codes, reaching 90 to 95% GPU utilization even for complex enzyme systems. The result, Lewis notes, is “pretty wild” and suggests that Blackwell’s architecture is unexpectedly well suited to quantum chemistry.
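Why tensor network methods map so naturally onto GPUs can be seen in a toy example. The sketch below builds a small matrix product state (the tensor format DMRG works with) and contracts it site by site; every step reduces to dense matrix multiplies, the same primitive that dominates neural-network workloads. The shapes and bond dimension here are invented for illustration and have nothing to do with the paper’s actual calculations.

```python
import numpy as np

# A toy matrix product state (MPS): a chain of 3-index tensors with
# shape (left bond, physical index, right bond).
rng = np.random.default_rng(1)
bond, phys, sites = 8, 2, 6
mps = [rng.standard_normal((1 if i == 0 else bond,
                            phys,
                            1 if i == sites - 1 else bond))
       for i in range(sites)]

def norm_squared(mps):
    """Contract <psi|psi> one site at a time. Each step is a pair of
    tensor contractions that lower to dense matmuls, which is why
    DMRG-style codes keep GPU tensor units busy."""
    env = np.ones((1, 1))              # env[ket bond, bra bond]
    for A in mps:
        tmp = np.einsum('ab,apc->bpc', env, A)          # absorb ket tensor
        env = np.einsum('bpc,bpd->cd', tmp, A.conj())   # absorb bra tensor
    return env.item()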

“Even in LLM applications, [90 to 95% GPU utilization] would be an impressive number,” Lewis said. “The high utilization suggests the hardware is weirdly well optimized for these quantum chemistry calculations, which nobody had in mind when the chips were designed in the first place.”

Bridging AI and HPC

The results highlight the growing overlap between AI and HPC. As GPU architectures evolve to support expanding AI models, they are also becoming powerful engines for scientific simulations. The same hardware advances that handle AI models with billions (or trillions) of parameters can now model the behavior of atoms and molecules, with the right algorithms.

Lewis framed this result as part of a larger industry trend: As AI dominates more and more of the computing landscape, hardware innovations developed for machine learning workloads are starting to spread into simulation. “The trend toward lower precision math is likely to continue over the next few generations as AI takes a higher slice of the computing pie,” he said.

The implications extend beyond chemistry. Efficient emulation of double-precision arithmetic could make AI accelerators viable for a range of physics and materials science workloads, many of which are currently constrained by the need for specialized HPC infrastructure. By leveraging GPUs already mass-produced for AI training and inference, researchers could dramatically expand scientific computing capacity without waiting for specialized systems.

The collaboration also reflects NVIDIA’s expanding interest in AI for science, a theme the company has emphasized in recent years. Lewis called Sandbox’s collaboration with the company “very productive,” saying that NVIDIA’s internal research teams were deeply engaged in the project. “They’re always interested in people who are using their hardware in new ways to do good science. They’ve been extremely helpful,” he said.

Next Steps

The next step is to make the approach faster and more robust across a wider range of molecular systems, Lewis said. Although the method achieves high accuracy, it still demands considerable GPU resources. “If you have to use an entire DGX cluster for each data point, it doesn’t make sense from an ROI perspective,” he said. “The next step is to find approximations and speedups that make it more efficient, and to make the algorithm more stable and robust so that we can use it on arbitrary systems without any handholding.”

Another direction, he added, is combining these physics-based simulations with machine learning. “A big application would be to train models on this data,” he said. In practice, this could mean using simulated quantum chemistry calculations to generate training data for foundational models in materials discovery or molecular design, creating a feedback loop between AI and simulation that strengthens both.

For SandboxAQ, the research reinforces its mission to develop large quantitative models that combine the rigor of physics-based simulation with the adaptability of AI. It also points to a future where the same chips that train LLMs can model chemical reactions, screen new drugs, or design new materials.
