The physics, the math – evolution of computational physics

[“What’s changed in the last ~50 years” series]

Some of my more interesting work as a systems engineer at Hughes was on projects with satellite hardware engineers, in the days when they still wrote much of their own “software” for operating payloads. Maybe a few thousand lines of code. Often quick-and-dirty. Over the decades, that evolved to millions of lines of code. And career software engineers.

Something similar has evolved in particle physics: the tango (interplay) of doing the physics and doing the math, the evolution of computational physics, and the challenge of viable career paths and institutional knowledge.

• Symmetry Magazine > “The coevolution of particle physics and computing” by Stephanie Melchor (9-28-2021)

(quote) The incredible computational demands of particle physics and astrophysics experiments have consistently pushed the boundaries of what is possible.

As computing has grown increasingly more sophisticated, its own progress has enabled new scientific discoveries and breakthroughs.

Historical recap: Fermilab … mainframes … Tevatron … analyzing data from millions of particle collisions per second … microprocessor clusters … CERN … World Wide Web … SLAC National Accelerator Laboratory … Moore’s Law … computational nodes … supercomputers … simulations … Department of Energy’s Exascale Computing Project (exascale computers) … machine learning … quantum computers … National Quantum Initiative Act of 2018 …

(quote) In 1989, in recognition of the growing importance of computing in physics, Fermilab Director John Peoples elevated the computing department to a full-fledged division.

For more than a decade, supercomputers … have been providing theorists with the computing power to solve with high precision equations in quantum chromodynamics, enabling them to make predictions about the strong forces binding quarks into the building blocks of matter.

And although astrophysicists have always relied on high-performance computing for simulating the birth of stars or modeling the evolution of the cosmos, [Berkeley Lab astrophysicist] Nugent says they are now using it for their data analysis as well.

To properly correct for detector effects when analyzing particle detector experiments, they need to simulate more data than they collect. “If you collect 1 billion collision events a year,” [Berkeley Lab physicist] Calafiura says, “you want to simulate 10 billion collision events.”

Machine learning has been important in particle physics as well, says Fermilab scientist Nhan Tran. “[Physicists] have very high-dimensional data, very complex data,” he says. “Machine learning is an optimal way to find interesting structures in that data.”
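As a toy illustration of “finding interesting structures” in high-dimensional data (my own sketch, not any tool from the article), a few lines of plain Python can recover two hidden clusters in five-dimensional points with k-means:

```python
import random

random.seed(1)

# Toy "high-dimensional" data: two clusters in 5 dimensions,
# standing in for structure hidden in complex detector data.
def sample(center, n):
    return [[c + random.gauss(0, 0.3) for c in center] for _ in range(n)]

points = sample([0.0] * 5, 50) + sample([2.0] * 5, 50)

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(group):
    return [sum(col) / len(group) for col in zip(*group)]

# A few iterations of plain k-means (k = 2).
centers = [points[0], points[-1]]
for _ in range(10):
    groups = [[], []]
    for p in points:
        groups[0 if dist2(p, centers[0]) < dist2(p, centers[1]) else 1].append(p)
    centers = [mean(g) for g in groups]

print([round(c[0], 1) for c in centers])  # cluster centers near 0.0 and 2.0
```

Real analyses use far more capable tools (deep networks, boosted trees), but the principle is the same: let the algorithm discover the grouping rather than specifying it by hand.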

In quantum computers, qubits exploit quantum superposition, which someday may permit simultaneous analysis of particle interactions across all possible paths. So, we might better examine multi-dimensional ripples in space-time. Quantum fluid dynamics.
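A minimal sketch of the superposition idea (pure Python, no quantum library): a single qubit as a two-component state vector, with a Hadamard gate producing an equal superposition of both measurement outcomes:

```python
import math

# A qubit as a 2-component complex state vector: |0> = [1, 0].
ket0 = [1.0 + 0j, 0.0 + 0j]

# Hadamard gate: puts a basis state into an equal superposition.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Matrix-vector product: apply a 2x2 gate to a qubit state."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

plus = apply(H, ket0)                    # (|0> + |1>) / sqrt(2)
probs = [abs(amp) ** 2 for amp in plus]  # Born rule: |amplitude|^2
print(probs)  # both outcomes equally likely (probability 0.5 each)
```

Each added qubit doubles the length of the state vector, which is why classical simulation of quantum systems blows up so quickly, and why a quantum computer that holds the state natively is appealing.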

(quote) “Quantum chemistry problems are hard for the very reason why a quantum computer is powerful” — because to complete them, you have to consider all the different quantum-mechanical states of all the individual atoms involved.

Multiple forces are always at play, so to accurately model real-world complexity, you have to use more complex software—ideally software that doesn’t become impossible to maintain as it gets updated over time. “All of a sudden,” says Dubey [computational scientist at Argonne National Laboratory], “you start to require people who are creative in their own right—in terms of being able to architect software.”

That’s where people like Dubey come in. At Argonne, Dubey develops software that researchers use to model complex multi-physics systems—incorporating processes like fluid dynamics, radiation transfer and nuclear burning.

Complexity requires visualization, which is another computational story.

One thought on “The physics, the math – evolution of computational physics”

  1. Here’s a different twist on computational physics: a digital assistant using regression algorithms to fit an equation to expansive raw data sets. Finding the correct model (critical variables and algebraic expression) to describe that data, rather than using a known mathematical model to compute predictions.

    When there is indeed a pattern hidden in raw data from complex physical events, there’s no magic bullet for finding a simplifying equation, especially for noisy data sets. But “machine scientists” can help.

    • Quanta Magazine > “Powerful ‘Machine Scientists’ Distill the Laws of Physics From Raw Data” by Charlie Wood (May 10, 2022) – Symbolic regression reports relationships in complicated data sets as short equations.

    (image caption) Astrophysicists modeled the solar system’s behavior in two ways. First, they used decades of NASA data to train a neural network. They then used a symbolic regression algorithm to further distill that model into an equation. In these videos — which show true positions as solid objects, and model predictions as wire mesh outlines — the neural network (left) does far worse than the symbolic regression algorithm (right).

    The goal of symbolic regression is to speed up such Keplerian trial and error, scanning the countless ways of linking variables with basic mathematical operations to find the equation that most accurately predicts a system’s behavior.

    Machine scientists are also helping physicists understand systems that span many scales. Physicists typically use one set of equations for atoms and a completely different set for billiard balls, but this piecemeal approach doesn’t work for researchers in a discipline like climate science, where small-scale currents around Manhattan feed into the Atlantic Ocean’s Gulf Stream.

    As an early proof of concept, the group [cited in article] applied the procedure to a dark matter simulation and generated a formula giving the density at the center of a dark matter cloud based on the properties of neighboring clouds. The equation fit the data better than the existing human-designed equation.


    curve-fitting function
    compressing a data set
    regression analysis
    sparse regression
    Darwinian pressure
    mutating an equation
    Bayesian theory
    symbolic regression algorithm
    deep neural network
    artificial intelligence algorithm
    “AI Feynman” – A physics-inspired method for symbolic regression (MIT)
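    The symbolic-regression idea can be sketched in miniature (a toy of my own, not “AI Feynman” or any tool from the article): scan a few candidate expressions, fit each to noisy data by least squares, and keep the one with the lowest error.

```python
import math
import random

random.seed(0)

# Noisy samples from a hidden law: y = 3 * x**2 (plus noise).
xs = [0.1 * i for i in range(1, 51)]
ys = [3.0 * x**2 + random.gauss(0.0, 0.05) for x in xs]

# Candidate expression forms a "machine scientist" might scan over.
candidates = {
    "a*x":      lambda x: x,
    "a*x**2":   lambda x: x**2,
    "a*x**3":   lambda x: x**3,
    "a*sin(x)": lambda x: math.sin(x),
}

def fit(f):
    """Closed-form least squares for y ~ a*f(x); returns (a, mse)."""
    a = sum(f(x) * y for x, y in zip(xs, ys)) / sum(f(x) ** 2 for x in xs)
    mse = sum((y - a * f(x)) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return a, mse

results = {name: fit(f) for name, f in candidates.items()}
best = min(results, key=lambda name: results[name][1])
print(best, results[best])  # "a*x**2" wins, with a close to 3
```

    A real machine scientist also mutates and recombines expressions (the “Darwinian pressure” noted above) and penalizes complexity so the winning equation stays short; this sketch only scans a fixed candidate list.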

Comments are closed.