From self-driving cars to facial recognition, modern life is growing more dependent on machine learning, a type of artificial intelligence (AI) that learns from datasets without explicit programming.
Despite its omnipresence in society, we’re just beginning to understand the mechanisms driving the technology. In a recent study, Zhengkang (Kevin) Zhang, assistant professor in the University of Utah’s Department of Physics & Astronomy, demonstrated how physicists can play an important role in unraveling its mysteries.
“People used to say machine learning is a black box—you input a lot of data and at some point, it reasons and speaks and makes decisions like humans do. It feels like magic because we don’t really know how it works,” said Zhang. “Now that we’re using AI across many critical sectors of society, we have to understand what our machine learning models are really doing—why something works or why something doesn’t work.”
As a theoretical particle physicist, Zhang explains the world around him by understanding how the smallest, most fundamental components of matter behave in an infinitesimal world. Over the past few years, he’s applied the tools of his field to better understand machine learning’s massively complex models.
Scaling up while scaling down costs
The traditional way to program a computer is with detailed instructions for completing a task. Say you wanted software that can spot irregularities on a CT scan. A programmer would have to write step-by-step protocols for countless potential scenarios.
Instead, a machine learning model trains itself. A human programmer supplies relevant data—text, numbers, photos, transactions, medical images—and lets the model find patterns or make predictions on its own.
Throughout the process, a human can tweak the parameters to get more accurate results without knowing how the model uses the input data to produce its output.
Machine learning is energy intensive and wildly expensive. To maximize profits, industry trains models on smaller datasets before scaling them up to real-world scenarios with much larger volumes of data.
“We want to be able to predict how much better the model will do at scale. If you double the size of the model or double the size of the dataset, does the model become two times better? Four times better?” said Zhang.
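Scaling laws of this kind are often well described by power laws, where doubling the data improves the loss by a fixed factor rather than a fixed multiple. The sketch below uses made-up numbers (not data from Zhang's paper) to show how fitting a line in log-log space recovers the scaling exponent:

```python
import numpy as np

# Empirical scaling laws often take the power-law form loss(N) ~ a * N**(-alpha),
# where N is the dataset (or model) size. These values are synthetic, for illustration.
N = np.array([1e6, 2e6, 4e6, 8e6, 16e6])   # hypothetical dataset sizes
loss = 3.2 * N ** -0.25                    # synthetic losses following a power law

# A power law is a straight line in log-log space, so a linear fit
# recovers the exponent alpha from the slope.
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
alpha = -slope
print(f"fitted exponent alpha = {alpha:.2f}")

# Doubling the data shrinks the loss by the factor 2**(-alpha),
# which is how one predicts performance at scale before paying for it.
print(f"loss ratio when data doubles: {2 ** -alpha:.3f}")
```

With an exponent of 0.25, doubling the dataset only cuts the loss by about 16 percent, which is why knowing the exponent ahead of time matters so much for budgeting a training run.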
A physicist’s toolbox
A machine learning model looks simple: input data → black box of computing → output that’s a function of the input.
The black box contains a neural network, which is a suite of simple operations connected in a web to approximate complicated functions. To optimize the network’s performance, programmers have conventionally relied on trial and error, fine-tuning and re-training the network and racking up costs.
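The "web of simple operations" can be made concrete with a toy example. The following sketch (a generic one-hidden-layer network, not the model from Zhang's paper; sizes and learning rate are arbitrary choices) trains itself by gradient descent to approximate f(x) = x²:

```python
import numpy as np

# Each step of a neural network is a simple operation: multiply by weights,
# add a bias, apply a nonlinearity. Composing them approximates complicated functions.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (200, 1))   # training inputs
y = x ** 2                         # target function the network must learn

W1, b1 = rng.normal(0, 1, (1, 16)), np.zeros(16)   # input -> hidden layer
W2, b2 = rng.normal(0, 1, (16, 1)), np.zeros(1)    # hidden -> output layer
lr = 0.1

for _ in range(2000):
    h = np.tanh(x @ W1 + b1)       # hidden activations
    pred = h @ W2 + b2             # network output
    err = pred - y
    # Backpropagate the squared-error loss through each simple operation.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"final mean squared error: {mse:.4f}")
```

Nothing in the loop encodes the formula x²; the network discovers it by nudging parameters downhill on the error, which is exactly the trial-and-error training whose costs the article describes.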
“Being trained as a physicist, I would like to understand better what is really going on to avoid relying on trial and error,” Zhang said. “What are the properties of a machine learning model that give it the capability to learn to do things we wanted it to do?”

In a new paper published in the journal Machine Learning: Science and Technology, Zhang solved a proposed model’s scaling laws, which describe how the system will perform at larger and larger scales. It’s not easy: the calculations require summing an infinite number of terms.
Zhang applied Feynman diagrams, a method physicists use to keep track of hundreds of thousands of terms. Richard Feynman invented the technique in the 1940s to handle hopelessly complicated calculations of elementary particles in the quantum realm. Instead of writing down algebraic equations, Feynman drew simple diagrams in which every line and vertex represents a value.
“It’s so much easier for our brains to grasp, and also easier to keep track of what kind of terms enter your calculation,” Zhang said.
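Why the bookkeeping explodes is easy to see with a standard textbook fact (a generic illustration of Wick's theorem, not the specific calculation in Zhang's paper): each Feynman diagram corresponds to one way of pairing up fields, and the number of pairings of 2n points grows as the double factorial (2n−1)!! = 1·3·5·…

```python
# Each Feynman diagram corresponds to one "Wick contraction": a way of
# pairing up 2n points. This brute-force count shows how fast the number
# of terms grows -- the growth the diagrams are designed to organize.
def count_pairings(points):
    """Count the perfect pairings of an even-sized list of points."""
    if not points:
        return 1
    first, rest = points[0], points[1:]
    total = 0
    for partner in rest:              # pair the first point with each candidate
        remaining = [p for p in rest if p != partner]
        total += count_pairings(remaining)
    return total

for n in (2, 3, 4, 5):
    print(2 * n, count_pairings(list(range(2 * n))))
# prints: 4 3 / 6 15 / 8 105 / 10 945
```

Already at ten points there are 945 distinct terms, and realistic calculations involve far more, which is why a pictorial scheme that tracks which terms enter is such a practical advantage.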
Zhang used Feynman diagrams to solve a model posed in published research from 2022. In that paper, the physicists studied their model in a particular limit. Zhang was able to solve the model beyond that limit, obtaining new and more precise scaling laws that govern its behavior.
As society runs headfirst into AI, many researchers are working to ensure the tools are being used safely. Zhang believes that physicists can join the engineers, computer scientists and others working to use AI responsibly.
“We humans are building machines that are already controlling us—YouTube algorithms that recommend videos that suck each person into their own little corners and influence our behavior,” Zhang said. “That’s the danger of how AI is going to change humanity—it’s not about robots colonizing and enslaving humans. It’s that we humans build machines that we are struggling to understand, and our lives are already deeply influenced by these machines.”
MEDIA & PR CONTACTS
Lisa Potter
Research communications specialist, University of Utah Communications
949-533-7899
lisa.potter@utah.edu