
The circuits that make up the brain of a computer require a lot of energy to operate — so much so that running computers accounts for ~5 percent of U.S. energy consumption. As the demand for computing power increases with the rapid technological advances that bring us ever-smarter cell phones, laptops, and other devices, there’s tremendous interest among manufacturers and scientists in developing more energy-efficient computer systems. A new paper by SFI Professor David Wolpert and Program Postdoctoral Fellow Artemy Kolchinsky in the New Journal of Physics tackles one of the major problems that computer scientists will need to overcome to achieve that goal: the layout of a circuit limits which parts of it can interact — and those limitations, in turn, affect the thermodynamic costs of operating the circuit. 

The paper lays out specific equations for surveying a small patch of a vast, unexplored territory in physics: if we go beyond the simple case of erasing a bit, how much energy does it take to manipulate information, and how much of that energy is lost to heat? Wolpert says “[We are just beginning] to dip our toes into the swimming pool” of the fundamental physics of computing. Nonetheless, the equations for the thermodynamics of circuits represent a significant step forward. They show that “there is water in the pool, and we’ve now got the tools to start to analyze it.”

“Once you start to put constraints on the physical system that performs your computation, such as the fact that it has to be built out of simple interconnected units, a lot of richness emerges in terms of minimal thermodynamic costs,” Kolchinsky says.

Physicists have been grappling with the relationship between information and thermodynamics since 1867, when James Clerk Maxwell proposed a famous thought experiment: a microscopic “demon” that could corral fast and slow gas molecules into separate chambers, thereby increasing order and apparently violating the second law of thermodynamics. By gauging the speed of the molecules, the demon would presumably be performing some form of computation. The question of how much energy such computation requires was picked up in 1961 by Rolf Landauer, who calculated the amount of energy lost to heat when a single bit of information — a 1 or a 0 — is erased, and again in 1982 by Charles Bennett.
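
For context, the bound Landauer derived (a textbook result, stated here for reference rather than drawn from the new paper) puts the minimum heat released by erasing one bit at temperature T at

$$Q_{\mathrm{erase}} \;\ge\; k_B T \ln 2 \;\approx\; 3 \times 10^{-21}\ \mathrm{J} \quad (T \approx 300\ \mathrm{K}),$$

where $k_B$ is Boltzmann’s constant.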

But after Landauer’s and Bennett’s insights, the field was stymied when it came to “computers” more elaborate than bit-erasers, according to Wolpert. He says the problem was that the physicists analyzing computation didn’t have the right set of tools at the time: they were using methods from equilibrium statistical physics to investigate the inherently non-equilibrium phenomenon of computation.

“Computers, living systems — frankly, anything interesting is highly non-equilibrium,” Wolpert says. If you unplugged your laptop and let it run out of charge, it would reach equilibrium and simply stop working. Since the work of Landauer, Bennett, and others, however, there has been a revolution in statistical physics, producing a deep understanding of non-equilibrium systems (Jarzynski, 1997; Crooks, 1999). In 2018, Wolpert and his collaborators saw an opportunity to revisit the entire issue of the physics of computation by capitalizing on these advances. In 2019, the group published one of their first successful applications of the new physics, revealing previously hidden complexities in the seemingly simple process of flipping a bit from 1 to 0.
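
One pillar of that revolution, included here only as background (it is a standard result from the 1990s, not one of the new paper’s equations), is the Jarzynski equality, which relates the work $W$ done on a system driven arbitrarily far from equilibrium to the equilibrium free-energy difference $\Delta F$:

$$\left\langle e^{-W/k_B T} \right\rangle = e^{-\Delta F / k_B T},$$

where the average is taken over many repetitions of the process; applying Jensen’s inequality recovers the familiar second-law statement $\langle W \rangle \ge \Delta F$.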

The current paper represents three significant advances for the physics of computation. First, it scales up the application of non-equilibrium physics from bits to circuits. Circuits are more complex than bits: they take in inputs, pass them through a series of logic gates to compute a function, then output the results. Second, the authors derived a formula for how much less energy it would take to compute the same input-output function as a given circuit, were it not for the physical constraints embodied in the circuit’s layout. Finally, the move to circuits creates an overlap between the thermodynamics of computation and the sub-field of computer science known as circuit theory. The authors hope their findings will attract the attention of more computer scientists, who might apply some of the new approaches and join the larger effort to re-engineer computers based on the physics of information.
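
To make the first two advances concrete, here is a minimal, back-of-the-envelope sketch in Python, emphatically not the paper’s actual derivation. It uses only the generalized Landauer cost (the drop in Shannon entropy from a device’s inputs to its outputs, multiplied by $k_B T \ln 2$) together with an invented toy distribution in which two of three input bits are perfectly correlated. Because each gate in a circuit only sees its own inputs, not the full joint state of the system, the gate-by-gate cost of computing a three-input AND can exceed the cost of computing the same function in a single, unconstrained step.

```python
from itertools import product
from math import log2

KT_LN2 = 2.87e-21  # joules per bit of entropy drop at ~300 K (Landauer's bound)

def entropy(dist):
    """Shannon entropy, in bits, of a dict mapping outcomes to probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def push_forward(dist, fn):
    """Distribution of fn(x) when x is drawn from dist."""
    out = {}
    for x, p in dist.items():
        y = fn(x)
        out[y] = out.get(y, 0.0) + p
    return out

def landauer_bits(dist, fn):
    """Minimum entropy drop (bits) for a device applying fn to inputs drawn from dist."""
    return entropy(dist) - entropy(push_forward(dist, fn))

# Toy input distribution (invented for illustration): x1 and x3 are perfectly
# correlated, x2 is independent and uniform.
inputs = {(x1, x2, x1): 0.25 for x1, x2 in product((0, 1), repeat=2)}

# Cost of computing x1 AND x2 AND x3 in one step, with access to the full joint distribution.
monolithic = landauer_bits(inputs, lambda v: v[0] & v[1] & v[2])

# The same function as a two-gate circuit; each AND gate only "sees" the marginal
# distribution over its own two inputs.
gate1_in = push_forward(inputs, lambda v: (v[0], v[1]))          # marginal of (x1, x2)
gate2_in = push_forward(inputs, lambda v: (v[0] & v[1], v[2]))   # marginal of (g1, x3)
circuit = (landauer_bits(gate1_in, lambda v: v[0] & v[1]) +
           landauer_bits(gate2_in, lambda v: v[0] & v[1]))

print(f"one-step    : {monolithic:.3f} bits  (~{monolithic * KT_LN2:.2e} J)")
print(f"gate-by-gate: {circuit:.3f} bits  (~{circuit * KT_LN2:.2e} J)")
```

In this toy example the gate-by-gate total exceeds the one-step minimum because the second AND gate cannot exploit the correlation between its inputs; the formula mentioned above quantifies gaps of this kind for real circuits.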

Wolpert is not convinced that these results on circuits will have much effect on the energy expenditures of real computers; they may prove more significant for understanding biological brains and cells. However, for Wolpert, the promise of more efficient technology is secondary anyway. His primary interest is in forging a new understanding of the deep relationship between the laws of physics and computation. “After all,” he says, “abstractly speaking, computation is just a dynamical process transforming information, and the laws of physics are about how the information in the physical world changes dynamically.”

“It’s this down in the deep, wrestling with God kind of stuff where there’s this relationship between information processing and the foundations of how the universe works,” Wolpert says. “We had thought that we understood it by just looking at what happens with erasing bits, using tools that weren’t quite right. But now that we have the right tools, we can start to analyze what are the fundamental relationships between the physical rules of reality and processing and transformation of information. And it’s a far, far more elaborate, far richer structure than what we had realized.”

Read the paper, "Thermodynamics of computing with circuits," in the New Journal of Physics (Jun 24, 2020)

Read the article, "To make smartphones sustainable, we need to rethink thermodynamics," in New Scientist (March 11, 2020)