Scientists believe that time is continuous, not discrete: roughly speaking, they believe that it does not progress in “chunks,” but rather “flows,” smoothly and continuously. So they often model the dynamics of physical systems as continuous-time “Markov processes,” named after mathematician Andrey Markov. Indeed, scientists have used these processes to investigate a range of real-world phenomena, from folding proteins to evolving ecosystems to shifting financial markets, with astonishing success.
However, a scientist can invariably observe the state of a system only at discrete times, separated by some gap, rather than continuously. For example, a stock market analyst might repeatedly observe how the state of the market at the beginning of one day is related to its state at the beginning of the next, building up a conditional probability distribution of the market’s state on the second day given its state on the first.
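To make that concrete, here is a minimal sketch of such an estimate. The three market “states” and the rate matrix below are invented purely for illustration and are not taken from the papers; the point is only that the analyst sees the process once per day and tallies day-to-day transitions.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Toy continuous-time Markov process over three illustrative "market states".
# The rate matrix Q (rows sum to zero) is invented purely for illustration.
states = ["down", "flat", "up"]
Q = np.array([[-0.6,  0.4,  0.2],
              [ 0.3, -0.5,  0.2],
              [ 0.1,  0.5, -0.6]])

def sample_daily_states(Q, n_days, dt=1.0):
    """Observe the process once per 'day' (every dt time units) by
    drawing from the day-to-day transition matrix exp(Q * dt)."""
    P = expm(Q * dt)
    x = 0
    seq = [x]
    for _ in range(n_days - 1):
        x = rng.choice(len(P), p=P[x])
        seq.append(x)
    return seq

# Build the empirical conditional distribution P(state tomorrow | state today)
# from the discretely observed sequence -- this is all the analyst ever sees.
seq = sample_daily_states(Q, n_days=100_000)
counts = np.zeros((3, 3))
for a, b in zip(seq, seq[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

for i, s in enumerate(states):
    print(f"P(tomorrow | today={s}) =", np.round(P_hat[i], 3))
```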
In a pair of papers, one appearing in this week’s Nature Communications and one appearing recently in the New Journal of Physics, physicists at the Santa Fe Institute and MIT have shown that in order for such two-time dynamics over a set of “visible states” to arise from a continuous-time Markov process, that Markov process must actually unfold over a larger space, one that includes hidden states in addition to the visible ones. They further prove that the evolution between such a pair of times must proceed in a finite number of “hidden timesteps,” subdividing the interval between those two times. (Strictly speaking, this proof holds whenever the evolution from the earlier time to the later time is noise-free; see the papers for technical details.)
“We’re saying there are hidden variables in dynamic systems, implicit in the tools scientists are using to study such systems,” says co-author David Wolpert (Santa Fe Institute). “In addition, in a certain very limited sense, we’re saying that time proceeds in discrete timesteps, even if the scientist models time as though it proceeds continuously. The scientists may not have been paying attention to those hidden variables and those hidden timesteps, but they are there, playing a key, behind-the-scenes role in many of the papers those scientists have read, and almost surely also in many of the papers those scientists have written.”
In addition to discovering hidden states and timesteps, the scientists also found a tradeoff between the two: the more hidden states there are, the smaller the minimal number of hidden timesteps required. According to co-author Artemy Kolchinsky (Santa Fe Institute), “These results surprisingly demonstrate that Markov processes exhibit a kind of tradeoff between time and memory, which is often encountered in the separate mathematical field of analyzing computer algorithms.”
To illustrate the role of these hidden states, co-author Jeremy A. Owen (MIT) gives the example of a biomolecular process, observed at hour-long intervals: If you start with a protein in state ‘a,’ and over an hour it usually turns to state ‘b,’ and then after another hour it usually turns back to ‘a,’ there must be at least one other state ‘c,’ a hidden state, that is influencing the protein’s dynamics. “It’s there in your biomolecular process,” he says. “If you haven’t seen it yet, you can go look for it.”
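One standard way to see why such a hidden state is forced on you (a sketch of a textbook embeddability argument, not necessarily the proof used in the papers): any continuous-time Markov process on the two visible states alone has an hour-to-hour transition matrix of the form exp(Q), whose determinant is exp(trace(Q)) and therefore strictly positive, while the observed “a goes to b, b goes to a” behavior is, in the noise-free limit, a swap matrix with determinant -1. The obstruction persists even with noise, as long as switching is more likely than staying, because the determinant then remains negative. The snippet below checks this numerically for a few arbitrary two-state rate matrices.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# The observed hour-to-hour map on the two visible states {a, b}:
# a -> b and b -> a, i.e. a swap. Its determinant is -1.
swap = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
print("det(swap) =", np.linalg.det(swap))

# For any 2x2 rate matrix Q (off-diagonal rates >= 0, rows sum to 0),
# the one-hour transition matrix expm(Q) has determinant
# exp(trace(Q)) > 0, so it can never equal the swap -- hence a third,
# hidden state is needed to reproduce the observed dynamics.
for _ in range(3):
    r_ab, r_ba = rng.uniform(0, 5, size=2)      # illustrative rates
    Q = np.array([[-r_ab,  r_ab],
                  [ r_ba, -r_ba]])
    P = expm(Q)                                  # transition matrix over one hour
    print(np.round(P, 3), " det =", round(np.linalg.det(P), 4))
```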
The authors stumbled on the necessity of hidden states and hidden timesteps while searching for the most energy-efficient way to flip a bit of information in a computer. In that investigation, part of a larger effort to understand the thermodynamics of computation, they discovered that there is no direct way to implement a map that sends 1 to 0 and simultaneously sends 0 to 1. Rather, in order to flip a bit of information, the dynamics must pass through at least one hidden state and take at least three hidden timesteps.
Above: The minimal configuration for flipping a bit of information from 1 to 0 requires three states and three sequential timesteps. (Image: David Wolpert)
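The following is a minimal sketch of a construction consistent with that figure; the particular ordering of the three steps is an illustrative assumption, not necessarily the paper’s exact protocol. The idea is that with one hidden state h alongside the visible states 0 and 1, each timestep only needs to implement a simple collapse map (every state either stays put or is funneled onto a single target), which master equation dynamics can approach by driving the relevant transition rate to be very large; composing three such steps flips the bit.

```python
# States: 0 and 1 are visible, "h" is the hidden state.
# Each step is a simple collapse map; the ordering below is one
# illustrative way to stage the flip and may differ from the paper's
# exact protocol.
step1 = {0: 0, 1: "h", "h": "h"}   # park the 1 in the hidden state
step2 = {0: 1, 1: 1, "h": "h"}     # move the 0 to 1
step3 = {0: 0, 1: 1, "h": 0}       # release the hidden state to 0

def run(x):
    """Apply the three hidden timesteps in sequence."""
    for step in (step1, step2, step3):
        x = step[x]
    return x

print(run(0), run(1))   # prints: 1 0 -- the bit has been flipped
```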
It turns out that any biological or physical system that “computes” outputs from inputs, like a cell processing energy or an ecosystem evolving, conceals the same kind of hidden variables as the bit-flip example.
“These kinds of models really do come up in a natural way,” Owen adds, “based on the assumptions that time is continuous, and that the state you’re in determines where you’re going to go next.”
“One thing that was surprising, and that makes this more general, was that all of these results hold even without thermodynamic considerations,” Wolpert recalls. “It’s a very pure example of Phil Anderson’s mantra ‘more is different,’ because all of these low-level details [hidden states and hidden timesteps] are invisible to the higher-level details [map from visible input state to visible output state].”
“In a very minor way, it’s like the limit of the speed of light,” Wolpert muses. “The fact that systems cannot exceed the speed of light is not immediately consequential to the vast majority of scientists. But it is a restriction on allowed processes that applies everywhere and is something to always have in the back of your mind.”
Read the paper, “A space-time tradeoff for implementing a function with master equation dynamics,” in Nature Communications (April 15, 2019)
Read the National Science Foundation news brief (April 15, 2019)