
The arrow of time is probably one of the most written-about fundamental topics in physics. I myself have written about it once. To the right are some writings and movies related to time; these are only the movies and books I have seen. My favorite math book pair about time is probably Mackey’s “Time’s Arrow” and “Space, Time, Matter” by Hermann Weyl; the two favorite movies are “Groundhog Day” and “The Time Machine” (both with original ideas: time loops had not appeared before Groundhog Day, and time machines not before Wells); the favorite physics book pair is Kip Thorne’s book about “time warps” (proving that the Einstein equations allow for time travel if some strange matter is allowed) and Hawking’s “A Brief History of Time” (which floats the original idea that the universe might have no boundary). Originality is important to me. We should honor it, because if we don’t, who will come up with new ideas? The nihilistic approach is not to worry and just let AI do it in the future.
Eddington’s interpretation is that there is an arrow of time because of entropy. Indeed, even very simple processes like $T(x,y)=(2x-y+10\sin(x),\,x)$ lose information after a short time and are already Markov processes that increase entropy [if one looks at it from the engineering perspective; for a mathematician it is of course a diffeomorphism of the 2-torus, which is (as the name tells) invertible. Whenever we describe the map with a finite partition (finitely many data measurements), then with respect to this partition we have a Markov process with transition probabilities. Markov processes of course are not invertible]. The point $(x,y)$ is not really properly defined in real arithmetic (without using an insane amount of machine precision), and $T^{-100}(T^{100}(x,y))$ does not get you back to $(x,y)$.
Try it: T[{x_, y_}] := {2x - y + 10 Sin[x], x}; S[{x_, y_}] := {y, 2y - x + 10 Sin[y]}; Nest[S, Nest[T, {0.4, 0.6}, 10], 10] gives you again {0.4, 0.6}, as S is the inverse of T. But Nest[S, Nest[T, {0.4, 0.6}, 90], 90] gives a completely different point. We have lost the ability to restore the initial point. At time t=90 you can still control it by increasing the machine precision, but for $t=10^{20}$ no computer has enough memory to even store a single point to the required precision. The motion of this completely deterministic, reversible system has, from the practical point of view, become a Markov chain. Real floating point arithmetic does not even satisfy the basic laws of arithmetic: try T[x_] := 4x(1-x); S[x_] := 4x - 4x^2; {Nest[T, 0.4, 100], Nest[S, 0.4, 100]}. The two processes, one for T and one for S, are mathematically the same but produce completely different trajectories. There is obviously an arrow of time, but only because we cannot keep track of the state in even the simplest cases.
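To see how precision buys back reversibility, here is a small sketch of the experiment above in arbitrary-precision arithmetic; the choice of 500 starting digits is mine, comfortably more than what 90 steps destroy:

T[{x_, y_}] := {2 x - y + 10 Sin[x], x};
S[{x_, y_}] := {y, 2 y - x + 10 Sin[y]};
(* start from a 500-digit point instead of a machine number *)
p = SetPrecision[{4/10, 6/10}, 500];
Nest[S, Nest[T, p, 90], 90]  (* back to {0.4000..., 0.6000...} *)

Each step multiplies errors by a bounded factor, so the number of digits needed grows only linearly in t; but for $t=10^{20}$ that linear growth already exceeds any physical memory.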
We do not have to look at the complicated process of how the Charles river cormorant relieves itself, which is a complicated motion of an astronomically large number of waste molecules, all moving perfectly deterministically and reversibly, even on a quantum level, as the Schrödinger equation $i\hbar\,\dot{\psi} = H\psi$ is a reversible system for the wave function $\psi$ describing that system as well as its surroundings (considering everything within a light minute). The solution of that simple equation is $\psi(t) = e^{-iHt/\hbar}\,\psi(0)$, and $\psi(0) = e^{iHt/\hbar}\,\psi(t)$ gives you the initial wave function back.
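As a minimal finite-dimensional sketch of this reversibility (my illustration, with $\hbar=1$ and a random Hermitian matrix standing in for the Hamiltonian $H$):

n = 8; t = 5.0;
A = RandomComplex[{-1 - I, 1 + I}, {n, n}];
H = (A + ConjugateTranspose[A])/2;       (* Hermitian Hamiltonian *)
psi0 = Normalize[RandomComplex[{-1 - I, 1 + I}, n]];
psit = MatrixExp[-I H t, psi0];          (* forward evolution *)
Norm[MatrixExp[I H t, psit] - psi0]      (* of order 10^-14: fully reversed *)

The unitarity of $e^{-iHt}$ is what makes the backward evolution exact.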
A physicist might say: but we cannot find the wave function $\psi$. This is not the point. Mathematically, the process is reversible; in principle, we could reverse it. There is a basic clash of cultures between mathematics and engineering. The latter often does not accept perfectly valid mathematical arguments because they are “impractical” or “unlikely”. Poincaré recurrence is a mathematical fact. It is dismissed by almost everybody because the waiting time is so large. Mathematicians however do not care about “practicalities”: they prove theorems. And Poincaré recurrence is one of the simplest results for any measure-preserving system. But even mathematicians (like Zermelo) dismissed it as a valid argument. When I was at the University of Texas in Austin in 1998, there was a book talk by Ilya Prigogine, which of course drew attention like any Nobel prize winner does. At that time I entertained myself by going to talks and seminars of famous people, like Weinberg’s physics seminar. I also remember a talk of Edsger Dijkstra, who was at Austin. These were all usual talks in usual seminar rooms or lecture halls. Prigogine’s talk was completely different. He was promoting his book “The End of Certainty”, which had come out in 1997.
The abstract of the book reads:
Time, the fundamental dimension of our existence, has fascinated artists, philosophers, and scientists of every culture and every century. All of us can remember a moment as a child when time became a personal reality, when we realized what a “year” was, or asked ourselves when “now” happened. Common sense says time moves forward, never backward, from cradle to grave. Nevertheless, Einstein said that time is an illusion. Nature’s laws, as he and Newton defined them, describe a timeless, deterministic universe within which we can make predictions with complete certainty. In effect, these great physicists contended that time is reversible and thus meaningless.
The book talk was fascinating as an event; tea was served and Prigogine was treated like a god. Prigogine made the bold claim that, because simple maps like the baker’s map are chaotic, determinism is untenable. As a scientist, he is certainly correct that even the simplest chaotic systems (the standard map and the logistic map above illustrate this already) are not reversible in experiments. Mathematically they are, but the experiments tell otherwise, even for small times like 100 steps. It is not the complexity of the system, as in statistical mechanics; it is the fundamentally built-in sensitive dependence on initial conditions, multiplying errors in each step, which makes the process look irreversible. There are lots of places where experiment and mathematics clash and where we usually do not mind. Mathematically, the harmonic series diverges. Physically it converges, because we can never witness the partial sums getting larger than 100: after $10^{10}$ terms the sum is still only about $23.6$. Try it out; the code for the experiment is one line: s=0.; Do[s += 1./n, {n, 1, 10^10}]; Print[s]. We know that a billiard gas system in a box is chaotic and that if we shake the box and start with all balls in one half of the room, we end up with an equilibrium where all parts of space have similar density. Mathematically, we know however that, since the motion is reversible, we will end up with a situation where all balls are on one side again. Zermelo already objected to that, but the objection is of a practical nature: we cannot wait multiple times the age of the universe for this to happen. But mathematically, it is simply not true that entropy always increases. There is an arrow of time because our limitations in accessing the information of the current state of even a small system make us lose information. But in principle, things are reversible. Claiming that entropy increases in a process is just restating that whenever we make measurements, we deal with finitely many data, with finite sigma algebras of events, and that this finite sigma algebra does not cover the full sigma algebra. A probabilistic model can then be more realistic. But the real system is perfectly reversible.
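As a toy stand-in for the billiard gas (my illustration, not Prigogine’s or Zermelo’s), take the discrete Arnold cat map on a finite grid: it is a permutation of the grid points, so every initial point recurs exactly:

n = 101;
cat[{x_, y_}] := Mod[{2 x + y, x + y}, n];   (* invertible: determinant 1 *)
p0 = {3, 5};
Length[NestWhileList[cat, cat[p0], # =!= p0 &]]  (* steps until exact return *)

For a hard-sphere gas the analogous recurrence time exceeds the age of the universe many times over, which is exactly Zermelo’s practical objection.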
A finite dynamical system like $T(x) = x^2 + 1 \mod n$ for some large $n$ is currently better described using probabilistic methods. Note that this map is a permutation on the attractor of the system. We do not have the mathematics to decide whether there are points $x$ on the attractor and relatively small $k$ such that $T^k(x)=x$. Any such technique would allow us to factor the large number $n$. The physicist’s approach is to model it statistically. This is what cryptologists do: they treat the system as a random number generator and use the birthday paradox to predict how long it takes to find a factor of $n$ (the Pollard rho algorithm). What Prigogine did was to declare that irreversibility is baked into the fundamental laws of nature. This is pure speculation. We do not know of any “fundamental law” that is not reversible. For me, Prigogine’s talk (and book) felt shallow, not because of philosophical objections, but because already then the “Chaos Frenzy” had been overdone. It had already entered Jurassic Park in Hollywood. But it was a memorable event.
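Here is a minimal sketch of that rho idea (a standard textbook formulation, not code from the talk): iterate $T(x)=x^2+1 \mod n$ with Floyd’s tortoise-and-hare cycle detection; a collision modulo a prime factor $p$ of $n$ surfaces as a nontrivial gcd:

pollardRho[n_] := Module[{x = 2, y = 2, d = 1},
  While[d == 1,
    x = Mod[x^2 + 1, n];                 (* tortoise: one step *)
    y = Mod[Mod[y^2 + 1, n]^2 + 1, n];   (* hare: two steps *)
    d = GCD[Abs[x - y], n]];
  d]
pollardRho[8051]   (* 8051 = 83*97; finds the factor 97 *)

The birthday paradox predicts a collision modulo $p$ after about $\sqrt{p}$ steps, which is why the method finds small factors quickly.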
The step from “not being able to do or see something” to “it cannot be done or does not exist” is a fallacy that has been committed since antiquity: Democritus claimed that there must be atoms because he was unable to divide dust particles. In principle this was lazy thinking. The step to see that there really are “atoms” needed much more work and insight, like understanding Brownian motion (Einstein), seeing the quantization of charge (Millikan’s oil drop experiment), then Rutherford’s experiments realizing that atoms have structure, and then the splitting of the atom by Hahn, Strassmann, Meitner and Frisch. “Speculation” is quite cheap. What is lasting are mathematical theorems or experimental evidence that can be replicated. Even now in retrospect, Prigogine’s “The End of Certainty” did not add any new angle to the story of time. It just made the bold claim that “physical processes are fundamentally irreversible”. If we shuffle a deck of cards completely randomly, we of course will in general end up with a “random sequence of cards”. The shuffling is not irreversible, however: do the same shuffle enough times and you are back to the original situation (see the sketch below). Still, one sees statements like “shuffling increases entropy”. This is what I was reminded of when filming the “Cormorant sh..” on the Charles river.
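The shuffle remark can be checked directly (my sketch, using a perfect out-shuffle as the fixed shuffle):

(* a fixed shuffle is a permutation, so repeating it must restore the deck *)
shuffle[deck_] := Flatten[Transpose[Partition[deck, Length[deck]/2]]];
deck = Range[52];
Length[NestWhileList[shuffle, shuffle[deck], # =!= deck &]]  (* 8 *)

Eight perfect out-shuffles restore a 52-card deck; any fixed shuffle, being a permutation, restores it after finitely many repetitions.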