The second law of thermodynamics appears to be a mysterious concept in physics, full of contradictions. It says that physical states evolve irreversibly, yet the underlying physical laws are reversible. It says that in closed systems, states of high entropy never evolve into states of lower entropy, yet the Poincaré recurrence theorem says that almost every state in a closed system eventually returns arbitrarily close to its original configuration.

A few weeks ago ddarius pointed me to the works of E. T. Jaynes, in which he clears up the mystery surrounding entropy and the second law of thermodynamics. I will present a quick overview of what I learned about entropy from his papers.

First, throw away any idea that entropy has anything to do with order or disorder. That analogy was created over 100 years ago, and it does more harm than good for understanding entropy.

Now, the most important thing to understand about entropy is that it is not a function of state. You cannot point to a flask and say the entropy of this particular flask is such and such. Instead, entropy is a function of a *description* of state. A description of a state characterizes the set of states that satisfy the description. For example, I can describe a flask containing a certain number of moles of ideal gas at a certain volume and pressure. There are many different collections of molecules at various positions and momenta that satisfy such a description. This set of states forms some (hyper-)volume in phase space. The logarithm of this volume is the entropy of the description. Actually, for historical reasons, entropy is defined as the logarithm of this volume times Boltzmann’s constant, but this constant factor is not important.
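To make this concrete, here is a toy sketch of my own (a hypothetical discrete model, not from Jaynes): N particles that can each sit in the left or right half of a box. A description such as "n of the N particles are in the left half" is satisfied by a countable set of microstates, and its entropy is Boltzmann's constant times the log of that count.

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

# Toy discrete model: N particles, each in the left or right half of a box.
# The description "n of N particles are in the left half" is satisfied by
# C(N, n) distinct microstates; its entropy is k_B times the log of that count.
def entropy_of_description(N, n):
    return k_B * math.log(math.comb(N, n))

N = 100
# The balanced description is satisfied by vastly more microstates...
S_balanced = entropy_of_description(N, 50)
# ...than "all particles on the left", which picks out a single microstate.
S_all_left = entropy_of_description(N, 0)

print(S_balanced > S_all_left)  # True
print(S_all_left)               # 0.0, since log(1) = 0
```

The point is that entropy here attaches to the *description* ("n on the left"), not to any individual arrangement of particles.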

Liouville’s theorem says that the volume of a subset of phase space is preserved under time evolution of the system. Therefore, if you want to build a system that *reliably* takes you from any state satisfying description `A` to some state satisfying description `B`, the volume of the region of phase space satisfying `B` had better be at least as large as the volume of the region satisfying `A`. In other words, the entropy of `B` has to be at least the entropy of `A`.
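Liouville's theorem can be illustrated with perhaps the simplest example (a toy choice of mine, not from the original argument): a harmonic oscillator with mass and frequency set to 1, whose time evolution is a rotation in the (q, p) phase plane and therefore preserves phase-space area exactly.

```python
import numpy as np

# Harmonic oscillator (m = omega = 1): evolving for time t maps the phase
# point (q, p) by a rotation. A linear map scales areas by the absolute
# value of its determinant, so a determinant of 1 means area is preserved,
# which is Liouville's theorem for this system.
def evolve(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s], [-s, c]])

J = evolve(1.7)  # 1.7 is an arbitrary evolution time
print(np.linalg.det(J))  # ≈ 1.0: the flow preserves phase-space area
```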

That is all there is to the second law. It is not so much a law of physics as it is a law of inference.

Notice that it is quite possible for a state with description `A` to evolve into a state with description `B` where the entropy of `B` is less than the entropy of `A`; however, this process will necessarily be unreliable. If you keep trying the same process again and again with random starting states satisfying description `A`, the probability that you will end up in a state satisfying description `B` is at most (and likely much smaller than) exp((S(`B`) - S(`A`))/k) where S(`A`) is the entropy of `A` and S(`B`) is the entropy of `B` and k is Boltzmann’s constant.
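The bound is a one-liner in code. Here is a sketch; the entropy gap of 20 k is a made-up number purely for illustration.

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def reliability_bound(S_A, S_B):
    """Upper bound on the probability of evolving from a random state
    satisfying description A to a state satisfying description B:
    exp((S(B) - S(A)) / k)."""
    return math.exp((S_B - S_A) / k_B)

# Hypothetical numbers: B's entropy is lower than A's by 20 k.
S_A = 20 * k_B
S_B = 0.0
print(reliability_bound(S_A, S_B))  # exp(-20), roughly 2e-9
```

Note that the bound only bites when S(`B`) < S(`A`); when S(`B`) ≥ S(`A`) it exceeds 1 and says nothing.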

Jaynes, in "The Evolution of Carnot’s Principle", notes that if S(`B`) is smaller than S(`A`) by only (1 microcalorie / room temperature) then the probability of reaching a state satisfying `B` from a random state satisfying `A` is less than exp(-10^{15}), which is approximately 10^{-440000000000000}, an astronomically low probability. Stated differently, this is an upper bound on the probability of extracting 1 microcalorie more energy than the second law allows from a heat engine operating with a cold sink at room temperature.

A more optimistic point of view says that there could be up to a 79% chance of getting an extra 1 zeptojoule of work out of a heat engine operating at room temperature. I will leave it as an exercise to the reader to look up how much a zeptojoule is. But remember the 79% chance is an upper bound on the probability. The actual probability could be much lower depending on the exact process of the engine.
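Both figures can be checked numerically. The sketch below assumes a room temperature of 300 K (the post does not state a value), which reproduces both the exp(-10^{15}) bound and the 79% figure.

```python
import math

k_B = 1.380649e-23   # Boltzmann's constant, J/K
T = 300.0            # assumed room temperature, K

# Entropy deficit of 1 microcalorie (4.184e-6 J) at room temperature:
dS_micro = 4.184e-6 / T
print(dS_micro / k_B)                    # ≈ 1e15, so the bound is exp(-1e15)
print(-(dS_micro / k_B) / math.log(10))  # ≈ -4.4e14, i.e. about 10^(-440000000000000)

# Entropy deficit of 1 zeptojoule (1e-21 J) at room temperature:
dS_zepto = 1e-21 / T
print(math.exp(-dS_zepto / k_B))         # ≈ 0.79
```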