Which confirms my suspicions, but also sheds light on how old the confusion is!
There are a bunch of assumptions that are easy to make (because they are almost always true), but very hard to get rid of when they aren't:
- that entropy is objective/ontological rather than subjective/epistemic (see the sketch after this list)
- that entropy is equivalent to disorder
- that temperature can always be defined
- that entropy is extensive
(- I think there was at least one more, but I had to do something else in between and I don't remember it now)
- oh yeah, maybe it was that there's a difference between a distribution and a macrostate? (not sure about that one myself)
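To make the first point concrete: in the Jaynes/Shannon view, entropy is a property of the probability distribution an observer assigns given their information, not of the system itself. Here is a minimal sketch of that, assuming a toy system with 8 microstates (the numbers and both observers are invented purely for illustration):

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum_i p_i * log2(p_i), in bits (0 * log 0 taken as 0)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# One and the same physical system, with 8 possible microstates.
# Observer A only knows the macroscopic constraints and assigns a
# uniform distribution over all 8 microstates.
p_A = np.full(8, 1 / 8)

# Observer B has made one extra binary measurement that rules out half
# of the microstates, and updates to a uniform distribution over 4.
p_B = np.concatenate([np.full(4, 1 / 4), np.zeros(4)])

print(shannon_entropy(p_A))  # 3.0 bits
print(shannon_entropy(p_B))  # 2.0 bits -- same system, lower entropy,
                             # purely because B knows more
```

This also bears on the last point: a macrostate is usually a *set* of microstates compatible with the macroscopic constraints, while a distribution assigns probabilities over microstates; the two only coincide if you always take the uniform distribution over the compatible set.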
Now, I don't know what the Bayesian framework can bring to the table here (not being sufficiently familiar with it down to the nuts and bolts of calculations), but if it can prevent us (and future students) from making these mistakes over and over and over again, it would be real progress.
They claim to address these criticisms; see the section "The Bayesian arrow of time."
Who knows how well they did, though.
As far as I can tell they're still making impossible assumptions: certain Bayesian inference problems can't be computed below a certain energy cost, and some can't be computed at all by an agent embedded in spacetime (excepting time travel, and sometimes even then).
I think measuring must increase (in expectation) the entropy of a closed system, unless you take measurement to be magic rather than a physical process.
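For a sense of scale, the standard back-of-the-envelope figure here is the Landauer bound (a general result, not something taken from the paper under discussion): erasing one bit of record dissipates at least k_B * T * ln(2) of heat, so any measure-and-reset cycle carries an irreducible entropy cost. A quick sketch of the numbers, with room temperature as an illustrative choice:

```python
import math

# Landauer bound: erasing one bit of information dissipates at least
# k_B * T * ln(2) of heat, i.e. the environment's entropy must rise by
# at least k_B * ln(2) per erased bit.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 300.0           # temperature in kelvin (room temperature, illustrative)

min_heat_per_bit = k_B * T * math.log(2)
print(f"Minimum dissipation per erased bit at {T:.0f} K: {min_heat_per_bit:.3e} J")
# ~2.87e-21 J: tiny, but nonzero -- any physical measurement that
# overwrites a previous record pays at least this much.
```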
Thanks, I need to look into those whenever I can find the time. However, it also sounds like they are trying to fit General Relativity in there?
I can't say for sure whether this is doomed from the start, because GR is infamously not compatible with quantum mechanics (and trying to force them together is only going to produce confusion and nonsense), or potentially revolutionary in actually managing to reconcile them...
Practically, though, I think the requirement that you physically interact with the system is what dooms you, even before you hit anything "quantum".