When I first started learning QM in undergrad, I was skeptical of the idea of randomness. Many years later after taking QFT in grad school... I'm still skeptical. Non-locality doesn't bother me one bit, but personally speaking, there's something deeply unsettling about the idea of "true" randomness.
First, I think it's important to define what randomness even is, for which I'll use the most universal definition, i.e., Kolmogorov randomness. A string of data is Kolmogorov random if, for a given universal Turing machine, there is no program shorter than the string that produces the string (yes, you can arbitrarily choose which universal Turing machine, but the invariance theorem makes this fact inconsequential for the most part).
So if we repeatedly set up and measure a quantum system that's not in an eigenstate and then apply the probability integral transform to the individual measurement values, we should expect to find a sequence of values drawn from a uniform distribution, and this sequence should not be compressible by any computer program.
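To make this concrete, here is a minimal sketch of the kind of procedure I have in mind (my own toy, not a real experiment): it simulates a Gaussian-distributed observable as a stand-in for actual quantum measurements, applies the probability integral transform, and uses zlib as a crude proxy for incompressibility. Being a pseudorandom simulation, it only illustrates the pipeline, not true randomness.

    # Toy sketch (assumptions: a Gaussian-distributed observable stands in for a
    # real quantum measurement; zlib is a crude stand-in for Kolmogorov complexity).
    import zlib
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    measurements = rng.normal(loc=0.0, scale=1.0, size=100_000)  # simulated outcomes

    uniform_values = norm.cdf(measurements)          # probability integral transform
    bits = (uniform_values < 0.5).astype(np.uint8)   # reduce to a bitstring
    raw = np.packbits(bits).tobytes()

    compressed = zlib.compress(raw, 9)
    print(len(raw), len(compressed))  # compressed size is about the raw size: no exploitable structure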
This is where it gets interesting though, because it may very well be the case that this sequence of measurement values is incompressible only because we lack external information, i.e., we are looking at the Kolmogorov complexity of the string from our perspective as experimenters, but from the perspective of a hypothetical observer outside the universe, the conditional Kolmogorov complexity (conditioned on some missing information) could indeed be less than the length of the string.
So where could this missing information be stored? My guess is that it's at the boundary of experimenter/experiment (not referring to a spacetime-local boundary here), since you can't represent the overall quantum state of experimenter + experiment as a separable state. That is, the information necessary to perfectly predict the result of a measurement on a quantum system is inaccessible to us precisely because we — the experimenters — are part of the system itself.
In this way, quantum randomness would be truly random from our perspective in the sense that the future is to some degree fundamentally unpredictable by humans, but just because it's genuinely random to us doesn't imply the universe is indeterministic.
I wonder if there's some way you could design an experiment that distinguishes true indeterminism from the merely unpredictable...
I can understand what you are saying, but the thing I always wonder about is why should the universe be deterministic? We're used to determinism because we experience that at a macro level. However, why should we prefer that condition at a QM level? To a certain extent, I'm actually more comfortable with the idea that it isn't deterministic and that all determinism essentially derives from chaos theory: things happen randomly, but the system constrains its output.
I guess I don't really have a reason for my preference other than thinking, if a universe pops into existence, what would I expect it to act like? Deterministic, or indeterministic? It just seems simpler to assume indeterministic because I can't think of a reason why it should be deterministic.
Edit: I know that chaos theory is built on determinism :-) I'm thinking of things like strange attractors. My ignorance leaves me with no better word for what I'm talking about, unfortunately.
Everything else we've ever observed has initially appeared to be random. There was a long, slow process of proving individual bits weren't random, and then a sudden jump in progress, after several thousand years of recorded history, when Newton and contemporaries showed it all to be pseudorandom. That is to say, it was never obvious that the macro level was deterministic.
From a statistics perspective, given the youthful nature of quantum mechanics as a field, it is more likely than not that we're observing a deterministic phenomenon from the wrong angle, so it appears random. That is, after all, how first contact played out with pretty much everything else.
It's an interesting way to think about it, but I have to say that even after reading what you've typed I don't really see it. Although Newton and contemporaries had mathematical rigour, there always seemed to be order in the universe. If you push on a cart softly, it moves slowly. If you push on it hard, it moves quickly. In fact, Galileo had to show that heavier things do not fall faster than light things, as was commonly imagined. Never did we drop a ball and expect its speed to be random. Even the modern sense of the word "random" dates from the mid-1600s, and it derives from the idea of running quickly (i.e. without paying attention to a purpose). I don't think the concept of randomness as we see it was even a concept long ago. Everything was either a result of something else, or pre-ordained. Things that you couldn't explain were "God's purpose". Something with no purpose and no mechanism would be pretty alien to early thinkers, I think.
Super interesting, thank you for the very informed and insightful comment.
Along the same lines, from my perspective, randomness appears at the "limit of our measurement resolution". In other words, when your only way to measure something is to sample it, the maximum resolution of your measurements is going to depend on the maximum speed/frequency at which you can sample the thing you are measuring. So in the end all our measurements will always be limited by whatever the fastest thing is that we can handle/operate/understand. Anything faster than that will appear random to us. And this is pretty much what the Nyquist-Shannon sampling theorem says about any wave/information.
Relating this to Kolmogorov randomness: something would be random when we can't sample it fast enough to rebuild its waveform with perfect fidelity within the time frame in which it appears random.
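As a toy illustration of the sampling point (my own example, nothing rigorous): a 9 Hz tone sampled at 10 Hz, well below its 18 Hz Nyquist rate, produces samples that fit a 1 Hz tone instead, so the original waveform cannot be rebuilt from them.

    # Aliasing sketch: sampling below the Nyquist rate loses the original waveform.
    import numpy as np

    f_signal = 9.0    # Hz
    f_sample = 10.0   # Hz, below the Nyquist rate of 2 * 9 = 18 Hz

    t = np.arange(0, 2, 1 / f_sample)
    samples = np.sin(2 * np.pi * f_signal * t)
    alias = np.sin(2 * np.pi * (f_sample - f_signal) * t)  # a 1 Hz tone

    print(np.allclose(samples, -alias))  # True: the samples match the 1 Hz alias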
Nyquist-Shannon only deals with measurements of power spectral densities in sampled systems, not measurements in general. The resolution of measuring for instance the location of a particle in QM has nothing to do with Nyquist-Shannon and does not depend on any sampling frequency.
You are correct. Now, why wouldn't it apply to any measurements?
With some creativity I believe Nyquist-Shannon can be applied to all measurements. For example you could think of a single measurement as the equivalent of a sampling rate of 1 in the time period in which the measurement was made.
> Kolmogorov random if, for a given universal Turing machine, there is no program shorter than the string that produces the string.
That’s not a definition of unbiased randomness. A true unbiased random number could be all 0’s. Nothing about an unbiased random number demonstrates that it’s random; otherwise, whatever that distinction is would be a bias in its generation.
Kolmogorov complexity is its own thing, and sequences that seem very complex can have extremely low complexity, such as long sequences of hashes of hashes.
I'm not sure what unbiased randomness is. I haven't heard that phrase before. For Kolmogorov randomness, I was using Wikipedia's description of it (https://en.wikipedia.org/wiki/Kolmogorov_complexity#Kolmogor...), although there are more technical descriptions available.
Crypto cares a lot about unbiased randomness. X bits of entropy is kind of a measurement of this.
Anyway, I suggest you reread the end of that paragraph:
“A counting argument is used to show that, for any universal computer, there is at least one algorithmically random string of each length. Whether any particular string is random, however, depends on the specific universal computer that is chosen.”
Kolmogorov complexity is really referring to the fact you can’t have lossless compression of arbitrary bit strings. You can’t encode every possible N+1 bit string using an N bit string. The computer chosen can make an arbitrary 1:1 mapping for any input though. So, it’s got nothing to do with randomness in the context of coin flipping as the mapping is predefined.
Just remember, you’re choosing the computer and at that point any input can be mapped to any output. But, after that point limits show up.
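A minimal numeric version of that counting argument (my own toy numbers): there are more (N+1)-bit strings than there are strings of length N or less, so no lossless compressor can shorten every input.

    # Pigeonhole check: 2**(n+1) strings of length n+1 vs. 2**(n+1) - 1 strings of length <= n.
    n = 8
    longer = 2 ** (n + 1)                        # number of (n+1)-bit strings
    shorter = sum(2 ** k for k in range(n + 1))  # strings of length 0..n
    print(longer, shorter, longer > shorter)     # 512 511 True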
If all "randomness" of the universe rises from a one extremely long bit string which has been created once from the source of true randomness, the bit string could contain unimaginable number of zeros only and it would be still random. For example, Lotto numbers 1,2,3,4,5,6,7 may be completely random.
Kolmogorov complexity works only if we have a big sample of random numbers, but we do not know if we have such in this universe or not.
Kolmogorov complexity is only meaningful for a specific architecture.
Without access to the architecture of the machine the universe runs on you can’t tell if the initial random string would be one bit or nigh infinite bits.
I haven’t really heard much talk about “Kolmogorov randomness” before, and so I’m wondering if you might be running up against the limits of the Wikipedia paradigm when it comes to pioneering scholarship.
The citation for that paragraph is a peer-reviewed journal article covering Kolmogorov complexity and randomness. It’s actually a really good article, by someone pretty famous named Per Martin-Löf. Which is all great, except that paper is from 1966, and in 2019 a more studied concept is something called “Martin-Löf randomness” :)
Well they're related. There is a slightly circular argument that, since it's impossible to determine for sure what the Kolmogorov entropy of a sequence is, the only way to generate a long sequence with high Kolmogorov entropy with high probability would be to use truly random numbers. Any pseudorandom shortcut has by definition lower Kolmogorov entropy as long as the generating program is shorter than the sequence.
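A quick way to see that last point (toy illustration, not a proof): a pseudorandomly generated sequence a megabyte long is fully described by the short program that generates it, so its Kolmogorov complexity is bounded far below its length.

    # A megabyte of pseudorandom bytes is reproduced exactly by a ~100-character program,
    # so its Kolmogorov complexity (relative to a Python interpreter) is tiny compared to its length.
    import random

    random.seed(42)
    sequence = bytes(random.getrandbits(8) for _ in range(1_000_000))

    program = 'import random,sys; random.seed(42); sys.stdout.buffer.write(bytes(random.getrandbits(8) for _ in range(1_000_000)))'
    print(len(sequence), len(program))  # 1000000 vs. roughly 100 characters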
A random string, in this case, is a string whose bits (or characters) are random, and the definition talks about the process that produces those bits. A string of all zeros (i.e., in this perspective, a process that produces only zero bits) is not random, but a short subsequence of a random string could be all zeros.
> Kolmogorov randomness. A string of data is Kolmogorov random if, for a given universal Turing machine, there is no program shorter than the string that produces the string
I know nothing about this field, but it strikes me as wrong intuitively. Say that I have a certain amount of data. I can find specific patterns in it (for example, a chain of 10 zeros) that are compressible (0*10). For a large enough amount of data, that can save me enough space to include a program that can print the decompressed string in less space than the original string, thus implying my original string wasn't random - but then we've reached an absurdity, because it is perfectly understandable that randomness could create locally compressible substrings.
>For a large enough amount of data, that can save me enough space to include a program that can print the decompressed string in less space than the original string
It can't. You'll find that, in a truly random sequence, the "compressible substrings" will be infrequent enough that you will use up your data budget just specifying where they go.
Let's take your run-length example. Let's work in bits to make it simple. Your chain of 10 zeros - 10 bits of information - happens on average every 2^10 bits. Let's say we magically compress this sequence down to 0 bits - we just assume that statistically it's in there somewhere, so we don't need to store it. All that's left is to specify where it goes! How many bits do we need for that? Well... the sequence occurs on average every 2^10 bits. We need all 10 of the saved bits just to say where the sequence goes! We haven't saved anything!
The more compressible the substring, the less frequent it is, and the more information is required to specify its location. This is also why we can't compress files by specifying their offset in the digits of pi, incidentally.
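Here is a rough sanity check of that frequency claim (my own toy, assuming an i.i.d. fair-coin source): runs of ten zeros turn up about once per 2^10 = 1024 positions, so pointing at one costs roughly the ten bits the run would have saved.

    # Count how often ten consecutive zeros start at a given position in random bits.
    import numpy as np

    rng = np.random.default_rng(1)
    bits = rng.integers(0, 2, size=1_000_000)

    windows = np.lib.stride_tricks.sliding_window_view(bits, 10)
    hits = np.count_nonzero(windows.sum(axis=1) == 0)

    print(hits, len(bits) / 2**10)  # observed count vs. the ~one-per-1024-positions prediction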
If you are familiar with programming I suggest doing the following experiment:
1. make an image of one solid color
2. make another image of the same size that with each pixel being a random RGB value
3. losslessly compress both images any way you can
4. compare the compressed file sizes to the uncompressed bitmap file size
Make sure to make a hypothesis before the experiment!
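If it helps, here is roughly what that experiment looks like in Python, with Pillow's PNG encoder as the lossless compressor (my choice of tooling; any lossless format will show the same effect).

    # Compare lossless (PNG) sizes of a flat-color image and a random-RGB image.
    import io
    import numpy as np
    from PIL import Image

    shape = (256, 256, 3)
    solid = np.full(shape, 127, dtype=np.uint8)
    noise = np.random.default_rng(0).integers(0, 256, size=shape, dtype=np.uint8)

    def png_size(arr):
        buf = io.BytesIO()
        Image.fromarray(arr).save(buf, format="PNG")  # lossless compression
        return len(buf.getvalue())

    print("uncompressed bitmap:", solid.nbytes)    # 196608 bytes for either image
    print("solid PNG:", png_size(solid))           # a few hundred bytes
    print("noise PNG:", png_size(noise))           # close to (or above) the raw size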
To dig a little deeper on this: all subsequences of a given length are equally probable in a random sequence. The full implications of that require a bit more playing around and reading. I think if you explore it you’ll find that there is an intuition that can be built up.
Also a bitstring might be random/incompressible in reference to a Turing machine, but become compressible in reference to a halting oracle. E.g. "list the Nth Turing incompressible bitstring" is possible with a halting oracle, so for some N and S it is the case that log2(N) < len(S) and thus S is compressible with regard to a halting oracle but not with regard to a Turing machine.
So... you're saying randomness would be subjective / observer-dependent? Something can be random for one observer and deterministic for other, and that in all cases we can imagine an "outer" observer for whom anything can be deterministic?
...dunno why, but this is one of those things that seems so incredibly, intuitively obvious to me, down to the bones of my mind, as in "how could even the thought of it being any other way be possible" :) I'd say it's just that modern physics doesn't want/need to have anything to do with such hypothetical "outer observers", so that's why we accept the convention of "true randomness" and work with it. And it makes sense; otherwise you'd end up with science being polluted with useless metaphysical blabbering.
I am not sure we can discuss the inside or outside of the universe in this context. If the "outside" somehow defines how the "inside" works, then the question just shifts to: is the "outside" deterministic or indeterministic? Is there true randomness in the "outside" or not?
It is very difficult to imagine a root source of true randomness. If there is true randomness in the universe, its source would perhaps be the most important discovery in the history of science.
True randomness and infinity are horrible potential features of the universe - especially if both are true.
To my way of thinking this is a paradox; on the one hand the measurement outcomes of the experiment are conditionalized (in a global sense) on the choices of the experimenter; on the other it's easy to believe that the experimenter's choices exert no local causative effect on measurement outcomes.
Under any practical consideration, free will is nothing more than an emotion; it offers you no capabilities, only a propensity to respond to things in a certain way. Without a useful definition of free will that offers something different to this, you won't have a key to anything.
So you're fine with a key that doesn't open anything? How will you know it is a key at all then?
>We try to understand countless things without have a use case in mind at the time of study.
You're making a pretty clear reference to mathematics & science here, but in those disciplines we study things with well-defined structures. We don't study flighty nonsense because it's not ever going to be useful. You shouldn't invoke this phrase to excuse a lack of precision and clarity.
Which means that we should act as if we have free will, since no other approach is reasonable given our perspective.
Interesting to note that the Bible implies something like this paradigm, since it describes God as having total control of the universe but also says we have free will.
When you first encounter that pair of ideas in the text they seem contradictory, but further reflection eventually leads many to some variation on this idea.
I don't believe it's the full picture of free will and predestination in Christian theology - I just think it's interesting that it fits with this perspective, which would have been quite non-obvious to its authors (at least as regards its relationship to modern physics).
"The overall Kolmogorov complexity of a string is thus defined as K(x)=|p| where p is the shortest program string for language L such that L(p)=x and we consider all programming languages."
This is false. No one considers "all programming languages"; we consider one fixed language (where "fixed" should mean: chosen independently of the input, or put differently: the language is defined before seeing any possible input).
(It can be ANY universal language chosen from among all languages, due to the provably constant overhead - that much is true - but judging by what follows in your link, your definition wasn't meant that way.)