
This is the bottom line:

"It's been demonstrated experimentally that you can't have a local, deterministic, real, definite theory (i.e. that would lead to demonstrably false conclusions). People intuitively expect all of those things to be true, but we can prove that it isn't the case. The various interpretations all try to give up one of "local", real", "definite", or "deterministic" in order to preserve the others, thus preserving at least a modicum of their intuition.

The pilot wave idea gives up on locality"

And I will add:

Multiple worlds gives up on definiteness.

"Zero worlds" (my personal favorite) gives up on realism.

And, of course, Copenhagen gives up on determinism.

Take your pick. Or go with decoherence, which kind of lets you turn a knob to give up a little bit of all four and dial in whatever setting you like.




But if someone consciously chooses an interpretation that preserves locality, shouldn't they change their mind when "spooky action at a distance" starts happening? Non-locality is staring them in the face and they refuse to adopt a formalism that makes it explicit. They seem to like things that appear strange and non-intuitive.


There is no spooky action at a distance. When you measure one member of an EPR pair, nothing actually happens to the other member.

See https://www.youtube.com/watch?v=dEaecUuEqfc or http://www.flownet.com/ron/QM.pdf for a detailed explanation.


I had a look at part of that pdf and I don't think the author really gets it. The essence of most nonlocality experiments is that if you send entangled photons off to two separate detectors with polarising filters in front of them, then the correlation between the two photons getting through varies with the angle between the two polarisers in a way that can only be explained by photon 1 kind of knowing what the angle of photon 2's polariser is, or vice versa. How do they know that when far apart? See Bell's Theorem for more details.
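
For anyone who wants to see the numbers behind that, here is a quick sketch (my own toy calculation, assuming the usual |HH> + |VV> Bell state and the standard CHSH angles; it is not taken from the linked pdf):

    import math

    # Quantum prediction for the polarization correlation of photons from the
    # |HH> + |VV> Bell state, with polarisers at angles a and b (in degrees).
    def E(a, b):
        return math.cos(math.radians(2 * (a - b)))

    a1, a2, b1, b2 = 0, 45, 22.5, 67.5               # standard CHSH angles
    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(S)                                         # ~2.83

Any local hidden-variable model is bounded by |S| <= 2, so that ~2.83 is the "how do they know?" puzzle made quantitative.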


Really it boils down to the fact that (to quote a very insightful comment whose author I can no longer recall) “if I flip a coin and look at it whilst hiding it from your sight, the outcome is perfectly deterministic for me but it is still perfectly random for you” (and, I'll add, the fastest I can “de-randomise” the outcome for you is by telling you what I measured, which happens at most at the speed of light).


No, there really is a difference between a flipped coin and an unobserved particle: the latter can interfere with itself. A coin will not interfere with itself after you flip it even if you don't look at the outcome.
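
A toy illustration of what "interfere with itself" buys you (my own sketch, with made-up phases): the particle's probabilities come from summing complex amplitudes and then squaring, which is not the same as mixing classical probabilities.

    import cmath, math

    # Hypothetical relative phases for three alternative paths (radians)
    phases = [0.0, math.pi / 2, math.pi]
    amplitudes = [cmath.exp(1j * p) / len(phases) for p in phases]

    coherent = abs(sum(amplitudes)) ** 2                 # amplitudes interfere: ~0.111
    incoherent = sum(abs(a) ** 2 for a in amplitudes)    # classical coin-style mixing: ~0.333
    print(coherent, incoherent)

A flipped coin only ever behaves like the second line; an unobserved particle can behave like the first.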


Fair enough: the single-particle-at-a-time double-slit experiment is relevant here, but I didn't have it in mind when I was thinking of remote entangled particles being measured. My bad.


This is just getting at the basic insight that probabilities are most usefully thought of as being subjective. The coin will land on either side depending on the torque applied and air resistance and whatnot. 50% is only a measure of our subjective uncertainty. This isn't about quantum mechanics.


I'm the author, and I assure you I am (and was) aware of Bell's theorem, notwithstanding that I didn't actually mention it in the paper. Bell's theorem in no way invalidates what I said. It is in fact a theorem of its own that measuring one EPR particle cannot affect the outcome of any experiment performed on its partner.

https://en.wikipedia.org/wiki/No-communication_theorem
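
If it helps, here is a toy numerical check of that point (a sketch of my own, assuming the standard (|00> + |11>)/sqrt(2) Bell pair): Bob's reduced density matrix is the same whether or not Alice measures her half, so no experiment on Bob's side can detect her measurement.

    import numpy as np

    # |Phi+> = (|00> + |11>)/sqrt(2); Alice holds the first qubit, Bob the second
    phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    rho = np.outer(phi, phi)

    def bobs_state(r):
        # partial trace over Alice's qubit
        return r.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

    # Alice measures in an arbitrary rotated basis; the outcome is not communicated
    t = 0.7
    basis = [np.array([np.cos(t), np.sin(t)]), np.array([-np.sin(t), np.cos(t)])]
    projectors = [np.kron(np.outer(v, v), np.eye(2)) for v in basis]
    rho_after = sum(P @ rho @ P for P in projectors)

    print(bobs_state(rho))        # [[0.5, 0], [0, 0.5]]
    print(bobs_state(rho_after))  # the same: Bob cannot tell that anything happened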


Your parent is the author ...


I've argued that for years. If "collapsedness" is a property of a particle, then measuring one branch of a stream of entangled particles would modulate the collapsedness of the other branch and could be used for FTL communication. The bottom line is that nobody can tell the difference between a particle whose wave function collapsed and one that didn't. The conclusion from that is that wave function collapse is a fiction. That video makes the same point in a more formal way.


How convenient that this supposed "spooky action at a distance" can't actually send information FTL. Locality is very important for coherent laws of physics. Many worlds is still the most parsimonious explanation, which is why it enjoys the most support from theoretical physicists. It's literally just extrapolating what we already see in small-scale experiments to the macro scale; it's staring you in the face!


Could you touch on why many worlds is more parsimonious than pilot wave? I'm not sure that I understand that reasoning. Is it because a given "universe" is locally simpler? Would it be accurate to describe the comparison as an infinite number of universes (separated via some higher dimension) vs. a single universe that is holistically connected throughout spacetime?


The theory is more parsimonious. Whether it generates untold numbers of parallel universes is immaterial to its own Kolmogorov complexity.

Many worlds basically says "the wave function is real". You just take what the math says at face value, and the math says there's a blob of amplitude where the cat is alive, and another blob of amplitude where the cat is dead, and those blobs do not interact.

Copenhagen adds something on top of the math: a kind of "collapse" where the blob you did not observe gets mysteriously zeroed out. There's only one universe, which looks simpler, but the theory itself is more complex, because you just added that collapse.

Pilot Wave (of which I know nothing) seems to add a similar complexity. There's no collapse, but there's this additional non-local "wave" that's laid out on top of everything, and determines which of the blobs is real (the dead cat blob or the live cat blob).

Think Chess vs Go. Chess has rather complex rules, with an initial position, moves for 6 different pieces, and a couple of special cases. Go's rules, on the other hand, can fit on one page. So, Go is simpler. However, in terms of possibilities, the universe generated by the Go rules is orders of magnitude bigger than Chess's. Simpler rules can lead to more diverse possibilities. Quantum physics interpretations are similar: Many Worlds has the simplest rules, but it also describes the biggest universe.
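
A crude back-of-the-envelope for the Chess vs Go point (my own numbers; the chess figure is just a commonly cited order of magnitude, and the Go figure is only a loose upper bound):

    # Go: each of the 361 points is empty, black, or white -- a loose upper
    # bound on the number of positions. Chess: ~10^44 positions is a commonly
    # cited order-of-magnitude estimate.
    go_upper_bound = 3 ** 361          # ~1.7e172
    chess_estimate = 10 ** 44
    print(go_upper_bound // chess_estimate)   # simpler rules, vastly bigger space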


Can I just say this was very beautifully laid out. Much simpler and more convincing than what I could have done. Thank you.


However, doesn't many worlds also necessarily require one or more dimensions for separating the universes from each other?


Actually, configuration space has an infinite number of dimensions. Current physics (or at least QM) doesn't describe our universe as having 3 spatial dimensions. That's a projection, a classical illusion, which is of course a logical consequence of the underlying physics.

Imagine 2 pearls on a thread. You have 2 ways to represent them: the obvious one is 2 points on a line. A less obvious (but just as valid) one is a single point on a plane: the X axis would represent the first pearl, and the Y axis would represent the second pearl. Similarly, 2 billiard balls on a billiard table can be represented by a single point in a 4-dimensional configuration space. And the entire universe requires many, many more dimensions than that.
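
To make that mapping concrete, a toy sketch of my own:

    # Two billiard balls on a table, each with an (x, y) position...
    ball_1 = (0.3, 0.7)
    ball_2 = (1.2, 0.4)

    # ...described equivalently as one point in a 4-dimensional configuration
    # space. In general, N particles in 3-D space become one point in R^(3N).
    configuration_point = ball_1 + ball_2
    print(configuration_point)   # (0.3, 0.7, 1.2, 0.4)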

There are many more subtleties. I suggest you read the Quantum Physics Sequence for a comprehensible explanation of all this mess. http://lesswrong.com/lw/r5/the_quantum_physics_sequence/


Many worlds is parsimonious in some ways in that all the stuff allowed by the equations happens and that's it (sort of in theory at any rate - not sure it really gives the Born rule). With pilot wave or Copenhagen you have to tack on a wave or observer respectively. On the other hand all the stuff is a lot of stuff.


The complexity of a theory is in how long the theory itself is, not how big the objects it generates are. The Peano axioms are extremely simple, despite generating an infinity of numbers.


Nope, you can get around that with superselection rules (which give you a mathematically rigorous thing that sort of looks like many worlds). Pilot wave and Einselection are the two most promising avenues into making sense of apparent wavefunction collapse, in my opinion.


I agree. I thought that the scientific world had already given up on the idea of locality, since entanglement has been experimentally demonstrated.

Furthermore, I don't have a problem with accepting non-locality from a theoretical perspective either, now that I know about the Holographic Principle. If the universe is a hologram, then the apparatus that is projecting the hologram can be connected in ways that aren't obvious in the hologram.


It's not that kind of a hologram... all the hologram idea says is that all of the information in our universe can be represented on a 2 dimensional surface. It doesn't claim there is literally a hologram projector somewhere projecting a hologram of our universe. Such a thing would be nonsensical. If the projector were outside our universe it would, by definition, be causally disconnected from our universe and thereby never capable of influencing it in any way. If it were capable, it would, again by definition, be part of our holographic universe.


Just because something is outside something else does not mean it is causally disconnected.

The computer program runs inside the computer. The computer is outside the program. But nevertheless the computer's hardware is causally connected to the program. Further still, the programmer is outside the program, but he very much can influence it. Indeed, he brought it into being.


The idea is that to be outside the universe is to be causally disconnected from it, not to be outside of some arbitrary system. A "universe" is not an arbitrary system -- it is the whole of reality.

A computer program is not a universe independent of the programmer. They both exist in the same universe because they are causally connected. If they were in different universes, they would have to be causally disconnected. Your whole argument rests on the assumption that the program and the programmer reside in different universes, which is clearly not the case.


Field theories like QED have the concept of a field, which is something that permeates all of space, which makes it non-local. Electrons all have the same electric charge because they are all different excitations of the same electron field. Does that in some way mean they are all the same particle? They are all connected by the same field and have the same charge.


You can reproduce entanglement in a number of local theories. I always mention Einselection in these sorts of threads, because it's a very clean way to get something that looks like many-worlds but isn't hand-wavy and can reproduce entanglement via local mechanisms. In particular, contradictory measurements may take place, but they are forbidden from interacting via superselection rules (such that <x|A|y> = 0 for any observable A for contradictory states x and y). The math isn't as developed as Pilot Wave theory, but it's at least as aesthetically pleasing (deterministic, local, etc.). I guess it gives up definiteness, but in a less objectionable way than many worlds.
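
To make the <x|A|y> = 0 condition concrete, here is a toy sketch (my own illustration, not from any particular einselection paper): the contradictory outcomes live in orthogonal sectors, and every allowed observable is block-diagonal, so matrix elements between the sectors vanish.

    import numpy as np

    x = np.array([1.0, 0.0, 0.0, 0.0])   # state in sector 1
    y = np.array([0.0, 0.0, 1.0, 0.0])   # state in sector 2 (a contradictory outcome)

    # A block-diagonal observable: it acts within each sector, never across them.
    A = np.block([
        [np.array([[2.0, 1.0], [1.0, 3.0]]), np.zeros((2, 2))],
        [np.zeros((2, 2)), np.array([[5.0, 0.5], [0.5, 4.0]])],
    ])

    print(x @ A @ y)   # 0.0: the two sectors can never interfere or interact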


Are you implying that Many-Worlds is hand-wavy? It's not.


A large fraction of working physicists subscribe to the Many-Worlds Interpretation, which preserves locality even though it has entanglement.


"Spooky action at a distance" does not violate locality in the sense that information does not travel faster than the speed of light.


You should be very suspicious if your theory claims non-locality, but in a way that does not allow any transfer of information.


Copenhagen gives up determinism, definiteness and locality. It's by far the worst of the interpretations, but chugs along due to inertia.


If you repeat the word "collapse" enough you can trick your brain into thinking you've actually explained something :P


> It's by far the worst of the interpretations

True that.


Apart from for experiments. The basic idea is that your experiment does odd quantum stuff until your equipment detects something, whereupon you get a definite result, so it's a simple way of thinking about that. Hence, probably, its origin and ongoing popularity.


> Apart from for experiments.

Who cares? Experiments are a tie: those different interpretations all make the same observable claims.

From there, which is best depends on what you care about most: is the first interpretation we came up with best, because "science"? Is the simplest interpretation best, because "Bayes"? Do stuff like locality, determinism etc. matter?

Me I side with Bayes, and that means Many Worlds/decoherence, which is simplest. Others would make another choice.


> The basic idea is that your experiment does odd quantum stuff until your equipment detects something, whereupon you get a definite result, so it's a simple way of thinking about that.

You're probably thinking of "instrumentalism" aka "shut up and calculate". Copenhagen is not instrumentalism.


The Copenhagen interpretation also seems most in line with how we think of quantum computing: you manipulate the complex wave forms (interference), and at some point you make them collapse (measurement) and get results with the predicted probabilities.


Every interpretation of QM has its own story for quantum computing. However, other interpretations have a challenge: where does quantum speedup come from?

Copenhagen can't actually explain where the quantum speedup comes from.

Other interpretations try to explain it, but are similarly problematic. For instance, in many worlds, Everett tried to argue that pieces of the computation are shuffled off to other worlds and then brought back, but this is problematic because information isn't supposed to be shared with other worlds, and since other worlds also shuffle off an equal amount of computation to those worlds, how is overall speedup achieved exactly?

In pilot wave theories, quantum computing has a simple story: because the world is deterministic, the entire history of time leading up to your computation was predetermined and part of the computation. "Programming" a quantum computer simply creates the conditions where you can read out the answer.

The apparent quantum speedup is actually part of an illusion that we and our equipment are separate from the rest of the universe, but quantum computation is really a classical computation that had lots of time and plenty of resources (all particles in the universe) to run.


I have a question about wave function collapse, especially tied into many worlds. Let's say that a photon in the double slit experiment could travel straight to a detector, go at a 15 degree angle, or go at a 30 degree angle. Those 3 particles exist simultaneously, leading them to interfere with each other. Does the straight particle "split off" from the other universe right when it reaches the detector? Are the two angled photons left in the "original" universe in a partially collapsed wave function? When does the wave function fully collapse, when the longest possible path has been completed? If I point the double slit at The Great Void, is it possible the wave function would never collapse because the particle would never be detected?

Note: I understand that a wave function of just 3 photons or photons that travel in straight lines could not cause an interference pattern.
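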


I'm not a subscriber to the many-worlds interpretation so I'm not the right person to defend it, partly for this very reason: the question of when one universe "splits off" doesn't actually have an answer, for the same reason that the question of "when does a measurement actually happen?" doesn't actually have an answer. Many-worlds and Copenhagen both have this rhetorical problem: they want to draw a sharp line when something "happens" (collapse, universe-split) and there is no such sharp line. The transition from quantum to classical is gradual, not abrupt. It typically happens very fast (picoseconds or femtoseconds to get to the point where the state of the system is no longer distinguishable from a classical state) which is why it appears to be an abrupt transition, but it's not. This is one of the many reasons that I find the QIT/zero-worlds interpretation to be the most satisfactory. It and decoherence are the only ones that don't have this problem. But decoherence is too mathy :-)


IANAP, but I think part of the problem with MWI is that popularizers of it wanted to sound cool by calling it the Many-Worlds Interpretation in the first place.

The idea was originally known as the Theory of the Universal Wave Function, which makes a lot more sense as long as you're unafraid of mathematics.

There is no "splitting off" of universes; I agree with you that that wouldn't make much sense. Instead, the observer simply becomes entangled with the observed physical system during the measurement process. This entanglement is a gradual process, though it certainly happens very quickly.

This basically pushes all the weirdness out of quantum mechanics and into the fact that we just don't understand consciousness very well. Why don't we perceive the full linear combination of quantum states? And can we somehow map our subjective experience of probability and statistics to what happens in the Universal Wave Function? It's much easier to ignore that weirdness, since we're used to ignoring it in our daily lives anyway. It's also a more parsimonious interpretation than the others, because it doesn't postulate anything special about us (the "observers").

(This does not mean that consciousness is a quantum or even meta-physical phenomenon. Quite the opposite, actually: I find that the mystery lies more in why consciousness is unable to perceive quantum states despite existing in a universe that has quantum physics.)


> Why don't we perceive the full linear combination of quantum states?

My personal belief is that it is because consciousness is a classical information-processing phenomenon. In other words, we can only directly perceive things that can be described as real numbers rather than complex numbers because we are Turing machines, and Turing machines are classical.


That's an interesting thought, especially with the addendum of the no-cloning theorem.

This fits with a Hofstadter-like perspective that consciousness is about "strange loops", where we somehow repeatedly evaluate simplified models of the world, including ourselves. Doing such repeated evaluations requires (at least partial) "cloning" of the state of the outside world for the purpose of evaluation, and cloning quantum states is impossible, hence consciousness must be a classical phenomenon.

I like that line of argument.

Note: I wouldn't state it as being unable to perceive things that can be described as complex numbers, but rather complex linear combinations.


Then wouldn't an appropriately constructed sensor be able to record the full linear combination of quantum states?


No, because records are necessarily classical. This is because of the no-cloning theorem. You cannot copy quantum information, only classical information.
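
The textbook argument is short (a standard linearity sketch, not specific to this thread). Suppose a single unitary U could clone an arbitrary qubit:

    U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle
        \quad \text{for every } |\psi\rangle = \alpha|0\rangle + \beta|1\rangle

    % Linearity of U applied to the superposition gives
    U(|\psi\rangle \otimes |0\rangle) = \alpha|00\rangle + \beta|11\rangle

    % whereas an actual clone would have to be
    |\psi\rangle \otimes |\psi\rangle
        = \alpha^2|00\rangle + \alpha\beta|01\rangle + \alpha\beta|10\rangle + \beta^2|11\rangle

    % These agree only when \alpha\beta = 0, i.e. only for the classical basis states.

So you can copy the record of which basis state you got, but not the amplitudes themselves.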


Wait a minute, since when is there any difference between Many Worlds and Decoherence? I always assumed they referred to the same interpretation, do they not?


There are actually three different ideas in play here: decoherence as described by Zurek, the relative-state interpretation as described by Everett, and the popular account of the universe "splitting" when a measurement is made. All three of these are different, at least rhetorically, even though at a high enough conceptual level they all amount to the same thing.


I have never understood what the classic Many Worlds Interpretation actually is. Splitting of universes has the same problem as collapse, and probabilities become rather nebulous since presumably all possibilities do happen, after all.

But there is a version of it which does make sense: https://arxiv.org/abs/0903.2211

The basic question for all theories is, "what is fundamentally real?" and the wave function of quantum mechanics just doesn't work in that role. This is why Bohmian mechanics adds particles; that is what is real in that theory.

But to make a MW theory, one can integrate out the wave function to create a mass density function on 3-space. At any given instant, this is a mess and useless. But if you watch it evolve, then you could see multiple different stories evolving. And those are the different "worlds".

There is no splitting, just regimes that are no longer relevant.


> I have never understood what the classic Many Worlds Interpretation actually is.

There's little to understand: at its core, Many Worlds is real close to "shut up and calculate". The only significant assumption is that the wave function as described by the math is real. And when the math says there are 2 non-interacting blobs, well, this translates to 2 universes.

That's about it.

As for why people seem to reject it instinctively, it's probably because our subjective experience is linear, or mono-threaded. However, linear histories aren't contradictory with trees. Imagine a Git History that never merges: each leaf is the product of a linear set of modifications, a perfectly clean history. From that final commit's perspective, other branches might as well not exist. You'd need merges to break that illusion, and our universe doesn't have macro-merges; we only observe them at a very small scale, for instance with photon interference.


So the wave function, a function say on configuration space, is real. And that's it?

But what does that mean? What is the mapping to my experience? Is it that if the wave function is non-zero at a configuration point, then the configuration is realized as an actual universe? One big problem with this is that wave functions can be non-zero at all configuration points and the dispersive nature of the Laplacian tends to make that happen.

So then it becomes a question of the size of |psi|^2? Is there a cutoff for considering that to be real? Having a probability of something is needed to deal with this, but it is not clear to me what the probability would correspond to here.

In BM, the probability is that of finding particles in some region. In collapse theories, it is the probability of collapsing to a specific state.

As for the Git History, my above concern could be phrased as a history in which all possible streams of text are possible, though maybe some have a larger font size. This is Library of Babel kind of stuff. How is the evolution of the text that I consider myself to be a part of handled? How are nearby configurations connected by the evolution of a wave function?

Also, macro-merges are not theoretically ruled out; they are only practically ruled out.

Let's say that I convinced you that 2 non-interacting blobs did not objectively evolve to occur, but rather a smearing over all. Would this break your interpretation?


> Let's say that I convinced you that 2 non-interacting blobs did not objectively evolve to occur, but rather a smearing over all. Would this break your interpretation?

If I understand correctly, what you speak of should actually be observable —you could make falsifiable predictions about it. That would break more than my interpretation, I think.


I didn't say that well. I mean, non-zero everywhere. There can be bumps with most of the support, but integrating |psi|^2 over any region gives a non-zero result, even if mostly below the order of, say, 10^-100 outside of your two bumps. That is, from a probabilistic point of view it can be discounted without a thought, but from a "this is a real world" point of view it does not seem to me to be so easy to discard.

That is to say, where is the magnitude of the wave function being used in this interpretation?


> Imagine a Git History that never merges

That is a really excellent analogy. It is also important to understand that there is a reason we don't have "macro merges", and that is because some irreversible process is required to create classical information, which is what we (the entities having this conversation) are actually made of. See:

http://blog.rongarret.info/2014/10/parallel-universes-and-ar...


Doesn't that also require at least one additional dimension of spacetime to separate the two universes?


Could have sworn that the evidence towards non-locality has been piling up over the years.



