"It's been demonstrated experimentally that you can't have a local, deterministic, real, definite theory (i.e. that would lead to demonstrably false conclusions). People intuitively expect all of those things to be true, but we can prove that it isn't the case. The various interpretations all try to give up one of "local", real", "definite", or "deterministic" in order to preserve the others, thus preserving at least a modicum of their intuition.
The pilot wave idea gives up on locality"
And I will add:
Multiple worlds gives up on definiteness.
"Zero worlds" (my personal favorite) gives up on realism.
And, of course Copenhagen gives up on determinism.
Take your pick. Or go with decoherence, which kind of lets you turn a knob to give up a little bit of all four and dial in whatever setting you like.
But if someone makes a conscious choice to choose one that preserves locality, shouldn't they change their mind when "spooky action at a distance" starts happening? Non-locality is staring them in the face and they refuse to adopt a formalism that makes it explicit. They seem to like things that appear strange and non-intuitive.
I had a look at part of that pdf and I don't think the author really gets it. The essence of most nonlocality experiments is that if you send entangled photons off to two separate detectors with polarising filters in front of them, then the correlation between the two photons getting through varies with the angle between the two polarisers in a way that can only be explained by photon 1 kind of knowing what the angle of photon 2's polariser is, or vice versa. How do they know that when far apart? See Bell's Theorem for more details.
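To make that concrete (these are standard textbook numbers, not something taken from the pdf): for polarisation-entangled photons the quantum prediction for the coincidence rate at polariser angles a and b is

    P(\text{both pass} \mid a, b) = \tfrac{1}{2}\cos^2(a - b), \qquad E(a, b) = \cos\big(2(a - b)\big)

while any local hidden-variable model must satisfy the CHSH bound

    |E(a,b) - E(a,b') + E(a',b) + E(a',b')| \le 2

Quantum mechanics reaches 2\sqrt{2} at, e.g., a = 0°, a' = 45°, b = 22.5°, b' = 67.5°, so no story in which each photon carries a pre-agreed local answer reproduces the observed correlations.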
Really it boils down to the fact that (to quote a very insightful comment I can no longer recall the author of) “if I flip a coin and look at it whilst hiding it from your sight, the outcome is perfectly deterministic for me but it is still perfectly random for you” (and, I'll add, the fastest I can “de-randomise” the outcome for you is by telling you what I measured, which happens at most at the speed of light).
No, there really is a difference between a flipped coin and an unobserved particle: the latter can interfere with itself. A coin will not interfere with itself after you flip it even if you don't look at the outcome.
Fair enough: single-particle-at-the-time double-slit-experiment is relevant here, but I didn't have it in mind when I was thinking of remote entangled particles being measured. My bad.
This is just getting at the basic insight that probabilities are most usefully thought of as being subjective. The coin will land on either side depending on the torque applied and air resistance and whatnot. 50% is only a measure of our subjective uncertainty. This isn't about quantum mechanics.
I'm the author, and I assure you I am (and was) aware of Bell's theorem, notwithstanding that I didn't actually mention it in the paper. Bell's theorem in no way invalidates what I said. It is in fact a theorem of its own that measuring one EPR particle cannot affect the outcome of any experiment performed on its partner.
I've argued that for years. If "collapsedness" is a property of a particle, then measuring one branch of a stream of entangled particles would modulate the collapsedness of the other branch and could be used for FTL communication. The bottom line is that nobody can tell the difference between a particle whose wave function collapsed and one that didn't. The conclusion from that is that wave function collapse is a fiction. That video makes the same point in a more formal way.
How convenient that this supposed "spooky action at a distance" can't actually send information ftl. Locality is very important for coherent laws of physics. Many worlds is still the most parsimonious explanation which is why it enjoys the most support from theoretical physicists. It's literally just extrapolating what we already see in small scale experiments to the macro scale, it's staring you in the face!
Could you touch on why many worlds is more parsimonious than pilot wave? I'm not sure that I understand that reasoning. Is it because a given "universe" is locally simpler? Would it be accurate to describe the comparison as an infinite number of universes (separated via some higher dimension) vs. a single universe that is holistically connected throughout spacetime?
The theory is more parsimonious. Whether it generates untold amounts of parallel universes is immaterial to its own Kolmogorov complexity.
Many worlds basically says "the wave function is real". You just take what the math says at face value, and the math says there's a blob of amplitude where the cat is alive, and another blob of amplitude where the cat is dead, and those blobs do not interact.
Copenhagen adds something on top of the math: a kind of "collapse" where the blob you did not observe gets mysteriously zeroed out. There's only one universe, which looks simpler, but the theory itself is more complex, because you just added that collapse.
Pilot Wave (of which I know nothing) seems to add a similar complexity. There's no collapse, but there's this additional non-local "wave" that's laid out on top of everything, and determines which of the blobs is real (the dead cat blob or the live cat blob).
Think Chess vs Go. Chess has rather complex rules, with an initial position, moves for 6 different pieces, and a couple of special cases. Go's rules, on the other hand, can fit on one page. So, Go is simpler. However, in terms of possibilities, the universe generated by the Go rules is orders of magnitude bigger than Chess's. Simpler rules can lead to more diverse possibilities. Quantum physics interpretations are similar: Many Worlds has the simplest rules, but it also describes the biggest universe.
Actually, configuration space has an infinite number of dimensions. Current physics (or at least QM) doesn't describe our universe as having 3 spatial dimensions. That's a projection, the classical illusion (which is a logical consequence of the underlying physics, of course).
Imagine 2 pearls on a thread. You have 2 ways to represent them: the obvious one is 2 points on a line. A less obvious (but just as valid) way is a single point on a plane: the X axis would represent the first pearl, and the Y axis would represent the second pearl. Similarly, 2 billiard balls on a billiard table can be represented by a single point in a 4-dimensional configuration space. And the entire universe requires many, many more dimensions than that.
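A trivial NumPy sketch of that bookkeeping, just to make the mapping explicit (the variable names are mine):

    import numpy as np

    # Two billiard balls, each with an (x, y) position on the table.
    ball_1 = np.array([0.3, 1.2])
    ball_2 = np.array([2.5, 0.7])

    # "Physical space" view: two points in R^2.
    physical_view = [ball_1, ball_2]

    # "Configuration space" view: one point in R^4.
    config_point = np.concatenate([ball_1, ball_2])   # array([0.3, 1.2, 2.5, 0.7])

    # For N particles in 3D the configuration space has 3N dimensions;
    # a wave function assigns a complex amplitude to every such point.
    N = 5
    single_config = np.zeros(3 * N)   # one point in R^(3N)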
Many worlds is parsimonious in some ways in that all the stuff allowed by the equations happens and that's it (sort of in theory at any rate - not sure it really gives the Born rule). With pilot wave or Copenhagen you have to tack on a wave or observer respectively. On the other hand all the stuff is a lot of stuff.
The complexity of a theory is in how long the theory itself is, not how big the objects it generates are. The Peano axioms are extremely simple, despite generating an infinity of numbers.
Nope, you can get around that with superselection rules (which give you a mathematically rigorous thing that sort of looks like many worlds). Pilot wave and Einselection are the two most promising avenues into making sense of apparent wavefunction collapse, in my opinion.
I agree. I thought that the scientific world already gave up on the idea of locality, since entanglement has been experimentally demonstrated.
Furthermore, I don't have a problem with accepting non-locality from a theoretical perspective either, now that I know about the Holographic Principle. If the universe is a hologram, then the apparatus that is projecting the hologram can be connected in ways that aren't obvious in the hologram.
It's not that kind of a hologram... all the hologram idea says is that all of the information in our universe can be represented on a 2 dimensional surface. It doesn't claim there is literally a hologram projector somewhere projecting a hologram of our universe. Such a thing would be nonsensical. If the projector were outside our universe it would, by definition, be causally disconnected from our universe and thereby never capable of influencing it in any way. If it were capable, it would, again by definition, be part of our holographic universe.
Just because something is outside something else does not mean it is causally disconnected.
The computer program runs inside the computer. The computer is outside the program. But nevertheless the computer's hardware is causally connected to the program. Further still, the programmer is outside the program, but he very much can influence it. Indeed, he brought it into being.
The idea is that to be outside the universe is to be causally disconnected from it, not to be outside of some arbitrary system. A "universe" is not an arbitrary system -- it is the whole of reality.
A computer program is not a universe independent of the programmer. They both exist in the same universe because they are causally connected. If they were in different universes, they must be causally disconnected. Your whole argument rests on the assumption that the program and the programmer reside in different universes, which is clearly not the case.
Field theories like QED have the concept of a field, which is something that permeates all of space, which makes it non-local. Electrons all have the same electric charge because they are all different excitations of the same electric field. Does that in some way mean they are all the same particle? They are all connected by the same electric field and have the same charge.
You can reproduce entanglement in a number of local theories. I always mention Einselection in these sorts of threads, because it's a very clean way to get something that looks like many-worlds but isn't hand-wavy and can reproduce entanglement via local mechanisms. In particular, contradictory measurements may take place, but they are forbidden from interacting via superselection rules (such that <x|A|y> = 0 for any observable A for contradictory states x and y). The math isn't as developed as Pilot Wave theory, but it's at least as aesthetically pleasing (deterministic, local, etc.). I guess it gives up definiteness, but in a less objectionable way than many worlds.
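To make the <x|A|y> = 0 condition concrete, here's a toy numerical check (my own construction; the sector sizes are arbitrary): observables are required to be block-diagonal across the superselection sectors, so matrix elements between "contradictory" states vanish automatically.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_hermitian(n):
        """Random Hermitian block: an 'allowed' observable acting within one sector."""
        m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        return (m + m.conj().T) / 2

    # Two superselection sectors of dimensions 2 and 3.
    d1, d2 = 2, 3
    A = np.zeros((d1 + d2, d1 + d2), dtype=complex)
    A[:d1, :d1] = random_hermitian(d1)   # acts within sector 1
    A[d1:, d1:] = random_hermitian(d2)   # acts within sector 2
    # No blocks connecting the sectors: that's the superselection rule.

    # |x> lives in sector 1, |y> lives in sector 2 ("contradictory" outcomes).
    x = np.zeros(d1 + d2, dtype=complex); x[0] = 1.0
    y = np.zeros(d1 + d2, dtype=complex); y[d1] = 1.0

    print(np.vdot(x, A @ y))   # <x|A|y> = 0 for every block-diagonal observable A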
Apart from for experiments. The basic idea is your experiment does odd quantum stuff until your equipment detects something whereupon you get a definite result so it's a simple way of thinking about that. Hence probably its origin and ongoing popularity.
Who cares? Experiments are a tie: those different interpretations all make the same observable claims.
From there, which is best depends on what you care about most: is the first interpretation we came up with best, because "science"? Is the simplest interpretation best, because "Bayes"? Do stuff like locality, determinism etc. matter?
Me I side with Bayes, and that means Many Worlds/decoherence, which is simplest. Others would make another choice.
> The basic idea is your experiment does odd quantum stuff until your equipment detects something whereupon you get a definite result so it's a simple way of thinking about that.
You're probably thinking of "instrumentalism" aka "shut up and calculate". Copenhagen is not instrumentalism.
The Copenhagen interpretation also seems most in line with how we think of quantum computing, doesn't it? You manipulate the complex wave forms (interference), and at some point you make it collapse (the experiment) and get results with the predicted probabilities.
Every interpretation of QM has its own story for quantum computing. However, other interpretations have a challenge: where does quantum speedup come from?
Copenhagen can't actually explain where the quantum speedup comes from.
Other interpretations try to explain it, but are similarly problematic. For instance, in many worlds, Everett tried to argue that pieces of the computation are shuffled off to other worlds and then brought back, but this is problematic because information isn't supposed to be shared with other worlds, and since other worlds also shuffle off an equal amount of computation to those worlds, how is overall speedup achieved exactly?
In pilot wave theories, quantum computing has a simple story: because the world is deterministic, the entire history of time leading up to your computation was predetermined and part of the computation. "Programming" a quantum computer simply creates the conditions where you can read out the answer.
The apparent quantum speedup is actually part of an illusion that we and our equipment are separate from the rest of the universe, but quantum computation is really a classical computation that had lots of time and plenty of resources (all particles in the universe) to run.
I have a question about wave function collapse, especially tied into many worlds. Let's say that a photon in the double slit experiment could travel straight to a detector, go at a 15 degree angle, or go at a 30 degree angle. Those 3 particles exist simultaneously, leading them to interfere with each other. Does the straight particle "split off" from the other universe right when it reaches the detector? Are the two angled photons left in the "original" universe in a partially collapsed wave function? When does the wave function fully collapse, when the longest possible path has been completed? If I point the double slit at The Great Void, is it possible the wave function would never collapse because the particle would never be detected?
Note: I understand that a wave function of just 3 photons or photons that travel in straight lines could not cause an interference pattern.
I'm not a subscriber to the many-worlds interpretation so I'm not the right person to defend it, partly for this very reason: the question of when one universe "splits off" doesn't actually have an answer, for the same reason that the question of "when does a measurement actually happen?" doesn't have an answer. Many-worlds and Copenhagen both have this rhetorical problem: they want to draw a sharp line when something "happens" (collapse, universe-split) and there is no such sharp line. The transition from quantum to classical is gradual, not abrupt. It typically happens very fast (picoseconds or femtoseconds to get to the point where the state of the system is no longer distinguishable from a classical state), which is why it appears to be an abrupt transition, but it's not. This is one of the many reasons that I find the QIT/zero-worlds interpretation to be the most satisfactory. It and decoherence are the only ones that don't have this problem. But decoherence is too mathy :-)
IANAP, but I think part of the problem with MWI is that popularizers of it wanted to sound cool by calling it the Many-Worlds Interpretation in the first place.
The idea was originally known as the Theory of the Universal Wave Function, which makes a lot more sense as long as you're unafraid of mathematics.
There is no "splitting off" of universes; I agree with you that that wouldn't make much sense. Instead, the observer simply becomes entangled with the observed physical system during the measurement process. This entanglement is a gradual process, though it certainly happens very quickly.
This basically pushes all the weirdness out of quantum mechanics and into the fact that we just don't understand consciousness very well. Why don't we perceive the full linear combination of quantum states? And can we somehow map our subjective experience of probability and statistics to what happens in the Universal Wave Function? It's much easier to ignore that weirdness, since we're used to ignoring it in our daily lives anyway. It's also a more parsimonious interpretation than the others, because it doesn't postulate anything special about us (the "observers").
(This does not mean that consciousness is a quantum or even meta-physical phenomenon. Quite the opposite, actually: I find that the mystery lies more in why consciousness is unable to perceive quantum states despite existing in a universe that has quantum physics.)
> Why don't we perceive the full linear combination of quantum states?
My personal belief is that it is because consciousness is a classical information-processing phenomenon. In other words, we can only directly perceive things that can be described as real numbers rather than complex numbers because we are Turing machines, and Turing machines are classical.
That's an interesting thought, especially with the addendum of the no-cloning theorem.
This fits with a Hofstadter-like perspective that consciousness is about "strange loops", where we somehow repeatedly evaluate simplified models of the world including yourself. Doing such repeated evaluations requires (at least partial) "cloning" of the state of the world outside for the purpose of evaluation, and cloning quantum states is impossible, hence consciousness must be a classical phenomenon.
I like that line of argument.
Note: I wouldn't state it as being unable to perceive things that can be described as complex numbers, but rather complex linear combinations.
No, because records are necessarily classical. This is because of the no-cloning theorem. You cannot copy quantum information, only classical information.
Wait a minute, since when is there any difference between Many Worlds and Decoherence? I always assumed they referred to the same interpretation, do they not?
There are actually three different ideas in play here: decoherence as described by Zurek, the relative-state interpretation as described by Everett, and the popular account of the universe "splitting" when a measurement is made. All three of these are different, at least rhetorically, even though at a high enough conceptual level they all amount to the same thing.
I have never understood what the classic Many World Interpretation actually is. Splitting of universes has the same problem as collapse and probabilities become rather nebulous since presumably all possibilities do happen, after all.
The basic question for all theories is, "what is fundamentally real?" and the wave function of quantum mechanics just doesn't work in that role. This is why Bohmian mechanics adds particles; that is what is real in that theory.
But to make a MW theory, one can integrate out the wave function to create a mass density function on 3-space. In any given instance, this is a mess and useless. But if you watch it evolve, then you could see multiple different stories evolving. And those are the different "worlds".
There is no splitting, just regimes that are no longer relevant.
> I have never understood what the classic Many World Interpretation actually is.
There's little to understand: at its core, Many Worlds is real close to "shut up and calculate". The only significant assumption is that the wave function as described by the math is real. And when the math says there are 2 non-interacting blobs, well, this translates to 2 universes.
That's about it.
As for why people seem to reject it instinctively, it's probably because our subjective experience is linear, or mono-threaded. However, linear histories aren't incompatible with trees. Imagine a Git history that never merges: each leaf is the product of a linear set of modifications, a perfectly clean history. From that final commit's perspective, other branches might as well not exist. You'd need merges to break that illusion, and our universe doesn't have macro-merges; we only observe them at a very small scale, for instance with photon interference.
So the wave function, a function say on configuration space, is real. And that's it?
But what does that mean? What is the mapping to my experience? Is it that if the wave function is non-zero at a configuration point, then the configuration is realized as an actual universe? One big problem with this is that wave functions can be non-zero at all configuration points and the dispersive nature of the Laplacian tends to make that happen.
So then it becomes a question of the size of |psi|^2? Is there a cutoff for considering that to be real? Having a probability of something is needed to deal with this, but it is not clear to me what the probability would correspond to here.
In BM, the probability is that of finding particles in some region. In collapse theories, it is the probability of collapsing to a specific state.
As for the Git History, my above concern could be phrased as a history in which all possible streams of texts are possible though maybe some have a large font-size. This is library of babel kind of stuff. How is the evolution of the text that I consider myself to be a part of handled? How are nearby configurations connected by the evolution of a wave function?
Also, macro-merges are not theoretically ruled out; they are only practically ruled out.
Let's say that I convinced you that 2 non-interacting blobs did not objectively evolve to occur, but rather a smearing over all. Would this break your interpretation?
> Let's say that I convinced you that 2 non-interacting blobs did not objectively evolve to occur, but rather a smearing over all. Would this break your interpretation?
If I understand correctly, what you speak of should actually be observable —you could make falsifiable predictions about it. That would break more than my interpretation, I think.
I didn't say that well. I mean, non-zero everywhere. There can be bumps with most of the support, but integrating |psi|^2 over any region gives a non-zero result, if mostly below the order of, say 10^-100 outside of your two bumps. That is, from a probabilistic point of view, it can be discounted without a thought, but from a "this is a real world" it does not seem to me to be so easy to discard.
That is to say, where is the magnitude of the wave function being used in this interpretation?
That is a really excellent analogy. It is also important to understand that there is a reason we don't have "macro merges", and that is because some irreversible process is required to create classical information, which is what we (the entities having this conversation) are actually made of. See:
Would this be kind of the physics equivalent of nonstandard analysis in mathematics? Nonstandard analysis (NSA) is an approach to analysis that essentially makes rigorous the vague notions of infinitesimals that Newton and Leibniz and the other pioneers of calculus based their work on.
Newton's fluxions and Leibniz's differentials provided great guidance, and in a lot of cases they even worked and gave correct results, but they were not really rigorously defined, and later mathematicians started running into serious problems with that approach. Those were resolved with the approach that we all now know and love (loathe?): limits and the epsilon/delta approach to them.
NSA extends the real field to contain a new kind of number that is positive but less than 1/N for any positive integer N, giving a new field called the hyperreals. You can then do things like define the derivative of f at x to be (f(x+h)-f(x))/h where h is an infinitesimal, work that out using ordinary algebraic manipulations, and then just drop any terms that are infinitesimal.
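For example, with f(x) = x^2 and h a nonzero infinitesimal (st(...) being the standard-part operation that discards the infinitesimal remainder):

    \frac{f(x+h) - f(x)}{h} = \frac{(x+h)^2 - x^2}{h} = \frac{2xh + h^2}{h} = 2x + h,
    \qquad f'(x) = \operatorname{st}(2x + h) = 2x.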
NSA goes far beyond just calculus. You can use these ideas all through analysis.
Seems pretty cool, and a lot clearer than a limits approach. So why has it not taken over?
I believe that a big part of it is that it has been shown that NSA and standard analysis are equivalent. Anything that you can prove in one can be proven in the other. Perhaps more important, one doesn't seem any better than the other at leading to deep insights. If you can't solve a problem in standard analysis, you probably aren't going to figure out the solution if you use NSA, and vice versa.
So while it might be overall nicer if we were starting from scratch to use NSA instead of standard analysis everywhere, so much is already done in standard analysis that switching now would be too painful.
Interesting analogy but I'm not sure if it holds. Infinitesimals are an alternative to the limit-based formalism typically seen. It changes the way you do calculus.
However Bohmian Mechanics is a different interpretation, not a different formalism. You can accept the interpretation, but you still end up solving the Schrödinger equation in the same way, using the same mathematical methods.
The analogy works better for matrix mechanics as compared to Schrödinger's wave mechanics. These are mathematically quite different even though they give the same results.
You can use the same formalism for most things, but Bohmian mechanics is a different formalism in some cases. For instance, you can derive the Born rule for systems in equilibrium, but that leaves open the possibility of non-equilibrium regimes, which simply don't exist in orthodox QM.
An introduction for the interested [1]. It would be neat if someone could come up with an experimental test. The very existence of non-equilibrium would immediately falsify Copenhagen.
The Heisenberg picture and the Schrodinger picture are the same; sometimes using one basis is more useful and more intuitive than using the other.
Pilot Waves, on the other hand, are pretty much always superfluous. Please link me, if you have seen a situation where using Pilot Waves simplifies a calculation.
I think you mean - you can always reparameterize things to have a 'pilot wave' around. That doesn't mean they're 'there', really.
I would think that the 'default' interpretation would be excluding them, since (clearly) they're not necessary (evidence being: interpretations without them exist). That would be the definition of superfluous, right? Unless they give different experimental results somewhere, which I'm pretty sure isn't the case.
> I think you mean - you can always reparameterize things to have a 'pilot wave' around. That doesn't mean they're 'there', really.
The wave equation is always there, or you wouldn't get quantum behaviour. There exist no-go theorems demonstrating that the wave function can't just be a reflection of our ignorance, it must be "ontic", ie. exist, in some real way. What's disputed is whether the particles need to be given separate existence from the waves that are necessarily there.
> I would think that the 'default' interpretation would be excluding them, since (clearly) they're not necessary (evidence being: interpretations without them exist). That would be the definition of superfluous, right?
No, because the interpretations you speak of can't actually explain the measurement problem without positing additional axioms which then make them less plausible. It's well known that pilot wave theories are more axiomatically parsimonious, meaning that they require fewer assumptions overall to explain all of our observations.
For example, the Born rule must be assumed by most interpretations of QM, with little rhyme or reason other than we know it's empirically valid. But because pilot wave theories posit real physical entities with well understood properties, we can actually derive the Born rule. This is just one example that demonstrates how pilot wave theories positing additional real entities can make for an overall simpler set of assumptions.
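To spell out what is postulated versus derived (these are standard statements, nothing exotic): orthodox QM takes the Born rule as an axiom,

    P(x) = |\psi(x)|^2,

while in pilot wave theory one shows that the guidance equation is equivariant: if the particle positions are distributed as |psi|^2 at one time ("quantum equilibrium"), the dynamics keeps them so distributed,

    \rho(x, t_0) = |\psi(x, t_0)|^2 \;\Rightarrow\; \rho(x, t) = |\psi(x, t)|^2 \ \text{for all } t,

so Born-rule statistics come out of the dynamics rather than being added by hand.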
Pilot Wave posits a bunch of extra crap and derives the Born rule. Regular / MWI QM posits the Born rule and skips the extra crap. They're both positing something - but the Born rule is a far simpler claim (just in, like, mathematical complexity, to me).
Also: aren't we to the point where measurement makes perfect sense (besides the values of the probabilities, as given by the Born rule), via entanglement between experiment+lab frames, and decoherence of unrelated Degrees of Freedom? Cause I thought we were. (see, for example, http://www.preposterousuniverse.com/blog/2014/06/30/why-the-...)
Also, pilot wave fails badly in relativistic extensions, for the obvious reason that a universe-wide pilot wave function is hard to make covariant. There are attempts at fixing this but last I heard none of them are doing a good job. So that's another strike against it, in my book.
> Pilot Wave posits a bunch of extra crap and derives the Born rule. Regular / MWI QM posits the born rule and skips the extra crap. They're both positing something - but the Born rule is a far simpler claim
Except it's not, because you also have to posit the measurement postulates in orthodox QM. This so-called "extra crap" reproduces the measurement postulates and the Born rule, thus replacing a large set of assumptions with a much smaller set.
Many-Worlds is indeed much simpler than orthodox QM, but it's still not simpler than pilot waves. They are roughly comparable, with many-worlds still having unresolved conceptual difficulties surrounding probabilities, among other issues [1]. Which is more parsimonious between many-worlds and pilot waves is hotly debated among philosophers of science.
> Also: aren't we to the point where measurement makes perfect sense (besides the values of the probabilities, as given by the Born rule), via entanglement between experiment+lab frames, and decoherence of unrelated Degrees of Freedom?
"Measurement now makes perfect sense" is an interpretation-specific claim. Measurement still doesn't make sense in Copenhagen, measurement mostly makes sense in Many-Worlds, modulo some of the difficulties I mentioned earlier [1].
> Also, pilot wave fails badly in relativistic extensions, for the obvious reason that a universe-wide pilot wave function is hard to make covariant.
It's actually pretty trivial if you're willing to accept a preferred foliation of space-time, as long as the preferred frame is unobservable. This seems aesthetically unappealing, which is why people perpetuate this myth of "difficulty", but it's not a priori wrong.
Fortunately, a preferred foliation can actually be derived from the wave function itself, which means this foliation exists in every interpretation of QM [2].
This is the kind of surprising result that probably no one would have even bothered looking for, and I think it proves John Bell's position that non-locality is the unresolved problem of QM [3]. Other interpretations just let you paper over it, to our detriment IMO.
The most amazing aspect of Non-Standard Analysis is that the "tiny delta-x becomes h and you can then just take the limit by stripping out all the h symbols" approach was how I was first taught "Calculus" in International Baccalaureate (IB) Maths Higher in 1997-1999... an extraordinarily rickety foundation to build future intuition upon.
Not to start a big discussion about infinitesimals, but don't they generalize a lot worse than limits?
For example, what are their analogies in non-metric spaces? Say in topology or category theory?
I quite agree with millstone's comment below: interesting analogy, but not sure it is quite right here. And an off-topic comment: I do agree with the sentiment that some things may be more easily expressed in nonstandard analysis. For an interesting experiment in that direction, see https://web.math.princeton.edu/~nelson/books/rept.pdf .
A big difference is that standard analysis is well-defined on its own. The standard Copenhagen interpretation is simply not a well-defined theory because measurement is not well-defined, e.g., the cat problem.
A more appropriate analogy to the SA v NSA would be GRW formulation vs Bohmian or even a well-defined many worlds (that does exist, but it does not involve splitting of worlds which is as problematic as collapse). These are different theories and they can lead to different generalizations.
In particular, GRW seems more amenable to being relativistic without adding a foliation or some other structure.
Bohmian mechanics, on the other hand, does require using a foliation, though there are possibilities for working within the existing structures:
https://arxiv.org/abs/1307.1714
I also think NSA has a naming problem. By its very name, how can it ever be standard?
I think the issue is also that the limit fits in with numerical analysis in that it talks about errors. NSA sounds like it skips the error analysis and is only concerned with stuff at the limit which is certainly useful in many instances, but not always.
> I also think NSA has a naming problem. By its very name, how can it ever be standard?
"Nonstandard" isn't referring to how people view the field, but to additional elements you add to the model. Just to throw out a weird example, here's Edward Nelson's text on the subject:
"Theorem 4. There is a finite set that contains every standard object."
That is, in Nelson's system, the set of standard natural numbers is finite. That finite number is a nonstandard natural number, larger than any standard natural number.
Mixing "nonstandard"(in the sense of an undefined predicate added to the base logic) with "nonstandard"(in the sense of some professors thinking it's weird) is a type error.
If you're interested in this sort of thing as a lay person (as I am), I would recommend the PBS Space Time YouTube channel. Be sure to stick around for the end of each video, as he often calls out comments on earlier videos that contain corrections or clarifications (in many cases, from _very_ highly qualified physicists and other subject matter experts).
Thanks for the video. He mentioned that the pilot wave is made up of "some stuff", but never got back to that being one of the unknowns of pilot wave theory. Seems like that question will need to be answered before it is taken more seriously.
Pilot waves aren't made of anything, they are a description of configuration space. Think of it like this: to describe particle motion, take the ordinary/classical equations of motion, and now add another term to account for quantum behaviour. The force exerted by this term at time t is a function of the position of every other particle in the universe at time t, no matter the distance.
That's pilot wave theory, where this term is sometimes called the "quantum potential" and is governed by Schroedinger's equation. As you can see, this extra term goes to zero when quantum effects become negligible and we recover classical mechanics straight away. The difficulty is the obvious nonlocality and what that means for special relativity.
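For reference, the standard form of that extra term, writing the wave function in polar form as \psi = R e^{iS/\hbar}: the particles follow the guidance equation

    \frac{dQ_k}{dt} = \frac{\hbar}{m_k} \operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!(Q_1, \dots, Q_N, t) = \frac{\nabla_k S}{m_k},

and the "quantum potential" added to the classical equations of motion is

    U_Q = -\sum_k \frac{\hbar^2}{2 m_k} \frac{\nabla_k^2 R}{R}.

The velocity of particle k depends on the instantaneous positions of all the other particles, which is exactly the nonlocality mentioned above; when U_Q is negligible you recover ordinary classical trajectories.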
Sort of. There is no real entanglement in this formalism, not in the way it's typically portrayed. I suppose you could say entanglement is there all the time. Macroscopic observation of entanglement via non-local correlations is really just a context in which we make the non-local connection observable.
The impression I have of pilot wave theories is that when you look closely at them, the pilot waves are doing basically all the work with the particles just along for the ride. At that point it's tempting to delete the particles and see what happens, and then when you give the result a fresh paint job it turns out to be isomorphic to the many worlds interpretation (or the Copenhagen interpretation if you take the computational step of not following branches that can no longer be significant to currently observed results). Am I missing something?
It's important to remember that the math of quantum mechanics, the math people use to model and predict phenomena, the math that's given us transistors, lasers, quantum computing, etc...
That's all very accurate and extremely successful, so anything that tries to explain why these models are the way they are is going to be pretty much the same as any other explanation if you tilt your head to the side and squint, because they all have to produce the same very well specified set of mathematical models.
What is truly irksome about this 'reformulation' is that the supposed Pilot Wave is presumably 'pushing' on the billiard-ball-like elementary point-particles, but they are not able to "push back" on the Pilot Wave. The Pilot Waves of two particles are, however, assumed to interact between themselves in a manner that resembles the elementary interaction of the particles... and this is quite an extraordinary scenario, if you think about it. It's quite aether-like, and I mean that in a mildly derogatory manner.
I have a fond memory of taking an advanced (graduate) seminar class taught by one of the oldest professors at my university, and attended by another. The lecturer was describing something to do with manifolds and various obscure formulations of classical field theories, and he mentioned that he was at a lecture once where someone had related this theory to Bohmian mechanics. The other professor cut him off:
"It's wrong"
A student held up his hand and asked what Bohmian mechanics was. Before the lecturer could answer: "It's wrong."
The biggest problem with pilot wave theory is that extending it to field theory hasn't been done (or at least there are several published formalisms with no approach that everyone agrees upon). Right now it's useless for practical physicists at the LHC to compute anything about collisions. It's mathematically equivalent to the other mathematical formulations of QM, but the only utility it offers so far is entirely philosophical. And at some point physicists can't spend their entire lives noodling on the underpinnings of QM; they need to move on and really try to compute the g-factor of the electron, or analyze the collision data coming out of the LHC. There needs to be a Lorentz invariant field theory model of pilot waves that can answer some question more easily than the other mathematical formalisms for it to catch on.
I would have thought the fact that pilot wave theory uses the (multi-particle) Schroedinger equation, while most modern physics is based on field theory, would be more important. So far I've seen no formulation of pilot wave theory that incorporates fields, or provides some other mechanism to account for things like the annihilation of particles.
> a continuous wave function is equivalent to a countably infinite vector space
I think you mean that the function is a point in the vector space. More precisely, it is a point in a particular separable Hilbert space, i.e., a complete vector space equipped with an inner product that has a countable, dense basis.
There are various versions, but the one I recommend to date loses determinism (that was never the point) and allows for annihilation and creation of particles:
https://arxiv.org/abs/quant-ph/0208072
In addition, the ideas of Bohmian mechanics have led to a new direction in dealing with the divergences in QFT, that is, in finding a version of it that is mathematically well-defined:
https://arxiv.org/abs/1506.00497
There are plenty of field theory extensions for Bohmian mechanics. Due to the relatively few people working on it, no single approach has become dominant.
I think the most problematic issue is that physicists reject the wave function. I have never understood this, since if you reject the wave function in standard quantum mechanics, you have nothing left.
But the pilot wave is just the wave function of quantum mechanics, albeit on a universal scale. It is in all the theories. And just because one has a "collapse", which is always approximate anyway (no delta functions for a position measurement, for example), one still has a wave function as the only object in standard QM, and that is an expanding object defined on a space of dimension 3 times the number of particles in the universe. But we do not experience that. We experience 3-dimensional space with rather point-like objects moving about.
Pilot wave theory starts with "We have particles moving about. How do they move?" That's the question it starts with and it answers it in the simplest way possible given the wave function: the wave function tells the particles how to move using Bohm's equation which is a very easy thing to derive. Indeed, you can derive it even more quickly than Schrodinger's equation from the same basic facts of Einstein's light quanta hypothesis and de Broglie's hypothesis.
It was a choice to inject mysticism into quantum mechanics. If you stick with trying to describe the evolution of particles, Bohm is natural. If you want to describe something else whose evolving configuration would give us our experience, then that is fine, but you have to say what that is. The standard interpretation does not as it is solely concerned with experiments and measurements with no definition of them in a fundamental way. It is more like they were trying to do regression fitting on experimental data without any understanding of what the underlying stuff was nor any interest in such a question.
The other difficulty is spin. Spin is trivial if you put the spin degrees of freedom in the value space of the wave function. But many think of spin as real as position. It is not. It matters what the experiment is; the spin value for a particle is not defined independent of the experiment.
Bohmian mechanics gives a theory while standard quantum mechanics gives a computational formalism. Depending on your goals, the latter can be sufficient. The former tells us why stuff happens and also allows us to derive that computational formalism.
Here is my theory. Someone smarter than me please prove me wrong.
A particle's 'Zitterbewegung' motion generates wake fields in the vacuum energy, causing the particle to generate its own pilot wave as energy fluctuations in the vacuum.
For me, what is most important on this subject is that, even if we can't differentiate one interpretation from another experimentally (yet), it doesn't mean they are all equally correct. Specifically in connection with general relativity, which is an open issue, the exploration of alternative interpretations of QM may lead to actual experiments differentiating one interpretation from another, effectively turning it into a theory.
There's one major shortcoming that the answer doesn't seem to touch on: pilot wave theory doesn't explain relativity whatsoever, while quantum field theory does. We've experimentally confirmed relativity, so at best, pilot wave theory is incomplete.
You're confusing quantum mechanics with quantum field theory. Pilot wave theory is just QM, although some field theory extensions have been discussed. There's no particular reason such an extension would be impossible.
I think the best answer would be, "If true it doesn't add anything new." At least given our current understanding. If some new discovery showed some measurable difference between the two interpretations then we'd have to actually decide.
Since the pilot wave theory is deterministic, wouldn't someone be able to simulate a quantum computer with a classical one in reasonable time and storage?
It is deterministic, but you still need to keep track of a field that covers your whole configuration space (basically you need all the information contained in a function R^n->C for n degrees of freedom). Keeping track of this high-dimensional field is what makes a quantum computer hard to simulate in the first place, not the non-determinism. So no, it doesn't make anything easier to simulate.
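A back-of-the-envelope way to see the blow-up (plain NumPy, numbers purely illustrative):

    import numpy as np

    def state_vector_size(n_qubits):
        """A pure n-qubit state needs 2**n complex amplitudes; that array is the
        'pilot wave' a Bohmian simulation would have to track as well."""
        amplitudes = 2 ** n_qubits
        bytes_needed = amplitudes * np.dtype(np.complex128).itemsize
        return amplitudes, bytes_needed

    for n in (10, 30, 50):
        amps, size = state_vector_size(n)
        print(f"{n} qubits: {amps:.3e} amplitudes, ~{size / 1e9:.3g} GB")

Around 30 qubits you already need tens of gigabytes just to store the state, determinism or not.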
This makes a lot of sense if you think of it that way: Pilot wave theory gives exactly the same results as any other decent interpretation of quantum mechanics. So the work necessary to simulate it will also be exactly the same. Simulating quantum mechanics is not hard because we don't understand it. It is hard because it is an inherently hard problem (otherwise, how could quantum computers be faster than classical computers?).
However, there are techniques that allow one to compute the wave function from the Bohmian trajectories. It is just a mathematical trick that certainly does not make BM philosophically more relevant, but it is rather neat. This is explored in quantum chemistry contexts. Spin is an issue that stops it from being generally useful, but depending on the context, it is possible.
Supposing "deterministic" means "classical", and so only classical computers are possible: since it's deterministic, the entire timeline leading up to turning on a quantum computer is also part of the computation, a prep phase you're simply not counting. So QCs aren't really faster, they just cheat in a way that's not typically observable.
What's interesting is that this actually explains the source of quantum speed up, something no other interpretation can satisfactorily do at this time.
Because the computation is just physics too. It's the particles of your brain that moved your hands that typed the program into the computer that will eventually compute the final result by what appears to be a quantum phenomenon localised in space-time, but that really stretches back infinitely.
"just physics", "stretches back infinitely"
I feel like you could say this about, well, anything. AFAIK, you could take any event and trace its causes all the way back to the Big Bang. But that seems a bit... impractical? Why would we do this with a quantum computer?
Am I missing/misunderstanding something?
You're not "doing" anything with a quantum computer. That's just the reality of quantum computation in this conceptual framework.
And no, you can't say that about anything. In particular, you can't say that about non-deterministic interpretations of QM (most of them) because causal chains can only be traced back to the most recent non-deterministic event. In fact, pilot wave theories are among the only deterministic interpretations of QM.
What would make you think that? Deterministic doesn't imply predictable, reducible, or low complexity. Even a purely deterministic simple system can be capable of (and most are) chaotic behavior whose information content would grow faster than any system smaller than itself could predict. In other words, the only way to "simulate" the system would be to completely reproduce every bit of it, and let it evolve in realtime. And practically speaking, we can't even approach that.
It's really funny that something so simple is impossible to predict over longer times, because you would need perfect information. Yet climate science is 'settled'.
Take an undamped double pendulum and divide the space up into 2 sections. Suppose you can only check which section the pendulum is in every N seconds. Every time you check, you record the value, and over time you build up a sequence of K numbers: 1,0,1,0,0,1,1,1,1,0,0,1,0,...
Suppose you know the exact initial state (dx/dt, x) of the pendulum.
What bounds on K and N are required such that the sequence of numbers is asymptotically indistinguishable from a purely random sequence?
So, because of a silly thought experiment, one might think there is very simple order to the apparent randomness. But, as Pauli would say, "it's not even wrong"...
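If anyone wants to actually play with this, here's a minimal sketch of the recording procedure (the equations of motion are the standard point-mass double pendulum ones; the initial state, N, K, and the choice of "section" are all arbitrary choices of mine, and a numerical integrator is of course only an approximation for a chaotic system):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Point-mass double pendulum, equal masses and lengths, no damping.
    g = 9.81
    m1 = m2 = 1.0
    L1 = L2 = 1.0

    def rhs(t, y):
        th1, th2, w1, w2 = y
        d = th1 - th2
        den = 2 * m1 + m2 - m2 * np.cos(2 * th1 - 2 * th2)
        dw1 = (-g * (2 * m1 + m2) * np.sin(th1)
               - m2 * g * np.sin(th1 - 2 * th2)
               - 2 * np.sin(d) * m2 * (w2**2 * L2 + w1**2 * L1 * np.cos(d))) / (L1 * den)
        dw2 = (2 * np.sin(d) * (w1**2 * L1 * (m1 + m2)
                                + g * (m1 + m2) * np.cos(th1)
                                + w2**2 * L2 * m2 * np.cos(d))) / (L2 * den)
        return [w1, w2, dw1, dw2]

    # Exactly known initial state (theta1, theta2, omega1, omega2).
    y0 = [1.2, -0.6, 0.0, 0.0]

    N = 0.5   # sampling interval in seconds (the "every N seconds" above)
    K = 200   # number of samples recorded
    t_samples = N * np.arange(1, K + 1)

    sol = solve_ivp(rhs, (0.0, t_samples[-1]), y0, t_eval=t_samples,
                    rtol=1e-10, atol=1e-12)

    # "Which of the 2 sections": is the lower bob left or right of the pivot?
    tip_x = L1 * np.sin(sol.y[0]) + L2 * np.sin(sol.y[1])
    bits = (tip_x > 0).astype(int)
    print("".join(map(str, bits[:60])))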
There is one other interpretation of Pilot Wave theory that is sort of the "inverse" of Pilot Wave, and to me makes more sense. The PW may be correct, but the particles are not following a wave that is moving around them. What is happening is that each particle is 'emitting' (spherically and radially outward from itself) a wave of its own. It is the interaction of all the other waves that causes the particle to move. Just like a particle with mass distorts the spacetime around it in a 'static' way, perhaps charged particles distort spacetime in a way that is directly tied to the speed of light, and thus distort spacetime in a 'dynamic' way. Given that the slit experiment "works" even when we know only a single particle at a time is being sent through, it just makes more intuitive sense that the particle is neither interacting with itself, nor with other universes (copies of itself). So what I'm saying is that PW theory is FAR more intuitive to me than Copenhagen or many-worlds. I believe there is genuinely one particle going through the slit experiment when we think there is, and its path through is determined statistically much more by a PW-type theory than by any other theory.
"It's been demonstrated experimentally that you can't have a local, deterministic, real, definite theory (i.e. that would lead to demonstrably false conclusions). People intuitively expect all of those things to be true, but we can prove that it isn't the case. The various interpretations all try to give up one of "local", real", "definite", or "deterministic" in order to preserve the others, thus preserving at least a modicum of their intuition.
The pilot wave idea gives up on locality"
And I will add:
Multiple worlds gives up on definiteness.
"Zero worlds" (my personal favorite) gives up on realism.
And, of course Copenhagen gives up on determinism.
Take your pick. Or go with decoherence, which kind of lets you turn a knob to give up a little bit of all four and dial in whatever setting you like.