> Bohr postulated a distinction between the quantum world and the world of everyday objects. A “classical” object is an object of everyday experience. It has, for example, a definite position and momentum, whether observed or not. A “quantum” object, such as an electron, has a different status; it’s an abstraction. Some properties, such as electrical charge, belong to the electron abstraction intrinsically, but others can be said to exist only when they are measured or observed.
This is a common error. Macroscopic "everyday" objects don't have a definite position and momentum. Macroscopic objects are quantum objects. But when the mass is big enough, the position and momentum can be defined simultaneously with an error that is so small that you can just ignore the uncertainty and approximate them as classical objects.
(Looking at them as classical objects is just a good approximation, like ignoring the gravity force of the Moon in most common situations.)
Anyway, the measurement problem is a real problem and nobody knows how to solve it. The current fad is to use decoherence to explain it. It is a promising idea, so I hope that in a few years/decades/centuries we can give a good explanation of measurement that avoids anything that looks like magic.
> This is a common error. Macroscopic "everyday" objects don't have a definite position and momentum. Macroscopic objects are quantum objects. But when the mass is big enough, the position and momentum can be defined simultaneously with an error that is so small that you can just ignore the uncertainty and approximate them as classical objects.
To put this into simpler terms:
Whenever we measure something, we need to throw something at it and then have that something rebound and hit us again.
In most experiments, we throw photons and have them rebound into our eyes.
Throwing a photon against a "classical object" - a chair, a ladder, bacteria - is like throwing a tennis ball against a skyscraper. The throw does have an effect, but it's very much negligible.
But when trying to measure quanta, you're now throwing your tennis ball at a football, or at another tennis ball. You'll be lucky if it rebounds at all, instead of just pushing the object you're trying to measure out of the way. (You also don't have any smaller balls to throw.)
That's why when you measure something in quantum physics, you only know that it has this exact value in the moment that you measure it. It's going to be pushed away because you threw something at it, so after your measurement it has a different value.
You also can't observe it over a longer period, so there's no way to know whether it was at your measured position only in that moment or for a long time beforehand.
This is not a correct description at all of QM complementary observables. This is a purely classical explanation (and was one of the first layman "explanations" back in 1920, but that was 100 years ago and QM is much better understood now).
Could you elaborate on that? From my extremely limited knowledge it does seem like a just-so explanation (what you're responding to), but I'm not sure why.
Yet the observer effect is not the reason why we can't know an object's position and velocity at the same time. There are two ways we can see that this supposed explanation is a red herring:
I don’t think this analogy holds up. Consider the double slit experiment: throw a bunch of basketballs at a wall and see what pattern of hits they leave by looking at where they hit the wall. If the wall is being looked at (observed), we see one pattern. If we look away, conduct the experiment, then check it, we find another.
To me that suggests the act of “observance” affects the probability distribution of likely states. If a tree falls in a forest and no one is around, then it doesn’t really fall, it just has a probability of having fallen that is not resolved until someone goes to check. How does your analogy account for those effects? For me, it looks like quantum collapse is causing the states of these objects to become “resolved” where at first they were “unresolved” and this suggests we live in a universe that knows how to save on memory and is fundamentally probabilistic.
If you ever ran into a space leak in Haskell, you would see how having unresolved thunks can use more memory than eager evaluation.
But that has some merit to it in that you can describe QC as merging equivalent paths and then sampling from a wave distribution afterwards.
One fun variant on the double slit experiment is taking a coherent laser beam (everything is in phase) and splitting it, sending it through two paths, A and B, then merging it and shining it on the wall.
If the two path lengths are equal, there is no effect from splitting it. But if we make B take slightly more time, we can get an interference pattern. If we shift it by half a wavelength, the light will cancel out!
Now if you insert a polarizing filter along path B, then when you merge the streams you could tell which path the light came from, and the interference pattern disappears. This is not exactly measuring which path it took, but making it possible to tell if you added a sensor.
Observation is not required, just making the streams distinguishable.
But now if we add another polarizing filter downstream we can erase the distinction between them, and now you get interference effects again!
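To make that concrete, here's a minimal Jones-vector sketch of the sequence (a toy model written for this comment, not a real optics simulation: it assumes ideal polarizers, equal amplitudes on both paths, and a single relative phase standing in for the path-length difference):

```python
# Toy Jones-vector model of the split-beam eraser described above.
# Assumptions (mine): ideal polarizers, equal path amplitudes, one
# relative phase standing in for the path-length difference.
import numpy as np

H = np.array([1.0, 0.0])   # horizontal polarization basis vector
V = np.array([0.0, 1.0])   # vertical polarization basis vector

def intensity(phase, tag_b=False, eraser=False):
    a = H + 0j                                    # path A amplitude
    b = (V if tag_b else H) * np.exp(1j * phase)  # path B, optionally tagged
    out = (a + b) / np.sqrt(2)                    # merge the two paths
    if eraser:                                    # 45-degree analyzer downstream
        d = (H + V) / np.sqrt(2)
        out = d * np.dot(d, out)                  # project onto analyzer axis
    return np.vdot(out, out).real

for phase in (0.0, np.pi):  # paths in phase vs. shifted by half a wavelength
    print(f"phase={phase:.2f}  plain={intensity(phase):.2f}  "
          f"tagged={intensity(phase, tag_b=True):.2f}  "
          f"erased={intensity(phase, tag_b=True, eraser=True):.2f}")
```

Untagged, the output swings between 2 and 0 as the phase varies (interference); tagged, it sits at 1 for every phase (no interference); with the downstream 45° analyzer the swing comes back at half the brightness (the eraser).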
Adding a polarizer is a nice variant of the experiment. I don't think I've heard of it before. I like it, but I disagree with the expected result.
If slit A has no polarizer and slit B has a polarizer, then on the "wall" you will see the sum of 50% of the interference pattern and 50% of the diffraction pattern of A (I'm not sure about the 50%-50% split, but something like that). I.e., you will see the interference pattern, but it will not be so sharp: the black lines will not be so black, the white lines will not be so white.
I think it's better to put a horizontal polarizer on A and a vertical polarizer on B. If you don't add any other polarizer, you will see the sum of the diffraction patterns of A and B, without interference lines.
If you put a polarizer, the result depends on the direction:
* If it is horizontal you will see only the diffraction pattern of A (without interference lines).
* If it is vertical you will see only the diffraction pattern of B (without interference lines).
* At 45° you will see the interference pattern, like in the original double slit experiment.
* At the other 45° (135°) you will see the inverted interference pattern: the black lines will be white and the white lines will be black. (All of this bounded by the diffraction envelope.)
* At other angles, you get some mix of the diffraction patterns and the interference patterns.
It would be nice to see an experimental realization of this.
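In the meantime, here is a small numpy sketch of those cases (my own toy model, not an experimental result: sinc diffraction envelopes, horizontal polarization on slit A, vertical on slit B, then an ideal analyzer):

```python
# Toy model of the crossed-polarizer double slit described above:
# slit A horizontally polarized, slit B vertically polarized.
import numpy as np

x = np.linspace(-20, 20, 2001)      # screen coordinate (arbitrary units)
env = np.sinc(x / 8.0)              # single-slit diffraction envelope
psi_A = env * np.exp(+1j * x)       # slit amplitudes; the phases encode
psi_B = env * np.exp(-1j * x)       # the path difference to the screen

def pattern(analyzer_deg=None):
    if analyzer_deg is None:        # no analyzer: orthogonal polarizations,
        return np.abs(psi_A)**2 + np.abs(psi_B)**2  # intensities just add
    t = np.radians(analyzer_deg)
    return np.abs(psi_A * np.cos(t) + psi_B * np.sin(t))**2

# None   -> the two diffraction patterns summed, no fringes
# 0 / 90 -> only slit A / only slit B, no fringes
# 45     -> the usual fringes; 135 -> the inverted fringes
for label in (None, 0, 90, 45, 135):
    I = pattern(label)
    print(f"analyzer={label}: intensity at center = {I[len(x) // 2]:.2f}")
```

The center-of-screen intensity distinguishes the cases: 2 with no analyzer, 1 for the single-slit settings, 2 for the bright central fringe at 45°, and 0 at 135°, where the central fringe is dark (the inverted pattern).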
There are two walls. One wall has two slits; the other wall is where the particles/waves/balls/whatever collide and form the interference pattern (or not).
You don't need someone observing the second wall to get the interference patterns. You can replace the person with a photographic plate, the CCD sensor of a camera, or other equipment. All of them are more precise, reliable and even cheaper than a graduate student with paper and pencil.
The problem is if you try to add some type of equipment to the first wall to collect information about how the particles/waves/balls/whatever passed through it. Whatever equipment you add will disturb the flow and kill the interference pattern.
This is not a technological problem. It is how the universe works. If you propose some particular method (like using light to detect the balls) you will sooner or later find that something gets broken (see the former comment).
An important detail is that if you use a macroscopic object like a basketball, the slit size and the slit separation must be tiny (less than a millionth of the size of the nucleus of an atom, probably much less). So your intuition about how things work at the macroscopic level is not a good guide to how things work at the microscopic level. At the macroscopic level you can approximate the basketball as a perfect classical solid. It's just an approximation, but a very good approximation.
> This is not a technological problem. It is how the universe works. If you propose some particular method (like using light to detect the balls) you will sooner or later find that something gets broken (see the former comment).
what confuses me in various explanations like this is that the whole 'act of observing affects what you observe' thing seems to be rather particular in that it turns the wave-like behavior into particle-like behavior, which strikes me as rather weird/counter-intuitive. Why don't we just get slightly different interference patterns? Or some spectrum of effect between wave-like and particle-like?
Is my confusion mostly a result of the limits of the analogies presented to me as a layman?
>Consider the double slit experiment: throw a bunch of basketballs at a wall and see what pattern of hits they leave by looking at where they hit the wall. If the wall is being looked at (observed), we see one pattern.
If the basketball has an energy of 1 quantum, and the energy used to observe it (shining light to see the result in real time) is 1 quantum or more, then the pattern is different due to interference. If we don't use any energy to see the result in real time, then the result is different due to non-interference.
I could be wrong, but I have a different understanding of how all of that works. You keep talking about basketballs instead of waves or probability fields and I guess this is where we diverge.
> If the basketball has an energy of 1 quantum, and the energy used to observe it (shining light to see the result in real time) is 1 quantum or more
How would we observe the light that bounced off the basketball? Would we need to hit it with another light in order to detect where that light is? How would we detect that second particle of light; would we hit it with a third? And so on.
The answer is that we don't need to shine light to see the basketball. We can detect the basketball itself; for example, if the basketball were representing a photon of light, we could cover the wall with photomultiplier tubes ( https://en.wikipedia.org/wiki/Photomultiplier_tube )
As sibling comments have pointed out, the parent is wrong in saying that observing the wall will change the pattern. Rather, it's observing the slits that will change the pattern.
If we don't observe the slits, but we do mark the point on the wall where the basketball hits, and we do this over and over again, then the marks on the wall will show an interference pattern. Note that we're not throwing anything at the basketball: we're just waiting for it to hit the wall on its own. Also note that the marks themselves don't change anything; we could note them down on some paper instead, or type them into a spreadsheet, or whatever.
What if we do observe the slits, e.g. by putting a baseball in one and a cricket ball in the other? In this case, we'll detect the basketball hitting the wall and either a baseball or cricket ball. After many goes, the pattern on the wall made by the basketball will have two peaks (one in front of each slit), not an interference pattern. This seems analogous to your 'bounce a photon off it' explanation.
However, what if we got rid of the cricket ball? Half the time we would detect the baseball hitting the wall too, the other half we wouldn't (when the basketball went through the other slit). Yet the basketball will still make the two-peak-no-interference pattern, even though we didn't interact with it half of the time!
In fact, we could randomise which slit we put the baseball in, and mark only those goes that the basketball didn't hit the baseball, and we would still see two peaks without an interference pattern, even though those basketballs didn't hit anything (they always went through empty slits)!
This hopefully shows that your explanation (known as the observer effect) doesn't explain the interference pattern in the double-slit experiment.
That's a nice explanation but doesn't it give the impression that if we could find a better way to do that experiment, we could find a way around the problem, when instead it's a fundamental limit on what we can know about a quantum system?
No, under most interpretations of QM, things literally behave differently at that scale. Under Copenhagen, the wave literally collapses into a fixed position/momentum. The pre-measurement wave isn't a statement of our ignorance of the system but rather a description of reality. Many-worlds is even more serious in its quantum literalism: far from pushing around the subject of your experiment with a too-big measuring device, you're actually branching worlds where all predictions of the wave function occur.
To me, many worlds, plus time (as an inviolate observed vector) being merely a consequence of our inability to observe without moving forward in time, driven by our entropic, process-driven consciousness, seems by far the most comprehensive explanation of observable phenomena.
That observational uncertainty increases as the probability of direct interaction decreases (distance, time) strongly supports the hypothesis that observable phenomena are dictated strongly by the presentation and characteristic relationship of the observer to the phenomenon.
We know on the micro scale that all possible states exist simultaneously.
It seems logical, even axiomatic then that on the macro scale the same applies, but that we can only observe the bandwidth of states in which it is possible for us to exist to make the observation.
To claim that this state uncertainty is magically resolved in all cases and coherently for all possible observers into a single set of states seems an extraordinary claim requiring extraordinary evidence.
> The pre-measurement wave isn't a statement of our ignorance of the system but rather a description of reality.
The post-measurement particle is a description of our ignorance, not a description of reality.
It still evolves according to the Schrödinger equation (which degrades to Newtonian dynamics for sharp and narrow waves), but for historical reasons we choose to talk about it as if it were a little billiard ball, not what it still is: a wave, just sharpened and narrowed down by the interaction we call measurement.
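(The "degrades to Newtonian dynamics" part is the Ehrenfest theorem: expectation values obey

$$\frac{d\langle\hat{x}\rangle}{dt} = \frac{\langle\hat{p}\rangle}{m}, \qquad \frac{d\langle\hat{p}\rangle}{dt} = -\langle V'(\hat{x})\rangle,$$

which reduces to Newton's equations whenever the packet is narrow enough that $\langle V'(\hat{x})\rangle \approx V'(\langle\hat{x}\rangle)$.)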
The problem is that fundamentally, there is a fixed amount of information that there is, that has to be distributed over two dimensions. Particles that are constrained to a small area (e.g. photons going through a slit, electrons bound to an atom) simply do not have a well-defined momentum. In fact the effect is something that you can experience with a sharp enough camera lens: as you close the aperture (therefore forcing the light going through it to be in a specific place) you slowly lose resolving power as the light stops behaving nicely and diffracts around/through the aperture.
I agree with both. This explanation is easier to understand, but it makes it look like a technological problem that can be solved, instead of a fundamental property of the universe.
I think david927's intuition is more correct here. The uncertainty in the position and momentum is intrinsic to quantum mechanics - it's built into the 'wave function'.
The suggestion that one pushes something away by throwing something else at it builds on a purely classical intuition, and wouldn't require quantum mechanics to explain if this was all we observed. The uncertainty in quantum mechanics is fundamental (to quantum mechanics) and emerges through a different, as yet unknown, mechanism.
3Blue1Brown has an extremely good explanation[1] of the intrinsic uncertainty, and why it's separate from measurement uncertainty. (the previous episode[2] is a recommended prerequisite for background on how the Fourier Transform works)
> emerges through a different, as yet unknown, mechanism.
In 3Blue1Brown's explanation[1], he shows how the intrinsic uncertainty is an inherent trade-off of trying to measure both position and frequency. A short wave packet only a few wavelengths long correlates with a narrow (precise) range of positions, but also correlates well with a very wide range of frequencies. A Heisenberg-like uncertainty exists any time you are working with wave packets whose length is near the wavelength. 3Blue1Brown gives a very good example using Doppler radar.
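If you want to poke at that trade-off numerically, here's a small numpy sketch (mine, not from the video): it Fourier-transforms Gaussian packets of different widths and checks that the product of the position spread and the wavenumber spread stays pinned near 1/2, the minimum the uncertainty relation allows.

```python
# Fourier trade-off demo: a Gaussian packet narrower in x is wider in k,
# and the product of the two spreads stays at the minimum value 1/2.
import numpy as np

def sigma_k(sigma_x, n=2**14, span=200.0):
    x = np.linspace(-span / 2, span / 2, n)
    psi = np.exp(-x**2 / (4 * sigma_x**2))  # packet whose |psi|^2 has std sigma_x
    weights = np.abs(np.fft.fft(psi))**2
    weights /= weights.sum()                # normalized spectral weights
    k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
    return np.sqrt(np.sum(weights * k**2))  # spectrum is centered on k = 0

for s in (0.5, 1.0, 2.0, 4.0):
    print(f"sigma_x={s:3.1f}  sigma_k={sigma_k(s):.3f}  product={s * sigma_k(s):.3f}")
# every row prints product ~ 0.500: the Gaussian saturates the bound
```

(The Gaussian is the packet shape that minimizes the product; any other shape gives a larger one.)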
Yes, I like these sources too. Good for building intuition. I would just add that Heisenberg-like here means that both systems share features of wave mechanics. Doppler type effects aren't quantum mechanical though.
When I suggest the mechanism is unknown, I mean that Heisenberg uncertainty is a postulate of quantum mechanics. In other words the fundamental reason that quantum mechanics should appeal to wave mechanics isn't really established - we don't really know yet the fundamental objects and interactions that lead to quantum mechanics (despite much effort).
I'm not sure about the historical part, but nowadays the uncertainty principle is not an independent postulate. It's deduced from the non-commutation of the operators that measure the position and the momentum of a particle. This can be done in the wave representation or in the matrix representation.
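For reference, the general statement is the Robertson relation: for any two observables,

$$\sigma_A\,\sigma_B \ge \frac{1}{2}\,\bigl|\langle[\hat{A},\hat{B}]\rangle\bigr|,$$

and plugging in $[\hat{x},\hat{p}] = i\hbar$ gives the familiar $\sigma_x\,\sigma_p \ge \hbar/2$.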
Moreover, similar calculations can be done with other measurements that don't commute. One that is very important is the spin of a particle along the x, y, and z axes.
Another is the polarization of a photon along directions that are 45° apart. For example, most (all?) of the experiments on the EPR paradox are done with polarization instead of position-momentum, because polarization is much easier to measure. https://en.wikipedia.org/wiki/EPR_paradox
It's a good point that uncertainty relations exist for all kinds of physical observables. But whether they're expressed as commutation relations or as in Heisenberg's original formulation, or whatever formulation you choose (wave mechanics, matrix mechanics, the Dirac representation, QFT, or anything else one can think of), it's still asserted rather than derived from an underlying set of fundamental physical objects and interactions.
Why unknown? Heisenberg's uncertainty principle can be derived mathematically, using a property of the Fourier transform. It has nothing to do with disturbing the system during measurement.
I'd say that's more a mathematical statement than a physical derivation. The effort of subjects like string theory is to lay down fundamental objects and interactions from which other theories (quantum mechanics, gravity) emerge. But I don't think there is a final word at the moment on what the fundamental theories that result in quantum mechanics should look like.
> Macroscopic objects are quantum objects. But when the mass is big enough, the position and momentum can be defined simultaneously with an error that is so small that you can just ignore the uncertainty and approximate them as classical objects.
Exactly. And, due to quantum tunnelling, there's a teeny tiny chance I could walk through a wall, but because there are a lot of particles in me that all have to tunnel a relatively large distance, and the probability of even one of my particles tunnelling that far is so tiny, it won't happen.
Intuitively, the difference between quantum and classical objects is a lot like the central limit theorem. Add up a bunch of independent uncertainties, all of which have similar distributions, and the relative spread (uncertainty) of the total is going to be very small.
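A quick sketch of that intuition (ordinary classical statistics, nothing quantum): the relative spread of a sum of N independent, similar uncertainties falls off like 1/sqrt(N).

```python
# Classical CLT illustration: the *relative* spread of a sum of N
# independent, identically distributed uncertainties shrinks ~ 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
trials = 500
for n in (1, 100, 10_000):
    totals = rng.normal(loc=1.0, scale=0.5, size=(trials, n)).sum(axis=1)
    print(f"N={n:>6}: relative spread ~ {totals.std() / totals.mean():.1e}")
# prints roughly 5e-01, 5e-02, 5e-03: shrinking like 0.5/sqrt(N)
```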
I skimmed a paper about twenty years ago that pretty much explained the difference to me.
Macroscopic objects are subject to quantum mechanical cascades. The end result is that the probability spread for an object composed of an enormous number of interacting particles rapidly goes to zero.
The example was a playing card stood on a knife edge. The probability cascade causes it to fall one direction or the other. We can't predict which side it will fall over on. But it will fall.
I thought the current fad was to use holography to explain it. The problem is that to measure something in quantum mechanics requires separating the world into two systems. To make exact measurements, the observing system has to be infinitely large to avoid quantum fluctuations, and the experiment has to be repeated infinitely often because it is probabilistic.
To deal with the infinite size, particle physicists push the observer systems out to infinity. This is fine for scattering experiments because collider detectors are so large and far away from a collision it might as well be infinity, but this poses a problem for quantum gravity and cosmology.
It seems that the most popular approaches to dealing with it have been based on Juan Maldacena's AdS/CFT correspondence, which is a toy model that allows those observers off at infinity to live on the boundary of spacetime and to describe the behavior of the interior entirely in terms of the projection of that behavior onto the boundary.
The challenge is that our universe doesn't look like anti-de Sitter space, but rather like (plain) de Sitter space, and so there is no natural boundary to project onto, which has led some physicists to question the legitimacy of quantum mechanics' need to divide the world into two systems.
The measurement problem seems like an artifact of that, and what's really happening is that measurement causes the two systems to become entangled. Wave function collapse would then be the subjective result of the two systems entering a mixed state, because the observed system can no longer be described independently of the system that just observed it.
A route other than "dividing the world into two categories" has been available since the early days of quantum mechanics: if you believe in many worlds you don't have that problem. It is important to recognize that we are not talking about physics here, we are talking about philosophy.
If anything, the greatest shock of quantum mechanics is how it shows that "questions we naturally want to ask about the natural world" and "totally unresolvable philosophical disputes" are not completely disjoint.
And that is exactly why it isn't a paradox in quantum mechanics. Quantum mechanics is consistent for the domain of questions it was designed to ask: particle scattering experiments. It's when people push it into philosophy or other unintended domains that they run into problems.
Quantum mechanics is also totally consistent for materials science problems, astrophysical problems (pending gravity), chemical problems (including human beings), and so on. It's really the philosophy that has a quantum mechanics problem. ;)
A paradox is an internal inconsistency. No one claims that quantum mechanics is the fundamental theory of everything. Like the convergence of a Taylor series, there is a range of energies within which it produces correct calculations, and if you go outside that the result is undefined.
Strictly speaking that is a QFT problem, or really a Standard Model problem, more so than a quantum mechanics problem. In order for it to be a quantum mechanics problem it would have to happen everywhere in quantum mechanics, and the single-electron Schrödinger equation for the hydrogen atom definitely has no (theoretical) problem.
> with an error that is so small that you can just ignore the uncertainty and approximate them as classical objects.
I’d say that’s the gist of it, that we cannot “just ignore” the uncertainty because it’s too small, because if you do that then your model and the real world are indeed different. Also, at the end of it all what does “too small” mean? “Too small” compared to what? To a galaxy? To a super-nova? To a planet? To a cat? To the things we try to “discover” at CERN? To things smaller than them? To say nothing of the fact that comparing a number to physical stuff will eventually bring you head on against Zeno’s paradox, one way or the other.
I agree though that using the “too small” trick does generally allow us to do great things, like send stuff to the confines of the solar system or to build nuclear bombs, i.e. it allows us to be efficient, but that does not mean that by being efficient our models are also identical representations of what reality really is, so to speak.
What I’m saying is that maybe the “mathematization” of the physical world is a leaking abstraction, and that maybe we’d be better off by saying “we’ll never really know what the world is made up of”. But the problem is that they haven’t awarded science Nobel prizes to people saying “there’s really no way for us to learn how the Universe really works”, at least not that I know of, at best you’re seen as a mysticist when saying that, at worst as a know-nothing or a cynic.
> To say nothing of the fact that comparing a number to physical stuff will eventually bring you head on against Zeno’s paradox, one way or the other.
Zeno's paradoxes are soluble by basic calculus. Once you distinguish between countable and uncountable infinities, the problem of crossing a bounded interval in finite time ceases to be paradoxical.
This is basically to say I don't think this is a particular problem for the resolution of outstanding inconsistencies in theoretical physics.
>Zeno's paradoxes are soluble by basic calculus. Once you distinguish between countable and uncountable infinities, the problem of crossing a bounded interval in finite time ceases to be paradoxical.
They're not mathematically paradoxical, but that doesn't necessarily mean that the paradoxes are solved, because there's more than just math going on. A lot of the paradoxes hinge on the question of whether it is in fact possible to traverse an infinite series of positions in space or moments in time. I have no idea whether it is or it isn't, but the issue isn't settled by calculus. Calculus allows you to figure out what the result would be if such a traversal were to occur.
No, calculus does in fact resolve them. More specifically, formalizing continuity and completeness obviates the issue. Like I said, if you distinguish between countable and uncountable infinities, there is no longer a paradox.
The only reason it appears to be paradoxical is because you're mandating someone move from a real coordinate (a, b, c) to another real coordinate (a', b', c') on the interval [x, y] while also passing through the set of all real points between them, without first defining a notion of distance or time. That's not possible for the same reason you can't ask someone to count all the reals on an interval: continuity implies uncountability. Between any pair of real numbers is another real number, and it takes an equal amount of effort (and time) to count any given number.
At first glance, this seems like a paradox because we can clearly move from (a, b, c) to (a', b', c'), yet we shouldn't be capable of any movement whatsoever. Calculus solves this problem by formalizing Zeno's demand as a geometric series with a notion of distance. The requirement is that you move from one position to another while passing through every halfway position between them. Equip the vector space ℝ^3 with the Euclidean metric so you have a metric space (defined distance). Then we have the sequence of steps
(a, b, c) -> (a, b, c) + ((a', b', c') - (a, b, c))/2 -> ... -> (a', b', c')
More concretely: an infinite expansion such as 0.99999999... is equal to 1. Each half step will take only half as long to traverse as the half step preceding it, once you've defined Euclidean distance on a continuous space. The first step to formalizing sequences and series like this is constructing the real numbers as a continuous set and distinguishing between different types of infinities. Then you can define limits, and from there you're essentially done.
Note that at no point am I talking about what happens when you reach 1, or (a', b', c'), or anywhere else. I'm just explaining how you reach it in finite time. If you can get arbitrarily close to a point, you can get to the point itself.
I guess I should be more technical and say that real analysis solves this problem, because what's really doing the heavy lifting here is the topology induced by defining a metric on ℝ^3 in combination with the notion of limits.
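Spelled out: if the whole traversal takes time $T$, the successive half steps take $T/2, T/4, T/8, \ldots$, and

$$\sum_{n=1}^{\infty} \frac{T}{2^n} = T\left(\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots\right) = T,$$

so the infinitely many steps are completed in the finite time $T$.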
Clearly, not all infinite sequences can be summed. So e.g., 1, -1, 1, -1, … has no sum.
Now suppose that Achilles takes alternate forward and backward steps a (countably) infinite number of times. The first step takes one second, the second step takes half a second, and so on. (Each step covers the same distance.) Where does he end up after 2 seconds?
There’s no sensible answer to that question. Does that mean that Achilles can’t in fact traverse that particular sequence of steps? But then, why should he be unable to traverse a particular infinite sequence of steps merely because its sum is undefined? After all, the result of each individual step is perfectly well defined. If it’s possible in general to traverse infinite sequences, what stops him traversing that one?
To me, this just seems like Zeno’s paradox all over again. The mathematical treatment is more sophisticated, but the underlying paradox remains.
Zeno himself probably wouldn’t have distinguished carefully between summing an infinite sequence and spatially or temporally traversing it, since both notions would have seemed equally absurd from his point of view. Modern mathematics has shown us that the former isn’t in fact absurd. But Zeno’s paradoxes are arguably about the latter.
> Clearly, not all infinite sequences can be summed. So e.g., 1, -1, 1, -1, … has no sum.
Your geometric series is not a summation of the steps or positions, but rather the time required to complete each step. Therefore your example is characterized by an identical geometric series to the model I used in my previous comment.
More generally, Zeno’s paradox can be succinctly resolved by citing the monotone convergence theorem: every bounded, monotonic sequence converges. The time required to complete the infinite series of half steps converges, because (again, with the definition of a metric) the time required to complete each individual step decreases commensurately with the change in distance.
>Therefore your example is characterized by an identical geometric series to the model I used in my previous comment.
I am not sure what you mean here. You can calculate the sum of the time series, but you can't calculate Achilles' final position, which is the question at issue. The question remains: if it's possible in general to traverse an infinite sequence of steps in space, why is it not possible to traverse the one that I specified? "Solving" Zeno's paradox by admitting the possibility of traversing an infinite series of points in space or time seems to give rise to paradoxes just as deep as the originals.
> The time required to complete the infinite series of half steps converges, because (again, with the definition of a metric) the time required to complete each individual step decreases commensurate with the change in distance.
Yes, that was Aristotle's observation and a key part of his proposed solution to the paradox. The problem is that this explains why it's possible to sum the series, not why it's possible to traverse it. You seem to be taking the position that any series that cannot be summed cannot be traversed. But why should that be so?
Thinking about this a bit more, I think what I'm trying to say is that Zeno's paradox is more about supertasks than it is about the problem of computing the sum of an infinite series. There's a nice summary article that I found here:
What??? Every physicist believes that the uncertainty principle applies to any object, from galaxies to elementary particles. So if you try not to apply the uncertainty principle to a particle collision at CERN, they will think that you are crazy. But if you try to add the uncertainty principle to a simulation of the movement of the objects in a galaxy, they will think that you are crazy too, because the difference is very small and the calculations are much more complicated.
There are some applications of the uncertainty principle to neutron stars, and IIRC to the background radiation. Nobody thinks that the uncertainty principle doesn't apply to big objects; it's just that in most cases the difference is ridiculously small.
Perhaps the “mathematization” of the physical world is a leaking abstraction, perhaps not. We don't know. If you can prove that “we’ll never really know what the world is made up of” you will get a Nobel prize. But you will need a real proof, not handwaving.
"Too small" means either it can not be detected experimentally, or even in principle, depending on whether you are allowing gedankenexperiments.
I don't think it is reasonable to expect our models to be identical to reality. Otherwise, they wouldn't be models, they would be reality. A theory is correct if it produces experimentally verified predictions.
No. It would take far more digits of accuracy than we can measure.
For example, here's an object of 1 kg mass. The product of the uncertainty of its position and momentum is h-bar / 2, or h / 4 pi, which is about 5e-35 kg m^2/s. We're going to measure the position to one wavelength of visible light (say, 500 nm wavelength, so 5e-7 m). That means that we need to be able to measure the momentum to an accuracy of 1e-28 kg m/s, which for a mass of 1 kg means measuring the velocity to within an accuracy of 1e-28 m/s. Good luck with that...
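The same arithmetic in a few lines of Python, using the bound Δx·Δp ≥ ħ/2 and the numbers above:

```python
# Minimum quantum uncertainties for a 1 kg object localized to 500 nm.
hbar = 1.054571817e-34       # reduced Planck constant, J*s
mass = 1.0                   # kg
dx = 500e-9                  # position known to one visible wavelength, m

dp = hbar / (2 * dx)         # Heisenberg bound on momentum uncertainty
dv = dp / mass               # corresponding velocity uncertainty

print(f"dp >= {dp:.1e} kg*m/s")  # ~1.1e-28 kg*m/s
print(f"dv >= {dv:.1e} m/s")     # ~1.1e-28 m/s: hopelessly unmeasurable
```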
One of my homework assignments in a quantum mechanics class was to calculate the uncertainty of a cheetah that was running at 65 mph (and given its weight).
You can in theory but the numbers get so small that they're clearly unmeasurable.
I believe gus_massa was just saying that we simply define out of existence this supposed problem with measurement (for macro objects). This is done because even though the 'furniture of the world'/everyday objects/macroscopic objects are still quantum entities, at this level, error rates are such that they can be treated in a classical manner.
It's a bit ungenerous to physicists to suggest they've simply shrugged off the paradoxes of quantum mechanics. The first couple of paragraphs outline reasons why quantum mechanics has gained so much traction. The various attempts to understand its more difficult results have turned out to be notoriously difficult to substantiate or disprove experimentally.
I don't actually dispute the sociological forces the author describes as they apply to physics research, but I don't think the apparent lack of progress is due to lack of concern among the physics community.
There have been several books recently that describe the saga of twentieth-century quantum physics, culminating in the realisation that entanglement is a real thing that has consequences. My favourite is "The Age of Entanglement: When Quantum Physics Was Reborn" by Louisa Gilder. It's a fun book.
The history is fascinating. I'm waiting for a book that describes the deeper sociological aspect. A book with attitude. For instance, several of the founders of quantum physics had mystico-philosophical things to say about the quantum. But all of this was thrown out the window with the advent of the second world war. Physics became overrun by calculations. It took quite a while for people to get around to thinking about entanglement again. John Bell struggled for decades to get people to notice his work on this. Even the guy that invented the laser was met with stern disbelief when he presented his results to the "elders". Partly this is science just doing its thing, but there also does seem to be a history of science being dragged, kicking and screaming. Especially with quantum physics.
Essentially because of Feynman’s teaching that quantum mechanics works accurately to predict the results of experiments, that that is EVERYTHING, and that “paradoxes” and “intuition” are suspect.
Note, despite the title somebody gave this segment of his talk, he definitely doesn't talk just about "magnets" but really about "the meaning" and "why" questions (the youtube poster got it): about the needed "expected framework" to avoid "perpetual why questions" etc. Even the things that we take for granted, e.g. using electricity in our day-to-day life, or the eventual "why" we don't fall through the floor, can't be properly explained using just our "common" intuition.
In another direction: it works, and we are "used" to seeing some effects of the electrical forces all the time, so that's what is "intuitive" to us, because we're familiar with it.
So back to the original title of the article submission here, "why ... shrugged off the paradoxes...": it can easily be seen that once one gets an answer inside the expected framework, one doesn't "shrug it off"; one is actually satisfied with the answer.
I think the observers are in superposition and measurement collapses us and the world. Thus there are no nonlocal effects; rather, we just get rid of multiple universes which were averaged together and pick one, or at least the subset of possible universes that corresponds to that particular measurement. Thus instead of nonlocal effects one selects from multiple existing solutions at present and discards the rest.
I think this makes a lot more sense.
The other alternative was that we are in a simulation and it has variable levels of resolution (wave versus particle) and it will compute higher resolutions on demand (e.g. for observers), but these on-demand increases in resolution affect the simulation going forward. This seems overly complex, thus I prefer the first interpretation.
I am a layman in this area so I am probably wrong.
There must already exist a formal name for this? Isn't this where multiverses comes into play?
Quantum mechanics has two different laws that describe how a system changes in time.
Rule 1 says that except during a measurement, the wave evolves smoothly and deterministically, exploring every possibility.
Rule 2 says that during a measurement of position, the wave collapses around the position where it's seen, with a probability proportional to the square of the height of the wave, before the collapse.
It is only Rule 2 that mentions probabilities at all.
And, Rule 1 and Rule 2 are in catastrophic tension with each other.
Rule 1 says that there are always all possible outcomes.
Rule 2 says there is only 1 outcome and it's picked out with some probability rule.
It's pretty contradictory - is there one outcome or all outcomes?
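In standard textbook symbols, for a single particle's wave function $\psi$:

Rule 1 (unitary evolution): $i\hbar\,\partial_t\,\psi(x,t) = \hat{H}\,\psi(x,t)$, which is deterministic and linear.

Rule 2 (measurement of position): the outcome $x$ occurs with probability density $|\psi(x)|^2$, after which $\psi$ is replaced by a wave concentrated at that $x$.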
> youtube.com/watch?v=Zri9gS1w5ok Perimeter’s Lee Smolin will argue that the problems that have bedeviled quantum physics since its inception are unsolved and unsolvable for the simple reason that the theory is incomplete.
And I never understand what a "measurement" is. Does it have something to do with a human being observing it? Isn't any measurement just electromagnetic or gravitational forces acting on it, which act all the time?
A measurement is an interaction that crosses the Heisenberg cut separating the quantum from the classical world. E.g., you prepare an atom in a superposition of states relative to some 'quantization axis' determined by a Stern-Gerlach apparatus. The state will randomly 'collapse' into one of the available final states, resulting in a macroscopic dot on some screen. You and all your physicist buddies will agree on the placement of the dot.
There is no agreement on how this happens, complicated by the fact that there is no such thing as the classical world I alluded to above, i.e. any interactions with the apparatus and the conscious observers supposedly obey the laws of quantum mechanics.
I think one development that has occurred since the late 80's is that most physicists don't think consciousness is required at all, developing the idea of "decoherence" instead. Not that it resolves the measurement problem in any way, of course.
What is the general feeling among theoretical physicists when it comes to breakthroughs in resolving these paradoxes? Decades ago there was great optimism about string theory and similar, followed by a long period of disillusionment. What are the guesstimates now for the next big breakthrough, such as unifying gravity and QM? Is it seen as something achievable in the near/foreseeable future, or is it considered to be a far-future achievement comparable to, say, interstellar travel ("we'll get there if we aren't extinct by then, but our current civilization will seem primitive by the time we do")?
As Wheeler said, "it from bit". It seems to me that when physicists come to consider observation, collapse, and physical phenomena generally as some kind of informational processes, then the quantum puzzles about reality will be resolved. I think they have not succeeded yet because this is an incredibly difficult task, not because it is the wrong way. Maybe it will be something like an informational multiverse. Some principle like selection among multiverse branches to keep an "information conservation law" (as another manifestation of unitarity) could explain nonlocality etc.
This idea is interesting, but I had a different idea: a constraint interpretation. Mathematics is the real reality. For example, some equations can have multiple solutions; x^2+2x-3=0 can be 1 or -3. If you make the experiment where you have entangled photons and you are on a different planet, you can measure one, and if the people on the other planet measure theirs in the same way they will get the same answer; nothing goes "instantly" or at any other speed. The constraint is that they are the same, and this entanglement propagates backward in time from the point of view of both experimenters; it doesn't say whether one is "before" or "after" the other. You can trace it backward in time to the source and forward in time to the other experimenter, and that is the line of the constraint.
> Pascual Jordan, an important member of Bohr’s circle, cut the Gordian knot: An electron does not have a position until it is observed; the observation is what compels it to assume one. Quantum mechanics makes statistical predictions about where it is more or less likely to be observed.
The things physicists will say instead of "we don't know the answer".
All human models of reality are flawed. It is improbable that our small minds will ever comprehend the nature of the universe in its entirety. Each answer uncovers more questions, more paradoxes.
That is not to say we should not continue to try to understand the world around us. Only that logic dictates that we proceed with humility.
Definitely. I wholeheartedly subscribe to this view (https://en.wikipedia.org/wiki/All_models_are_wrong). The thinking mind will always be confined to models. The way I see it, the questions posed here are an expected wall one will eventually hit when the mind attempts to explain something that is fundamentally inexplicable - i.e the nature of existence.
My current intuition is that the hard philosophical questions posed by probing deep into what matter essentially is are deeply linked to the hard problem of consciousness.
I'll make an even bolder claim, which is that both of these questions arise from a misunderstanding of what consciousness actually is and that no less than a Copernican revolution in understanding its role is what's needed to put them to rest.
Of course, that won't be the end of physics, as the mystery is endless - but it might be the start of a deeper humility and appreciation of that mystery.
The rise in visibility of subjective science (science of consciousness/spirituality) and integration of it with objective science will accomplish that.
Luckily the subjective sciences are already highly developed, just look at Kabbalah. You can read a thousand pages of it and you still won't understand a single word!
I feel someone else may have said that, but simply put: what if the phenomenon is a wave, and its observable effect a particle? Isn't it the fact that the wave hits matter that makes it observable/measurable? That doesn't seem incompatible with the double-slit experiment, even with the monitors.
Correct me if I am wrong: the "paradoxes" described here are due to our lack of understanding of the theory, right, and not because the theory is inconsistent?
The paradoxes are because the theory is formulated in terms of complicated mathematical expressions that do not map neatly onto a naive understanding of macroscopic reality.
The entire problem of learning QM is that one has to learn not to trust one's intuition and instead look at what the math is showing. For example, the supposed paradox of being sometimes a wave and sometimes a particle is only there if you insist on using the macroscopic analogies of wave and particle. In the full theory, the object is always a vector in a Hilbert space, which is a concept that does not map neatly onto some combination of wave and particle.
QM doesn't mean the same thing to everybody. Many physicists think it includes both unitary evolution and (real or apparent) collapse during measurements, while being valid for multiple observers and describing a single approximately classical world.
That very common view is a paradox because different observers can have different notions of "measurement" and they are incompatible.
A paradox is essentially a logical contradiction. As far as we know such contradictions don't occur in nature. Paradoxes indicate that there is a flaw in either our theory or our understanding of our theory.
Actually a paradox is something that appears to be a contradiction. It comes from the Greek for "contrary to expectation". So often a paradox is about wrong expectations and not a contradiction in and of itself.
right. but saying there aren't paradoxes in quantum mechanics isn't helpful and is probably just wrong. quantum mechanics isn't a physical thing and isn't reality. it is a model. of course it can have paradoxes, and does.
A model with a paradox is usually considered a flawed model, as a paradox is a logical contradiction, and from a logical contradiction anything can be proven.
"Proof by contradiction" is a proof that the opposite is true, and is constructed so that if the thing you try to disprove would be true, you could create a paradox/contradiction.
> A model with a paradox is usually considered a flawed model
that is the critique at hand, is it not? my understanding of the paradoxes in quantum mechanics stem from things, like measurement, being unspecified or at least being specified or interpreted without consensus.
Your definition of reality here is unrelated to the OP's point. Quantum mechanics will work the way it does regardless of what your intuition on how it should work thinks.
Also, the emergence of an apparent paradox is a great heuristic that raises the question of whether our intuition and understanding is wrong or correct.
It's not a matter of intuition, it's a matter of incompleteness or inconsistency in the theory. There is no theoretically testable explanation of what happens when states transition in quantum mechanics.
i didn't define reality, although i implied it to be everything, which was a counterpoint to the commenter's implication that quantum mechanics is reality and our intuition is somehow separate from that.
> Quantum mechanics will work the way it does regardless of what your intuition on how it should work thinks.
i didn't say otherwise, although i think what you've stated is debatable.
Yes there are. Even without invoking any philosophical issues, quantum mechanics admits it's not self-contained. It takes measurement - the act of an outsider interacting with a system, which collapses the wavefunction, as a postulate. Measurement is not described by quantum mechanics.
Yes it is. Measurement is entanglement between the observed system and the observing system. Wave function collapse is an illusion caused by the two systems no longer being separable. They have entered a mixed state. It looks like information is lost in the observed system only because you are ignoring the observer.
To be more precise, we are in all of them, but our consciousness only perceives one of them.
So the real problem isn't in quantum mechanics, it's in what the hell consciousness is and why it works the way it does.
However, note that you can have observations performed by a computer which e.g. counts events, and the counter will exist in all "worlds" (branches of the universal wave function, really), while each incarnation of it only "perceives" one such "world". So the physics works just fine even while we don't fully understand consciousness.
Why is it necessary for consciousness to be any different from that computer counter? It seems to me that the paradoxical idea is not quantum mechanics, but rather consciousness.
I actually agree with you. If you can accept that consciousness is basically a computer, then there isn't really anything mysterious going on. I probably could have phrased that better.
How does that measurement occur? Since all particles interact via gravitational and electromagnetic forces all the time, why aren't they always entangled?
I never see a definition of measurement, or observation.
No, there is not. The Everett aka many-worlds interpretation demonstrates that you can explain the observed effects of measurement without invoking "collapse" of the wave function, and without reference to anything outside of QM.
MWI is what happens when you don't require the existence of anything extra (a collapse phenomenon). It's like how the Doppler effect (the way sound emitters change pitch when the observer is moving) isn't an unexplained phenomenon: it results from the evolution of the observer's perspective. In MWI there is no event corresponding to collapse, for the same reason that Galileo's model doesn't have any correlate of epicycles.
In MWI the event corresponding to collapse is the supposed appearance of an entire universe, which is - conveniently - impossible to detect. And which "explains" any one timeline of experience as "Actually, that's still just random."
So MWI goes from "That's random and we don't know why" to "That's random, we don't know why, but now we've added a universe too, although we can't prove it exists."
Something about this doesn't seem entirely convincing.
> In MWI the event corresponding to collapse is the supposed appearance of an entire universe, which is - conveniently - impossible to detect.
My layman's understanding was that MWI doesn't postulate multiple universes. Rather, it posits that all particles are always in superpositions of states. What looks like a collapse to a single state is actually the particles under observation getting entangled with the particles that compose the observer: so, in the case of the double-slit experiment, you have a combined particle-observer system in a superposition of the states (particle goes through left slit / observer sees left detector activated) and (particle goes through right slit / observer sees right detector activated). Nothing has actually changed other than that entanglement: nothing is created or appears, certainly not an entire new universe.
ETA: like I say, I'm a layman, so I'd welcome any correction.
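For what it's worth, the combined state being described can be written as an entangled superposition:

$$\frac{1}{\sqrt{2}}\Bigl(|\text{left}\rangle_{\text{particle}}\,|\text{saw left}\rangle_{\text{observer}} + |\text{right}\rangle_{\text{particle}}\,|\text{saw right}\rangle_{\text{observer}}\Bigr),$$

with nothing created and no extra universe appearing; each branch of the observer just sees what looks like a collapse from the inside.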
Your description looks right to me. Note that the many-worlds interpretation was originally named the theory of the universal wave-function. You get there by trying to eliminate wave-function collapse and Heisenberg cuts.
And yet there is an intermediate version. Which is that we can have an "observer" that is a moderately complex QM system where further post-observations are possible though difficult. And we can conduct experiments along that line and observe the growing difficulty of future interaction as systems get complex.
Unsurprisingly, these experiments with "partial collapse" produce exactly the predicted results of QM. And are exactly in line with, "And if the system gets more complicated still, collapse becomes irreversible." (Where "more complicated still" is very simple compared to ordinary macroscopic objects.)
Last time I checked, the Everett interpretation has yet to reproduce the "Born Rule" equation for wavefunction collapse.
The Many-Worlds interpretation is nice and fun to talk about, but it hasn't done any heavy lifting in terms of reducing the number of postulates required to describe quantum mechanics.
Many Worlds still owes us an explanation of the apparently huge difference between this, the actual, world and all the other apparently not quite so actual ones, does it not?
The two most promising ideas (IMO) are the idea that the "wave function collapse" is a non-physical event describing a change in our knowledge, the quantum informatics approach as per Philip Ball's recent RI lecture,[1] and the "Relational" interpretation[2] which observes that a measurement always entangles the measuring system with the measured system, and is best treated as such - i.e. as an entanglement relationship; though it seems that this leads to an "entanglements all the way down" view in which these relationships are the ultimate physical reality, rather than the conventional picture of the relationships being secondary to the things which are so related.
The "measurement as entanglement" view is, I know, notably supported by at least one HN poster[3], who I hope will be on hand to correct my no-doubt erroneous rendition :)
> Many Worlds still owes us an explanation of the apparently huge difference between this, the actual, world and all the other apparently not quite so actual ones, does it not?
No.
For a start, there is no evidence that the ones you are calling "not quite so actual" are any less actual than the one we share an experience of.
Continuing on, the interpretation does not attempt to explain the phenomena of perceived consciousness. That is as much out of its remit as abiogenesis is out of the remit of Darwin's theory of evolution. Those problems are different in kind, much harder, and have to be addressed by very different theories.
Everett addressed a precise problem. That problem is explaining why, to the extent that QM describes an observer, the act of observation must turn that observer into a superposition of observers. Each of which has observed an apparent collapse, and none of which can interact further in any meaningful way.
> For a start, there is no evidence that the ones you are calling "not quite so actual" are any less actual than the one we share an experience of.
Well, there is evidence of the one I'm calling "actual" - if it's evidence you want, there's no evidence at all of the "not so actual" ones - citing the MW interpretation as evidence of these is purely circular.
Regarding not explaining perceived consciousness: if MW is shunting off questions of actuality into something to be resolved by a theory of consciousness, isn't it leaving behind centuries of consensus belief in the objective, observer-independent existence of a physical world ontologically prior to consciousness?
To further build on what you're saying, by bringing in the concept of a measure-er we have already entered the domain of philosophy, because the special class "measure-er" belongs to the Copenhagen interpretation and the interpretation of quantum mechanics is a philosophical issue by virtue of the fact that it will never be decided experimentally.
Philosophical issues mean that the theory is wrong and new physics will come from their resolution, and thus philosophical issues can be decided experimentally.
We may be using different definitions of the word philosophical. I'm classifying anything that can be decided experimentally as scientific. These interpretations are identical from an experimental standpoint.
It will surprise you to hear, just as it surprised me, that MWI and Copenhagen have been axiomatically identical from day one. I know what you are thinking, something like, "if the problem was closed we wouldn't be talking about it, so it must be open." Yeah, well...
There is a list of axioms of quantum mechanics, Copenhagen and MWI disagree only on how you get to them. (And also on their conceptual interpretation). That's why they're called interpretations of quantum mechanics, not theories of quantum mechanics.
The act of making a measurement/observation is nothing more than an interaction where the system is exchanging information, or in this case quantum states, which is described by quantum mechanics. I don't get where this "special/outside" observer is coming from.
Quantum mechanical experiments get replicated time and time again. There is nothing even resembling the reproducibility crisis in quantum mechanics. The issue discussed in the article is only about interpreting why the results always come out exactly like we predict, not any issues with the results or reporting of them.
It seems to me that the resolution of the quantum mysteries might come from three sources: (1) the Holographic Principle, (2) Dark Matter, and (3) the Planck Length.
1. The Holographic Principle. The universe looks three dimensional but fundamentally it is different. 3-D space is a projection from some 2-D circuit board. If I believe that, then I have no problem hearing about (a) hidden variables or (b) spooky action at a distance.
2. Dark Matter. About 95% of the universe is unaccounted for. Could it be that space is not a vacuum? Perhaps we are like fish in water, moving through it but taking it for granted. How can "something" have travelled through "nothing" (a vacuum) anyway? Maybe there's an aether after all.
3. The Planck Length. Space is not an infinitely smooth line; instead there is a fundamental bit size. We thought atoms were it; that's how they got their name ("atom" means uncuttable). Later we found inside them protons and neutrons, and within those, quarks. Someone therefore might imagine that we could go on subdividing forever. But the Planck length is a hard stop: about 1.6 × 10^-32 millimeters (1.6 × 10^-35 meters).
If we are immersed in invisible water, then it's unsurprising that there are "waves." And yet if there is a fundamental smallest size, like the grains of photographic film, then it's not all that weird to say that you can think of matter as particles. Perhaps that's all that a "particle" is, a piece of the water.
2) As far as I know, Dark Matter is totally unrelated to the quantum mysteries (uncertainty principle, measurement problem, ...)
3) In elementary particles, the magnetic moment is twice the expected value of a fake classical particle of the same mass and charge (for a completely accurate explanation see https://en.wikipedia.org/wiki/G-factor_(physics) ). For elementary particles the number is 2. For non-elementary particles, it can be any number.
For electrons/muons/quarks the number is almost 2, and the difference is caused by the cloud of virtual particles around the elementary particle. We can calculate this correction quite well. So there is a high chance that electrons/muons/quarks are actually elementary particles.
Anyway, I don't understand how the subdivision of the current elementary particles is related to the quantum mysteries. You get the same mysteries with electrons (which are probably elementary) and with protons (which are definitely not elementary).
You're correct, particle physics or QFT is not needed to appreciate the fundamental QM "mysteries", any non-trivial evolving measurable observable will do.
IMHO quantum mechanics is too simple and fundamental for such complicated explanations. It only claims that the probabilities of some elementary observations are determined by complex-number vectors which evolve unitarily.