I understand some of the subject, but I didn't understand the point this article is trying to make. There's a lot of the usual popsci about QM, and then this...
"In this case, the physicists hypothesised the act of measuring time in greater detail requires the possibility of increasing amounts of energy, in turn making measurements in the immediate neighbourhood of any time-keeping devices less precise.
"Our findings suggest that we need to re-examine our ideas about the nature of time when both quantum mechanics and general relativity are taken into account", says researcher Esteban Castro."
joe_the_user's link to the original paper is infinitely more useful than the article itself. The paper seems to imply something fundamental about how anything that we can use as a clock must function, then lays the usual assumptions from QM on top; that makes sense, although who knows if it's true?
I don't get the sense from the paper that it's something we could reasonably probe with current or near-future tech. Worse, their conclusions about whether or not spacetime intervals can be absolutely defined may rely on an arbitrary level of precision.
You might have heard of the Uncertainty Principle? Basically it says that ΔxΔp ≥ ℏ/2. Well the principle is more general (look up Fourier analysis) and it can be shown that ΔtΔE ≥ ℏ/2 as well.
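For a rough sense of scale, here is a minimal numerical sketch (my own illustration, not from the article or paper) of what that relation says: the better you pin down the time, the larger the minimum energy spread has to be.

    # Time-energy uncertainty: delta_E >= hbar / (2 * delta_t)
    hbar = 1.054571817e-34  # reduced Planck constant, J*s

    for delta_t in (1e-15, 1e-21, 5.4e-44):  # femtosecond, zeptosecond, ~Planck time
        delta_E_min = hbar / (2 * delta_t)   # minimum energy uncertainty, joules
        print(f"delta_t = {delta_t:.1e} s  ->  delta_E >= {delta_E_min:.1e} J")

Shrinking the time window pushes the energy spread up, which is where the gravitational side of the argument comes in.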
It is indeed, because "time" is not an operator in QM, unlike H or x or p. But from my limited understanding of the time-energy uncertainty relation, you could (as you say) consider it a measure of lifetime: the time it takes for a change in E. So if you get a higher energy, you should also get a smaller change in time, and thus you could probe smaller timeframes.
I might be wrong though. Someone shoot that argument down if you have to ;)
It's badly explained, but what the first paragraph is saying is basically:
"the act of measuring time in greater detail requires increasing amounts of energy" (increasing amounts of energy dilates time because general relativity) "therefore, time-keeping devices become less precise".
Time dilation implies "less precise measurements". If we assume a balance where more precise time measurements always require more energy, then there has to be a level of precision at which deploying more energy gains less precision than the resulting time dilation takes away; past that point, improving precision becomes impossible.
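To make that balance concrete, here's a toy sketch (my own illustrative assumptions, not the paper's actual model): suppose the quantum contribution to the timing error falls off like hbar/(2E) while the gravitational-dilation contribution at some assumed distance d grows linearly in E over the measurement duration T. The total error then has a minimum, past which pumping in more energy makes things worse.

    # Toy model of the precision-vs-energy balance (illustrative assumptions only)
    hbar = 1.054571817e-34  # J*s
    G = 6.674e-11           # m^3 kg^-1 s^-2
    c = 2.998e8             # m/s
    d = 1e-3                # assumed distance from the clock, m
    T = 1.0                 # assumed measurement duration, s

    def total_error(E):
        quantum  = hbar / (2 * E)           # resolution limit from delta_E * delta_t >= hbar/2
        dilation = G * E * T / (c**4 * d)   # crude gravitational time-dilation error over T
        return quantum + dilation

    energies = [10.0**k for k in range(-5, 20)]
    best = min(energies, key=total_error)
    print(f"smallest total error near E ~ {best:.0e} J: about {total_error(best):.1e} s")

The actual scalings in the paper differ, but the shape of the argument is the same: two error terms pulling in opposite directions as energy increases.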
It seems to me it is fairly analogous to how our perception of mass in relation to GR has been for a while now. It is impossible for a particle with mass to reach the speed of light because as it gets closer to the speed of light, it requires more and more energy to accelerate it further.
Why does increased time dilation imply decreased precision? If we can precisely measure the energy deployed, can't we also precisely estimate time dilation and account for it?
I see, and that at least does make a certain amount of sense. By putting timekeeping into the same category as any other measurement regime you would expect to see this kind of measurement difficulty emerging. What I'm not clear on though, is whether they're saying this is a fundamental uncertainty, or a measurement problem. The latter seems likely, the former a lot less likely.
I don't think it's implying that; at least, I'm not seeing it. The notion of spacetime being quantized, though, is very much not a proven thing, but it is very interesting. Things like QECCs popping up in physical processes make it even more interesting to some.
At any rate, I think this paper applies equally well whether spacetime is quantized or continuous.
I think it is saying that: if it is true that more precise measurements of time require more energy, then there is a limit to how precisely we can measure time, even in theory.
> Another way to think about the Planck length is that if you try to measure the position of an object to within in accuracy of the Planck length, it takes approximately enough energy to create a black hole whose Schwarzschild radius is… the Planck length!
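A quick back-of-the-envelope check of that claim (standard constants, order of magnitude only):

    # Localize to within a Planck length -> roughly a Planck energy's worth of
    # mass-energy -> a Schwarzschild radius of about the Planck length again.
    hbar = 1.054571817e-34  # J*s
    G = 6.674e-11           # m^3 kg^-1 s^-2
    c = 2.998e8             # m/s

    l_planck = (hbar * G / c**3) ** 0.5  # ~1.6e-35 m
    E = hbar * c / l_planck              # energy needed to probe that length scale
    M = E / c**2                         # equivalent mass (~Planck mass)
    r_s = 2 * G * M / c**2               # Schwarzschild radius of that mass

    print(f"Planck length:        {l_planck:.2e} m")
    print(f"Schwarzschild radius: {r_s:.2e} m  (= 2x the Planck length)")

So, up to a factor of 2, the quoted statement checks out.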
I wonder if this could be used to test various quantum gravity models. Apparently this could involve a lot of energy - another giant machine to build, oh boy.
There are a lot of comments here, some by people more knowledgeable than I. Let me just say that, by my understanding, just as there's a quantum uncertainty principle linking position and momentum, so there is a similar (or equivalent) quantum uncertainty principle linking energy with time[0]. The more accurately you know when something happens, the less certain you can be about how much energy is involved, and vice versa.
This is why the quantum vacuum is filled with particles, and why the Casimir effect[1] works.
> This is why the quantum vacuum is filled with particles
The quantum vacuum is defined as the no-particle state.
The particle number operator works by matching annihilation and creation operators. Those operators are defined with respect to particular coordinates.
When annihilation and creation operators match up there is no particle.
For inertial observers in flat spacetime, a Lorentz transform relates the annihilation and creation operators for any pair of observers no matter what their relative velocities or which way they face relative to one another.
A Bogoliubov transformation can relate annihilation and creation operators for a wider range of observers in flat spacetime, including those who are accelerated. However, a natural choice of coordinates for extremely accelerated observers relates poorly to a natural choice of coordinates for inertial observers (for example, in each case using polar coordinates with the origin always at the respective observer), with the result that there is a disagreement about where in a set of shared coordinates one finds the frequency modes which relate to annihilation and creation operators.
As a result, in flat spacetime, the no-particle vacuum of an (inertial) Minkowski observer looks to an accelerated observer as if there are particle creations not matched by particle annihilations, and thus the particle number goes up. Conversely, the no-particle vacuum of an (accelerated) Rindler observer looks to all Minkowski observers as having particles.
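Schematically, in the standard textbook notation (a sketch for reference, not anything specific to this comment): the accelerated observer's mode operators b_k are built from the inertial observer's a_k and a_k^\dagger via a Bogoliubov transformation, and the particle number the accelerated observer expects to count in the Minkowski vacuum |0_M\rangle is set by the mixing coefficient:

    b_k = \alpha_k a_k + \beta_k a_k^\dagger
    \langle 0_M | b_k^\dagger b_k | 0_M \rangle = |\beta_k|^2

Whenever \beta_k is nonzero, the two observers disagree about the particle count.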
Since Special Relativity does not privilege frames (even though its focus is on relating inertial observers' observations), neither type of observer is really "more correct".
However, since the acceleration required to see a detectable number of particles is extreme (you need to maintain 10^20 gees of acceleration to see a thermal bath of 1 kelvin) and since accelerations do not last indefinitely, it is fair to pseudo-privilege the inertial observers, and conclude that (a) the Rindler observer is counting annihilation operators sourced by whatever is locally powering the uniform acceleration and (b) the Minkowski view, that the accelerating observer is therefore emitting a thermal bath into the Minkowski vacuum, is more correct.
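For the scale involved, the Unruh temperature T = \hbar a / (2 \pi c k_B) can be inverted for the acceleration needed to see a 1 kelvin bath (a quick numerical check of my own, in the same ballpark as the figure quoted above):

    # Acceleration for a 1 K Unruh thermal bath: a = 2*pi*c*k_B*T / hbar
    import math

    hbar = 1.054571817e-34  # J*s
    c = 2.998e8             # m/s
    k_B = 1.380649e-23      # J/K
    g = 9.81                # m/s^2

    T = 1.0                               # kelvin
    a = 2 * math.pi * c * k_B * T / hbar  # required proper acceleration
    print(f"a ~ {a:.1e} m/s^2, i.e. roughly {a / g:.0e} gees")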
Additionally, in SR one can tell an extreme acceleration from mere inertial (even ultrarelativistic) movement, using an accelerometer.
In general curved spacetime, one expects observers not moving on a geodesic (one hovering at a fixed height above the surface of a planet, for instance) to disagree about the particle numbers in the local area compared to an observer freely falling on a geodesic as that observer passes infinitesimally close to the non-geodesic (accelerated) observer, when the particle numbers are defined using annihilation and creation operators set against "natural" coordinate choices for these observers, with those choices related through a Bogoliubov transformation.
Two mutually accelerating observers in flat spacetime or two observers in general curved spacetime may not be able to find a shared coordinate system, and when that is the case, no one can really say which of the two is "more correct" about her or his no-particle vacuum.
Consequently, the quantum vacuum may have particles in it, but only for a peculiar choice of quantum vacuum (i.e. you look at someone else's claimed no-particle state and see particles in it).
When looking at one's own no-particle state, one cannot determine the vacuum energy. One can conclude that at each coordinate point there is the same energy, but not the exact energy -- this ground state / minimum energy can be arbitrarily high.
However, the uncertainty inequality \Delta E \Delta t \geq h/(4\pi) (i.e. ℏ/2) applies, and one can treat this as momentarily separating annihilation and creation operators. One can treat this formally (e.g. in Feynman diagrams of this process) as the production of virtual particle/anti-particle pairs, or alternatively as a realization of the non-commutation of the field's energy operator with the particle number operator, and thus the no-particle state is a superposition of particle-number eigenstates.
Trying to turn these superposed states into real particles is taken more or less seriously depending on the writer. I'm fairly confident that many physical cosmologists are fine with the idea when considering the possibility that the early universe is a fluctuation of a high-energy no-particle state into a still-high-energy many-particle state. The problem is in suppressing these fluctuations in later regions of space that are free of particles (from our perspective in our solar system now). Physicists in other areas like to poke many holes in these ideas.
However, in both cases, "filled with particles" invites horror-show ideas of larger scale structures fluctuating into existence and causing problems. This doesn't happen as far as we can tell, so we should preserve the no-particles idea of the vacuum conceptually.
> the Casimir effect
does not really rely on the particle count operator. One can cast it as the (idealized) plates screening out longer positive- and negative-frequency modes, which, when combined linearly and treated as creation and annihilation operators, simply produce more of these on the outsides of the plates. The "more particles" outside the plates push them together through the "no particles" vacuum between them.
Mathematically this is OK, but conceptually it has problems.
Decades ago a quantum electrodynamics approach was taken to the Casimir effect, showing that it arises from the relativistic retarded intermolecular forces within the plates themselves. This view fully reproduces the "vacuum fluctuation" observables for idealized plates and generalizes to non-idealized plates in a way that the fluctuation view does not (yet, as far as I know). There have been updates on this view of the Casimir force since; it holds up well, and it has the advantage of not needing any particular value for the "zero point energy" (which is just the vacuum energy as I described a few paragraphs above, and which ought to be unprobe-able by observers similar to us -- a fairly serious conflict with the vacuum fluctuation idea taken to extremes).
This is elucidated with citations under "Relativistic van der Waals force" and under "Generalities" in your [1].
Taking virtual particles too seriously can be misleading, even if it works out mathematically.
All this is saying is that using extremely high-energy lasers or some other high-energy mechanism to keep coherence is necessarily going to cause a local time dilation, due to the high density of energy.
Highly dense energy/matter warps the space around it.
It's not like we can't take this into account in our measuring devices, though. If they are measuring accurately and extremely often, any error will accumulate to the point that even a normal clock is more accurate. Therefore it's not hard to tune within reason. Error needs to approach 0 as the number of measurements per second approaches infinity.
They seem to be saying that any measurement regime, in achieving the necessary accuracy, will cause arbitrarily large measurement uncertainties. The authors, at least, seem to think that this is fundamental, but as you say, it might just be a limitation of our current notion of a highly accurate clock.
It might not matter though, if this is another in a long line of essentially non-falsifiable ideas.
Let's pretend that I've never taken a single physics class at all, ever.
Who's to say that the inaccuracies are merely caused by fluctuations in spacetime? If you tune a radio to a frequency on which there is no known transmitter, you hear background radiation. Is it not the same for spacetime? As you measure time (space) more precisely (more instantaneously) and more often, your measurements will fall more often on various boundaries of the background radiation of gravity (which, we all know, fluctuates spacetime). At that point, is the error rate you see not also the rate of background radiation? If you could measure or predict the velocity of multiple background spacetime radiation sources, could you reduce the error? Thus, the error rate of measuring time is the rate of change of gravity.
Of course we built gravitational wave detectors, so that nullifies that whole thought. Right? Or does it? The collision and merging of celestial bodies is arguably one of the biggest events in the universe. But if spacetime or gravity (are they the same thing? I don't know) 'bounces' back like a wave crashing against a wall, then surely there are fluctuations in spacetime due to past events. Wouldn't that intrinsically cause minute "errors" in measurements?
What keywords should I search to find (hopefully free) online resources to answer my questions?
My physics is pretty old, but I'll try: skimming the actual article (1), it looks like they're isolating time in the Heisenberg uncertainty principle (putting all the Δ on the other side of the inequality) and then following those Δ's in the other variables, which is a common use of the uncertainty principle. They imagine some displacement, then insert the result back into the uncertainty principle to see how that Δ would propagate back onto time.
I was sort of excited about this because it seemed like they might be suggesting there was actually a window here to explore a quantum theory of gravity. Alas, they say "Although the methods presented here suffice to describe the entanglement of clocks arising from gravitational interaction, a full description of the physics with no background space–time would require a fundamental quantum theory of gravity".
To your specific question "Who's to say that the inaccuracies are merely caused by fluctuations in spacetime?" I don't think they're claiming they found a theoretical cause to explain some observed fluctuations. In fact, they put in some notional values to see what comes out and observe "this effect is not large enough to be measured with the current experimental capabilities".
Now, I think what you're getting at is the idea that you would expect the ticking of clocks to vary over spacetime. What they're saying is, "Of course, but we believe the problem is even more fiendish: the tick rate not only varies but accumulates uncertainty, to the point that a system which starts out as a clock can no longer be considered a clock at all. Further, [what I believe they are saying] the clock's coherence decays in a way that potentially varies with path and frame of reference." At which point, one starts wondering: why haven't we all flown apart already?
And they're doing this, again, all from a theoretical standpoint. It's the same sort of thought experiment you might get if you modify Euclid's fifth axiom: all sorts of weird stuff is possible, and we don't really need experiments to show the math, although there are experiments that can be done.
It seems that this could be generalized to "as measurement gets more precise, what is being measured appears more fuzzy". This begins to define a sort of law and a sort of epistemological "wall" we have been bumping against for the past century or so.
"In this case, the physicists hypothesised the act of measuring time in greater detail requires the possibility of increasing amounts of energy, in turn making measurements in the immediate neighbourhood of any time-keeping devices less precise.
"Our findings suggest that we need to re-examine our ideas about the nature of time when both quantum mechanics and general relativity are taken into account", says researcher Esteban Castro."
joe_the_user's link to the original paper is infinitely more useful than the article itself. The paper seems to imply something fundamental about how anything that we can use as a lock must function, then lays the usual assumptions from QM on top; that makes sense, although who knows if it's true?
I don't get the sense from the paper that it's something we could reasonably probe with current or near-future tech. Worse, their conclusions about whether or not spacetime intervals can be absolutely defined may rely on an arbitrary level of precision.