
> ... vertical shift of a few centimeters could be measured

In what amount of time? Not instantly, right?



In a 2010 experiment based on an older version of this clock[0], NIST succeeded in measuring the gravitational time dilation across a 33 cm vertical separation—a fractional frequency difference of 4.1×10^-17—with about 140,000 seconds of integration time (<2 days). I don't really understand how that worked.

[0] https://sci-hub.se/https://doi.org/10.1126/science.1192720 ("Optical Clocks and Relativity" (2010))


The fractional time dilation from general relativity is approximately gh/c^2 (~1e-18 for a few centimeters), which is an order of magnitude bigger than the fractional uncertainty of your clock frequency (~1e-19).
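
Plugging numbers into that formula (a rough sketch in Python; g and c are just textbook values):

    g = 9.81          # m/s^2
    c = 2.998e8       # m/s

    def fractional_shift(height_m):
        # gravitational time dilation between clocks separated vertically by height_m
        return g * height_m / c**2

    print(fractional_shift(0.02))   # ~2.2e-18 for a 2 cm lift
    print(fractional_shift(0.33))   # ~3.6e-17 for the 33 cm separation mentioned upthread

The 33 cm value lands in the same ballpark as the 4.1e-17 figure quoted above, within its error bar.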

But you would need a more precise characterization of the clock to answer this.

There might be significant noise on individual measurements, meaning that you need to take multiples to get precise enough (see https://en.wikipedia.org/wiki/Allan_variance).
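
For intuition on the averaging, here is a minimal non-overlapping Allan deviation sketch (toy numbers, not the clock's actual noise model). For white frequency noise the deviation falls off as 1/sqrt(averaging time), which is why longer runs buy more resolution:

    import numpy as np

    def allan_deviation(y, m):
        # non-overlapping Allan deviation of fractional-frequency samples y,
        # at an averaging time of m samples
        n = len(y) // m
        yb = y[:n * m].reshape(n, m).mean(axis=1)      # block averages
        return np.sqrt(0.5 * np.mean(np.diff(yb) ** 2))

    rng = np.random.default_rng(0)
    y = 1e-15 * rng.standard_normal(100_000)           # pretend white frequency noise
    for m in (1, 10, 100, 1000):
        print(m, allan_deviation(y, m))                # roughly 1e-15 / sqrt(m)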

Edit: If you just have clock output in ticks, you also need enough time to elapse to get a deviation of at least one tick between the two clocks you are comparing. This is a big limitation, because at a clock rate of 1 GHz you are still waiting for something like 30 years (!!). (In practice you could probably cheat a bit to get around this limit.)
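
The arithmetic behind that 30-year figure, assuming a 1e-18 fractional rate difference and counting whole 1 GHz ticks:

    rate_difference = 1e-18          # assumed fractional offset between the clocks
    tick_rate = 1e9                  # Hz, hypothetical 1 GHz output

    seconds_per_extra_tick = 1.0 / (rate_difference * tick_rate)
    print(seconds_per_extra_tick / (365.25 * 24 * 3600))   # ~31.7 years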


These are optical clocks, so their rates are in the hundreds of THz to PHz range (1.121 PHz in the case of the Al+ clock). A 1e-18 shift corresponds to a ~mHz frequency deviation which a decent frequency counter can resolve in around 1s. However, no optical clock is actually that stable at short time scales, and the averaging is required to get rid of these fluctuations.
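
For concreteness (same numbers as above):

    f_clock = 1.121e15        # Hz, Al+ optical transition (~1.121 PHz)
    shift = 1e-18             # fractional shift to resolve
    print(f_clock * shift)    # ~1.1e-3 Hz, i.e. about a millihertz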


>Edit: If you just have clock output in ticks, you also need enough time to elapse to get a deviation of at least one tick between the two clocks you are comparing. This is a big limitation, because at a clock rate of 1 GHz you are still waiting for something like 30 years (!!). (In practice you could probably cheat a bit to get around this limit.)

In practice, with this level of precision, you are usually measuring the relative phase of the two clocks, which allows substantially greater resolution than just counting whole cycles. That is 'cheating' to some degree, I guess. (The limit is usually how noisy your phase measurement is.)

(To give some intuition, imagine comparing two pendulum clocks. I think you can probably see how, if you take a series of pictures of the pendulums next to each other, you could gauge whether one of them is running fast relative to the other, and by how much, without one completing a full swing more than the other.)
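
A toy version of that picture-taking in Python (made-up frequencies and noise), showing the rate difference is recoverable from phase snapshots long before a whole extra cycle has accumulated:

    import numpy as np

    f_ref, f_fast = 1.0, 1.0 + 1e-4      # Hz, two slightly mismatched "pendulums"
    t = np.arange(0.0, 100.0, 1.0)       # one snapshot per second for 100 s

    rng = np.random.default_rng(1)
    phase = (f_fast - f_ref) * t + 1e-3 * rng.standard_normal(t.size)  # cycles, with noise

    slope = np.polyfit(t, phase, 1)[0]
    print(slope)   # ~1e-4 cycles/s, although only ~0.01 of an extra cycle has built up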


From the article:

    This improves the clock’s stability, reducing the time required to measure down to the 19th decimal place from three weeks to a day and a half.
So no, not instantly.


https://en.wikipedia.org/wiki/Allan_variance

It takes a longer measurement to be more confident.


Instantly, more or less. Time immediately runs differently at altitude because you are at a higher gravitational potential. The time dilation effect would be noticeable after one (or at most a few) ticks of the clocks.


I'm very skeptical of this claim. While the physical effect of time dilation acts immediately, I expect it would take many many ticks of both clocks before the rate difference between them became resolvable.


Yes, and no. The time-dilation effect will happen instantly, but the more quickly you want to observe it, the better your measurement's S/N ratio will have to be... and that, in turn, requires narrow measurement bandwidths that imply longer observation times.

So then the question has to be asked, does the effect really happen instantly? Or do the same mechanisms that impose an inverse relationship between bandwidth and SNR mean that, in fact, it doesn't happen instantly at all?


I don't understand. Wouldn't it only be possible to find out by comparing two identical clocks that were at different altitudes for some larger number of ticks, allowing you to then compare the elapsed ticks? How would you conduct such an experiment? My mental model is that I have a black box that outputs an electrical signal every tick, and then maybe we could just figure out which clock ticked first with a simple circuit. But it seems like we would need to sync them, and that the whole approach is fundamentally flawed because the information about each tick is itself subject to the speed of light. I don't know much beyond high school physics, fwiw.


My comment here might give some intuition for it: https://news.ycombinator.com/item?id=44576004 . You do need to measure for some time, because the measurement of the clocks with respect to each other is noisy, but you don't need to wait for there to be a whole 'tick' of extra time between them.


According to ChatGPT, the speedup factor for getting 10 cm higher is 1 + 1.09e-17 (using Δf/f = gh/c^2; the math seems to check out, but I'm not sure the formula itself is correct). Surely, if the clock's smallest tick is 1e-19 of a second, i.e. one tick is a hundred times smaller than the dilation accumulated in a second, the clock would still need at least a hundredth of a second for the tick counts to differ by even one tick because of the dilation.
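
Checking that arithmetic (treating the 1e-19 figure as the smallest resolvable fraction of a second, which is itself a simplification):

    g, c = 9.81, 2.998e8
    h = 0.10                         # m, the 10 cm height change
    rate_diff = g * h / c**2
    print(rate_diff)                 # ~1.09e-17, matching the figure above

    tick = 1e-19                     # assumed resolution
    print(tick / rate_diff)          # ~0.009 s until the accumulated offset reaches one "tick"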


The frequency that is actually counted with a digital counter in this clock is only 500 MHz (i.e. after a frequency divider, because no counter can be used at the hundreds of THz of an optical signal).

Nevertheless, in order to measure a frequency difference between two optical clocks you do not need to count their signals. The optical signals can be mixed in a non-linear optical medium, which will provide a signal whose frequency is equal to the difference between the input frequencies.

That signal might have a frequency no greater than 1 GHz, so it might be easy to count with a digital counter.

Of course, the smaller the frequency difference is, the longer the counting time must be to get enough significant digits.

The laser used in this clock has a frequency around 200 THz (like for optical fiber lasers), i.e. about 2E14 Hz. This choice of frequency allows the use of standard optical fibers to compare the frequencies of different optical clocks, even when they are located at great distances.

Mixing the light beams of two such lasers, in the case of a 1E-17 fractional frequency difference, would give a difference signal with a period of many minutes, which might need to be counted for several days to give acceptable precision. The time can be reduced by a small factor by selecting some harmonic, but it would still be on the order of days.
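
Roughly, with the numbers above (the 1000-period target is just an illustrative choice for about three significant digits):

    f_laser = 2e14                  # Hz, ~200 THz
    delta = 1e-17                   # fractional frequency difference

    beat = f_laser * delta          # 2e-3 Hz beat note after mixing
    print(1 / beat / 60)            # ~8.3 minutes per beat period
    print(1000 / beat / 86400)      # ~5.8 days to count ~1000 periods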


To make this even clearer:

Let's imagine that there is a huge amount of time dilation (we live on the surface of a neutron star or something). By climbing a bit, we experience 1.1 seconds for every 1.0 second experienced by someone who stayed down.

We have a clock that can measure milliseconds as the smallest tick. But climbing up, coming back down, and comparing the number of ticks won't let us conclude anything after a single millisecond. At a minimum, we must spend at least 11 milliseconds up to get a noticeable 11-to-10-millisecond difference.

Now, if the dilation was 1.01 seconds vs 1.00, we would need to spend at least 101 milliseconds up, to get a minimal comparison between 101 and 100 milliseconds.
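
The same arithmetic as a tiny helper (hypothetical numbers from the thought experiment):

    def time_up_for_one_tick(rate_up, tick_ms=1.0):
        # milliseconds spent "up" before the up clock leads the down clock
        # by one full tick; the down clock reads time_up / rate_up
        return rate_up * tick_ms / (rate_up - 1.0)

    print(time_up_for_one_tick(1.1))    # ~11 ms up vs 10 ms down
    print(time_up_for_one_tick(1.01))   # ~101 ms up vs 100 ms down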


Thinking in terms of 'ticks' over-discretises it. In practice you can measure frequency to much more precision than any discrete cycle time in the system you're using, because you can usually measure phase: you're not just seeing some on-off flash, you're seeing a gradual pulse or wave, and how accurately you can measure that pulse (in terms of SNR) is what sets your integration time for a given precision, not how rapidly you're measuring it.


> Let's imagine that there is a huge amount of time dilation (we live on the surface of a neutron star or something).

That idea is the premise of https://en.wikipedia.org/wiki/Incandescence_(novel)



