
Eh, my Linux server is off by 2s, and that's supposed to be server-grade hardware.

The reality might simply be that being off by a second doesn’t matter so much, as long as it’s gradual.

For example, Google chose to “smear” a leap second by prolonging each second by a tiny bit, and everything was fine.

And finally, to play devil's advocate, clients being off by ± a few seconds might actually be a good thing. If everyone were aligned to the millisecond 99.99% of the time, a lot of timing bugs and inconsistencies wouldn't be surfaced except in the field.
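Roughly, a linear smear spreads the extra leap second evenly across a fixed window, so no individual second visibly jumps. A minimal sketch, assuming a hypothetical 24-hour window (the function name and window length are illustrative; Google's production smear has used different window sizes):

```python
def smear_offset(seconds_into_window: float, window: float = 86400.0) -> float:
    """Offset (in seconds) accumulated so far by a linear leap smear.

    Each smeared second is lengthened by 1/window, so after the full
    window exactly one extra second has been absorbed.
    """
    return seconds_into_window / window

# Halfway through the window the smeared clock lags true time by 0.5s;
# by the end of the window the entire leap second has been absorbed.
print(smear_offset(43200.0))  # halfway through a 24-hour window
print(smear_offset(86400.0))  # end of the window
```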



If you're running some form of ntp, the underlying hardware doesn't matter that much. An offset greater than 0.5s is probably caused by ntp not running, time jumps during hibernation, automatic VM live migrations, or other virtualization issues. I'm not sure how bad a 0.5s time jump really is, but in the past those didn't happen on servers, so there are old ntp configs that try to correct such an offset gradually over days. That's probably a mistake: stepping the clock to instantly correct a large offset should have no worse effects than the time jump that caused it.
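For what it's worth, chrony makes this behavior explicit with its `makestep` directive. A sketch of the relevant config (thresholds here are just examples):

```conf
# /etc/chrony/chrony.conf (illustrative fragment)
# Step the clock immediately if the offset exceeds 1 second,
# but only during the first 3 clock updates after startup;
# after that, correct gradually by slewing.
makestep 1 3
```

With `makestep 1 -1` it would step on a large offset at any time, which is the "instant correction" behavior described above.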


I once forgot to install NTP on a Debian server (it is not installed by default). The skew reached 31s after 2 years and invalidated all my JWT tokens.
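That works out to a fairly modest drift rate, which is easy to back out. A quick sketch (the function name is mine; 2 years taken as 2 × 365.25 days):

```python
def drift_ppm(skew_seconds: float, elapsed_seconds: float) -> float:
    """Average clock drift rate in parts per million."""
    return skew_seconds / elapsed_seconds * 1e6

two_years = 2 * 365.25 * 86400  # seconds
rate = drift_ppm(31, two_years)
print(f"{rate:.2f} ppm")  # roughly half a ppm, well within spec
                          # for an ordinary crystal oscillator
```

Validators that allow a little leeway on `exp`/`nbf` claims (many JWT libraries expose such an option) would have tolerated a few seconds of this, but not 31.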


Am I correct in inferring that you're saying the system time on your machine has lost 2 seconds since you last synchronised it with time.gov?

Over what time period did the slippage occur?




