It seems to me (and I don't have any formal qualifications to answer this question) that computer clock drift is on the order of mega- or gigahertz, while this is on the order of 50 Hz - 5-10 orders of magnitude slower.



The speed of the clock that is drifting is unrelated to the rate or the amount of the drift. An audio sample rate of 48kHz may be driven by a 12MHz clock, and that 12MHz clock may exhibit thermally induced drift between, let's say, 11.99MHz and 12.01MHz. That will result in a sample rate drift between 47960Hz and 48040Hz. A perfect 50Hz tone recorded in those conditions will vary between 49.9583Hz and 50.0417Hz.
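For concreteness, here is that arithmetic as a minimal sketch (the 12 MHz master clock and its ±0.01 MHz drift are the hypothetical values above):

    # Sketch of how a drifting master clock shifts both the effective
    # sample rate and the apparent frequency of a recorded tone.
    NOMINAL_CLOCK = 12_000_000.0  # Hz, master clock driving the ADC
    NOMINAL_RATE = 48_000.0       # Hz, nominal audio sample rate
    TONE = 50.0                   # Hz, mains hum being recorded

    for actual_clock in (11_990_000.0, 12_010_000.0):
        # The ADC divides the master clock by a fixed ratio, so the
        # real sample rate scales with the clock error.
        actual_rate = NOMINAL_RATE * actual_clock / NOMINAL_CLOCK
        # Playback assumes exactly 48 kHz, so the tone appears scaled
        # by nominal/actual.
        apparent_tone = TONE * NOMINAL_RATE / actual_rate
        print(f"clock {actual_clock / 1e6:.2f} MHz -> "
              f"rate {actual_rate:.0f} Hz, "
              f"50 Hz tone appears as {apparent_tone:.4f} Hz")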

In the case I gave before, I was trying to synchronize a 48kHz USB audio interface recording with a 48kHz/30fps DV tape recording. If I lined up the beginning of the recordings, the ends were off by ~500ms (IIRC), which for a 2min clip means a 0.4% deviation. However, if I adjusted the speed of one of the clips to align the ends, the middle would be off by 500ms, suggesting a fluctuating deviation as high as 0.8% (if my middle-of-the-night mental estimation is correct).
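For what it's worth, those deviation estimates work out like this (a quick sketch; the clip length and offsets are the rough from-memory figures above):

    clip_len = 120.0  # s, ~2 min clip
    offset = 0.5      # s, observed misalignment

    # Starts aligned, ends ~500 ms apart: average rate deviation.
    end_to_end = offset / clip_len        # ~0.42%

    # Ends pinned into alignment, middle off by ~500 ms: each half must
    # then deviate in opposite directions over half the clip length.
    mid_bulge = offset / (clip_len / 2)   # ~0.83%

    print(f"end-to-end deviation: {end_to_end:.2%}")
    print(f"peak deviation with ends pinned: {mid_bulge:.2%}")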

According to [0], the UK grid is allowed to vary between 49.5Hz and 50.5Hz, or ±1%. Watching the meters at [1], [2], and [3], it looks like deviations of 0.2% are common. Depending on how the mains deviations and recording-rate deviations vary over time, the mains deviation could be swamped by the 0.4% variation I observed in a real-world recording scenario. Thus, I am skeptical of the forensic utility of mains frequency analysis, and would need to see evidence that forensic analysts compensate for recording rate deviation, or arguments for why it's irrelevant, before I would change my mind.

[0] http://www.nationalgrid.com/uk/Electricity/Balancing/service...

[1] http://www.nationalgrid.com/uk/Electricity/Data/Realtime/Fre...

[2] http://www.dynamicdemand.co.uk/grid.htm

[3] http://www.mainsfrequency.com/
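For concreteness, mains frequency analysis amounts to tracking the hum's instantaneous frequency over time, something like the sketch below (assuming NumPy/SciPy and a mono recording; real forensic tools presumably use longer windows, bandpass prefiltering, and finer interpolation, so this is an illustration, not any particular tool's method):

    import numpy as np
    from scipy.signal import stft

    def enf_track(audio, fs, mains=50.0, band=1.0, win_s=8.0):
        # STFT, then pick the peak bin near the nominal mains
        # frequency in each frame.
        nper = int(fs * win_s)
        f, t, Z = stft(audio, fs=fs, nperseg=nper, noverlap=nper // 2)
        mask = (f >= mains - band) & (f <= mains + band)
        fb, mb = f[mask], np.abs(Z[mask, :])
        bin_width = f[1] - f[0]

        track = []
        for frame in mb.T:
            k = int(np.argmax(frame))
            delta = 0.0
            if 0 < k < len(frame) - 1:
                # Parabolic interpolation for sub-bin resolution:
                # 0.2% of 50 Hz is only 0.1 Hz, far below the raw
                # bin width.
                a, b, c = frame[k - 1], frame[k], frame[k + 1]
                denom = a - 2 * b + c
                if denom != 0:
                    delta = 0.5 * (a - c) / denom
            track.append(fb[k] + delta * bin_width)
        return t, np.array(track)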


I think you are missing how much the spectrum of the drift matters. Just knowing the range over which the clock varies (say 5%) isn't enough. If the drift is slow - which thermally induced drift usually is, because it's driven by daily heating/cooling cycles - then it's possible to correct for it in the recording. Analysis techniques which are based on frequency-domain variations would tend to reject this type of slow drift automatically, but the details depend on the technique used. If, on the other hand, the noise on the clock is fast jitter rather than slow drift, things become much more difficult. On typical consumer recording devices, the noise floor due to jitter is way below the noise floor due to the limited SNR of the microphones and amplifiers used. The jitter noise is non-linear, which makes things harder, but doesn't tend to be a limiting factor.
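A toy illustration of why slow drift is correctable (a sketch only, not any specific forensic method): fit and subtract a smooth trend from the tracked frequency, and the fast variations of interest survive while the drift vanishes.

    import numpy as np
    from numpy.polynomial import Polynomial

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 600.0, 601)             # 10 min track, 1 sample/s
    fast = 0.05 * rng.standard_normal(t.size)    # fast mains wobble (signal)
    slow = 0.2 * np.sin(2 * np.pi * t / 3600.0)  # thermal drift, ~1 h period
    track = 50.0 + fast + slow

    # A low-order polynomial absorbs the slow drift (and the 50 Hz
    # offset) but barely touches the fast component.
    trend = Polynomial.fit(t, track, deg=2)
    corrected = track - trend(t)

    rms = np.sqrt(np.mean((corrected - fast) ** 2))
    print(f"RMS error vs true fast component: {rms:.4f} Hz")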

It's been most of a decade since I worked on this stuff (in the context of radar and sonar), so that knowledge may be out of date. Still, I'd be surprised if clock (LO) jitter is the limiting factor in this type of analysis.


It would be interesting to compare your drift if the recordings were made on a purely electronic device rather than a DV tape.


> while this is on the order of 50 Hz

The baseline is 50 Hz, but how big are the actual variations? It's the magnitude of the variations that you need to compare against clock drift, not the baseline.


> clock drift is on the order of mega- or gigahertz

I think you mean frequency rather than drift. Drift is likely in single-digit percentages.



