
Re: NIST time



John Hasler wrote:

> > Atomic clock accuracy is really not an issue on the internet.
>
> Those who run stratum one timeservers still like them, though.
>
> > I get fluctuation in reported return time of about one second from
> > California.
>
> Which is why programs such as chronyd and ntpd go to great lengths to
> measure and compensate for propagation delays.  Perhaps you would find the
> NTP docs illuminating.
>
> > Therefore, debian hackers should not waste their time trying to solve
> > problems that they imagine such a person might have.
>
> What are you talking about?
> --
> John Hasler
> john@dhh.gt.org (John Hasler)
> Dancing Horse Hill
> Elmwood, WI
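
The propagation-delay compensation John mentions works roughly like this.
Here is a simplified sketch of the standard NTP on-wire calculation (the
timestamps below are invented for illustration):

    # t1 = client send, t2 = server receive, t3 = server send,
    # t4 = client receive (t1, t4 on the client's clock; t2, t3 on the server's)
    def ntp_offset_and_delay(t1, t2, t3, t4):
        delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
        offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated clock offset
        return offset, delay

    # Server clock 0.5 s ahead, 80 ms out, 120 ms back, 10 ms server turnaround:
    # the estimate comes out ~0.48 s rather than 0.5 s, because the formula
    # assumes a symmetric path -- part of why internet timing is hard.
    print(ntp_offset_and_delay(t1=100.000, t2=100.580, t3=100.590, t4=100.210))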

The current standard precision clock is NIST-F1. It is stable to about one
part in 10**15 (Fortran-style exponential notation). Time on the internet
is quantized, with a quantum size set by the CPU clock period of the
computers you are using to look at the internet and by the clock period of
your internet transport layer. This quantum on internet time is at least
10**-10 sec. To determine the rate of a clock you must make two
observations separated in time by a period long enough that the measurement
error in each observation does not swamp the result. So even if you could
get everything just so on a 10 GHz internet, you would need two
observations about 10**-10 / 10**-15 = 10**5 seconds apart, and 10**5 sec.
is slightly more than a day. All the other complications of the internet,
such as delays in packet forwarding, can only add to the error and make the
required baseline longer than this crude estimate.
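
As a rough sanity check of that arithmetic, a small Python sketch (the
numbers are the assumed values from this post, not measurements):

    # Assumed values: NIST-F1 fractional stability and an optimistic
    # "10 GHz internet" timestamp quantum.
    frequency_stability = 1e-15   # one part in 10**15
    timestamp_quantum = 1e-10     # seconds

    # Each observation is uncertain by about one quantum, so a clock rate
    # inferred from two observations T seconds apart is uncertain by
    # roughly quantum / T.  Requiring quantum / T <= stability gives:
    baseline = timestamp_quantum / frequency_stability

    print(baseline)           # ~1e+05 seconds
    print(baseline / 86400)   # ~1.16 days, i.e. slightly more than a day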

This isn't anywhere near the whole story, but it should be enough to
indicate the difficulties of precision measurement of time. It is both an
interesting topic and a quagmire.

Paul





