1. amluto No.33699625
I’m wondering whether the extremely careful GNSS part is really needed. A microsecond of offset between two servers in the same datacenter could easily matter, but I suspect that if an entire datacenter were off by a microsecond, everything would be fine: communicating from that datacenter to anywhere else takes well over a microsecond (light in fiber covers only about 200 m per microsecond), so an offset of this kind would be a bit like the datacenter wiggling around in space a bit.

On a different note, there’s an Intel feature called the Always Running Timer (ART). In theory one ought to be able to determine the TSC <-> NIC clock offset directly using it. I’m not sure anyone has gotten this to work, though.
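
For what it’s worth, Linux does expose a path for this: PHC drivers that implement cross-timestamping (Intel’s e1000e driver for the I219, which uses the ART, is one example) serve it through the PTP_SYS_OFFSET_PRECISE ioctl. A minimal sketch, assuming /dev/ptp0 is the NIC’s PTP clock; the ioctl fails with EOPNOTSUPP when the driver has no cross-timestamp support:

    /* Minimal sketch: read one hardware cross-timestamp pairing the
     * NIC's PHC with the system clock. On NICs whose driver implements
     * getcrosststamp via the ART, both timestamps come from a single
     * hardware capture rather than two separate software reads.
     * Assumption: /dev/ptp0 is the NIC's PTP clock. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/ptp_clock.h>

    int main(void)
    {
        int fd = open("/dev/ptp0", O_RDONLY);
        if (fd < 0) { perror("open /dev/ptp0"); return 1; }

        struct ptp_sys_offset_precise xts;
        memset(&xts, 0, sizeof(xts));
        if (ioctl(fd, PTP_SYS_OFFSET_PRECISE, &xts) < 0) {
            perror("PTP_SYS_OFFSET_PRECISE"); /* EOPNOTSUPP: no ART path */
            close(fd);
            return 1;
        }

        long long phc_ns = xts.device.sec * 1000000000LL + xts.device.nsec;
        long long sys_ns = xts.sys_realtime.sec * 1000000000LL
                         + xts.sys_realtime.nsec;
        printf("PHC - CLOCK_REALTIME: %lld ns\n", phc_ns - sys_ns);

        close(fd);
        return 0;
    }

(linuxptp’s phc2sys tries this ioctl first and falls back to the older sampling-based PTP_SYS_OFFSET when it’s absent.)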

replies(2): >>33699672 #>>33700902 #
2. walrus01 No.33699672
Having rooftop-mounted GNSS receive antennas for GPS+GLONASS is extremely common in telecom and ISP infrastructure. It's a belt-and-suspenders approach: you obtain time from low-stratum NTP sources and also keep a local GNSS timing source to reference against.

It also covers the case where the network has no connectivity at all to internet-based NTP sources (maybe because your management network doesn't talk to the internet, for many good reasons): if you lose transport connectivity to your own somewhere-in-the-region NTP servers, you still want to be absolutely sure the system clocks on your local radio transport equipment, DWDM equipment, and metro Ethernet gear stay precise.
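
As a concrete illustration, here's a minimal chrony sketch of that arrangement: a gpsd-fed GNSS refclock (coarse NMEA time over SHM segment 0, precise edges from a PPS line) alongside ordinary NTP servers, so either side can carry the clock if the other goes away. Device paths and hostnames are placeholders:

    # Coarse NMEA time from gpsd via shared memory segment 0; used only
    # to number the PPS seconds, never selected as a source itself.
    refclock SHM 0 refid NMEA noselect offset 0.2

    # The PPS line carries the precise second boundary.
    refclock PPS /dev/pps0 lock NMEA refid GNSS

    # Network NTP as the belt to the GNSS suspenders (placeholder names).
    server ntp1.mgmt.example.net iburst
    server ntp2.mgmt.example.net iburst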

Using receive-only GPS data as a master time reference source is effectively saying "we think that getting the time as it's set by the USAF/Space Force people at Schriever AFB, who run the GPS system, should be treated as our master point of reference and all other clocks should derive from that". It's not a bad policy, as such things go, because GPS maintaining extremely accurate time is of great interest to the US federal government as a whole.

Even a fairly small LTE cellular site (rooftop, monopole, or tower) often has a similar receiver. It doesn't add a lot of cost.

replies(1): >>33700793 #
3. amluto No.33700793
Meta seems to have gone with the expensive option. I think it’s this:

https://www.mouser.com/ProductDetail/HUBER%2bSUHNER/Direct-G...

Admittedly, at their scale, this is peanuts. But I wouldn’t buy one of these for a scrappy startup :). SparkFun will sell a perfectly serviceable kit for a few hundred dollars.

(If you are worried about lightning, then GPS-over-fiber (GPSoF) looks like cheap insurance.)

replies(1): >>33716216 #
4. error503 No.33700902
Accurate delay compensation is necessary to enable redundancy. If you need multiple GrandMasters at different locations in the facility, using independent RFoF systems and antennas, they will have different GNSS feed delays, and that difference will propagate down into uncertainty on the hosts. There are other ways to eliminate this uncertainty than characterizing the full delay, but if the racks are on opposite ends of a giant data centre, those might be as difficult as, or more difficult than, just going through those motions.
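
To put a rough number on it, a back-of-the-envelope sketch; the ~4.9 ns/m figure is the usual one for light in glass, and the cable lengths are invented:

    /* Sketch: skew between two GrandMasters whose antenna feeds have
     * different, uncompensated delays. Assumes ~4.9 ns/m propagation
     * in optical fiber (group index ~1.47); lengths are hypothetical. */
    #include <stdio.h>

    #define FIBER_NS_PER_M 4.9

    int main(void)
    {
        double feed_a_m = 40.0;   /* hypothetical antenna run, GM A */
        double feed_b_m = 240.0;  /* hypothetical antenna run, GM B */

        double skew_ns = (feed_b_m - feed_a_m) * FIBER_NS_PER_M;
        printf("Uncompensated GM-to-GM skew: %.0f ns\n", skew_ns);
        /* ~980 ns: a host failing over from GM A to GM B would see a
         * step of nearly a microsecond unless each feed's delay is
         * measured and subtracted. */
        return 0;
    }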

If you just have one GM, then sure, the delay means you will have a larger fixed offset from TAI/UTC, but that won't be consequential, and you'll still get the benefits of a tightly synchronized monotonic clock. Until that GM fails, and it all goes haywire.

replies(1): >>33716196 #
5. bradknowles No.33716196
It's a hard problem to solve. You end up doing something like the NIST TMAS service (see https://www.nist.gov/programs-projects/time-measurement-and-...), using differential common-view measurements to create a "Multi-Source Common-View Disciplined Clock".
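
For readers unfamiliar with the trick: in common-view time transfer, two sites measure their local clock against the same satellite at the same moment, and differencing the two measurements cancels the satellite's own clock error. A toy sketch of the arithmetic (all values invented):

    /* Toy sketch of common-view time transfer. Both sites record
     * (local clock - satellite time) for the same satellite at the
     * same epoch; subtracting the records cancels the satellite's
     * clock error entirely. All numbers here are invented. */
    #include <stdio.h>

    int main(void)
    {
        double sat_err_ns = 37.0;  /* unknown satellite clock error */

        double clk_a_ns = 120.0;   /* site A's true offset from UTC */
        double clk_b_ns = -45.0;   /* site B's true offset from UTC */

        /* What each site actually measures includes the satellite error. */
        double meas_a = clk_a_ns - sat_err_ns;
        double meas_b = clk_b_ns - sat_err_ns;

        /* The difference recovers A - B with sat_err_ns cancelled. */
        printf("A - B = %.1f ns (true: %.1f ns)\n",
               meas_a - meas_b, clk_a_ns - clk_b_ns);
        return 0;
    }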
6. bradknowles No.33716216
The Calnex Sentinel equipment they use for measuring their signal is probably a lot more expensive than that (see https://calnexsol.com/en/product-detail/1033-sentinel).

You wouldn't want to build your entire monitoring system on top of it and be forced to deploy one at each of hundreds of datacenters around the world.