throw0101a:
So they state:

> One could argue that we don’t really need PTP for that. NTP will do just fine. Well, we thought that too. But experiments we ran comparing our state-of-the-art NTP implementation and an early version of PTP showed a roughly 100x performance difference:

While I'm not necessarily against more accuracy/precision, what problems, specifically, are they experiencing? They do mention some use cases, of course:

> There are several additional use cases, including event tracing, cache invalidation, privacy violation detection improvements, latency compensation in the metaverse, and simultaneous execution in AI, many of which will greatly reduce hardware capacity requirements. This will keep us busy for years ahead.

But given that NTP (either ntpd or chrony) tends to give me an estimated error of around tens of microseconds (1e-5 s), while PTP can get down to 1e-9 s, I'm not sure how many data centre applications need that level of accuracy.
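For what it's worth, that estimated error is something chrony will report directly. A minimal Python sketch, assuming chronyd is running and chronyc is on the PATH, that pulls the standard NTP worst-case bound (root dispersion plus half the root delay) out of 'chronyc tracking':

    # Sketch: read chrony's own error estimate from `chronyc tracking`.
    # Assumes chronyd is running and chronyc is on the PATH.
    import subprocess

    def chrony_error_bound() -> float:
        out = subprocess.run(["chronyc", "tracking"],
                             capture_output=True, text=True, check=True).stdout
        fields = {}
        for line in out.splitlines():
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
        root_delay = float(fields["Root delay"].split()[0])            # seconds
        root_dispersion = float(fields["Root dispersion"].split()[0])  # seconds
        # Standard NTP worst-case bound: dispersion plus half the round trip.
        return root_dispersion + root_delay / 2

    print(f"worst-case error: {chrony_error_bound() * 1e6:.1f} us")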

> We believe PTP will become the standard for keeping time in computer networks in the coming decades.

Given the special hardware needed for the grand master clock to get down to nanosecond time scales, I'm doubtful this will be used in most data centres or most corporate networks. Adm. Grace Hopper elegantly illustrates 'how long' a nanosecond is:

* https://www.youtube.com/watch?v=9eyFDBPk4Yw

How many things need to worry about the latency of a signal travelling ~300 mm?
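For scale, the arithmetic behind her prop is one line:

    # Hopper's nanosecond: how far a signal can travel in 1 ns, at best.
    c = 299_792_458                 # speed of light in vacuum, m/s
    print(f"{c * 1e-9 * 1000:.0f} mm per nanosecond")   # ~300 mm
    # In copper or fibre, propagation is roughly 2/3 c, so closer to 200 mm.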

SEJeff:
Disclaimer: I work in finance, and have for 15+ years.

> Given the special hardware needed for the grand master clock to get down to nanosecond time scales, I'm doubtful this will be used in most data centres of most corporate networks.

The "special hardware" is often just a gps antenna and a PCI card though. In fact, many tier 1 datacenters actually provide a "service" where they'll either cross connect you directly to a PPS feed from a tier 0 grandmaster time service or plug your server into a gps antenna up on the roof. It isn't really that exotic. For financial application, especially trading ones, syncing a LAN timesync to a handful of nanoseconds is doable and optimal.

It is just a matter of time before industries outside finance find reasons that better timesync is useful. Precision Time Protocol, aka IEEE 1588, was released in 2002, and IEEE 1588 version 2 in 2008. This isn't exactly a new thing.

With the right hardware and a tier 0 timesource, modern NTP on modern hardware with modern networks can keep a LAN in sync to well under a second. However, as a protocol, NTP only guarantees 1 second of accuracy.
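For context on where NTP's limits come from: the on-wire protocol computes offset and delay from four timestamps and has to assume the network path is symmetric. A sketch of the standard RFC 5905 arithmetic, with made-up timestamps:

    # Standard NTP on-wire arithmetic (RFC 5905): t1/t4 are the client's
    # send/receive times, t2/t3 the server's receive/send times, in seconds.
    def ntp_offset_delay(t1, t2, t3, t4):
        offset = ((t2 - t1) + (t3 - t4)) / 2   # assumes a symmetric path
        delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
        return offset, delay

    # Asymmetry is invisible to this math: clocks actually in sync, but
    # 2 ms out and 1 ms back shows up as a phantom 0.5 ms offset.
    off, rtt = ntp_offset_delay(0.0, 0.0020, 0.0021, 0.0031)
    print(f"offset={off * 1e3:.2f} ms, delay={rtt * 1e3:.2f} ms")  # 0.50, 3.00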

bradknowles:
Disclaimer: I've been involved in supporting the NTP Public Services Project since 2003.

I assure you, with the right hardware and paying attention to your latencies, NTP can get you down below one millisecond of accuracy. Poul-Henning Kamp was doing nanosecond-level accuracy with NTP back in the mid-aughts, but to get there he had rewritten the NTP server code, the NTP client code, and the kernel on the server.

As an NTP service provider, what you really want to keep an eye on is the Clock Error Bound, which gives you a worst-case estimate of how far off the time you're serving to your customers could be. On the client side, you mainly care about the accuracy you're actually getting.
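To make that concrete: a sketch of how such a bound behaves between updates, using NTP's worst-case frequency tolerance (PHI = 15 ppm per RFC 5905), which is why the bound a server advertises grows even while nothing is going wrong:

    # Sketch: a clock error bound accrues linearly between synchronizations,
    # because the local oscillator may drift by up to PHI = 15 ppm (RFC 5905).
    PHI = 15e-6   # seconds of possible drift per second of elapsed time

    def error_bound(bound_at_sync: float, seconds_since_sync: float) -> float:
        return bound_at_sync + PHI * seconds_since_sync

    # A server that was within 100 us at its last update, queried 64 s later:
    print(f"{error_bound(100e-6, 64) * 1e6:.0f} us worst case")  # 1060 us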

SEJeff:
Yes, I've seen it get down to a few milliseconds of sync on the right hardware (boundary clocks on the switches, a stratum 0 timeserver with PPS, etc.), but the protocol only guarantees 1 second of sync. Am I incorrect in that assertion?