
221 points by mfiguiere | 1 comment
throw0101a No.33696297
So they state:

> One could argue that we don’t really need PTP for that. NTP will do just fine. Well, we thought that too. But experiments we ran comparing our state-of-the-art NTP implementation and an early version of PTP showed a roughly 100x performance difference:

While I'm not necessarily against more accuracy/precision, what problems specifically are they experiencing? They do mention some use cases, of course:

> There are several additional use cases, including event tracing, cache invalidation, privacy violation detection improvements, latency compensation in the metaverse, and simultaneous execution in AI, many of which will greatly reduce hardware capacity requirements. This will keep us busy for years ahead.

But given that NTP (either ntpd or chrony) tends to give me an estimated error on the order of (tens of) microseconds (1e-6 s), while PTP can get down to nanoseconds (1e-9 s), I'm not sure how many data centre applications need that level of accuracy.

> We believe PTP will become the standard for keeping time in computer networks in the coming decades.

Given the special hardware needed for the grandmaster clock to get down to nanosecond time scales, I'm doubtful this will be used in most data centres or most corporate networks. Adm. Grace Hopper elegantly illustrates 'how long' a nanosecond is:

* https://www.youtube.com/watch?v=9eyFDBPk4Yw

How many things need to worry about the latency of a signal travelling ~300 mm?
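
(For scale, that ~300 mm is just c × 1 ns, while a 10 µs NTP-class error corresponds to roughly 3 km. A trivial sketch of the arithmetic, nothing more:)

    /* Quick arithmetic: how far does light travel during a given clock error? */
    #include <stdio.h>

    int main(void)
    {
        const double c = 299792458.0;           /* speed of light in vacuum, m/s */
        printf("1 ns  -> %.3f m\n", c * 1e-9);  /* ~0.300 m, the ~300 mm above   */
        printf("10 us -> %.0f m\n", c * 10e-6); /* ~2998 m, i.e. roughly 3 km    */
        return 0;
    }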

DannyBee No.33700548
First, 300mm is not the real measure in practice for the common use case. PTP is often used to distribute GPS time for things that need it but don't have direct satellite access, and also so you don't have to have direct satellite access everywhere.

For that use case, 1 ns of inaccuracy is about 10 ft all told (i.e., accounting for all the inaccuracy it generates).

It can be less these days, especially if you're not just literally using GPS (i.e., a phone with other forms of reckoning, using more than just GPS satellites, etc.). You can get closer to the 1 ns = 1 ft kind of inaccuracy.

But if you are a cell tower trying to beamform or something, you really want to be within a few ns, and without PTP that requires direct satellite access or some other sync mechanism.

Second, I'm not sure what you mean by special. Expense is dictated mostly by holdover, not by the protocol. It is true some folks gouge heavily on PTP add-ons (Orolia, I'm looking at you), but you can ignore them if you want. Linux can do fine PTP over most commodity 10G cards because they have HW support for it. 1G cards are more hit or miss.
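
To give a feel for what that HW support looks like from Linux (a minimal sketch, assuming the card exposes its PTP hardware clock as /dev/ptp0, the usual PHC device node; the path may differ on your machine): the NIC's clock can be read directly with clock_gettime() via a dynamic clock ID, which is what the linuxptp tools (ptp4l, phc2sys) build on.

    /* Minimal sketch: read a NIC's PTP hardware clock (PHC) next to the system clock.
     * Assumes the NIC exposes its PHC as /dev/ptp0; adjust the path for your setup.
     * Build: cc -o phc_peek phc_peek.c
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Kernel convention (see the kernel's testptp.c): map an open /dev/ptpX fd
     * to a dynamic clock ID usable with clock_gettime(). */
    #define CLOCKFD 3
    #define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | CLOCKFD)

    int main(void)
    {
        int fd = open("/dev/ptp0", O_RDONLY);
        if (fd < 0) { perror("open /dev/ptp0"); return 1; }

        struct timespec phc, sys;
        clock_gettime(FD_TO_CLOCKID(fd), &phc);  /* NIC hardware clock  */
        clock_gettime(CLOCK_REALTIME, &sys);     /* kernel system clock */

        printf("PHC: %lld.%09ld\n", (long long)phc.tv_sec, phc.tv_nsec);
        printf("SYS: %lld.%09ld\n", (long long)sys.tv_sec, sys.tv_nsec);
        /* The raw difference includes the gap between the two reads; real tools
         * (e.g. phc2sys) use the PTP_SYS_OFFSET ioctls to measure offsets properly. */

        close(fd);
        return 0;
    }

In practice you'd let ptp4l discipline the PHC from the grandmaster and phc2sys steer the system clock from the PHC; the point is just that commodity cards expose the clock directly.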

For dedicated devices: here's a reasonable grandmaster that keeps time to GPS (etc.) with a disciplined OCXO, and easily stays within 40 ns of GPS and a much higher-end reference clock I have: https://timemachinescorp.com/product/gps-ntpptp-network-time...

It's usually within 10 ns; 40 ns is just the maximum error seen in the past three years.

Doing PTP, the machines stay within a few ns of this master.

If you need better, yes, it can get a bit expensive, but honestly, there are really good OCXOs out there now with very low phase noise that can stay disciplined against GPS more accurately.

Now, if you need real holdover for PTP, yes, you will probably have to go with rubidium, but even that is not as expensive as it used to be.

Also, higher-end DOCXOs have nearly the same performance these days, and are better in the presence of any temperature variation.

As for me, I was playing with synchronizing the real-time motion of fast-moving machines that are a few hundred feet apart, for various reasons. For this sort of application, 100 µs is a lot of lag.

I would agree this is a pretty uncommon use case, and I could have achieved it through other means; this was more playing around.

AFAIK, the main use of accurate time at this level is cell towers/etc, which have good reasons to want it.

I believe there are also some synchronization applications that need strict accuracy (synchronous sound-wave generation, etc.) but have no direct access to a satellite signal (e.g., underwater arrays).

bradknowles No.33716124
That's the one I was thinking about getting for my home lab. I'm also looking at: https://www.meinbergglobal.com/english/products/synchronizat...

I already have an ancient Meinberg Stratum-1 somewhere that I should pull out of storage and send back to Heiko so that they can put it in a museum. These days, for proper datacenter use, I'd go for something like this one: https://www.meinbergglobal.com/english/products/modular-2u-s...