Has anyone here ever needed microsecond precision? Would love to hear about it.
I worked at Altera (FPGA supplier) as the Ethernet IP apps engineer for Europe for a few years. All the big telecoms (Nokia, Ericsson, Cisco, etc.) use Precision Time Protocol (PTP) in some capacity, and all required clocks accurate to the nanosecond level, sometimes as low as 10 ns at the boundary. Any imperfection in the local clock converts directly into timestamp error, and timestamp error is what limits PTP synchronization performance. Timestamps are the fundamental observable in PTP.
Quantization and jitter create irreducible timestamp noise.
That noise directly limits offset and delay estimation.
Errors accumulate across network elements, so each element's internal clock error must be much smaller than the system-level requirement.<p>I think most people would look at the error and think "what's the big deal", but at all the telecom customers, engineers would be scrambling to find a clock that hadn't fallen out of sync.
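To make the "timestamps are the fundamental observable" point concrete, here is a rough sketch of how PTP turns the four Sync/Delay_Req timestamps into its offset and delay estimates (the timestamp values below are made up for illustration); any noise in t1..t4 lands directly in these two numbers:

```c
/* Sketch of the standard PTP offset/delay calculation from four timestamps.
   t1: master sends Sync, t2: slave receives it,
   t3: slave sends Delay_Req, t4: master receives it.
   All values are example nanosecond counts, not a real capture. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int64_t t1 = 1000000000, t2 = 1000001500;
    int64_t t3 = 1000002000, t4 = 1000003300;

    int64_t ms = t2 - t1;                        /* master-to-slave interval */
    int64_t sm = t4 - t3;                        /* slave-to-master interval */
    int64_t mean_path_delay    = (ms + sm) / 2;  /* assumes a symmetric path */
    int64_t offset_from_master = (ms - sm) / 2;

    printf("mean path delay:    %lld ns\n", (long long)mean_path_delay);
    printf("offset from master: %lld ns\n", (long long)offset_from_master);
    return 0;
}
```

With these example numbers the slave believes it is 100 ns ahead of the master; 10 ns of timestamp noise therefore shows up as roughly 5 ns of offset error.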
We don't use NTP, but for robotics stereo camera synchronization we often want the two frames to be within ~10us of each other. For sensor fusion we then also need a lidar on PTP time to be translated to the same clock domain as the cameras, for which we also need <~10us.<p>We actually disable NTP entirely (run it once per day or at boot) to avoid clocks jumping while recording data.
For a low-precision environment, to avoid sudden jumps I used SetSystemTimeAdjustment on Windows (now SetSystemTimeAdjustmentPrecise) to smoothly steer the system clock to match the GPS-supplied time signal.<p>On Linux I think the adjtimex() system call does the equivalent: <a href="https://manpages.ubuntu.com/manpages/trusty/man2/adjtimex.2.htm" rel="nofollow">https://manpages.ubuntu.com/manpages/trusty/man2/adjtimex.2....</a><p>It smears out time differences, which is great for some situations and less ideal for others.
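For reference, a minimal sketch of slewing the Linux clock with adjtimex() rather than stepping it (needs CAP_SYS_TIME/root; the 5 ms correction is just an example value):

```c
/* Minimal sketch: ask the kernel to gradually slew out a small offset
   instead of stepping the clock. */
#include <stdio.h>
#include <sys/timex.h>

int main(void) {
    struct timex tx = {0};
    tx.modes  = ADJ_OFFSET_SINGLESHOT;  /* old adjtime()-style gradual correction */
    tx.offset = 5000;                   /* correction in microseconds in this mode */

    if (adjtimex(&tx) == -1) {
        perror("adjtimex");
        return 1;
    }

    struct timex query = {0};           /* modes == 0 means read-only */
    int state = adjtimex(&query);
    printf("clock state: %d (0 == TIME_OK)\n", state);
    return 0;
}
```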
> We actually disable NTP entirely (run it once per day or at boot) to avoid clocks jumping while recording data.<p>This doesn't seem right to me. NTP with default settings should be monotonic, so no jumps. If you disable it, Linux enters 11-minute mode, IIRC, and that may not be monotonic.
Pedantically, a monotonic function need not have a constant first derivative. To take it further, in mathematics it is accepted for a monotonic function to have a countable number of discontinuities, but of course in the context of a digital clock that only increments in discrete steps, that's of little bearing.<p>But that's all beside the point, since most sane time sync clients (regardless of protocol) generally handle small deviations (i.e. normal cases) by speeding up or slowing down the system clock, not jumping it (forward or backward).
You are correct: NTP prefers to jump first (if needed) and then slew afterwards (which is exactly what we want!), although it can jump again if the offset is too large.<p>In our case the jumps were because we also have PTP disciplining the same system clock; when you have both PTP and NTP fighting over the same clock, you will see jumping with the default settings.<p>For us it was easier to just do a one-time NTP sync at the beginning/boot, and then sync the robot's local network with only PTP afterwards.
In your stereo camera example, are these like USB webcams or something like MIPI CSI attached devices?
We use nanosecond precision driven by GPS clocks. That timestamp in conjunction with star tracker systems gives us reliable positioning information for orbital entities.<p><a href="https://en.wikipedia.org/wiki/Star_tracker" rel="nofollow">https://en.wikipedia.org/wiki/Star_tracker</a>
(Assuming "precision" really meant "accuracy") The network equipment I work on requires sub microsecond time sync on the network for 5G providers and financial trading customers. Ideally they'd just get it from GPS direct, but that can be difficult to do for a rack full of servers. Most of the other PTP use cases I work with seem to be fine with multiples of microseconds, e.g. Audio/Video over the network or factory floor things like PLCs tend to be find with a few us over the network.<p>Perhaps a bit more boring than one might assume :).
Lightning detection. You have a number of ground stations with known positions that wait for certain electromagnetic pulses and record the timestamps of those pulses. With enough stations you can triangulate the location of the source of each pulse. Also a great way to detect nuclear detonations.<p>There is a German club that builds and distributes such stations (using GPS for location and timing), with quite impressive global coverage by now:<p><a href="https://www.blitzortung.org" rel="nofollow">https://www.blitzortung.org</a>
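A quick back-of-the-envelope sketch (my own numbers, not from blitzortung) of why timestamp quality matters here: the pulse travels at roughly the speed of light, so timing error converts directly into range error at each station.

```c
#include <stdio.h>

int main(void) {
    const double c = 299792458.0;                 /* propagation speed, m/s */
    const double timing_errors_us[] = {0.1, 1.0, 10.0};

    for (int i = 0; i < 3; i++) {
        double dt_s = timing_errors_us[i] * 1e-6; /* convert us to seconds */
        printf("%5.1f us timestamp error -> ~%.0f m of range error\n",
               timing_errors_us[i], c * dt_s);
    }
    return 0;
}
```

Even 1 us of error already costs roughly 300 m per station, which is why these stations lean on GPS timing.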
At a previous role, we needed nanosecond precision for a simulcast radio communications system. This was to allow for wider transmission for public safety radio systems without having to configure trunking. We could even adjust the delay in nanoseconds to move the deadzones away from inhabited areas.<p>We solved this by having GPS clocks at each tower as well as having the app servers NTP with each other. The latter burned me once due to some very dumb ARP stuff, but that's a story for another day.
Your speakers do, so that people's voices match their mouth movements. The speaker clocks need to be in sync with the CPU clocks, and they operate at different frequencies.
We need nanosecond precision for trading - basically timestamping exchange/own/other events and to measure latency.
You probably want to ask about accuracy. Any random microcontroller from the 90s needs microsecond precision.
How do you even get usable microsecond precision sync info from a server thousands of kilometers away? The latency is variable so the information you get can't be verified / will be stale the moment it arrives. I'm quite ignorant on the topic.
High speed finance is msec and below. The fastest publicly known tick-to-trade is just shy of 14 nanos.<p>Timekeeping starts to become really hard, often requiring specialized hardware and protocols.
When we collected, correlated, and measured all control messages in a whole 4G network. Millisecond precision meant message flows were guaranteed to come out of order.
The high frequency trading guys<p>edit: also the linked slides in TFA
Yes, but always got it from GPS so presumably they'd be off about the same amount.<p>Distributed sonar, allows placing receivers willy-nilly and aligning the samples later.<p>Remote microphone switching - though for this you wouldn't notice 5us jitter, it's just that the system we designed happened to have granularity that good.
Nuclear measurements, where the speed of a gamma ray flying across a room vs a neutron is relevant. But that requires at least nanosecond time resolution, and you’re a long way from thinking about NTP.
If you want sample-accurate audio transmission you're going to want resolution on the order of tens of microseconds.
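The arithmetic behind that figure, for a few common sample rates (rates chosen for illustration):

```c
#include <stdio.h>

int main(void) {
    const double rates_hz[] = {44100.0, 48000.0, 96000.0};
    for (int i = 0; i < 3; i++)
        printf("%6.0f Hz -> one sample every %.1f us\n",
               rates_hz[i], 1e6 / rates_hz[i]);   /* sample period in microseconds */
    return 0;
}
```

At 48 kHz one sample lasts about 20.8 us, so being off by a few tens of microseconds already means being off by a sample.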
I believe LTE and 5G networks require it to coordinate timeslots between overlapping cells. Of course, they can use whatever reference they want, as long as all the cells are using the same one - it doesn't have to be UTC. Some (parts of) networks transmit it across the network, while others have independent GPS receivers at each cell site.<p>Synchronization is also required for SDH networks. Don't know if those are still used.<p>Someone else referenced low power ham radio modes like WSPR, which I also don't know much about, but I can imagine they have timeslots linked to UTC and require accuracy. Those modes have extremely low data rates and narrow bandwidths, requiring accurate synchronization. I don't know if they're designed to self-synchronize, or need an external reference.<p>When multiple transmitters are transmitting the same radio signal (e.g. TV) they might need to be synchronized to a certain phase relationship. Again, don't know much about it
WSPR doesn't require tight synchronization, but it does require pretty stable frequency sources over periods of 10s of seconds.<p>It is very common to integrate a GPS in a WSPR beacon to discipline the transmit frequency, but with modest thermal management, very ordinary crystal oscillators have very nice stability.
A database like Google Spanner has higher latency in proportion to the uncertainty about the time. Driving the time uncertainty down into the microsecond range, or lower, keeps latency low.
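A rough sketch of that commit-wait idea (the names and the epsilon value here are mine, not Spanner's API): a commit is not made visible until the clock uncertainty around its timestamp has definitely passed, so the forced wait, and hence the latency, scales with the uncertainty.

```c
/* Hypothetical commit-wait sketch; CLOCK_EPSILON_NS stands in for the
   uncertainty bound a TrueTime-like service would report. */
#include <stdio.h>
#include <time.h>

static const long long CLOCK_EPSILON_NS = 500000;   /* assume +/- 500 us uncertainty */

static long long now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void) {
    /* Pick the latest time "now" could possibly be as the commit timestamp. */
    long long commit_ts = now_ns() + CLOCK_EPSILON_NS;

    /* Commit wait: block until the earliest time "now" could possibly be
       has passed the commit timestamp (roughly 2 * epsilon of waiting). */
    while (now_ns() - CLOCK_EPSILON_NS <= commit_ts)
        ;   /* a real system would sleep rather than spin */

    printf("commit visible after ~%lld ns of commit wait\n", 2 * CLOCK_EPSILON_NS);
    return 0;
}
```

Shrink the uncertainty to a microsecond or so and the forced wait all but disappears.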
Telecom.<p>Precision Time Protocol gets you sub-microsecond.<p><a href="https://en.wikipedia.org/wiki/Precision_Time_Protocol" rel="nofollow">https://en.wikipedia.org/wiki/Precision_Time_Protocol</a>
Lots of things do. Shoot, even plain old TDM needs timing precision on the order of picoseconds to nanoseconds.
I mean, we routinely benchmark things that take microseconds or less. I've seen a 300 picosecond microbenchmark (single cycle at 3GHz). No requirement that absolute time is correct, though.
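For what it's worth, a minimal sketch of that kind of measurement using the monotonic clock, where only elapsed time matters rather than absolute correctness (the loop body and iteration count are placeholders):

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    const long iters = 100000000L;
    volatile long sink = 0;              /* volatile keeps the loop from being optimized away */

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < iters; i++)
        sink += i;                       /* stand-in for the operation under test */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed_ns = (end.tv_sec - start.tv_sec) * 1e9
                      + (end.tv_nsec - start.tv_nsec);
    printf("%.2f ns per iteration\n", elapsed_ns / iters);
    return 0;
}
```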