You can go even faster, to about 8ns (almost an additional 10x improvement), by using software perf events: PERF_COUNT_SW_TASK_CLOCK is thread CPU time, it can be read through a shared page (so no syscall; see perf_event_mmap_page), and then you add the delta since the last context switch with a single rdtsc inside a seqlock.<p>This is not well documented unfortunately, and I'm not aware of open-source implementations of it.<p>EDIT: Or maybe not; I'm not sure whether PERF_COUNT_SW_TASK_CLOCK lets you select only user time. The kernel can definitely do it, but I don't know if the wiring is there. This definitely works for overall thread CPU time, however.
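EDIT2: since there's no open-source reference I know of, here's a rough sketch of the setup half. Untested; treat the details as assumptions. The idea: open a task-clock software event on the calling thread and map the metadata page.

    #include <linux/perf_event.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static struct perf_event_mmap_page *pc;

    static int task_clock_setup(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof attr);
        attr.size   = sizeof attr;
        attr.type   = PERF_TYPE_SOFTWARE;
        attr.config = PERF_COUNT_SW_TASK_CLOCK;
        /* attr.exclude_kernel = 1; would be the knob for user-only time,
           IF the task clock honours it -- that's the part I'm unsure of. */

        /* pid = 0, cpu = -1: this thread, whichever CPU it runs on.
           perf_event_open has no glibc wrapper. */
        int fd = (int) syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0)
            return -1;

        /* Only the first (metadata) page is needed, no ring buffer. */
        pc = mmap(NULL, (size_t) sysconf(_SC_PAGESIZE), PROT_READ,
                  MAP_SHARED, fd, 0);
        return pc == MAP_FAILED ? -1 : 0;
    }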
That's a brilliant trick. The setup overhead and permission requirements for perf_event might be heavy for arbitrary threads, but for long-lived threads it looks pretty awesome! Thanks for sharing!
Why do you need a seqlock? To make sure you're not context switched out between the read of the page value and the rdtsc?<p>Presumably you mean you just double check the page value after the rdtsc to make sure it hasn't changed and retry if it has?<p>Tbh I thought clock_gettime was a vdso based virtual syscall anyway
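To make my guess concrete: I'd expect the loop to look like the userspace-read example documented in include/uapi/linux/perf_event.h. A sketch only, assuming a perf_event_mmap_page mapped as in the parent's setup, and assuming time_running tracks the task clock for this event (which is exactly the under-documented part):

    #include <linux/perf_event.h>
    #include <stdint.h>
    #include <x86intrin.h>                     /* __rdtsc() */

    #define barrier() __asm__ __volatile__("" ::: "memory")

    static uint64_t task_clock_ns(volatile struct perf_event_mmap_page *pc)
    {
        uint64_t running, cyc, delta, quot, rem;
        uint32_t seq, mult;
        uint16_t shift;

        do {
            seq = pc->lock;                    /* seqlock generation counter */
            barrier();
            /* A real version should also check pc->cap_user_time here. */
            running = pc->time_running;        /* ns as of last kernel update */
            delta   = pc->time_offset;
            mult    = pc->time_mult;           /* TSC-to-ns scaling factors */
            shift   = pc->time_shift;
            cyc     = __rdtsc();
            barrier();
        } while (pc->lock != seq);             /* page changed under us: retry */

        /* Scale the TSC value to ns, per the perf_event.h example. */
        quot   = cyc >> shift;
        rem    = cyc & (((uint64_t)1 << shift) - 1);
        delta += quot * mult + ((rem * mult) >> shift);

        return running + delta;                /* saved clock + rdtsc delta */
    }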
clock_gettime is not doing a syscall; it's using the vDSO.
clock_gettime() goes through the vDSO shim, but whether it avoids a syscall depends on the clock ID and (in some cases) the clock source. For thread-specific CPU user time, the vDSO shim cannot resolve the request in user space and must transit into the kernel. In this specific case, there is absolutely a syscall.
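Easy to check for yourself. An illustrative micro-benchmark (numbers will vary; under strace, only the thread-CPU clock shows up as actual syscalls):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static void bench(clockid_t id, const char *name)
    {
        enum { N = 10 * 1000 * 1000 };
        struct timespec t0, t1, scratch;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++)
            clock_gettime(id, &scratch);       /* the call under test */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        int64_t ns = (int64_t)(t1.tv_sec - t0.tv_sec) * 1000000000
                   + (t1.tv_nsec - t0.tv_nsec);
        printf("%-25s %6.1f ns/call\n", name, (double)ns / N);
    }

    int main(void)
    {
        bench(CLOCK_MONOTONIC, "CLOCK_MONOTONIC");                 /* vDSO path */
        bench(CLOCK_THREAD_CPUTIME_ID, "CLOCK_THREAD_CPUTIME_ID"); /* syscall */
        return 0;
    }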
Flamegraphs are wonderful.<p>Me: looks at my code. "sure, ok, looks alright."<p>Me: looks at the resulting flamegraph. "what the hell is this?!?!?"<p>I've found all kinds of crazy stuff in codebases this way. Static initializers that aren't static, one-line logger calls that trigger expensive serialization, heavy string-parsing calls that don't memoize patterns, etc. Unfortunately some of those are my fault.
I also like icicle graphs for this. They're flamegraphs, but aggregated in the reverse order. (I.e. if you have calls A->B->C and D->E->C, then both calls to C are aggregated together, rather than being stacked on top of B and E respectively. It can make it easier to see what's wrong when you have a bunch of distinct codepaths that all invoke a common library where you're spending too much time.)<p>Regular flamegraphs are good too, icicle graphs are just another tool in the toolbox.
So someone else linked the original flamegraph site [0] and it describes icicle graphs as "inverting the y axis", but that's not all that's happening, right? You bucket the stacks top-down as opposed to bottom-up, correct?<p>[0] <a href="https://www.brendangregg.com/flamegraphs.html" rel="nofollow">https://www.brendangregg.com/flamegraphs.html</a>
Also cool that when you open it in a new tab, the svg [0] is interactive! You can zoom in by clicking on sections, and there's a button to reset the zoom level.<p>[0]: <a href="https://questdb.com/images/blog/2026-01-13/before.svg" rel="nofollow">https://questdb.com/images/blog/2026-01-13/before.svg</a>
Yes, they are made with: <a href="http://www.brendangregg.com/flamegraphs.html" rel="nofollow">http://www.brendangregg.com/flamegraphs.html</a> and<p><a href="https://github.com/brendangregg/FlameGraph" rel="nofollow">https://github.com/brendangregg/FlameGraph</a><p>A useful site if you're into perf/eBPF/performance work, with many examples and descriptions, even for other uses such as memory usage or disk usage (I prefer heatmaps for those, but flame graphs are nice if you want to send someone an interactive view of their directory tree ...).
I always found profiling performance critical code and experimenting with optimisations to be one of the most enjoyable parts of development - probably because of the number of surprises that I encountered ("Why on Earth is <i>that</i> so slow?").
I might be very wrong in every way, but string parsing and/or manipulation plus memoization... sounds like a super strange combo? For the first you know you're already doing expensive allocations, and the second isn't a pattern I really see outside of JS codebases. Could you provide more context on how this actually bit you in the behind? Memoizing strings seems like complicated and error-prone "welp, it feels better now" territory in my mind, so I'm genuinely curious.
In Java it can be a bad toString() implementation hiding behind a + used for string assembly.<p>Or another great one: new instances of ObjectMapper created inside a method for a single call and then thrown away.
> but the 2nd is also not a pattern I really see apart from in JS codebases.<p>If you're referring to "one-line logger calls that trigger expensive serialization", it's also common in java.
I've never used flamegraphs but would like to know about them. Can you explain more? Or where should I start?
Flame graphs have an official web site, maintained by Brendan Gregg, who invented them: <a href="https://www.brendangregg.com/flamegraphs.html" rel="nofollow">https://www.brendangregg.com/flamegraphs.html</a>. It's a useful starting point.
I use them all the time on Perl code.<p><a href="https://metacpan.org/pod/Devel::NYTProf" rel="nofollow">https://metacpan.org/pod/Devel::NYTProf</a>
I would also try hotspot; it's an interactive viewer for perf data, including flame graphs.
Author here. After my last post about kernel bugs, I spent some time looking at how the JVM reports its own thread activity. It turns out that "What is the CPU time of this thread?" is/was a much more expensive question than it should be.
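For context, the native baseline looks like this. Note that both clocks return combined user+system time; the user-only split is what pushes you toward /proc in the first place (a sketch to show the APIs, not the JVM's actual code):

    #include <pthread.h>       /* link with -pthread */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec ts;
        clockid_t cid;

        /* Calling thread: one clock_gettime, but a real syscall,
           since the vDSO can't serve this clock id. */
        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
        printf("self: %lld.%09ld s\n", (long long)ts.tv_sec, ts.tv_nsec);

        /* Any thread you hold a pthread_t for: */
        if (pthread_getcpuclockid(pthread_self(), &cid) == 0) {
            clock_gettime(cid, &ts);
            printf("via clock id: %lld.%09ld s\n",
                   (long long)ts.tv_sec, ts.tv_nsec);
        }
        return 0;
    }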
I don't think it is possible to talk about fractions of nanoseconds without having an extremely good idea of the stability and accuracy of your clock. At best I think you could claim there is some kind of reduction but it is super hard to make such claims in the absolute without doing a massive amount of prep work to ensure that the measured times themselves are indeed accurate. You could be off by a large fraction and never know the difference. So unless there is a hidden atomic clock involved somewhere in these measurements I think they should be qualified somehow.
Stability and accuracy, when applied to clocks, are generally about dynamic range, i.e. how good is the scale with which you are measuring time. So if you're talking about nanoseconds across a long time period, seconds or longer, then yeah, you probably should care about your clock. But when you're measuring nanoseconds out of a millisecond or microsecond, it really doesn't matter that much and you're going to be OK with the average crystal oscillator in a PC. (and if you're measuring a 10% difference like in the article, you're going to be fine with a mechanical clock as your reference if you can do the operation a billion times in a row).
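To put illustrative numbers on that: run the operation 10^9 times, and a 10% difference is, say, 100 s vs 90 s of wall time. A quartz oscillator that is off by a generous 100 ppm perturbs those readings by about 10 ms, three orders of magnitude smaller than the 10 s gap you are trying to resolve.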
Did you look into the large spread on your distributions? Some of them span multiple orders of magnitude, which is interesting.
Fair point. These were run on a standard dev workstation under load, which may account for the noise. I haven't done a deep dive into the outliers yet, but the distribution definitely warrants a more isolated look.
Very thankful for the one-liner TL;DR.<p>edit: an afterthought, since this ended up being a low-quality comment:<p>Leading with such a TL;DR adds a lot of value to the content, especially on HN, as it gives the reader momentum and lets them focus. Reading the short form felt like that cool friend who gave you a heads-up.
Quelle surprise
> Flame graph image<p>> Click to zoom, open in a new tab for interactivity<p>I admit I did not expect "Open Image in New Tab" to do what it said on the tin. I guess I was aware that it was possible with SVG but I don't think I've ever seen it done and was really not expecting it.
Courtesy of Brendan Gregg and his flamegraph.pl scripts: <a href="https://github.com/brendangregg/FlameGraph" rel="nofollow">https://github.com/brendangregg/FlameGraph</a><p>Normally, I use the generator included in async-profiler. It produces interactive HTML. But for this post, I used Brendan’s tool specifically to have a single, interactive SVG.
Obviously a vdso read is going to be significantly faster than a syscall switching to the kernel, writing serialized data to a buffer, switching back to userland, and parsing that data.
Author of the OpenJDK patch here.<p>Thanks for the write-up Jaromir :) For those interested, I explored memory overhead when reading /proc—including eBPF profiling and the history behind the poorly documented user-space ABI.<p>Full details in my write-up: <a href="https://norlinder.nu/posts/User-CPU-Time-JVM/" rel="nofollow">https://norlinder.nu/posts/User-CPU-Time-JVM/</a>
Hi Jonas, thanks for the work on OpenJDK and the post! I swear I hadn't seen your blog :) I finished my draft around Christmas and it’s been in the queue since. Great minds think alike, I guess.<p>edit: I just read your blog in full and I have to say I like it more than mine. You put a lot more rigor into it. I’m just peeking into things.<p>edit2: I linked your article from my post.
Which goes to show that writing in C, C++, or whatever systems language isn't automatically blazing fast; it depends on what is being done.<p>Very interesting read.
clock_gettime() goes through vDSO, avoiding a context switch. It shows up on the flamegraph as well.
Only for some clocks (CLOCK_MONOTONIC, etc) and some clock sources. For VIRT/SCHED, the vDSO shim still has to invoke the actual syscall. You can't avoid the kernel transition when you need per-thread accounting.
Oh, for some time after its introduction CLOCK_MONOTONIC_RAW wasn't vDSO'd, and it took a while, plus some syscall profiling ('huh, why do I see these as syscalls in perf record -e syscalls?' ...), to understand what was going on.
Thanks, I really should've looked deeper than that.
If you look below the vDSO frame, there is still a syscall. I think that the vDSO implementation is missing a fast path for this particular clock id (it could be implemented though).
edit: agh, no. CLOCK_THREAD_CPUTIME_ID falls through the vdso to the kernel which makes sense as it would likely need to look at the task struct.<p>here it gets the task struct: <a href="https://elixir.bootlin.com/linux/v6.18.5/source/kernel/time/posix-cpu-timers.c#L358" rel="nofollow">https://elixir.bootlin.com/linux/v6.18.5/source/kernel/time/...</a> and here <a href="https://elixir.bootlin.com/linux/v6.18.5/source/kernel/time/posix-cpu-timers.c#L194" rel="nofollow">https://elixir.bootlin.com/linux/v6.18.5/source/kernel/time/...</a> to here where it actually pulls the value out: <a href="https://elixir.bootlin.com/linux/v6.18.5/source/kernel/sched/cputime.c#L844" rel="nofollow">https://elixir.bootlin.com/linux/v6.18.5/source/kernel/sched...</a><p>where here is the vdso clock pick logic <a href="https://elixir.bootlin.com/linux/v6.18.5/source/lib/vdso/gettimeofday.c#L288" rel="nofollow">https://elixir.bootlin.com/linux/v6.18.5/source/lib/vdso/get...</a> and here is the fallback to the syscall if it's not a vdso clock <a href="https://elixir.bootlin.com/linux/v6.18.5/source/lib/vdso/gettimeofday.c#L317" rel="nofollow">https://elixir.bootlin.com/linux/v6.18.5/source/lib/vdso/get...</a>
"look, I'm sorry, but the rule is simple:
if you made something 2x faster, you might have done something smart
if you made something 100x faster, you definitely just stopped doing something stupid"<p><a href="https://x.com/rygorous/status/1271296834439282690" rel="nofollow">https://x.com/rygorous/status/1271296834439282690</a>
The QuestDB team are among the best doing it.<p>Love the people and their software.<p>Great blog Jaromir!
It's kinda crazy the amount of plumbing required to get a few bits across the CPU.
This is such a great writeup
It took seven years to address this concern after the initial bug report (2018). That seems like a long time, considering that CPU-time instrumentation can sit in the hot path of profiled code.
I really wish™ there were an API/ABI for userland- and kernelland-defined individual virtual files at arbitrary locations, backed by processes and kernel modules respectively. I've tried pipes, overlays, and FUSE to no avail. It would greatly simplify configuration management implementations while maintaining compatibility with the convention of plain text files, and there's often no need for an actual file on any media, or the expense of IOPS.<p>While I don't particularly like the IO overhead and churn consequences of real files for performance metrics, I get the 9p-like appeal of treating the virtual fs as a DBMS/API/ABI.
This is a great example of how a small change in the right place can outweigh years of incremental tuning.
Does anyone knowledgeable know whether it's possible to drastically reduce the overhead of reading from procfs? IIUC everything in it is in-memory, so there's no real reason reading some data should take on the order of 10µs.
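The one mitigation I'm aware of only shaves part of the cost: keep the fd open and pread() from offset 0, so each sample skips the open()/close() pair and the path lookup. The kernel still re-renders the text on every read. A sketch:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[512];
        /* /proc/thread-self needs Linux >= 3.17; otherwise use
           /proc/self/task/<tid>/stat. */
        int fd = open("/proc/thread-self/stat", O_RDONLY);
        if (fd < 0)
            return 1;

        for (int i = 0; i < 3; i++) {          /* pretend: one per sample */
            ssize_t n = pread(fd, buf, sizeof buf - 1, 0);  /* no reopen */
            if (n > 0) {
                buf[n] = '\0';
                fputs(buf, stdout);
            }
        }
        close(fd);
        return 0;
    }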
cool