Great article! Just yesterday I watched a Devoxx talk by Andrei Pangin [1], the creator of async-profiler, where I learned about the new heatmap support. To many folks that might not sound exciting, until you realise that these heatmaps make it much easier to see patterns over time. If you’re interested, there’s a solid blog post [2] from Netflix that walks through the format and why it can be incredibly useful.<p>[1]: <a href="https://www.youtube.com/watch?v=u7-S-Hn-7Do" rel="nofollow">https://www.youtube.com/watch?v=u7-S-Hn-7Do</a><p>[2]: <a href="https://netflixtechblog.com/netflix-flamescope-a57ca19d47bb" rel="nofollow">https://netflixtechblog.com/netflix-flamescope-a57ca19d47bb</a>
Question, isn't this a bug?
static enum hrtimer_restart perf_swevent_hrtimer(struct hrtimer *hrtimer)
{
- if (event->state != PERF_EVENT_STATE_ACTIVE)
+ if (event->state != PERF_EVENT_STATE_ACTIVE ||
+ event->hw.state & PERF_HES_STOPPED)
return HRTIMER_NORESTART;<p>The bug being that the precedence of || is higher than the precedence of != ?
Consider writing it
if ((event->state != PERF_EVENT_STATE_ACTIVE) ||
(event->hw.state & PERF_HES_STOPPED))<p>This is coming from a person who has too many scars from not parenthesizing expressions in conditionals to ensure they work the way I meant them to work.
Wow, someone is actually reading the article in detail, that's a good feeling!
No, it's not a bug: in C, the != operator has higher precedence than the || operator, so the condition parses as intended. That said, extra parentheses never hurt readability.
Which language(s?) have || before !=/==?
Likely they're confusing it with bitwise OR, since in C, a | b == c parses as a | (b == c), causing widespread pain.
Ah, this is the bug that froze the system when Minecraft was running with Spark profiler mod!
Nice article, thank you.
Did you also consider using bpftrace while debugging?<p>I do not have much experience with it, but I think you can see the kernel call stack with it and I know you can also see the return value (in eax).
That would be less effort than QEMU + GDB + disabling KASLR, etc.
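For what it's worth, a bpftrace sketch of that approach might look like the following (hypothetical: it assumes perf_swevent_hrtimer is visible as a probe point, which is not guaranteed for static functions, and it needs root on a live kernel, so take it as a sketch rather than something tested here):

```
# Print the kernel stack on entry and the return value on exit;
# kstack and retval are bpftrace builtins.
sudo bpftrace -e '
kprobe:perf_swevent_hrtimer    { printf("%s\n", kstack); }
kretprobe:perf_swevent_hrtimer { printf("ret=%ld\n", retval); }'
```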
I'm glad to hear I'm not alone. Due to the nature of what I do, I'm often accumulating ~800-900GB of Docker images and volumes on my machine, sometimes running 20-30 containers at once, starting and stopping them concurrently. Somehow, rarely in relative terms but still regularly (once every couple of weeks), it leads to a complete deadlock somewhere inside the kernel due to some crazy race condition that I'm absolutely in no way able to reliably reproduce.
Great write-up.<p>This kind of "debugging journey" post is gold.
Author here. I've always been kernel-curious despite never having worked on one myself. Consider this either a collection of impractical party tricks or a hands-on way to get a feel for kernel internals.
Great debugging effort.<p>Now, with the complexity (MLoCs!) of the Linux kernel, this is definitely not the only bug to be found in there.<p>This is why Linux is just an interim kernel for these use cases in which we still cannot use seL4[0].<p>0. <a href="https://sel4.systems/" rel="nofollow">https://sel4.systems/</a>