Hi HN,<p>Have you ever run GNU Parallel on a powerful machine, only to find one core pegged at 100% while the rest sit mostly idle?<p>I hit that wall...so I built forkrun.<p>forkrun is a self-tuning, drop-in replacement for GNU Parallel (and xargs -P) designed for high-frequency, low-latency shell workloads on modern multi-core and NUMA hardware (e.g., log processing, text transforms, HPC data-prep pipelines).<p>On my 14-core/28-thread i9-7940X it achieves:<p>- 200,000+ batch dispatches/sec (vs ~500 for GNU Parallel)<p>- ~95–99% CPU utilization across all 28 logical cores (vs ~6% for GNU Parallel)<p>- Typically 50×–400× faster on real high-frequency, low-latency workloads (vs GNU Parallel)<p>These benchmarks are intentionally worst-case (near-zero work per task), where dispatch overhead dominates. This is exactly the regime where GNU Parallel and similar tools struggle — and where forkrun is designed to perform.<p>A few of the techniques that make this possible:<p>- Born-local NUMA: stdin is splice()'d into a shared memfd, and pages are placed on the target NUMA node via set_mempolicy(MPOL_BIND) before any worker touches them, so the data is node-local from the moment it exists.<p>- SIMD scanning: per-node indexers use AVX2/NEON to find line boundaries at memory bandwidth and publish byte offsets and line counts into per-node lock-free rings.<p>- Lock-free claiming: workers claim batches with a single atomic_fetch_add — no locks, no CAS retry loops; contention is reduced to a single atomic on one cache line.<p>- Memory management: a background thread uses fallocate(FALLOC_FL_PUNCH_HOLE) to reclaim space without breaking the logical offset system.<p>…and that’s just the surface. The implementation uses many additional systems-level techniques (phase-aware tail handling, adaptive batching, early-flush detection, etc.) to eliminate overhead at every stage.<p>In its fastest (-b) mode (fixed-size batches, minimal processing), it can exceed 1B lines/sec. 
In typical streaming workloads it's often 50×–400× faster than GNU Parallel.<p>forkrun ships as a single bash file with an embedded, self-extracting C extension — no Perl, no Python, no install, full native support for parallelizing arbitrary shell functions. The binary is built in public GitHub Actions so you can trace it back to CI (see the GitHub "Blame" on the line containing the base64 embeddings).<p>- Benchmarking scripts and raw results: <a href="https://github.com/jkool702/forkrun/blob/main/BENCHMARKS" rel="nofollow">https://github.com/jkool702/forkrun/blob/main/BENCHMARKS</a><p>- Architecture deep-dive: <a href="https://github.com/jkool702/forkrun/blob/main/DOCS" rel="nofollow">https://github.com/jkool702/forkrun/blob/main/DOCS</a><p>- Repo: <a href="https://github.com/jkool702/forkrun" rel="nofollow">https://github.com/jkool702/forkrun</a><p>Trying it is literally two commands:<p><pre><code> . frun.bash # OR `. <(curl https://raw.githubusercontent.com/jkool702/forkrun/main/frun.bash)`
frun shell_func_or_cmd < inputs
</code></pre>
Happy to answer questions.
Thanks for making this, and thanks for sharing :)<p>I’m not much of a parallelization user, but I can appreciate your craft and know how rewarding these odysseys can be :)<p>What was the biggest “aha” moment when you worked out how things interlock, or when you needed to change both A and B at the same time because either on its own slowed things down? Etc. And what was the single biggest-impact design choice?<p>And, if you’re being objective, what could be done to other tools to make them competitive?
So, in forkrun's development there have been a few "AHA!" moments. Most of them were accompanied by a full re-write (current forkrun is v3).<p>The 1st AHA, and the basis for the original forkrun, was that you can eliminate a HUGE amount of the overhead of parallelizing things in shell if you use persistent workers, have them run things for you in a loop, and distribute data to them. This is why the project is called "forkrun" - it's short for "first you FORK, then you RUN".<p>The 2nd AHA, which spawned forkrun v2, was that you could distribute work without a central coordinator thread (which inevitably becomes the bottleneck). forkrun v2 did this by having one process dump data into a tmpfile on a ramdisk; all the workers then read from this file using a shared file descriptor and a lightweight pipe-based lock: prime a shared anonymous pipe with a newline, read from the pipe to acquire the lock, and write the newline back to the pipe to release it. The pipe's FIFO semantics naturally queue up waiters. This version actually worked really well, but it was a "serial read, parallel execute" design. Furthermore, the time it took to acquire and release the lock meant the design topped out at ~7 million lines per second. Nothing would make it faster, since that was pure locking overhead.<p>The 3rd AHA was that I could make a very fast (SIMD-accelerated) delimiter scanner, post the byte offsets where lines (or batches of lines) started in the global data file, and then have workers claim batches and read data in parallel, making the design fully "parallel read + parallel execute".<p>The 4th AHA was regarding NUMA: instead of reactively re-shuffling data between nodes, just put it on the right node to begin with, and determine the "right node" using real-time backpressure from the nodes, with a 3-chunk buffer to ensure the nodes are always fed with data. This one didn't need a rewrite, but it is why forkrun scales SO WELL with NUMA.
I am using a 9950X3D processor and didn't see any slow-down or CPUs sitting idle; I suggest you read the man pages more carefully :P
>Have you ever run GNU Parallel on a powerful machine just to find one core pegged at 100% while the rest sit mostly idle?<p>Yes, to my extreme frustration. Thank you, I'm installing this right now while I read the rest of your comment.
Please don't support only curl for installation. There are many package registries you can use; e.g., <a href="https://github.com/aquaproj/aqua-registry" rel="nofollow">https://github.com/aquaproj/aqua-registry</a>
Why the hell do you curl? Additionally, why are you advertising it when you only just uploaded it? Nobody should install something that new...