Comments

  • PhilipTrettner 3 days ago
    I looked into this because part of our pipeline is forced to be chunked. Most advice I've seen boils down to "more contiguity = better", but without numbers, or at least not generalizable ones.

    My concrete tasks already reach peak performance before 128 kB, and I couldn't find pure processing workloads that benefit significantly beyond 1 MB chunk size. Code is linked in the post; it would be nice to see results on more systems.
    • twoodfin 1 hour ago
      Your results match similar analyses of database systems I've seen.

      64-128 kB seems like the sweet spot.