I’ve seen disks do off-track writes, dropped writes due to write-channel failures, and dropped writes due to the media having been literally scrubbed off the platter previously. You need LBA-seeded CRCs to catch these failures, along with a number of other checks. I get excited when people write about this in the industry. They’re extremely interesting failure modes that I’ve been lucky enough to have been exposed to, at volume, for a large fraction of my career.
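For the curious, here is a rough sketch of the LBA-seeded CRC idea, not any particular vendor's on-disk format: the block's own address is folded into the checksum, so a block that lands on (or is read back from) the wrong LBA fails verification even though its payload checksum would otherwise look fine. The block layout and CRC choice are illustrative; catching a lost write that leaves the previous, correctly-checksummed version in place needs additional checks (e.g. a version number validated against metadata kept elsewhere).

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical 4 KiB block: payload followed by a 4-byte CRC trailer.
       The CRC is seeded with the block's LBA, so a misdirected/off-track
       write fails verification when read back from the intended LBA. */
    #define BLOCK_SIZE   4096u
    #define PAYLOAD_SIZE (BLOCK_SIZE - sizeof(uint32_t))

    /* Plain CRC-32 (poly 0xEDB88320); real systems often use hardware CRC32C. */
    static uint32_t crc32_update(uint32_t crc, const void *buf, size_t len) {
        const uint8_t *p = buf;
        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    /* Seal the block before writing it to `lba`. */
    void block_seal(uint8_t block[BLOCK_SIZE], uint64_t lba) {
        uint32_t crc = crc32_update(0, &lba, sizeof lba);   /* seed with the LBA */
        crc = crc32_update(crc, block, PAYLOAD_SIZE);
        memcpy(block + PAYLOAD_SIZE, &crc, sizeof crc);
    }

    /* Verify a block just read from `lba`; non-zero means torn, corrupted,
       or written to / read from the wrong LBA. */
    int block_check(const uint8_t block[BLOCK_SIZE], uint64_t lba) {
        uint32_t stored, crc;
        memcpy(&stored, block + PAYLOAD_SIZE, sizeof stored);
        crc = crc32_update(0, &lba, sizeof lba);
        crc = crc32_update(crc, block, PAYLOAD_SIZE);
        return crc == stored ? 0 : -1;
    }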
People consistently underestimate the many ways in which storage can and will fail in the wild.

The most vexing storage failure is the phantom write. A disk read returns a "valid" page, just not the last written/fsync-ed version of that page. Reliably detecting this case is very expensive, particularly on large storage volumes, so it is rarely done for storage where performance is paramount.
Check out Parity Lost and Parity Regained and Characteristics, Impact, and Tolerance of Partial Disk Failures (which this blog indirectly cites) if you'd like authoritative reading on the topic.

https://www.usenix.org/legacy/event/fast08/tech/full_papers/krioukov/krioukov.pdf

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=8e486139d944cc7291666082bc5a74814af6e388
This looks AI-generated, including the linked code. That explains why the .zig-cache directory and the binary are checked into Git, why there's redundant commenting, and why the README has that bold, bullet-point-and-headers style that is typical of AI.

If you can't be bothered to write it, I can't be bothered to read it.
The front page this weekend has been full of this stuff. If there’s a hint of clickbait about the title, it’s almost a foregone conclusion you’ll see all the other LLM tics, too.

These do not make the writing better! They obscure whatever the insight is behind LinkedIn-engagement tricks and turns of phrase that obfuscate rather than clarify.

I’ll keep flagging and see if the community ends up agreeing with me, but this is making more and more of my HN experience disappointing instead of delightful.
I worked with a greybeard who instilled in me that whenever we were about to do RAID maintenance, we would always run sync twice. The second run is to make sure it returns immediately. And I added a third for my own anxiety.
You need to sync twice because Unix is dumb: "According to the standard specification (e.g., POSIX.1-2001), sync() schedules the writes, but may return before the actual writing is done." https://man7.org/linux/man-pages/man2/sync.2.html
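(The same man page notes that Linux's sync() has actually waited for completion since 1.3.20, though that still says nothing about the drive's own cache.) The portable habit is to fsync() the specific descriptors you actually care about instead of relying on sync()'s "schedules the writes" semantics. A minimal sketch, with an illustrative path:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Flush one file explicitly: fsync() does not return until the data
       has been handed off to the storage device (assuming the drive's
       write cache isn't lying about it). */
    int flush_file(const char *path) {           /* path is illustrative */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return -1; }
        if (fsync(fd) != 0) { perror("fsync"); close(fd); return -1; }
        return close(fd);
    }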
    sync; sync; halt
It's not just a good idea for RAID.
Oh definitely not; I do it on every system I've needed synced before doing something. We were just working at a place that had 2k+ physical servers with 88 drives each in RAID6, so that was our main concern back then.

I have been passing my anxieties about hard drives to junior engineers for a decade now.
> Submit the write to the primary file

> Link fsync to that write (IOSQE_IO_LINK)

> The fsync's completion queue entry only arrives after the write completes

> Repeat for secondary file

Wait, so the OS can re-order the fsync() to happen before the write request it is supposed to be syncing? Is there a citation or link to some code for that? It seems too ridiculous to be real.

> O_DSYNC: Synchronous writes. Don't return from write() until the data is actually stable on the disk.

If you call fsync() this isn't needed, correct? And if you use this, then fsync() isn't needed, right?
> Wait, so the OS can re-order the fsync() to happen before the write request it is supposed to be syncing? Is there a citation or link to some code for that? It seems too ridiculous to be real.

This is an io_uring-specific thing. It doesn't guarantee any ordering between operations submitted at the same time unless you explicitly ask it to with the `IOSQE_IO_LINK` they mentioned.

Otherwise it's as if you called write() from one thread and fsync() from another, before waiting for the write() call to return. That obviously defeats the point of using fsync(), so you wouldn't do that.

> If you call fsync(), [O_DSYNC] isn't needed correct? And if you use [O_DSYNC], then fsync() isn't needed right?

I believe you're right.
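A minimal liburing sketch of the linked pair described above (the function and argument names are mine, not the article's): without IOSQE_IO_LINK the kernel is free to start the fsync before, or concurrently with, the write; with it, the fsync SQE only starts once the write completes successfully.

    #include <liburing.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    /* Assumes `ring` was set up with io_uring_queue_init() and has at
       least two free SQEs available. */
    int write_then_fsync(struct io_uring *ring, int fd,
                         const void *buf, unsigned len, off_t off) {
        struct io_uring_sqe *sqe;

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_write(sqe, fd, buf, len, off);
        sqe->flags |= IOSQE_IO_LINK;   /* don't start the next SQE until this one succeeds */

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_fsync(sqe, fd, IORING_FSYNC_DATASYNC);

        if (io_uring_submit(ring) < 0)
            return -1;

        /* Reap both completions; the fsync CQE can only arrive after the write's. */
        for (int i = 0; i < 2; i++) {
            struct io_uring_cqe *cqe;
            if (io_uring_wait_cqe(ring, &cqe) < 0)
                return -1;
            if (cqe->res < 0)
                fprintf(stderr, "op %d failed: %s\n", i, strerror(-cqe->res));
            io_uring_cqe_seen(ring, cqe);
        }
        return 0;
    }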
This FAST '08 paper, "Parity Lost and Parity Regained," is still the one I pull out and show people if they seem to be under-imagining all the crimes an HDD can do.

https://www.usenix.org/legacy/event/fast08/tech/full_papers/krioukov/krioukov.pdf
This article is pretty low quality. It's an important and interesting topic, and the article is mostly right, but it's not clear enough to rely on.

The OS page cache is not a "problem"; it's a basic feature with well-documented properties that you need to learn if you want to persist data. The writing style seems off in general (e.g., "you're lying to yourself").

AFAIK fsync is the best practice, not O_DIRECT + O_DSYNC. The article mentions O_DSYNC in some places and fsync in others, which is confusing. You don't need both.

Personally, I would prefer to use the filesystem (RAID or ditto blocks) to handle latent sector errors (LSEs) rather than duplicating files at the app level. A case could be made for dual WALs if you don't know or control what filesystem will be used.

Due to the page cache, attempting to verify writes by reading the data back won't verify anything. Maaaybe this will work when using O_DIRECT.
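To the fsync-vs-O_DSYNC point: they're two ways to get the same durability guarantee for the data, and you pick one. A sketch of both styles (names and paths are illustrative):

    #include <fcntl.h>
    #include <unistd.h>

    /* Style 1: open with O_DSYNC, so each write() only returns once the
       data (and the metadata needed to read it back) is on stable storage. */
    int wal_open_dsync(const char *path) {
        return open(path, O_WRONLY | O_APPEND | O_CREAT | O_DSYNC, 0644);
    }

    int wal_append_dsync(int fd, const void *buf, size_t len) {
        return write(fd, buf, len) == (ssize_t)len ? 0 : -1;
    }

    /* Style 2: normal buffered write, then an explicit fdatasync(). */
    int wal_append_fdatasync(int fd, const void *buf, size_t len) {
        if (write(fd, buf, len) != (ssize_t)len)
            return -1;
        return fdatasync(fd);
    }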
<a href="https://en.wikipedia.org/wiki/Data_Integrity_Field" rel="nofollow">https://en.wikipedia.org/wiki/Data_Integrity_Field</a><p>This, along with RAID-1, is probably sufficient to catch the majority of errors. But realize that these are just probabilities - if the failure can happen on the first drive, it can also happen on the second. A merkle tree is commonly used to also protect against these scenarios.<p>Notice that using something like RAID-5 can result in data corruption migrating throughout the stripe when using certain write algorithms
The paranoid would also follow the write with a read command with the SCSI FUA (force unit access) bit set, which requires the disk to read from the physical media and confirms the data was really written to that rotating rust. Trying to do the same with SATA or NVMe drives might be more complicated, or maybe impossible. That’s the method to ensure your data was actually written to viable media and can subsequently be read.
Note that 99% of drives don't implement DIF.
I thought an fsync on the containing directories of each of the logs was needed to ensure that the newly created files were durably present in the directories.
Right, you do need to fsync when creating new files to ensure the directory entry is durable. However, WAL files are typically created once and then appended to for their lifetime, so the directory fsync is only needed at file creation time, not during normal operations.
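A sketch of that creation-time sequence (paths are illustrative): fsync the new segment itself, then fsync its parent directory so the directory entry naming it also survives a crash.

    #include <fcntl.h>
    #include <unistd.h>

    int create_wal_segment(const char *dir_path, const char *file_path) {
        int fd = open(file_path, O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd < 0)
            return -1;
        if (fsync(fd) != 0) { close(fd); return -1; }  /* the file itself */
        close(fd);

        int dfd = open(dir_path, O_RDONLY | O_DIRECTORY);
        if (dfd < 0)
            return -1;
        int rc = fsync(dfd);                           /* the directory entry */
        close(dfd);
        return rc;
    }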
> Conclusion

> A production-grade WAL isn't just code, it's a contract.

I hate that I'm now suspicious of this formulation.
You’re not insane. This is definitely AI.
In what sense? The phrasing is just a generalization; production-grade anything needs consideration of the needs and goals of the project.
“<x> isn’t just <y>, it’s <z>” is an AI smell.
It is, but partly because it is a common form in the training data. LLM output seems to use the form more than people do, presumably either due to some bias in the training data (or the way it is tokenised) or due to other common token sequences leading into it (remember: it isn't an official acronym, but Glorified Predictive Text is an accurate description). While it is a smell, it certainly isn't a reliable marker; there needs to be more evidence than that.
Wouldn't that just be because the construction is common in the training materials, which means it's a common construction in human writing?
<a href="https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing" rel="nofollow">https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing</a><p>the language technique of negative parallel construction is a classic signal for AI writing
Flashback to this old thread about SSD vendors lying about FLUSH'd writes: https://news.ycombinator.com/item?id=38371307 (I have an SKHynix drive with this issue.)
Deleting data from the disk is actually even harder.