5 comments

  • yjftsjthsd-h 1 hour ago
    > This is not a bug report. [...] The goal is constructive, not a complaint.

    Er, I appreciate trying to be constructive, but in what possible situation is it not a bug that a power cycle can lose the pool? And if it's not technically a "bug" because BTRFS officially specifies that it can fail like that, why is that not in big bold text at the start of any docs on it? 'Cuz that's kind of a big deal for users to know.

    EDIT: From the longer write-up:

    > Initial damage. A hard power cycle interrupted a commit at generation 18958 to 18959. Both DUP copies of several metadata blocks were written with inconsistent parent and child generations.

    Did the author disable safety mechanisms for that to happen? I'm coming from being more familiar with ZFS, but I would have expected BTRFS to also use a CoW model where it wasn't possible to have multiple inconsistent metadata blocks in a way that didn't just revert you to the last fully-good commit. If it does that by default but there's a way to disable that protection in the name of improving performance, that would significantly change my view of this whole thing.
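    (Editor's note: to make the "inconsistent parent and child generations" damage above concrete, here is a minimal, hypothetical sketch, not real btrfs code. In a CoW tree, a parent block's pointer records the generation (transaction id) its child was written under; after a torn commit, the on-disk child may still carry the previous generation. All names here, `Block`, `ChildPtr`, `transid_ok`, are illustrative.)

```python
from dataclasses import dataclass

@dataclass
class Block:
    generation: int  # transaction id stamped when this block was written

@dataclass
class ChildPtr:
    expected_generation: int  # generation the parent recorded at commit time
    child: Block              # the block actually found on disk

def transid_ok(ptr: ChildPtr) -> bool:
    """True if the child carries the generation its parent expects."""
    return ptr.child.generation == ptr.expected_generation

# Consistent pointer: parent and child both come from commit 18959.
good = ChildPtr(expected_generation=18959, child=Block(generation=18959))
assert transid_ok(good)

# Torn commit: the parent was rewritten for 18959, but the child block
# on disk still holds 18958, the kind of mismatch a hard power cycle
# can leave behind if metadata writes are not properly ordered.
torn = ChildPtr(expected_generation=18959, child=Block(generation=18958))
assert not transid_ok(torn)
```

    (A checker that detects this mismatch can only flag it; whether the filesystem can then roll back to the last fully-consistent commit is exactly the question raised above.)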
    • rincebrain 1 hour ago
      As far as I can see, no, the author disabled nothing of the sort that he documented.

      I suspect that the author's intent is less "I do not view this as a bug" and more "I do not think it's useful to get into angry debates over whether something is a bug". I do not know whether this is a common thing in btrfs discussions, but I have certainly seen debates to that effect elsewhere.

      (My personal favorite remains "it's not a data loss bug if someone could technically theoretically write something to recover the data". Perhaps, technically, that's true, but if nobody is writing such a tool, nobody is going to care about the semantics there.)
      • yjftsjthsd-h 41 minutes ago
        > I suspect that the author's intent is less "I do not view this as a bug" and more "I do not think it's useful to get into angry debates over whether something is a bug".

        Agreed, and I appreciate the attempt to channel things into a productive conversation.
    • Retr0id 42 minutes ago
      Unless I missed it, the writeup never identifies a causal bug, only things that made recovery harder.
  • Retr0id 57 minutes ago
    This is obviously LLM output, but perhaps LLM output that corresponds to a real scenario. It's plausible that Claude was able to autonomously recover a corrupted fs, but I would not trust its "insights" by default. I'd love to see a btrfs dev's take on this!
    • number6 46 minutes ago
      This was also my first impulse. The second was: if this happened to me, I would not be able to recover it. All the custom C tool talk... if you ask Claude Code, it will code something up.

      Still, that he recovered the disks is amazing in itself. I would have given up and just pulled a backup.

      However, I would like to see a dev saying: why didn't you use the --<flag> which we created for this use case?
    • yjftsjthsd-h 44 minutes ago
      I was assuming a real scenario with heavy LLM help to recover. Would be nice for the author to clarify. And, separately, for BTRFS devs to weigh in, though I'd somewhat prefer to get some indication that it's real before they spend their time.
  • stinkbeetle 2 hours ago
    > Case study: recovery of a severely corrupted 12 TB multi-device pool, plus constructive gap analysis and reference tool set #1107

    Please don't be btrfs please don't be btrfs please don't be btrfs...
    • toaste_ 25 minutes ago
      I mean, the only other option was bcachefs, which might have been funny if this LLM-generated blogpost were written by the OpenClaw instance the developer has decided is sentient:

      https://www.reddit.com/r/bcachefs/comments/1rblll1/the_blog_of_an_llm_saying_its_owned_by_kent_and/

      But no. It was btrfs.

      As a side note, it's somewhat impressive that an LLM agent was able to produce a suite of custom tools that were apparently successfully used to recover some data from a corrupted btrfs array, even ad-hoc.
      • yjftsjthsd-h 16 minutes ago
        It *could* be ZFS. I'd be much more surprised, but it can still have bugs.
  • phoronixrly 2 hours ago
    To the author: did you continue using btrfs after this ordeal? An FS that will not eat (all) your data upon a hard power cycle only at the cost of 14 custom C tools is a hard pass from me, no matter how many distros try to push it down my throat as 'production-ready'...

    Also, impressive work!