7 comments

  • acidmath 1 day ago
    Before BlueStore, we ran Ceph on ZFS with the ZFS Intent Log on NVDIMM (basically non-volatile RAM backed by a battery). The performance was extremely good. Today we run BlueStore on ZVOLs on the same setup, and if the zpool is a "hybrid" pool we put the Ceph OSD databases on an all-NVMe zpool. The Ceph WAL wants a disk slice for each OSD, so we skip the Ceph WAL and consolidate incoming writes on the ZIL/SLOG on NVDIMM instead.
    • nightfly 1 day ago
      Why ceph on ZVOLs and not bare disks?
      • acidmath 1 day ago
        In the servers we have only 16 GB to 64 GB of NVDIMM, depending on the density of the NVDIMM and how many slots are populated with it. Whatever the raw NVDIMM capacity is, usable is half of that, because we mirror the contents for physical redundancy (losing a transaction would be fatal to our business). NVMe is amazing, but not everything should be NVMe; petabyte-scale object storage, for example, does not need to be all-NVMe (which is super pricey).

        In newer DDR5 servers where we can't get NVDIMM, the alternative battery-backed RAM options leave us with even less to work with.

        Where we have counts of HDDs or SATA/SAS SSDs in the hundreds, we still want the performance improvements provided by a WAL (or functional equivalent such as ZIL/SLOG) on NVDIMM and some layer-2 (where layer-1 is RAM) caching with NVMe.

        Ceph OSDs want a dedicated WAL device. Some places use OpenCAS to make "hybrid" devices out of HDDs by pairing them with SSDs, where the SSDs can accelerate reads for that HDD and the Ceph OSD goes on a logical OpenCAS device. OpenCAS is really great, but the devices acting as the "caching layer" often end up underutilized.

        By placing "big" Ceph OSDs on ZVOLs, we don't have individual disk slices for the WAL (or equivalent) or individual disks for layer-2 read caching, but a *consolidated layer* in the form of the ZFS Intent Log on a "Separate Log" device (NVDIMM), and another consolidated layer in the ZFS pool's L2ARC (level-2 adaptive replacement cache).

        The ZVOLs are striped across multiple relatively large RAID-Z3 arrays. Yeah, it's "less efficient" in some ways, but the tradeoff is worth it for us.

          https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#devices
          https://open-cas.com/
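        Roughly, the layering looks like this (a sketch only; device names, sizes, and pool names are placeholders, not our real layout):

          # Sketch: one "big" BlueStore OSD on a ZVOL whose pool has a mirrored
          # NVDIMM SLOG and an NVMe L2ARC. All device paths are placeholders.
          import subprocess

          def run(cmd):
              print("+", " ".join(cmd))
              subprocess.run(cmd, check=True)

          # RAID-Z3 data vdev, mirrored NVDIMM log (SLOG), NVMe cache (L2ARC).
          run(["zpool", "create", "tank",
               "raidz3", "sda", "sdb", "sdc", "sdd", "sde", "sdf",
               "log", "mirror", "pmem0", "pmem1",
               "cache", "nvme0n1"])

          # A ZVOL as the OSD's data device, plus a small ZVOL on an all-NVMe
          # pool ("fastpool", assumed to already exist) for the OSD database.
          run(["zfs", "create", "-V", "10T", "tank/osd0"])
          run(["zfs", "create", "-V", "64G", "fastpool/osd0-db"])

          # BlueStore OSD: data on the big ZVOL, RocksDB on the NVMe-backed
          # ZVOL, no separate WAL device (writes land on the pool's SLOG).
          run(["ceph-volume", "lvm", "prepare",
               "--data", "/dev/zvol/tank/osd0",
               "--block.db", "/dev/zvol/fastpool/osd0-db"])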
        • __turbobrew__ 1 day ago
          Do you have any recommendations or warnings about running ceph clusters?
          • Agingcoder 23 hours ago
            Find people who understand it. I've seen epic failures when things grow: you lose a DC and hell rains down on you. It's not magic; you will need people who get it (source: an unstable cluster of a few petabytes where I work).
          • acidmath 22 hours ago
            Just off the top of my head:

            Run Ceph on https://rook.io/ ; don't bother with Cephadm. Running Rook provides very helpful guard rails. Put the logs for Ceph Rook into Elasticsearch+Kibana on its own small (three or four node) dedicated Ceph Rook cluster. Which Kubernetes distro this runs on matters more than anything.

            Recently we have been looking at using https://www.parseable.com/ instead of Elasticsearch+Kibana. And somewhat recently we had started moving things from Elasticsearch+Kibana to OpenSearch+OpenSearch Dashboards due to the license change.

            The requirement outlined by the Ceph documentation to dedicate layer-1 paths (they can be the same switches, but must be different ports) to Ceph replication is not about "performance" but about normal functionality.

            If you have any pointed questions feel free to email "section two thirty audit@mail2tor dot com" (where "two thirty" are the three digits rather than spelled out).
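            To make the layer-1 point concrete: it usually comes down to giving Ceph a separate cluster (replication) network from the public one. A minimal sketch with placeholder subnets follows; with Rook you would normally express this in the CephCluster spec rather than a hand-written ceph.conf.

              # Sketch only: separate public and cluster networks for Ceph.
              # The subnets are placeholders.
              import configparser

              conf = configparser.ConfigParser()
              conf["global"] = {
                  "public_network": "10.0.0.0/24",   # client and MON traffic
                  "cluster_network": "10.0.1.0/24",  # OSD replication/recovery traffic
              }

              with open("ceph.conf", "w") as f:
                  conf.write(f)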
            • __turbobrew__ 20 hours ago
              I already set things up with Rook, as we are super heavily invested in Kubernetes, and things are working well so far. I built out a test cluster to 1 PiB and was able to push more than a terabit/second through the cluster, which was good.

              I also set up topology-aware replication so PGs can be spread across racks/datacenters.

              My main worry now is disaster recovery. From what I have seen, object recovery is quite manual if you lose any. I would like to write some scripts so we can bulk-mark objects which we know are actually lost.

              We already have a Loki setup, so Ceph logs just get put into there.
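              Roughly the kind of script I have in mind (a sketch only; it assumes the PG IDs were collected by hand from "ceph health detail" and that deleting, rather than reverting, the unfound objects is really what we want):

                # Sketch: bulk-mark unfound objects as lost in a list of PGs.
                import json
                import subprocess
                import sys

                def ceph_json(*args):
                    out = subprocess.run(["ceph", *args, "--format", "json"],
                                         check=True, capture_output=True, text=True)
                    return json.loads(out.stdout)

                def main(pgids):
                    for pgid in pgids:
                        unfound = ceph_json("pg", pgid, "list_unfound")
                        count = unfound.get("num_unfound", len(unfound.get("objects", [])))
                        if not count:
                            print(f"{pgid}: nothing unfound, skipping")
                            continue
                        print(f"{pgid}: marking {count} unfound objects lost (delete)")
                        # "revert" is the gentler option when an older copy may exist.
                        subprocess.run(["ceph", "pg", pgid, "mark_unfound_lost", "delete"],
                                       check=True)

                if __name__ == "__main__":
                    main(sys.argv[1:])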
              • acidmath 13 hours ago
                > object recovery is quite manual if you lose any

                When I read this I think "but you should never lose an object". Do you mean the underlying data chunks Ceph stores? Can you elaborate on this part? I know some of the teams I work with do things in unorthodox ways and we tend to operate on different assumptions than others.

                > so pg's can be spread across racks/datacenters.

                Some Ceph pools come to mind (this was a while ago, though I'm sure they're still running) where the erasure coding was done across cabinet rows and each cabinet row was on its own power distribution. I don't know how the power worked, but I was told rather forwardly that some specific Ceph pools' failure domains aligned with the datacenter's failure domains.

                > We already have a loki setup

                Nice. We have logs go into S3, and then anyone who prefers a particular tool is welcome to load whatever sets of logs from S3 within the resource limits set for whatever K8s namespace they work with. Originally keeping logs append-only in S3 was for compliance, but we wanted to limit team members by RAM quota rather than by tools, in line with the "people over tools over process" DevOps maxim.
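                The shape of that erasure-coding setup can be sketched with stock Ceph commands (illustrative only; the k/m values, the names, and the use of the built-in "row" CRUSH bucket type are placeholders, not what that cluster actually ran):

                  # Sketch: an EC profile whose failure domain is a CRUSH "row",
                  # so no single cabinet row holds more than one chunk per object.
                  # Requires at least k+m rows in the CRUSH map.
                  import subprocess

                  def ceph(*args):
                      subprocess.run(["ceph", *args], check=True)

                  # k=8 data chunks + m=3 coding chunks, at most one chunk per row.
                  ceph("osd", "erasure-code-profile", "set", "row-ec",
                       "k=8", "m=3", "crush-failure-domain=row")

                  # Pool built on that profile (name and PG counts are placeholders).
                  ceph("osd", "pool", "create", "objects-ec", "256", "256",
                       "erasure", "row-ec")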
  • Also known as:
    Write! No, fsync! No, really fsync I mean it!
    Wait, why is my disk throughput so low? And why am I out of file descriptors?
    • chupasaurus 1 day ago
      The article is focused on Ceph, where the FS is a frontend to the storage backend(s); now read the title again...
    • Dylan16807 1 day ago
      > Wait, why is my disk throughput so low?

      Because many filesystems do fsync wrong, for reasons that are not inherent to filesystems in general.
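      For reference, the application-side dance the parent is joking about looks roughly like this (a sketch; the interesting part is what the filesystem does when fsync fails, which is exactly where they differ):

        # Sketch of a "really durable" write on POSIX: write a temp file,
        # fsync it, rename it into place, then fsync the directory so the
        # rename itself survives a crash.
        import os

        def durable_write(path, data: bytes):
            dirpath = os.path.dirname(os.path.abspath(path))
            tmp = os.path.join(dirpath, ".tmp." + os.path.basename(path))

            fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
            try:
                while data:
                    written = os.write(fd, data)
                    data = data[written:]
                os.fsync(fd)               # flush file data to stable storage
            finally:
                os.close(fd)

            os.rename(tmp, path)           # atomic replace in the same directory

            dfd = os.open(dirpath, os.O_RDONLY)
            try:
                os.fsync(dfd)              # persist the directory entry too
            finally:
                os.close(dfd)

        durable_write("example.txt", b"hello\n")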
  • baruch 1 day ago
    It's easier to write the system's frontend while paying little attention to the backend and "just" letting a local filesystem do a lot of the work for you, but it doesn't work well. The interesting question is whether the frontend-to-backend communication abstraction also turns out to be good enough to replace the backend with a better solution. I'm not familiar enough with Ceph and BlueStore to have a conclusion on that.

    I happen to work for a distributed-filesystem company, and while I don't do the filesystem part itself, the old saying "it takes software 10 years to mature" is so true in this domain.
  • sitkack 1 day ago
    See also "Hierarchical File Systems are Dead" by Margo Seltzer and Nicholas Murphy: https://www.usenix.org/legacy/events/hotos09/tech/full_papers/seltzer/seltzer.pdf
    • MR4D 1 day ago
      No mention of LATCH theory? (Location, Alphabet, Time, Category, and Hierarchy.)

      Oddly, no matter how they are organized, their indices will always be a hierarchy (a tree).

      Personally, I think the human brain has a categorization approach built in as hierarchy, so while the other methods are definitely useful, they are an add-on, not a replacement.
      • TIL: https://parsonsdesign4.wordpress.com/resources/latch-methods-of-organization/
  • zokier 1 day ago
    Lots of these issues are not specific to distributed systems; they also impact local single-node systems. Notable examples are the PostgreSQL fsyncgate, or how mail servers struggled in the past (IIRC that was one of the cases where ReiserFS shined).
  • resurrected 1 day ago
    Noooo, really?

    It all depends on what you want to do. For things that are already in files, like all the data that DeepSeek and other models train on and for which DS open-sourced their own distributed file system, it makes sense to go with a distributed file system.

    For OLTP you need a database with appropriate isolation levels.

    I know someone will build a distributed file system on top of FoundationDB if they haven't yet.
    • _zoltan_ 1 day ago
      Around 2006 I built a FUSE filesystem that used MySQL as a backend; it kept hashes of whole files (not blocks) and did deduplication. Good old times.
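      The core of it, sketched with SQLite standing in for MySQL and without the FUSE plumbing:

        # Whole-file dedup sketch: one row per unique content hash, one row per path.
        import hashlib
        import sqlite3

        db = sqlite3.connect("dedup.db")
        db.executescript("""
            CREATE TABLE IF NOT EXISTS blobs (hash TEXT PRIMARY KEY, data BLOB);
            CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, hash TEXT);
        """)

        def put(path: str, data: bytes):
            digest = hashlib.sha256(data).hexdigest()
            # Store the content only if this hash has never been seen before.
            db.execute("INSERT OR IGNORE INTO blobs (hash, data) VALUES (?, ?)",
                       (digest, data))
            db.execute("INSERT OR REPLACE INTO files (path, hash) VALUES (?, ?)",
                       (path, digest))
            db.commit()

        def get(path: str) -> bytes:
            row = db.execute(
                "SELECT data FROM blobs JOIN files USING (hash) WHERE path = ?",
                (path,)).fetchone()
            if row is None:
                raise FileNotFoundError(path)
            return row[0]

        put("/a.txt", b"same bytes")
        put("/b.txt", b"same bytes")  # second path, but the content is stored once
        assert get("/b.txt") == b"same bytes"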
    • darkstar_16 1 day ago
      Isn't the Cassandra file system something like that?
    • AtlasBarfed 1 day ago
      They did it atop Cassandra.
    • jeffrallen 1 day ago
      They have, at Exoscale. My officemate leads the team doing it.
    • EGreg 1 day ago
      Just use hypercore with hyperdrive. And be free!
  • Spivak 1 day ago
    It really is true. I spent years of my life wrangling a massive GlusterFS cluster and it was awful. You basically can't do any kind of filesystem operations on it that aren't CRUD on well-known specific paths. Anything else (traversal, moving/copying, linking, updating permissions) would just hang forever. You're also at the mercy of the kernel driver, which does hate you personally. You will have nightmares about uninterruptible sleep. Migrating it all to S3 over Ceph was a beautiful thing.
    • ted_dunning 1 day ago
      That has more to do with Gluster's primitive nature than with any general statement about what can work for distributed storage.