I've had to test out various networked filesystems this year for a few use cases (satellite/geo) on a multi-petabyte scale. Some of my thoughts:<p>* JuiceFS - Works well, but for high performance it has limited use cases where privacy concerns matter; there is the open source version, which is slower. The metadata backend selection really matters if you are tuning for latency.<p>* Lustre - Heavily optimised for latency. Gets very expensive if you need more bandwidth, as it is tiered and tied to volume sizes. Managed solutions are available pretty much everywhere.<p>* EFS - Surprisingly good these days, still insanely expensive. Useful for small amounts of data (a few terabytes).<p>* FlexFS - An interesting beast. It murders on bandwidth/cost, but slightly loses on latency-sensitive operations. Great if you have petabyte-scale data and need to parallel process it, but it struggles when you have tooling that does many small unbuffered writes.
Nothing around content addressable storage? Has anyone used something like IPFS / Kubo in production?<p>For those who don't know IPFS, the original paper is fascinating: <a href="https://arxiv.org/pdf/1407.3561" rel="nofollow">https://arxiv.org/pdf/1407.3561</a>
Related, "The Design & Implementation of Sprites" [1] (also currently on the front page) mentioned JuiceFS in its stack:<p>> The Sprite storage stack is organized around the JuiceFS model (in fact, we currently use a very hacked-up JuiceFS, with a rewritten SQLite metadata backend). It works by splitting storage into data (“chunks”) and metadata (a map of where the “chunks” are). Data chunks live on object stores; metadata lives in fast local storage. In our case, that metadata store is kept durable with Litestream. Nothing depends on local storage.<p>[1] <a href="https://news.ycombinator.com/item?id=46634450">https://news.ycombinator.com/item?id=46634450</a>
Do people really trust Redis for something like this? I feel like it's sort of pointless to pair Redis with S3 like this, and it'd be better to see benchmarks with metadata stores that can provide actual guarantees for durability/availability.<p>Unfortunately, the benchmarks use Redis. Why would I care about distributed storage on a system like S3, which is all about consistency/durability/availability guarantees, just to put my metadata into Redis?<p>It would be nice to see benchmarks with another metadata store.
We developed Object Mount (formerly cunoFS) (<a href="https://www.storj.io/object-mount?hn=1" rel="nofollow">https://www.storj.io/object-mount?hn=1</a>) specifically to not rely on any metadata storage other than S3, AND to preserve a 1:1 mapping of objects to files, AND to support POSIX. We have a direct mode that uses LD_PRELOAD to keep everything in userspace, so there is no FUSE overhead.<p>This approach isn't right for every use case, and JuiceFS might be a better fit for this sort of 'direct block store', but I wanted to include it here for folks who might want something like Juice without having to maintain a metadata store.<p>(Disclosure: I work at Storj, which develops Object Mount)
JuiceFS metadata engine comparison -> <a href="https://juicefs.com/docs/community/metadata_engines_benchmark" rel="nofollow">https://juicefs.com/docs/community/metadata_engines_benchmar...</a>
Redis is as reliable as the storage you persist it to. If you're running Redis right, it's very reliable. Not S3 reliable, though. But if you need S3 reliable, you would turn to something else.<p>I expect that most folks looking at this are doing it because it means:<p>1. Effectively unbounded storage<p>2. It's fast<p>3. It's pretty darn cheap<p>4. You can scale it horizontally in a way that's challenging to scale other filesystems<p>5. All the components are pretty easy to set up. Many folks are probably already running S3 and Redis.
Redis isn't durable unless you drastically reduce the performance.<p>Filesystems are pretty much by definition durable.
> Redis is as reliable as the storage you persist it to.<p>For a single node, if you tank performance by changing the configuration, sure. Otherwise, no, not really.<p>I don't get why you'd want a file system that isn't durable, but to each their own.
> 4. You can scale it horizontally in a way that's challenging to scale other filesystems<p>Easy to scale on RDS, along with everything else. But there’s no Kubernetes operator. Is there a better measure of “easy” or “challenging?” IMO no. Perhaps I am spoiled by CNPG.
I think they should replace Redis with Valkey, or better yet use RocksDB.
It says MySQL can be used instead of Redis for the metadata
Yes, it supports more than 10 options, including Redis, SQL-like DBs, TiKV, FoundationDB, and more. See here -> <a href="https://juicefs.com/docs/community/databases_for_metadata" rel="nofollow">https://juicefs.com/docs/community/databases_for_metadata</a>
Juice is cool, but tradeoffs around which metadata store you choose end up being very important. It also writes files in its own uninterpretable format to object storage, so if you lose the metadata store, you lose your data.<p>When we tried it at Krea we ended up moving on because we couldn't get sufficient performance to train on, and having to choose which datacenter to deploy our metadata store in essentially forced us to use it in only one location at a time.
I'm betting this is on the front page today (as opposed to any other day; Juice is very neat and doesn't need us to hype it) because of our Sprites post, which goes into some detail about how we use Juice (for the time being; I'm not sure if we'll keep it this way).<p>The TL;DR relevant to your comment is: we tore out a lot of the metadata stuff, and our metadata storage is SQLite + Litestream.io, which gives us fast local read/write, enough systemwide atomicity (all atomicity in our setting runs asymptotically against "someone could just cut the power at any moment"), and preserves "durably stored to object storage".
Litestream.io is amazing. Using SQLite as a DB in a typical relational data model where objects are related means most read-then-write transactions would have to go to one node, but if you're using it for blobs as first-class objects (e.g. video uploads or sensor data) which are independent, you can probably shard and scale your setup out the wazoo, right?
In large-scale metadata scenarios, JFS recommends using a distributed key-value store to host metadata, such as TiKV or FoundationDB. Based on my experience with large JFS users, most of them choose TiKV.<p>Disclaimer: I'm the co-founder of TiKV.
> It also writes files in its own uninterpretable format to object storage, so if you lose the metadata store, you lose your data.<p>That's so confusing to me I had to read it five times. Are you saying the underlying data is actually mangled or gone, or merely that you lose the metadata?<p>One of the greatest features of something like this, to me, would be durable access to my data even without JuiceFS in a bad situation. Even if JuiceFS totally messes up, my data is still in S3 (and with versioning etc., even if JuiceFS mangles or deletes my data, it's still there). So it's odd to design this kind of software and lose this property.
It backs its metadata up to S3. You do need metadata to map inodes / slices / chunks to s3 objects, though.<p>Tigris has a one-to-one FUSE that does what you want: <a href="https://github.com/tigrisdata/tigrisfs" rel="nofollow">https://github.com/tigrisdata/tigrisfs</a>
As I understand it, if the metadata is lost then the whole filesystem is lost.<p>I think this is a common failure mode in filesystems. For example, in ZFS, if you store your metadata on a separate device and that device is destroyed, the whole pool is useless.
The consistency guarantees are what makes this interesting in my opinion.<p>> <i>Close-to-open consistency. Once a file is written and closed, it is guaranteed to view the written data in the following opens and reads from any client. Within the same mount point, all the written data can be read immediately.</i><p>> <i>Rename and all other metadata operations are atomic, which are guaranteed by supported metadata engine transaction.</i><p>This is a lot more than other <i>"POSIX compatible"</i> overlays claim, and I think similar to what NFSv4 promises. There are lots of subtleties there, though, and I doubt you could safely run a database on it.
You can run MySQL and PG on it, but I don't recommend it; the performance isn't good enough for production. For temporary use it's OK. Here's a case study -> <a href="https://juicefs.com/en/blog/user-stories/xiachufang-mysql-backup-practice-on-juicefs" rel="nofollow">https://juicefs.com/en/blog/user-stories/xiachufang-mysql-ba...</a><p>And here's a POSIX compatibility comparison with other cloud file systems, like AWS EFS. <a href="https://juicefs.com/en/blog/engineering/posix-compatibility-comparison-among-four-file-system-on-the-cloud" rel="nofollow">https://juicefs.com/en/blog/engineering/posix-compatibility-...</a>
I've tested various POSIX FS projects over the years and every one has its shortcomings in one way or another.<p>Although the maintainers of these projects disagree, I mostly consider them a workaround for smaller projects. For big data (PB range) and critical production workloads I recommend biting the bullet and making your software natively S3 compatible without going over a POSIX-mounted S3 proxy.
Agreed, I don't recommend a POSIX proxy on S3 for complex workloads, like S3FS. In the design of JuiceFS, S3 is like the raw disk and the JuiceFS metadata engine is like the partition table, compared with a local file system.
JuiceFS can be scaled to hundreds of PB by design, and this has been verified by thousands of users in production [1].<p>[1] <a href="https://juicefs.com/en/blog/company/2025-recap-artificial-intelligence-storage" rel="nofollow">https://juicefs.com/en/blog/company/2025-recap-artificial-in...</a>
I think so.
The key I think with s3 is using it mostly as a blobstore. We put the important metadata we want into postgres so we can quickly select stuff that needs to be updated based on other things being newer. So, we don't need to touch s3 that often if we don't need the actual data.<p>When we actually need to manipulate or generate something in Python, we download/upload to S3 and wrap it all in a tempfile.TemporaryDirectory() to clean up the local disk when we're done. If you don't do this, you eventually end up with a bunch of garbage in /tmp/ you need to deal with.<p>We also have some longer-lived disk caches, and using the data in the db plus an os.stat() on the file we can easily know if the cache is up to date without hitting s3. And for this cache, we can just delete stuff that's old wrt os.stat() to manage its size, since we can always get it from s3 again if needed in the future.
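In case it's useful, a minimal sketch of that pattern (the bucket name, key layout, and the boto3 client here are placeholder assumptions; the shape is what matters):<p><pre><code>import os
import tempfile

import boto3  # assumed S3 client; any S3 SDK works the same way

s3 = boto3.client("s3")
BUCKET = "my-blob-bucket"  # hypothetical bucket name


def process_blob(key: str) -> None:
    # Everything lands in a throwaway directory that is removed on exit,
    # so nothing piles up under /tmp/ even if processing raises.
    with tempfile.TemporaryDirectory() as tmp:
        local = os.path.join(tmp, os.path.basename(key))
        s3.download_file(BUCKET, key, local)
        # ... manipulate or regenerate the file locally ...
        s3.upload_file(local, BUCKET, key)


def cache_is_fresh(cache_path: str, db_mtime: float) -> bool:
    # Compare the cached copy's mtime against the timestamp recorded in
    # the database; only hit S3 when the cache is missing or stale.
    try:
        return os.stat(cache_path).st_mtime >= db_mtime
    except FileNotFoundError:
        return False
</code></pre>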
This is upside down.<p>We need a kernel native distributed file system so that we can build distributed storage/databases on top of it.<p>This is like building an operating system on top of a browser.
I'm glad you noticed this, I thought this was a wildly insane thing to do. It's like the satanic inversion of the 9P protocol.
Show me an operating system built on top of a browser that can be used to solve real-world problems like JuiceFS.
My criticism is of the basic architecture, not usability or fitness for a particular purpose.<p>If a distributed file system is useful, then a properly architected one is 100x more useful and more performant.
<a href="https://github.com/tractordev/apptron" rel="nofollow">https://github.com/tractordev/apptron</a> ?
OrbitDB?
See also their User Stories: <a href="https://juicefs.com/en/blog/user-stories" rel="nofollow">https://juicefs.com/en/blog/user-stories</a><p>I'm not an enterprise-storage guy (just sqlite on a local volume for me so far!) so those really helped de-abstractify what JuiceFS is for.
Distributed filesystem and POSIX don't go together well.
I was actually looking at using this to replace our mongo disks so we could easily cold store our data
It is not clear that pjdfstest establishes full POSIX semantic compliance. After a short search of the repo I did not see anything that exercises multiple unrelated processes atomically writing with O_APPEND, for example. And the fact that their graphic shows applications interfacing with JuiceFS over NFS and SMB casts further doubt, since both of those lack many POSIX semantic properties.<p>Over the decades I have written test harnesses for many distributed filesystems and the only one that seemed to actually offer POSIX semantics was LustreFS, which, for related reasons, is also an operability nightmare.
Interesting. Would this be suitable as a replacement for NFS? In my experience literally everyone in the silicon design industry uses NFS on their compute grid and it sucks in numerous ways:<p>* poor locking support (this sounds like it works better)<p>* it's slow<p>* no manual fence support; a bad but common way of distributing workloads is e.g. to compile a test on one machine (on an NFS mount), and then use SLURM or SGE to run the test on other machines. You use NFS to let the other machines access the data... and this works... except that you either have to disable write caches or have horrible hacks to make the output of the first machine visible to the others. What you <i>really</i> want is a manual fence: "make all changes to this directory visible on the server"<p>* The bloody .nfs000000 files. I think this might be fixed by NFSv4 but it seems like nobody actually uses that. (Not helped by the fact that CentOS 7 is considered "modern" to EDA people.)
FUSE is full of gotchas. I wouldn't replace NFS with JuiceFS for arbitrary workloads. Getting the full FUSE set implemented is not easy -- you can't use sqlite on JuiceFS, for example.<p>The meta store is a bottleneck too. For a shared mount, you've got a bunch of clients sharing a metadata store that lives in the cloud somewhere. They do a lot of aggressive metadata caching. It's still surprisingly slow at times.
> poor locking support (this sounds like it works better)<p>File locking on Unix is in general a clusterf*ck. (There was a thread a few days ago at <a href="https://news.ycombinator.com/item?id=46542247">https://news.ycombinator.com/item?id=46542247</a> )<p>> no manual fence support; a bad but common way of distributing workloads is e.g. to compile a test on one machine (on an NFS mount), and then use SLURM or SGE to run the test on other machines. You use NFS to let the other machines access the data... and this works... except that you either have to disable write caches or have horrible hacks to make the output of the first machine visible to the others. What you really want is a manual fence: "make all changes to this directory visible on the server"<p>In general, file systems make for poor IPC implementations. But if you need to do it with NFS, the key is to understand the close-to-open consistency model NFS uses, see section 10.3.1 in <a href="https://www.rfc-editor.org/rfc/rfc7530#section-10.3" rel="nofollow">https://www.rfc-editor.org/rfc/rfc7530#section-10.3</a> . Of course, you'll also want some mechanism for the writer to notify the reader that it's finished, be it with file locks, or some other entirely different protocol to send signals over the network.
> In general, file systems make for poor IPC implementations.<p>I agree but also they do have advantages such as simplicity, not needing to explicitly declare which files are needed, lazy data transfer, etc.<p>> you'll also want some mechanism for the writer to notify the reader that it's finished, be it with file locks, or some other entirely different protocol to send signals over the network.<p>The writer is always finished before the reader starts in these scenarios. The issue is reads on one machine aren't guaranteed to be ordered after writes on a different machine due to write caching.<p>It's exactly the same problem as trying to do multithreaded code. Thread A writes a value, thread B reads it. But even if they happen sequentially in real time thread B can still read an old value unless you have an explicit fence.
> The writer is always finished before the reader starts in these scenarios. The issue is reads on one machine aren't guaranteed to be ordered after writes on a different machine due to write caching.<p>In such a case it should be sufficient to rely on NFS close-to-open consistency as explained in the RFC I linked to in the previous message. Closing a file forces a flush of any dirty data to the server, and opening a file forces a revalidation of any cached content.<p>If that doesn't work, your NFS is broken. ;-)<p>And if you need 'proper' cache coherency, something like Lustre is an option.
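To make that concrete, a minimal sketch of the close-to-open pattern (the path and the out-of-band "start the reader" step are hypothetical; the only requirement is that the writer fully closes before the reader opens):<p><pre><code>import os

SHARED = "/mnt/nfs/build/output.bin"  # hypothetical path on the NFS mount


def writer(data: bytes) -> None:
    # Closing the file forces the NFS client to flush dirty pages to the
    # server (close-to-open consistency); fsync just makes that explicit.
    with open(SHARED, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    # Only now notify the other machine (e.g. submit the SLURM job) --
    # that signal has to travel outside the filesystem.


def reader() -> bytes:
    # A fresh open() forces the NFS client to revalidate cached attributes
    # with the server, so it sees the data the writer closed above.
    with open(SHARED, "rb") as f:
        return f.read()
</code></pre>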
It wasn't my job so I didn't look into this fully, but the main issue we had was clients claiming that files didn't exist when they did. I just reread the NFS man page and I guess this is the issue:<p>> To detect when directory entries have been added or removed on the server, the Linux NFS client watches a directory's mtime. If the client detects a change in a directory's mtime, the client drops all cached LOOKUP results for that directory. Since the directory's mtime is a cached attribute, it may take some time before a client notices it has changed. See the descriptions of the acdirmin, acdirmax, and noac mount options for more information about how long a directory's mtime is cached.<p>> Caching directory entries improves the performance of applications that do not share files with applications on other clients. Using cached information about directories can interfere with applications that run concurrently on multiple clients and need to detect the creation or removal of files quickly, however. The lookupcache mount option allows some tuning of directory entry caching behavior.<p>People did talk about using Lustre or GPFS but apparently they are really complex to set up and maybe need fancier networking than ethernet, I don't remember.
> * The bloody .nfs000000 files. I think this might be fixed by NFSv4 but it seems like nobody actually uses that. (Not helped by the fact that CentOS 7 is considered "modern" to EDA people.)<p>Unfortunately, NFSv4 also has the silly rename semantics...
> NFSv4 but it seems like nobody actually uses that<p>Hurry up and you might be able to adopt it before its 30th birthday!
How about CephFS?
ZeroFS [0] outperforms JuiceFS on common small file workloads [1] while only requiring S3 and no 3rd party database.<p>[0] <a href="https://github.com/Barre/ZeroFS" rel="nofollow">https://github.com/Barre/ZeroFS</a><p>[1] <a href="https://www.zerofs.net/zerofs-vs-juicefs" rel="nofollow">https://www.zerofs.net/zerofs-vs-juicefs</a>
Respect to your work on ZeroFS, but I find it kind of off-putting for you to come in and immediately put down JuiceFS, especially with benchmark results that don't make a ton of sense, and are likely making apples-to-oranges comparisons with how JuiceFS works or mount options.<p>For example, it doesn't really make sense that "92% of data modification operations" would fail on JuiceFS, which makes me question a lot of the methodology in these tests.
I have very limited experiences with object storage, but my humble benchmarks with juicefs + minio/garage [1] showed very bad performance (i.e. total collapse within a few hours) when running lots of small operations (torrents).<p>I wouldn't be surprised if there's a lot of tuning that can be achieved, but after days of reading docs and experimenting with different settings i just assumed JuiceFS was a very bad fit for archives shared through Bittorrent. I hope to be proven wrong, but in the meantime i'm very glad zerofs was mentioned as an alternative for small files/operations. I'll try to find the time to benchmark it too.<p>[1] <a href="https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/1021" rel="nofollow">https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/1021</a>
I'm always curious about the rest of the option space. I appreciate folks talking about the alternatives. What's yours?
Our product is Archil [1], and we are building our service on top of a durable, distributed SSD storage layer. As a result, we have the ability to: (a) store and use data in S3 in its native format [not a block based format like the other solutions in this thread], (b) durably commit writes to our storage layer with lower latency than products which operate as installable OSS libraries and communicate with S3 directly, and (c) handle multiple writers from different instances like NFS.<p>Our team spent years working on NFS+Lustre products at Amazon (EFS and FSx for Lustre), so we understand the performance problems that these storage products have traditionally had.<p>We've built a custom protocol that allows our users to achieve high-performance for small file operations (git -- perfect for coding agents) and highly-parallel HPC workloads (model training, inference).<p>Obviously, there are tons of storage products because everyone makes different tradeoffs around durability, file size optimizations, etc. We're excited to have an approach that we think can flex around these properties dynamically, while providing best-in-class performance when compared to "true" storage systems like VAST, Weka, and Pure.<p>[1] <a href="https://archil.com">https://archil.com</a>
> but I find it kind of off-putting for you to come in and immediately put down JuiceFS, especially with benchmark results that don't make a ton of sense, and are likely making apples-to-oranges comparisons with how JuiceFS works or mount options.<p>The benchmark suite is trivial and opensource [1].<p>Is performing benchmarks “putting down” these days?<p>If you believe that the benchmarks are unfair to juicefs for a reason or for another, please put up a PR with a better methodology or corrected numbers. I’d happily merge it.<p>EDIT: From your profile, it seems like you are running a VC backed competitor, would be fair to mention that…<p>[1] <a href="https://github.com/Barre/ZeroFS/tree/main/bench" rel="nofollow">https://github.com/Barre/ZeroFS/tree/main/bench</a>
> The benchmark suite is trivial and opensource.<p>The actual code being benchmarked is trivial and open-source, but I don't see the actual JuiceFS setup anywhere in the ZeroFS repository. This means the self-published results don't seem to be reproducible by anyone looking to externally validate the stated claims in more detail. Given the very large performance differences, I have a hard time believing it's an actual apples-to-apples production-quality setup. It seems much more likely that some simple tuning is needed to make them more comparable, in which case the takeaway may be that JuiceFS may have more fiddly configuration without well-rounded defaults, not that it's actually hundreds of times slower when properly tuned for the workload.<p>(That said, I'd love to be wrong and confidently discover that ZeroFS is indeed that much faster!)
Yes, I'm working in the space too. I think it's fine to do benchmarks, I don't think it's necessary to immediately post them any time a competitor comes up on HN.<p>I don't want to see the cloud storage sector turn as bitter as the cloud database sector.<p>I've previously looked through the benchmarking code, and I still have some serious concerns about the way that you're presenting things on your page.
Let's remember that JuiceFS can be set up very easily to not have a single point of failure (by replicating the metadata engine), whereas ZeroFS seems to have exactly that.<p>If I were a company I know which one I'd prefer.
Yea, that is a big caveat to ZeroFS: a single point of failure. It is like saying I can write a faster etcd by only having a single node. Sure, that is possible, but the hard part of distributed systems is the coordination, and coordination always makes performance worse.<p>I personally went with Ceph for distributed storage. I have a lot more confidence in Ceph than in JuiceFS or ZeroFS, though building and running a Ceph cluster is more complex; with that complexity you get much cheaper S3, block storage, and CephFS.
The magnitude of performance difference alone immediately makes me skeptical of your benchmarking methodology.
I'm not an expert in any way, but I personally benchmarked [1] JuiceFS performance totally collapsing under very small files/operations (torrenting). It's good to be skeptical, but it might just be that the bar is very low for this specific use case (IIRC JuiceFS was configured and optimized for block sizes of several MBs).<p><a href="https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/1021" rel="nofollow">https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/1021</a>
> ZeroFS supports running multiple instances on the same storage backend: one read-write instance and multiple read-only instances.<p>Well that's a big limiting factor that needs to be at the front in any distributed filesystem comparison.<p>Though I'm confused, the page says things like "ZeroFS makes S3 behave like a regular block device", but in that case how do read-only instances mount it without constantly getting their state corrupted out from under them? Is that implicitly talking about the NBD access, and the other access modes have logic to handle that?<p>Edit: What I want to see is a ZeroFS versus s3backer comparison.<p>Edit 2: changed the question at the end
For a proper comparison, also significant to note that JuiceFS is Apache-2.0 licensed while ZeroFS is dual AGPL-3.0/commercial licensed, significantly limiting the latter's ability to be easily adopted outside of open source projects.
does having to maintain the slatedb as a consistent singleton (even with write fencing) make this as operationally tricky as a third party db?
Looks like the underdog beats it handily and easier deployment to boot. What's the catch?
ZeroFS is a single-writer architecture and therefore has overall bandwidth limited by the box it's running on.<p>JuiceFS scales out horizontally as each individual client writes/reads directly to/from S3, as long as the metadata engine keeps up it has essentially unlimited bandwidth across many compute nodes.<p>But as the benchmark shows, it is fiddly especially for workloads with many small files and is pretty wasteful in terms of S3 operations, which for the largest workloads has meaningful cost.<p>I think both have their place at the moment. But the space of "advanced S3-backed filesystems" is... advancing these days.
Can SQLite run on it?