I suspect that ClickHouse will go down the same path. They already changed their roadmap a bit two years ago[1], and had good reasons: if the open-source version does too well, it will compete with their cloud business.<p>[1] <a href="https://news.ycombinator.com/item?id=37608186">https://news.ycombinator.com/item?id=37608186</a>
There is <a href="https://github.com/seaweedfs/seaweedfs" rel="nofollow">https://github.com/seaweedfs/seaweedfs</a><p>I have not used it, but it will likely be a good MinIO alternative for people who want to run a server and don't use MinIO just as an S3 client.
They have been removing features from the open source version for a while.<p>The closest alternative seems to be RustFS. Has anyone tried it? I was waiting until they support site replication before switching.
Garage is a popular alternative to Minio. <a href="https://garagehq.deuxfleurs.fr" rel="nofollow">https://garagehq.deuxfleurs.fr</a><p>I hadn't heard of RustFS and it looks interesting, although I nearly clicked away based on the sheer volume of marketing wank on their main page. The GitHub repo is here: <a href="https://github.com/rustfs/rustfs" rel="nofollow">https://github.com/rustfs/rustfs</a>
We’ve done some fairly extensive testing internally recently and found that Garage is somewhat easier to deploy, but is not as performant at high speeds. IIRC we could push about 5 gigabits of (not small) GET requests out of it, but something blocked it from reaching the 20-25 gigabits (on a 25g NIC) that MinIO could reach (also 50k STAT requests/s)<p>I don’t begrudge it that. I get the impression that Garage isn’t necessarily focussed on this kind of use case.
Speaking as an open-source enthusiast, I’m actually really digging RustFS. Honestly, anything that can replace or compete with MinIO is a win for the users. Their marketing vibe feels pretty American, actually—they aren't afraid to be loud and proud, haha. You gotta give it to them though, they’ve got guts, and their timing is spot on.
I use garage at home, single node setup. It's very easy and fast, I'm happy with it. You're missing out on a UI for it, but MountainDuck / CyberDuck solves that problem for me.
I’ve been using this <a href="https://github.com/khairul169/garage-webui" rel="nofollow">https://github.com/khairul169/garage-webui</a> as a UI for Garage. It’s been solid.<p>After years of using Garage for S3 for the homelab I’d never pick anything else. Absolutely rock solid, no problem whatsoever. There isn’t ONE other piece of software I can say that about, not ONE.<p>Major kudos to the guys at deuxfleurs. Merci beaucoup!
I saw an article here not long ago where someone explained they were hosting their Kopia or Nextcloud over Garage, but I can't find it anymore.<p>This was going to be my next project, as I am currently storing my Kopia/Ente on MinIO in a non-distributed way. The MinIO project going to shi*s is a good reason to take on this project sooner rather than later.
Yeah, that page is horrendous and looks super sketchy. It looks like a very professional phishing attempt to get unsuspecting developers to download malware.<p>They have a lot of obviously fake quotes from non-existent people, at positions that don't even mention what company it is. The pictures are misgendered and some even contain pictures of kids.<p>Feels like the whole page is AI generated.
They have a CLA that assigns copyright to them: <a href="https://github.com/rustfs/rustfs/blob/5b0a3a07645364d998e3f518f33a128d2e457da6/CLA.md?plain=1#L31-L37" rel="nofollow">https://github.com/rustfs/rustfs/blob/5b0a3a07645364d998e3f5...</a><p>So, arguably worse than MinIO.
The _only_ reason to require a CLA is because you expect to change the license in the future. RustFS has rug-pull written all over it.
How would you run a project like this? People come and go. People do a one-time contribution and then you never hear from them again. People work on a project for years and then just go silent. Honestly, credit where credit is due, but how is a project like this supposed to manage this?
You can have a CLA without assigning copyright to the project.<p>You don't need assignment to the project if you are not planning to change the project's license.<p>You do need assignment to the project if you ever need to rugpull the community and close the code.
You could pick a license and not plan to relicense later. Like Linux.
I maintain an S3 client that has a test matrix for the commonly used S3 implementations. RustFS regularly breaks it. Last time it did I removed it from the matrix because deleteObject suddenly didn't delete the object any more. It is extremely unstable in its current form. The website states that it is not in a production-ready state, which I can confirm.<p>I'd take a look at garage (didn't try seaweed yet).
If it is not an Apache/CNCF/LinuxFoundation project, it can be a rug pull aimed at using open source for getting people in the door only. They were never open for commits, and now they have abandoned open source altogether.
I recently made an open-source alternative to MinIO Server & MinIO UI, also in Rust:<p><a href="https://github.com/vibecoder-host/ironbucket/" rel="nofollow">https://github.com/vibecoder-host/ironbucket/</a><p><a href="https://github.com/vibecoder-host/ironbucket-ui" rel="nofollow">https://github.com/vibecoder-host/ironbucket-ui</a>
Might be coming soon based on this: <a href="https://docs.rustfs.com/features/replication/" rel="nofollow">https://docs.rustfs.com/features/replication/</a>
Although promising, RustFS is a Chinese product. This would be a non-starter for many.
From what I've looked at, it's still a very fresh project -- to the point that running an out-of-date MinIO version will most likely be less problematic than running the latest RustFS.
Sad to see these same people were behind GlusterFS.
I've been working on <a href="https://github.com/uroni/hs5" rel="nofollow">https://github.com/uroni/hs5</a> as a replacement with similar goals to early minio.<p>The core is stable at this point, but the user/policy management and the web interface are still in the works.
Looks like you cleanly point out their violation of the AGPL. I wish I were a lawyer with nothing better to do, I'd definitely be suing the MinIO group, there's no way they can cleanly remove the AGPL code outsiders contributed.
I don't think there would be an issue with removing AGPL contributed code. You can't force someone to distribute something they don't want to. IANAL, but I believe that what (all?) copyright in software is most concerned with is the active distribution of code -- not the removal of code.<p>That said, if there was contributed AGPL code, they couldn't change the license on that part of the code w/o a CLA. AGPL also doesn't necessarily mean you have to make the code publicly available, just available to those that you give the program to (I'm assuming AGPL is like the GPL in this regard).<p>So, what I'd be curious about is -- (1) is there any contributed AGPL code in the current version? (2) what license is granted to customers of the enterprise version?<p>Minio can completely use whatever license they want for their code. But, if there was contributed code w/o a CLA, then I'm not sure how a commercial/enterprise license would play with contributed AGPL code. It would be an interesting question to find out.
> <i>AGPL also doesn't necessarily mean you have to make the code publicly available, just available to those that you give the program to (I'm assuming AGPL is like the GPL in this regard).</i><p>This is the crucial difference between the AGPL and the GPL: the AGPL requires you to make the code available to users for whom you run the code, as well as users you give the program to.
But, for minio, the users aren't the public... the users are their enterprise customers (now). So, to fulfill the AGPL, they'd have to give the code to their users, but that doesn't necessarily mean to the public at large (via GitHub).<p>But, what I don't know is -- is there any other AGPL code that minio doesn't own, but that was otherwise contributed to minio? Because, presumably, they aren't actually giving their customers the minio program with an AGPL license, rather they have whatever their enterprise license agreement is. If this is the case, and there is AGPL code that's not owned by Minio, I can foresee problems in the future.
That's definitely not how it's written or interpreted. Microsoft had to release code because they touched GPL code some years back; I think it was for Hyper-V. We're talking about a company with many lawyers at the ready not being able to skirt the GPL in any way, like undoing the code.<p>Additionally, in order to CHANGE the license, if others contributed code under that license, you would need their permission, on top of the fact that you cannot retroactively revoke the license for previous versions.
What I'm <i>really</i> curious about is if their most recent enterprise versions/code must be released under AGPL. And if so, can they restrict customers from distributing AGPL'd code through an enterprise contract?<p>I can't see how this is a defensible position for Minio, but I'm not sure they really care that much at this point.
I don't see a contributor licensing agreement (CLA), so you may be right.<p>(I personally choose not to contribute to projects with CLAs, I don't want my contributions to become closed-source in the future.)
I'm not a contributor to Minio. This is its own separate thing.<p>I do have a separate AGPL project (see github) where I have nearly all of the copyright and have looked into how one would be able to enforce this in the US at some point and it did look pretty bleak -- it is a civil suit where you have to show damages etc. but IANAL.<p>I did not like the FUD they were spreading about AGPL at the time since it is a good license for end-user applications.
Oh I didn't mean to imply yours was, yours is C++ theirs is Go. The AGPL is fine, not a license for me, but its fine. I'm more of an MIT license kind of guy. If you're going to do the AGPL thing and then try to secure funding, make sure you own the whole thing first.
I wish I knew about this last week. I spent way too long trying out MinIO alternatives before getting SeaweedFS to work, but it is overkill for my purposes.<p>Looks like a great alternative.
Interesting! I like the relative simplicity and durability guarantees. I can see using this for dev and proof of concept. Or in situations where HA/RAID are handled lower in the stack.<p>What is the performance like for reads, writes, and deletes?<p>And just to play devil's advocate: What would you say to someone who argues that you've essentially reimplemented a filesystem?
It uses LMDB, so if the object mapping fits in memory that should be pretty optimal for reading, while using the built-in Linux page cache and not a separate one (important for testing use cases).
For writes/deletes it has a bit of write amplification due to the copy-on-write btree. I've implemented a separate, optional WAL for this and also a mode where writes/deletes can be bundled in a transaction, but in practice I think the performance difference should not matter.<p>W.r.t. filesystem: yes, I'm aware of this. I initially used minio and also implemented the use case directly on XFS, and only had problems with it at larger scales (that still fit on a machine). Ceph went in a similar direction with BlueStore (reimplementing the filesystem, but with RocksDB).
Good time to post a Show HN for your project then
Fork in Linux foundation incoming. Minio will revert in 1-2 years, but too late, community will move on and never return, reputation lost forever
Stallman was right. When will the developer community learn not to contribute to these projects with awful CLAs. The rug has been pulled.
Shocker... they abandoned POSIX compatibility, built a massively over-complicated product, then failed to compete with things like Ceph on the metal side or ubiquitous S3/R2/B2 on the cloud side.
No, they rebranded to AIStor and are now selling to AI companies.<p>Minio is/was a pretty solid product for places where a rack of servers for Ceph wasn't an option (Ceph does have quite a bit higher memory requirements), or where you just need a bit of S3 (like the small local instances we run as a build cache for CI/CD).<p>But that's not where the money is
> they abandoned POSIX compatibility, built a massively over-complicated product<p>This is a wild sentence--how can you criticize them for abandoning POSIX support __and__ building a massively over-complicated product? Making a reliable POSIX system is inherently very complex.
I think the criticism (just interpreting the post, don’t know anything about the technical situation) is that the complication is not necessary/worthwhile.<p>POSIX can be complicated, but it puts you in a nice ecosystem, so for some use-cases complex POSIX support is not <i>over</i> complicated. It is just… appropriately complicated.
Sure, but then you can make that argument about any of the features in Minio, in which case the parent's argument about Minio <i>as a whole</i> being overcomplicated is invalidated. Probably the more sensible way to look at things is "value / complexity" or "bang for buck", but even there I think POSIX loses since it's relatively little value for a relatively large amount of complexity.
Yeah. I don’t actually know if they are right or wrong, it depends on the ecosystem the project wants to hook in to, right? I just want to reduce it from “wild” to “debatable,” haha.
What would go in to POSIX compatibility for a product like this which would make it complicated? Because the kind of stuff that stands out to me is the use of Linux specific syscalls like epoll/io_uring vs trad POSIX stuff like poll. That doesn't seem too complicated?
Minio is more or less feature complete for most use cases. Actually the <i>last</i> big update of minio removed features (the UI). I am using minio for 5 years and haven't messed with it or used any new thingie for the last 5 years (i.e since I installed it); I only update to new versions.<p>So if the minio maintainers (or anybody that forks the project and wants to work it) can fix any security issues that may occur I don't see any problems with using it.
> Actually the last big update of minio removed features (the UI)<p>AFAIK they removed it only to move it to their paid version, didn't they?
Well I didn't mind when they removed it and certainly I didn't consider their paid version which is way too expensive for most use cases.<p>The UI was useful when first configuring the buckets and permissions; if you've got it working (and don't need to change anything) you're good to go. Also, everything can be configured without the UI (not so easily of course).
yes
I used it for my experiments in Docker. I used the UI once or twice; I always connected through Python.
What a story. EOL the open source foundation of your commercial product, to which many people contributed, to turn it into a closed source "A-Ff*ing-I Store" .. seriously what the ...
Didn't contribute to MinIO, but if they accepted external contributions without making them sign a CLA, they cannot change the license without asking every external contributor for consent to the license change.
As it is AGPL, they still have to provide the source code somewhere.<p>IANAL, of course
They required a "Community Contribution License" in each PR description, which licensed each contribution under Apache 2 as an <i>inbound</i> license.<p>Meanwhile, MinIO's own contributions and the distribution itself (outbound license) were AGPL licensed.<p>It's effectively a CLA, just a bit weaker, since they're still bound by the terms of Apache 2 vs. a full license assignment like most CLAs.
People underestimate the amount of fakeness a lot of these "open-core/source" orgs have. I guarantee from day one of starting the MinIO project, they had eyes on future commercialization, and of course made contributors sign away their rights knowing full well they are going to go closed source.
Well, you can not have a product without having "AI" somewhere in the name anymore. It's the law.
What's the problem? Surely people will fork it
I still don't understand what the difference is.<p>What is an AI Stor (e missing on purpose because that is how it is branded: <a href="https://www.min.io/product/aistor" rel="nofollow">https://www.min.io/product/aistor</a>)
Might be because of this other storage product named that <a href="https://github.com/NVIDIA/aistore" rel="nofollow">https://github.com/NVIDIA/aistore</a>
About a billion dollars difference in valuation up until the bubble pops.
Looks like AI slop<p><pre><code> Replication
A trusted identity provider is a
key component to single sign on.
</code></pre>
Uh, what?<p>It’s probably just Minio but it costs more money.
It can store things for AI workloads (and non-AI workloads, but who’s counting…)
This is why I don't bother with AGPL released by a company (use or contribute).<p>Choosing AGPL with contributors giving up rights is a huge red flag for "hey, we are going to rug pull".<p>Just AGPL by companies without even allowing contributor rights is saying, "hey, we are going to attempt to squeeze profit out and don't want competition on our SaaS offering."<p>I wish companies would stop trying to get free code out of the open source community. There have been so many rug pulls it should be expected now.
please copy and paste outrage from previous discussions to not waste more time<p><a href="https://news.ycombinator.com/item?id=45665452">https://news.ycombinator.com/item?id=45665452</a>
Is this not the best thing that could happen? Now that it's in maintenance, it can be forked without any potential license change in the future, or any new features gated behind such a change... This allows anyone to continue working on it, right? Or did I miss something?
> ... it can be forked without any potential license change in the future ...<p>It is useful to remember that one may fork at the commit before a license change.
It is also useful to remember that MinIO has historically held to an absurd interpretation of the AGPL -- that it spreads (again, according to them) to software that communicates with MinIO via the REST API/CLI.<p>I assume forks, and software that uses them will be held to the same requirements.
They're not the only ones to claim that absurdity.<p><a href="https://opensource.google/documentation/reference/using/agpl-policy" rel="nofollow">https://opensource.google/documentation/reference/using/agpl...</a>
As long as I'm not the one who gets sued over this, I think it would be wonderful to have some case law on what constitutes an AGPL derivative work. It could be a great thing for free software, since people seem to be too scared to touch the AGPL at all right now.
Pretty sure you can’t retroactively apply a restrictive license, so that was never a concern.
You can, sort of, sometimes. Copyleft is still based on copyright. So in theory you can do a new license as long as all the copyright holders agree to the change. Take open source/free/copyleft out of it:<p>You create a proprietary piece of software. You license it to Google and negotiate terms. You then negotiate different terms with Microsoft. Nothing so far prevents you from doing this. You can't yank the license from Google unless your contract allows that, but maybe it does. You can in theory then go and release it under a different license to the public. If that license is perpetual and non-revocable then presumably I can use it after you decide to stop offering that license. But if the license is non-transferable then I can't pass on your software to someone else, either by giving them a flash drive with it or by releasing it under a different license.<p>Several open source projects have been re-licensed. The main obstacle is that in a popular open source or copyleft project you have many contributors, each of which holds the copyright to their patches. So now you have a mess of trying to relicense only some parts of your codebase and replace others for the people resisting the change or those you can't reach. It's a messy process. For example, check out how the OpenStreetMap data got relicensed and what that took.
I think you are correct, but you probably misunderstood the parent.<p>My understanding of what they meant by "retroactively apply a restrictive license" is to apply a restrictive license to previous commits that were already distributed using a FOSS license (the FOSS part being implied by the new license being "restrictive" and because these discussions are usually around license changes for previously FOSS projects such as Terraform).<p>As allowing redistribution under at least the same license is usually a requirement for a license to be considered FOSS, you can't really change the license of an existing version as anyone who has acquired the version under the previous license can still redistribute it under the same terms.<p>Edit: s/commit/version/, added "under the same terms" at the end, add that the new license being "restrictive" contributes to the implication that the previous license was FOSS
As a note, Ceph (Rook on Kubernetes), which is distributed block storage, has built-in S3 endpoint support.
I use this image on my VPS, it was the last update before they neutered the community version<p>quay.io/minio/minio:RELEASE.2025-04-22T22-12-26Z
This is a way too old version. You should use a newer one instead by downloading the source and building the binaries yourself.<p>Here's a simple script that does it automagically (you'll need Go installed):<p>> build-minio-ver.sh<p><pre><code> #!/bin/bash
set -e
VERSION=$(git ls-remote --tags https://github.com/minio/minio.git | \
grep -Eo 'RELEASE\.[0-9T-]+Z' | sort | tail -n1)
echo "Building MinIO $VERSION ..."
rm -rf /tmp/minio-build
# clone the release tag directly; a plain depth-1 clone of the default
# branch may not contain the tag we want to check out
git clone --depth 1 --branch "$VERSION" https://github.com/minio/minio.git /tmp/minio-build
cd /tmp/minio-build
echo "Building minio..."
CGO_ENABLED=0 go build -trimpath \
-ldflags "-s -w \
-X github.com/minio/minio/cmd.Version=$VERSION \
-X github.com/minio/minio/cmd.ReleaseTag=$VERSION \
-X github.com/minio/minio/cmd.CommitID=$(git rev-parse HEAD)" \
-o "$OLDPWD/minio"
echo " Binary created at: $(realpath "$OLDPWD/minio")"
"$OLDPWD/minio" --version</code></pre>
Same here, since I'm the only one using my instance. But you should be aware that there is a CVE in that version that allows any account level to escalate its own permissions to admin level, so it's inherently unsafe
I thought they were pivoting towards closing it and trying to monetize it?<p>That got backlash, so now it’s just getting dropped entirely?<p>People get to do whatever they want, but it’s a bit jarring to go from “this is worth something people will pay for” to maintenance mode in quick succession
> I thought they were pivoting towards close it and trying to monetize this?<p>That's literally what the commit shows that they're doing?<p>> *This project is currently under maintenance and is not accepting new changes.*<p>> For enterprise support and actively maintained versions, please see MinIO SloppyAISlop (not actual name)
Their marketing had been shifting toward pushing an AI angle for some time now. For an established project or company, that's usually a sign that things aren't going well.
They cite a proprietary alternative they offer for enterprises. So yes they pivoted to a monetized offering and are just dropping the open source one.
So they’re pulling an OpenAI.<p>Start open source to get free advertising and community programmers, and then dump it all for commercial licensing.<p>I think n8n is next, because they finished the release candidate for version 2.0, but there are no changelogs.
Does anyone have any recommendations for a simple S3-wrapper to a standard dir? I've got a few apps/services that can send data to S3 (or S3 compatible services) that I want to point to a local server I have, but they don't support SFTP or any of the more "primitive" solutions. I did use a python local-s3 thing, but it was... not good.
Versity Gateway looks like a reasonable option here. I haven't personally used it, but I know some folks who say it performs pretty great as a "ZFS-backed S3" alternative.<p><a href="https://github.com/versity/versitygw" rel="nofollow">https://github.com/versity/versitygw</a><p>Unlike other options like Garage or Minio, it doesn't have any clustering, replication, erasure coding, ...<p>Your S3 objects are just files on disk, and Versity exposes it. I gather it exists to provide an S3 interface on top of their other project (ScoutFS), but it seems like it should work on any old filesystem.
Versity is really promising. I got a chance to meet with Ben recently at the Supercomputing conference in St. Louis and he was super chill about stuff. Big shout out to him.<p>He also mentioned that the minio-to-versity migration is a straightforward process. Apparently, you just read the data from minio's shadow filesystem and set it as an extended attribute in your file.
I really like what I've (just now) read about Versity. I like that they are thinking about large scale deployments with tape as the explicit cold-storage option. It really makes sense to me coming from an HPC background.<p>Thanks for posting this, as it's the first I've come across their work.
Garage also decided not to implement erasure coding.
You could perhaps checkout <a href="https://garagehq.deuxfleurs.fr/" rel="nofollow">https://garagehq.deuxfleurs.fr/</a>
Do you want to serve already existing files from a directory or just that the backend is a directory on your server?<p>If the answer is the latter, seaweedfs is an option:<p><a href="https://github.com/seaweedfs/seaweedfs?tab=readme-ov-file#quick-start-with-single-binary" rel="nofollow">https://github.com/seaweedfs/seaweedfs?tab=readme-ov-file#qu...</a>
s3proxy has a filesystem backend [0].<p>Possibly of interest: s3gw[1] is a modified version of ceph's radosgw that allows it to run standalone. It's geared towards kubernetes (notably part of Rancher's storage solution), but should work as a standalone container.<p>[0] <a href="https://github.com/gaul/s3proxy" rel="nofollow">https://github.com/gaul/s3proxy</a>
[1] <a href="https://github.com/s3gw-tech/s3gw" rel="nofollow">https://github.com/s3gw-tech/s3gw</a>
Check out aistore from NVIDIA: <a href="https://github.com/NVIDIA/aistore" rel="nofollow">https://github.com/NVIDIA/aistore</a><p>It's not a fully featured S3-compatible service like MinIO, but we used it to great success as a local on-prem S3 read/write cache with AWS as the backing S3 store. This avoided expensive network egress charges, as we wanted to process data both in AWS and in a non-AWS GPU cluster (i.e. a neocloud)
rclone serve s3 could be an option.
What is the purpose of MinIO, Seaweedfs and similar object storage systems? They lack durability guarantees provided by S3 and GCS. They lack "infinite" storage promise contrary to S3 and GCS. They lack "infinite" bandwidth unlike S3 and GCS. They are more expensive than other storage options, unlike S3 and GCS.
We use it because we are already running our own k8s clusters in our datacenters, and we have large storage requirements for tools that have native S3 integration, and running our own minio clusters in the same datacenter as the tools that generate and consume that data is a lot faster and cheaper than using S3.<p>For example, we were running a 20 node k8s cluster for our Cortex (distributed Prometheus) install, monitoring about 30k servers around the world, and it was generating a bit over a TB of data a day. It was a lot more cost effective and performant to create a minio cluster for that data than to use S3.<p>Also, you can get durability with minio with multi cluster replication.
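On the replication point, for anyone curious: with recent mc releases, setting up bucket replication between two clusters is roughly a one-liner, something like mc replicate add myminio/mybucket --remote-bucket "https://ACCESS:SECRET@replica.example.net/mybucket" -- the alias, credentials and endpoint there are placeholders, so double-check the exact syntax against the docs for your mc version.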
I haven't used it in a while, but it used to be great as a test double for s3
Minio allows you to have an s3 like interface when you have your own servers and storage.
S3 is a widely supported API schema, so if you need something on-prem, you use these.
It's great for a prototype which doesn't need to store a huge amount of data, you can run it on the same VM as a node server behind Cloudflare and get a fairly reliable setup going
It sucks that S3 somehow became the defacto object storage interface, the API is terrible IMO. Too many headers, too many unknowns with support. WebDAV isn't any better, but I feel like we missed an opportunity here for a standardized interface.
?<p>It's like GET <namespace>/object, PUT <namespace>/object. To me it's the most obvious mapping of HTTP to immutable object key-value storage you could imagine.<p>It is bad that the control plane responses can be malformed XML (e.g. keys are not escaped right if you put XML control characters in object paths), but that can be forgiven as an oversight.<p>It's not perfect, but I don't think it's a strange API at all.
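To make it concrete: the everyday core of the API really is tiny. A minimal sketch with aws-sdk-go-v2 (bucket and key names are placeholders, and this deliberately ignores the whole control plane):<p><pre><code>package main

import (
    "context"
    "log"
    "strings"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    ctx := context.Background()
    cfg, err := config.LoadDefaultConfig(ctx) // creds/region from env
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg)
    // PUT <namespace>/object
    if _, err := client.PutObject(ctx, &s3.PutObjectInput{
        Bucket: aws.String("my-bucket"),
        Key:    aws.String("hello.txt"),
        Body:   strings.NewReader("hello"),
    }); err != nil {
        log.Fatal(err)
    }
    // GET <namespace>/object
    out, err := client.GetObject(ctx, &s3.GetObjectInput{
        Bucket: aws.String("my-bucket"),
        Key:    aws.String("hello.txt"),
    })
    if err != nil {
        log.Fatal(err)
    }
    defer out.Body.Close()
}</code></pre>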
That may be what S3 is <i>like</i>, but what the S3 API <i>is</i> is this: <a href="https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3" rel="nofollow">https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3</a><p>My browser prints that out to 413 pages with a naive print preview. You can squeeze it to 350 pretty reasonably with a bit of scaling before it starts getting to awfully small type on the page.<p>Yes, there's a simple API with simple capabilities struggling to get out there, but pointing that out is merely the first step on the thousand-mile journey of determining what, <i>exactly</i>, that is. "Everybody uses 10% of Microsoft Word, the problem is, they all use a different 10%", basically. If you sat down with even 5 relevant stakeholders and tried to define that "simple API" you'd be shocked what you discover and how badly Hyrum's Law will bite you even at that scale.
> That may be what S3 is like, but what the S3 API is is this: <a href="https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3" rel="nofollow">https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3</a><p>> My browser prints that out to 413 pages with a naive print preview. You can squeeze it to 350 pretty reasonably with a bit of scaling before it starts getting to awfully small type on the page.<p>idk why you link to Go SDK docs when you can link to the actual API reference documentation: <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations_Amazon_Simple_Storage_Service.html" rel="nofollow">https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operatio...</a> and its PDF version: <a href="https://docs.aws.amazon.com/pdfs/AmazonS3/latest/API/s3-api.pdf" rel="nofollow">https://docs.aws.amazon.com/pdfs/AmazonS3/latest/API/s3-api....</a> (just 3874 pages)
That page crashes Safari for me on iOS.
It gets complex with ACLs for permissions, lifecycle controls, header controls and a bunch of other features that are needed at S3 scale but not at smaller-provider scale.<p>And many S3-compatible alternatives (probably most, except the big ones like Ceph) don't implement all of the features.<p>For example, for lifecycles Backblaze has a completely different JSON syntax
Last I checked the user guide to the API was 3500 pages.<p>3500 pages to describe upload and download, basically. That is pretty strange in my book.
Even download and upload get tricky if you consider stuff like serving buckets as static sites, or signed upload URLs.<p>Now with the trivial part off the table, let's consider storage classes, security and ACLs, lifecycle management, events, etc.
Everything uses poorly documented, sometimes inconsistent HTTP headers that read like afterthoughts/tech debt. An S3 standard implementation has to have amazon branding all over it (x-amz) which is gross.
I suspect they learned a lot over the years and the API shows the scars. In their defense, they did go first.
I mean… it’s straight up an Amazon product, not like it’s an IETF standard or something.
HTTP isn't really a great backplane for object storage.
!!!<p>I’ve seen a lot of bad takes and this is one of them.<p>Listing keys is weird (is it V1 or V2)?<p>The authentication relies on an obtuse and idiosyncratic signature algorithm.<p>And S3 in practice responds with malformed XML, as you point out.<p>Protocol-wise, I have trouble liking it over WebDAV. And that's depressing.
I thought the OpenStack Swift API was pretty clean, but I'm biased.
To be fair. We still have an opportunity to create a standardized interface for object storage. Funnily enough when Microsoft made their own they did not go for S3 compatible APIs, but Microsoft usually builds APIs their customers can use.
It was better. When it first came out, it was a pretty simple API, at least simpler than alternatives (IIRC, I could just be thinking with nostalgia).<p>I think it's only gotten as complicated as it has as new features have been organically added. I'm sure there are good use cases for everything, but it does beg the question -- is a better API possible for object storage? What's the minimal API required? GET/POST/DELETE?
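Just to make my own question concrete, the "minimal" surface might look like this (a hypothetical Go interface, not any existing library):<p><pre><code>package store

import (
    "context"
    "io"
)

// ObjectStore is a hypothetical minimal object store API. Everything
// real stores bolt on later -- pagination, conditional requests,
// multipart uploads, ACLs, lifecycle rules -- is where the page count
// of the real S3 reference comes from.
type ObjectStore interface {
    Put(ctx context.Context, key string, body io.Reader) error
    Get(ctx context.Context, key string) (io.ReadCloser, error)
    Delete(ctx context.Context, key string) error
    List(ctx context.Context, prefix string) ([]string, error)
}</code></pre>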
I suspect there is no decent "minimal" API. Once you get to tens of millions of objects <i>in a given prefix</i>, you need server side filtering logic. And to make it worse, you need multiple ways to do that.<p>For example, did you know that date filtering in S3 is based on string prefix matching against an ISO8601/RFC3339 style string representation? Want all objects created between 2024-01-01 and 2024-06-30? You'll need to construct six YYYY-MM prefixes (one per month) for datetime and add them as filter array elements.<p>As a result the service abbreviation is also incorrect these days. Originally the first S stood for "Simple". With all the additions they've had to bolt on, S2 would be far more appropriate a name.
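In code, that workaround looks roughly like this Go sketch (aws-sdk-go-v2, same imports as the put/get sketch upthread plus fmt and time; it assumes a hypothetical key layout with ISO8601 date prefixes like "logs/2024-03-17T...", and pagination is elided):<p><pre><code>// List everything from 2024-01 through 2024-06: one prefix per month.
func listH1of2024(ctx context.Context, client *s3.Client, bucket string) error {
    start := time.Date(2024, time.January, 1, 0, 0, 0, 0, time.UTC)
    for m := 0; m < 6; m++ {
        prefix := "logs/" + start.AddDate(0, m, 0).Format("2006-01")
        out, err := client.ListObjectsV2(ctx, &s3.ListObjectsV2Input{
            Bucket: aws.String(bucket),
            Prefix: aws.String(prefix),
        })
        if err != nil {
            return err
        }
        for _, obj := range out.Contents {
            fmt.Println(*obj.Key)
        }
    }
    return nil
}</code></pre>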
Like everything, it started off simple, but with every feature added over 19 years, Simple Storage it is not.<p>S3 has 3 independent permissions mechanisms.
S3 isn't JSON<p>it's storing a [utf8-string => bytes] mapping with some very minimal metadata. The bytes can be whatever you want: JSON, CBOR, XML, actual document formats, etc.<p>And its default encoding for listing, management operations and similar is XML...<p>> but I feel like we missed an opportunity here for a standardized interface.<p>except S3 _is_ the de-facto standard interface which most object storage systems speak<p>but I agree it's kind of a pain<p>and it's commonly implemented only partially (both feature-wise and partially wrong). E.g. S3 stores utf8 strings, not utf8 file paths (like e.g. minio does); getting that wrong seems fine but can lead to a lot of problems (not just being incompatible with some applications but also having unexpected perf. characteristics for others), making an implementation only partially S3 compatible. Similarly, implementations missing features like bulk delete or support for `If-Match`/`If-None-Match` headers can also be S3-incompatible for some use cases.<p>So yeah, a new external standard which makes it clear what you should expect to be supported would be nice.
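Those conditional headers matter more than they sound, e.g. for cheap poll-for-changes loops. A fragment of what that looks like with aws-sdk-go-v2 (client setup as in the sketch upthread; bucket/key are placeholders):<p><pre><code>// Re-fetch only if the object changed since we last saw this ETag;
// if unchanged, S3 answers with a 304-style error instead of the body.
func fetchIfChanged(ctx context.Context, client *s3.Client, etag string) (*s3.GetObjectOutput, error) {
    return client.GetObject(ctx, &s3.GetObjectInput{
        Bucket:      aws.String("my-bucket"),
        Key:         aws.String("config.json"),
        IfNoneMatch: aws.String(etag),
    })
}</code></pre>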
So, when will anyone fork it? Call it MaxIO or whatever. I might even submit a couple of small patches.<p>My only blocker: a fork needs to maintain compatibility and a path to upgrade from earlier versions.
<a href="https://aistore.nvidia.com" rel="nofollow">https://aistore.nvidia.com</a>
github.com/NVIDIA/aistore<p>At the 1 billion valuation from the previous round, achieving a successful exit requires a company with deep pockets. Right now, Nvidia is probably a suitable buyer for MinIO, which might explain all the recent movements from them. Dell, Broadcom, NetApp, etc, are not going to buy them.
I can't believe they made this decision. It's detrimental to the open-source ecosystem and MinIO users, and it's not good for them either, just look at the Elasticsearch case.
Is this just the open source portion? Minio is now a fully paid product then?
"For enterprise support and actively maintained versions, please see MinIO AIStor."<p>Probably yes.
Basically officially killing off the open source version.
I'm quite interested in a k8s-native filesystem that makes use of local persistent volumes. I'm running CockroachDB in my cluster (not yet with local persistent volumes... but getting closer).<p>Anyone have any suggestions?
What's the simplest replacement for mocking S3 in CI? We don't care about performance or reliability... it's just gotta act like S3.
I've been using the minio-go client for S3-compatible storage abstraction in a project I'm working on. This change, putting the minio project into maintenance mode, means no new features or bug fixes, which is concerning for something meant to be a stable abstraction layer.<p>Need to start reconsidering the approach now and looking for alternatives
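The silver lining is that minio-go speaks plain S3, so an existing integration should keep working against other backends (Garage, SeaweedFS, AWS itself) even if the library stops evolving; the real risk is unfixed bugs, not lost compatibility. A sketch -- endpoint and credentials are placeholders:<p><pre><code>package main

import (
    "log"

    "github.com/minio/minio-go/v7"
    "github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
    // Point the same client at any S3-compatible endpoint.
    client, err := minio.New("s3.example.net:9000", &minio.Options{
        Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
        Secure: true,
    })
    if err != nil {
        log.Fatal(err)
    }
    _ = client // then use as before: client.PutObject, client.GetObject, ...
}</code></pre>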
Any good alternatives?
I saw this referenced a few days ago. Haven't investigated it at all.<p><a href="https://garagehq.deuxfleurs.fr/" rel="nofollow">https://garagehq.deuxfleurs.fr/</a><p>Edit: jeez, three of us all at once...
If you just need a simple local s3 server (e.g. for developing and testing), I recommend rclone.<p>rclone serve s3 path/to/buckets --addr :9000 --auth-key <key-id>,<secret>
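Anything that accepts a custom endpoint can then talk to it, e.g. with the AWS CLI (matching the defaults above): AWS_ACCESS_KEY_ID=<key-id> AWS_SECRET_ACCESS_KEY=<secret> aws --endpoint-url http://localhost:9000 s3 ls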
<a href="https://www.versity.com/products/versitygw/" rel="nofollow">https://www.versity.com/products/versitygw/</a><p>I haven't tried it though. Seems simple enough to run.
Seaweed and garage (tried both, still using seaweed)
A lot of them actually. Ceph personally I've used. But there's a ton, some open source, some paid. Backblaze has a product Buckets or something. Dell powerscale. Cloudian has one. Nutanix has one.
Ceph is awesome for software defined storage where you have multiple storage nodes and multiple storage devices on each. It's way too heavy and resource intensive for a single machine with loopback devices.
I've been looking at microceph, but the requirement to run 3 OSDs on loopback files plus this comment from the docs gives me pause:<p>`Be wary that an OSD, whether based on a physical device or a file, is resource intensive.`<p>Can anyone quantify "resource intensive" here? Is it "takes an entire Raspberry Pi to run the minimum set" or is it "takes 4 cores per OSD"?<p>Edit: This is the specific doc page <a href="https://canonical-microceph.readthedocs-hosted.com/stable/how-to/single-node/" rel="nofollow">https://canonical-microceph.readthedocs-hosted.com/stable/ho...</a>
Ceph has multiple daemons that would need to be running: monitor, manager, OSD (1 per storage device), and RADOS Gateway (RGW). If you only had a single storage device it would still be 4 daemons.
ceph depends a lot on your use case<p>minio was also suited for some smaller use cases (e.g. running a partial S3 compatible storage for integration tests). Ceph isn't really good for it.<p>But if you ran large minio clusters in production ceph might be a very good alternative.
This one is usually the most recommended: <a href="https://garagehq.deuxfleurs.fr/" rel="nofollow">https://garagehq.deuxfleurs.fr/</a>
RustFS is good, but still pretty immature IMO
seaweedfs
wasn't there a fork with the UI?
Have heard good things about Garage (<a href="https://garagehq.deuxfleurs.fr/" rel="nofollow">https://garagehq.deuxfleurs.fr/</a>).<p>Am forced to use MinIO for certain products now but will eventually move to something better. Garage is high on my list of alternatives.
So how are HN reviews of GarageHQ? Or any others?
Garage works well for its limited feature set, but it doesn't have very active development. Apparently they're working on a management UI.<p>Seaweedfs is more mature and has many interfaces (S3, webdav, SFTP, REST, fuse mount). It's most appropriate for storing lots of small files.<p>I prefer the command line interface and data/synchronization model of Garage, though. It's easier to manage, probably because the developers aren't biting off more than they can chew.
I haven't tested it in a while, but it was pretty good and a lot simpler than MinIO.<p>Like in the old MinIO days, an S3 object is a file on the filesystem, not some replicated blocks. You could always rebuild the full object store content with a few rsyncs. I appreciate the simplicity.<p>My main concern was that you couldn't configure it easily through files; you had to use the CLI, which wasn't very convenient. I hope this has changed.
Objects in Garage are broken up into 1MB (default) blocks, and compressed with zstandard. So, it would be difficult to reconstruct the files. I don't know if that was a recent change since you looked at it.<p>Configuration is still through the CLI, though it's fairly simple. If your usecase is similar to the way that the Deuxfleurs organization uses it -- several heterogeneous, geographically distributed nodes that are more or less set-it-and-forget-it -- then it's probably a good fit.
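(For anyone evaluating it: the day-one CLI setup is only a handful of commands, along the lines of garage bucket create my-bucket, garage key create my-key, garage bucket allow --read --write my-bucket --key my-key. The exact syntax has shifted a bit between versions, so treat that as a sketch and check the quick-start docs.)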
I guess this change was inevitable. But I like the possibility of reconstructing a broken distributed file storage system. GlusterFS also allowed this.<p>My use case is relatively common: I want small S3-compatible object stores that can be deployed in Kubernetes without manual intervention. The CLI part was a bit in the way last time; this could have been automated, but it wasn't straightforward.
Any efforts to consolidate around a community fork yet?
Time to fork and bring back removed features. :). An advantage of it being AGPL licensed.
> Kill open source features.<p>> Gaslight community when rightfully annoyed<p>> Kill off primary product<p>> Offer same product with AI slapped on the name to enterprise customers.<p>Good riddance Minio, and goodbye!
Raising 100 mil at 1 B valuation and then trying for an exit is a bitch!
Open source is not a sustainable business model.<p>There are two ways open source projects continue.<p>1. The creator has a real, solid way to make money (React by Facebook, Go by Google).<p>2. The project is <i>extremely</i> popular (Linux, PostgreSQL).<p>Is it possible for people to reliably keep working for ~free? Yes, but if you expect that, you have a very bad understanding of 98% of human behavior.
The best software is the one that doesn't change.
big L for all the cloud providers that made the mistake of using it instead of forging their own path, they're kind of screwed now
for those looking for a simple and reliable self hosted S3 thing, check out Garage[0]. it's much simpler - no web ui, no fancy Reed-Solomon coding, no VC-backed AI company, just some french nerds making a very solid tool.<p>fwiw while they do produce Docker containers for it, it's also extremely simple to run without that - it's a single binary and running it with systemd is unsurprisingly simple[1].<p>0: <a href="https://garagehq.deuxfleurs.fr/" rel="nofollow">https://garagehq.deuxfleurs.fr/</a><p>1: <a href="https://garagehq.deuxfleurs.fr/documentation/cookbook/systemd/" rel="nofollow">https://garagehq.deuxfleurs.fr/documentation/cookbook/system...</a>
I had a minio server in my homelab and I have to replace it after the 15v because they capped almost all settings. So sad...
Hopefully no one is shocked or surprised.
I'm both shocked and not surprised. Lots of questions: Are they doing that bad from the outcry? Or are they just keeping a private version and going completely commercial only? If so, how do they bypass the AGPL in doing so, I assume they had contributions under the AGPL.
"For enterprise support and actively maintained versions, please see MinIO AIStor."<p>Commercial only, they will replace the agpl contributions from external people. (Or at least they will say that)
Is there a good overview of recent Open Source Rugpulls in the vein of killedbygoogle.com somewhere?
Disgusting.
Build a product, make it open-source to gain traction, and when you are done completely abandon it.
Shame on me that I have put this ^%^$hit on a project and advocated it.
“The real hell of life is everyone has his reasons.”
― Jean Renoir
I've been using Minio in ZeroFS' [0] CI (a POSIX compliant filesystem that works on top of s3). I guess I'll switch to MicroCeph [1].<p>[0] <a href="https://github.com/Barre/ZeroFS" rel="nofollow">https://github.com/Barre/ZeroFS</a><p>[1] <a href="https://canonical-microceph.readthedocs-hosted.com/stable/" rel="nofollow">https://canonical-microceph.readthedocs-hosted.com/stable/</a>
What is the use case for implementing a POSIX filesystem on top of an object store? I remember reading this article a few years ago, which happens to be by the minio folks: <a href="https://blog.min.io/filesystem-on-object-store-is-a-bad-idea/" rel="nofollow">https://blog.min.io/filesystem-on-object-store-is-a-bad-idea...</a>
> What is the use case for implementing a POSIX filesystem on top of an object store?<p>The use case is fully stateless infrastructure: your file/database servers become disposable and interchangeable (no "pets"), because all state lives in S3. This dramatically simplifies operations, scaling, and disaster recovery, and it's cheap since S3 (or at least, S3 compatible services) storage costs are very low.<p>The MinIO article's criticisms don't really apply here because ZeroFS doesn't store files 1:1 to S3. It uses an LSM-tree database backed by S3, which allows it to implement proper POSIX semantics with actual performance.
It makes sense that some of the criticisms wouldn't apply if you're not storing the files 1:1.<p>What about NFS or traditional filesystems on iSCSI block devices? I assume you're not using those because managing/scaling/HA for them is too painful? What about the openstack equivalents of EFS/EBS? Or Ceph's fs/blockdev solutions (although looking into it a bit, it seems like those are based on its object store)?
I use Supabase Storage. It does S3-style signed download links (so I can switch to any S3 service if I like later).
Like many smart people they focused on telling people the "how", and assume visitors to their wall of "AI"/hype text already understand the use-case "why".<p>1. I like that it is written in Go<p>2. I saw nothing above what Apache Spark+Hadoop with _consistent_ object stores already offers on Amazon (S3), Google Cloud (GCS), and or Microsoft (Azure Storage, ADLS Gen2)<p>Best of luck, maybe folks should look around for that <a href="https://donate.apache.org/" rel="nofollow">https://donate.apache.org/</a> button before the tax year concludes =3
> I saw nothing above what Apache Spark+Hadoop with _consistent_ object stores already offers on Amazon (S3), Google Cloud (GCS), and or Microsoft (Azure Storage, ADLS Gen2)<p>it was very simple to set up, and even if you just leased a bunch of servers off, say, OVH, it was far FAR cheaper to run your own than to pay any of the big cloud providers.<p>It also had pretty low requirements; Ceph can do all that, but setup is more complex and RAM requirements far, far higher
MinIO still makes no sense, as Ceph is fundamentally already RADOS at its core (fully compatible with S3 API.)<p>For a proper Ceph setup, even the 45drives budget configuration is still not "hobby" grade.<p>I will have to dive into the MinIO manual at some point, as the value proposition still seems like a mystery. Cheers =3
MinIO is far less complex than getting the same functionality on the Ceph stack.<p>But that's kind of an advantage only in the small-company and hobbyist market; big companies either have enough needs to run a big Ceph cluster, or they buy it as a service.<p>Minio is literally "point it at storage(s), done". And at far smaller RAM usage.<p>Ceph is mon servers, OSD servers, then a RADOS gateway server on top of that.
"Ceph is fundamentally already RADOS at its core (fully compatible with S3 API.)"<p>Yes, Ceph is RADOS at its core. However, RADOS != S3. Ceph provides an S3 compatible backend with the RADOS Gateway (RGW).
My point was that even 45drives' virtualization of Ceph host roles to squeeze the entire setup into a single box was not a "hobby" grade project.<p>I don't understand yet exactly what MinIO would add on top of that to make it relevant at any scale. I'll peruse the manual on the weekend, because their main site was not helpful. Thanks for trying though ¯\_(ツ)_/¯
> For enterprise support and actively maintained versions, please see [MinIO AIStor]<p>Naming the product “AIStor” is one of the most blatant forced AI branding pivots I’ve seen.
for maximum performance with MinIO AIStor, make sure to use one of Seagate's "AI hard drives":<p><a href="https://www.seagate.com/products/video-analytics/skyhawk-ai-hard-drive/" rel="nofollow">https://www.seagate.com/products/video-analytics/skyhawk-ai-...</a>
And the naming conflicts with NVidia's AIStore (<a href="https://github.com/NVIDIA/aistore" rel="nofollow">https://github.com/NVIDIA/aistore</a>). The two products are extremely similar. I don't know which came first, but Minio is going to want to do another pivot very soon if they want to survive. I doubt they have the resources to stand up to NVidia's army of extremely well-paid IP lawyers.
How does that make sense? If they're no longer an open-source S3 and it's cloud only, I'll just use S3.
Oh, no! Anyway... Maybe it's for the best seeing as it's AGPL. I won't go within 39.5 feet of infected software like that, so no loss for me.