This OpenPGP and GnuPG criticism is brought up regularly here, but the proposed alternatives come with their own downsides: some are proprietary, some are centralized systems or depend on them. On top of the inconvenience, when such centralized systems are blocked, casual users switch to explicitly backdoored options. The advertised IMs are tied to phone numbers, introducing both privacy and availability issues. Almost none of them are available from Linux distributions' system repositories. Integration with other software and infrastructures is lacking. Dealing with multiple specialized tools is more of a headache even for expert users, especially when their added benefits do not make much sense given one's threat model. OpenPGP/GnuPG is more resilient and versatile than those, and remains usable where they are not.<p>I think such an article would seem more convincing, at least to me, if more sensible alternatives were proposed. Ideally without the advice to not encrypt email, without assuming continued availability of all the online services or trust in certain third parties, and so on. Or it could be plain criticism without suggestions, which would still be somewhat informative.<p>Edit: there is another list of alternatives in a sibling comment, advising against (and, frankly, being quite hostile and impolite about) what I had in mind as one of the possible more sensible alternatives: XMPP with OMEMO. Having skimmed the criticism of that, I have not found it particularly convincing either; it reads as if some authors are trying to be provocative/edgy.
What is your issue with Sequoia PGP? It is not proprietary, it is not centralized, and it is much better than GnuPG from what I can tell.
I have no issues with it, and am actually happy to see alternative implementations. Possibly that is because I have not used it much, but it does look fine to me. Not a complete GPG replacement yet, since some software still depends on GPG, but a viable one, and a suitable one for most manual CLI usage (ignoring that its version on slightly older systems has a different interface, adding a bit of confusion; hopefully it is stable now). It was not listed among the suggested alternatives in the linked article though, and from what I gather, the author would not be happy with it, either.
Since you mentioned me: what's the point? It would be one thing if you could (1) use Sequoia, (2) be assured of modern cryptography, and (3) maintain compatibility with the majority of the installed base of PGP users. But you can't. That being the case, why put up with all the PGP problems that Sequoia can't address? You're losing compatibility either way, so use an actually-good cryptosystem.
One of the premises of modern cryptographic engineering is security under a hostile setting: it <i>shouldn’t matter</i> to a chat protocol that a server is proprietary or a network is centralized if the design itself is provably end-to-end encrypted. The server could be run by Satan and it wouldn’t matter.<p>(Centralization itself is a red herring. One may as well claim that PGP is centralized, given that there’s only one prominent keyserver still limping around the Internet.)<p>But even this jumps ahead, given that the alternatives are not in fact proprietary. The list of open source tool alternatives has been the same for close to a decade now:<p>* For messaging/secure communication, use Signal. It’s open source.<p>* For file encryption, use age. It’s open source and has multiple mature implementations by well-regarded cryptographic engineers.<p>* For signing, use minisign, or Sigstore, or even ssh signing. All are open source.
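For the file-encryption item, the day-to-day surface of age is small; here is a rough sketch (the recipient string and file names are placeholders, and the flags should be checked against the age docs for your version):<p><pre><code># generate an identity; the matching "age1..." recipient is printed at generation time
age-keygen -o key.txt

# encrypt to a recipient's public key (or use -p for a passphrase instead)
age -r age1examplerecipient... -o report.pdf.age report.pdf

# decrypt with the identity file
age -d -i key.txt -o report.pdf report.pdf.age
</code></pre>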
> security under a hostile setting<p>Yes, but security usually includes availability, and I mentioned a setting with service blocking above, such as blocking by a government.<p>> (Centralization itself is a red herring. One may as well claim that PGP is centralized, given that there’s only one prominent keyserver still limping around the Internet.)<p>How is it a red herring?<p>> For messaging/secure communication, use Signal. It’s open source.<p>From my point of view, it is complicated by Signal being blocked here (its centralization helped to establish such blocking easily), phone number verification likely won't work here, it is not available without a phone, and it is not available from F-Droid repositories on top of that. Currently money transfers are also complicated, so finding some foreign service to circumvent phone number verification is complicated too, and not something I would normally do even without that. All this Internet blocking is a new development here, but such availability issues due to centralization were anticipated for a long time, and are a major motivation behind federated or distributed systems. Some mail servers are also being blocked, but mail generally still works, and is less of a pain to use.<p>> For file encryption, use age. It’s open source and has multiple mature implementations by well-regarded cryptographic engineers.<p>> For signing, use minisign, or Sigstore, or even ssh signing. All are open source.<p>These I find to be okay. They have to be installed in addition to GnuPG, which is usually already available, but that is to be expected; they are available at least from Debian repositories, so not something to complain about when considering alternatives. Likewise with key sharing: one does not get to reuse OpenPGP's PKI and will have to replace it somehow, but it is not as if it is used widely and consistently anyway, so perhaps not much of a loss in practice. Likewise with user familiarity: I would expect a little more friction with such tools compared to GnuPG, but not much more. And I don't see actual usage downsides apart from those. The benefits also seem a bit uncertain, but generally it sounds like a switch that makes sense to consider.
I have no clue how you reached the conclusion that it is a red herring.<p>It matters - because Satan can disconnect the centralized nodes.
It’s a red herring because systems that achieve end-to-end security do so regardless of whether the underlying hosts are centralized or not. A typical network adversary <i>wants</i> you to downgrade the security properties of your protocol in the presence of an unreliable network, so they can pull more metadata out of you.
> given that there’s only one prominent keyserver still limping around the Internet<p>Hey, I take issue with that. keys.openpgp.org is just about the only thing running smoothly in the openpgp ecosystem :P
Sorry, that was an unduly inflammatory framing from me. You’re right that keys.openpgp.org runs smoothly, particularly in contrast to the previous generation of SKS hosts. I don’t think it comes close to meeting the definition of a decentralized identity distribution system, however.
I tried to find something in the article that bothered me, but I don’t find it very convincing. Points like "someone can forward your email unencrypted after they decrypt it" are just... well, yeah - that can happen no matter what method you choose. It feels like GPG gets hate for reasons other than what’s actually mentioned, and I'm completely oblivious to what those reasons might be.
It's not that someone can <i>forward your mail</i> unencrypted. It's that in the normal operation of the system, someone taking the natural next step in a conversation (replying) can --- and, in the experience of everyone I've talked to who has used PGP in anger for any extended period of time, inevitably does --- destroy the security of the entire conversation by accidentally replying in plaintext.<p>That <i>can't</i> happen in any modern encrypted messenger. It <i>does</i> happen routinely with encrypted email.
Yes, it's a problem with _email_.<p>pgp as a tool could integrate with that, but in practice fails for... many reasons, the above included. All the other key exchange / etc issues as well.
well that's fair, but that sounds more like an email client issue than an actual issue with gpg/pgp. My client shows pretty clearly when it gets encrypted. But maybe I am oblivious.
I agree that it's an email problem, which is why I wrote a whole article about why email can't be made secure with any reasonable client. But email is overwhelmingly the messaging channel PGP users use; in fact, it's a commonly-cited reason why people continue to use PGP (because it allows them to encrypt email).
Yes, it is odd that this criticism is only allowed for gpg while worse Signal issues are not publicized here:<p><a href="https://cloud.google.com/blog/topics/threat-intelligence/russia-targeting-signal-messenger" rel="nofollow">https://cloud.google.com/blog/topics/threat-intelligence/rus...</a><p>Some Ukrainians may regret that they followed the Signal marketing. I have never heard of a <i>real world</i> exploit that <i>has actually been used</i> like that against gpg.
As mentioned a few days ago, this post mainly covers a gpg problem, not a PGP problem.<p>I recommend people spend some time and try out sequoia (sq) [0][1], which is a sane, clean-room re-implementation of OpenPGP in Rust. For crypto, it uses the backend you prefer (including openssl, no more libgcrypt!), and it isn't just a CLI application but also a library you can invoke from many other languages.<p>It does signing and/or encryption, with modern crypto including AEAD, Argon2, and PQC.<p>Sure, it still implements OpenPGP/RFC 9580 (which is not the ideal format most people would define from scratch today), but it throws away the dirty water (SHA1, old cruft) while keeping the baby (interoperability, the fine bits).<p>[0] <a href="https://sequoia-pgp.org/" rel="nofollow">https://sequoia-pgp.org/</a><p>[1] <a href="https://archive.fosdem.org/2025/events/attachments/fosdem-2025-6297-a-practical-introduction-to-using-sq-sequoia-pgp-s-cli/slides/237970/presentat_pvYEqR5.pdf" rel="nofollow">https://archive.fosdem.org/2025/events/attachments/fosdem-20...</a>
But if you use the modern crypto stuff you lose interoperability, right? What is the point of keeping the cruft of the format if you still won't have compatibility when you use the modern crypto? The article mentions this:<p>> Take AEAD ciphers: the Rust-language Sequoia PGP defaulted to the AES-EAX AEAD mode, which is great, and nobody can read those messages because most PGP installs don’t know what EAX mode is, which is not great.<p>Other implementations also don't support stuff like Argon2.<p>So it feels like the article is on point when it says<p>> You can have backwards compatibility with the 1990s or you can have sound cryptography; you can’t have both.
When you encrypt something, you are the one deciding which level of interoperability you want, and you can select the crypto primitives matching the capabilities you know your recipients reasonably have. I don't see anything special about this: when you run a web service, you also decide whether you want to talk to TLS 1.0 clients (hopefully not).<p>sequoia's defaults are reasonable as far as I remember. It's also a bit strange that the post found it defaulting to AEAD in 2019 when AEAD was standardized only in 2024 with RFC 9580.<p>But the elephant in the room is that gpg famously decided NOT to adopt RFC 9580 (which Sequoia and Proton do support) and to stick to a variant of the older RFC (LibrePGP), officially because the changes to the crypto were seen as too "ground-breaking".
I think GP’s point isn’t that you don’t have the freedom to decide your own interoperability (you clearly do), but that the primary remaining benefit of PGP as an ecosystem <i>is</i> that interoperability. If you’re throwing that away, then there’s very little reason to shackle yourself to a larger design that the cryptographic community (more or less) unanimously agrees is dangerous and antiquated.
It is not a coincidence that most of the various proposed alternatives to PGP (signal, wormhole, age, minisign, etc.) are led by a single golden implementation and neither support nor promote community-driven specifications (e.g., at the IETF).<p>Over the decades, PGP has already transitioned out of old key formats and old crypto. None of us expects to receive messages encrypted with BassOmatic (the original encryption algorithm by Zimmermann), I assume? The process has been slow, arguably way slower than it should have been after the advances in attacks over the past 15 years (and that is exactly the crux behind the librepgp/openpgp schism). Nonetheless, here we are, pointing at the current gpg as "the" interoperable (yet flawed) standard.<p>In this age, when implementations are expected (sometimes by law) to be ready to update more quickly, the introduction of new crypto can take into account adoption rates and the specific context one operates in. And still, that happens within the boundaries of a reasonably interoperable protocol.<p>TLS 1.3 is a case in point - from certain points of view, it has been a total revolution and break with the past. But from many others, it is still remarkably similar to the previous TLS: lots of concepts are reused, and it can be deemed an iteration of the same standard. Nobody is questioning its level of interoperability, and nobody is shocked by the fact that older clients can't connect to a TLS 1.3-only server.
You're right, it's not a coincidence. The track record of standards-body-driven cryptography is wretched. It's why we all use WireGuard and not IPSEC. TLS 1.3 is an actually good protocol, but it took <i>for-ev-er</i> to get there, and part of that process involved basically letting the cryptographers seize the microphones and make decisions by fiat in the 1.2->1.3 shift (TLS 1.3 also follows a professionalization at CFRG). It's the exception that proves the rule. Its contemporaneous sibling is WPA3 and Dragonfly, and look how that went.
I wrote the post and object to the argument that it primarily covers GnuPG issues.<p>But stipulate that it does, and riddle me this: what's the point? You can use Sequoia set up for "modern crypto including AEAD", yes, but now you're not compatible with the rest of the installed base of PGP.<p>If you're going to surrender compatibility, why on Earth would you continue to use OpenPGP, a design mired in 1990s decisions that no cryptography engineer on the planet endorses?
If you use AEAD, you clearly expect your recipients to use a recent client. Same if you want to use PQC or any other recent feature.<p>If your audience is wider, don't use AEAD, but make sure to sign the data too.<p>With respect to the 90's design: yes, it is not pretty and it could be simpler. It is also not broken and not too difficult to understand.
Even though I have read so many posts criticizing PGP, it's still difficult for me to find an alternative. The author states in the article that being a "Swiss Army Knife" is bad. I understand the argument, but this is precisely what makes GPG so powerful: the scheme of public keys, private keys, revocation, the embedded WOT, files, texts, everything. They urgently need to make a "modern version" of GPG. He needs to offer a replacement, otherwise he's just whining.
There's a section in this post with proposed replacements:<p><a href="https://www.latacora.com/blog/2019/07/16/the-pgp-problem/#the-answers" rel="nofollow">https://www.latacora.com/blog/2019/07/16/the-pgp-problem/#th...</a><p>I was also frustrated with this criticism in the past, but there are definitely some concrete alternatives provided for many use cases there. (But not just with one tool.)
I’m still frustrated by the criticism because I internalized it a couple of years ago and tried to move to age+minisign, because those are the only two scenarios I personally care about. The overall experience was annoying, given that the problems with pgp/gpg are esoteric and abstract enough that, unless I’m personally worried about a targeted attack against me, they are fine-ish.<p>If someone scotch-tapes age+minisign together and convinces git/GitHub/GitLab/Codeberg to support it, I’ll be so game it’ll hurt. My biggest usage of pgp is asking people filing bug reports to send me logs and giving them my pgp key if they are worried and don’t want to publicly post their log file. 99.9% of people don’t care, but I understand the 0.1% who do. The other use is to sign my commits and to encrypt my backups.<p>Ps: the fact that this post is recommending Tarsnap and magicwormhole shows how badly it has aged in 6 years IMO.
> git/GitHub/GitLab/Codeberg<p>Is this about commit signing? Git and all of the mentioned forges support SSH keys for that afaik (you upload the public key in the settings).<p>git configuration:<p><pre><code>gpg.format = ssh
user.signingkey = /path/to/key.pub
</code></pre><p>If you need local verification of commit signatures, you also need gpg.ssh.allowedSignersFile to list the known keys (including yours). ssh-add can remember credentials. Security keys are supported too.
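To make that concrete, here is a minimal sketch of the whole setup (the paths, the email, and the key type are illustrative assumptions; adjust them to your own key):<p><pre><code># sign commits with an SSH key instead of GPG
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true

# local verification needs an allowed-signers file
git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
echo 'you@example.com namespaces="git" ssh-ed25519 AAAA...' >> ~/.ssh/allowed_signers

# check signatures
git verify-commit HEAD      # or: git log --show-signature
</code></pre>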
Has Tarsnap become inadequate, security-wise? The service may be expensive for a standard backup. It had a serious bug in 2011, but hasn't it been adequate since then?
you cannot selfhost it. it's not verified and audited independently as a whole system.<p>for some people, that's important
I don’t know anything that makes me think it’s inadequate per se, but it’s also been more than 10 years since I thought about it. Restic, gocryptfs, and/or age are far more flexible, generic and flat out better in managing encrypted files/backups depending on how you want to orchestrate it. Restic can do everything, gocryptfs+rclone can do more, etc.
> the fact that this post is recommending Tarsnap and magicwormhole shows how badly it has aged in 6 years<p>What's wrong with magic wormhole?
It’s just not the same thing. There is significant overlap, but it’s not enough to be a reasonable suggestion. You can’t suggest a service as a replacement for a local offline tool. It’s like saying “Why do you need VLC when you can just run peertube?”. Also since then, age is the real replacement for pgp in terms of sending encrypted files. Wormhole is a different use case.
Adding to my comment since it was downvoted:<p>There are two parts of "sending encrypted files": the encryption and the sending. An offline tool (e.g. PGP or age) seems necessary only when you want to decouple the two. After all, you can't do the sending with an offline tool (except insofar as you can queue up a message while offline, such as with traditional mail clients).<p>The question thereby becomes "Why decouple the sending from encryption?"<p>As far as I can see, the main (only?) reason is if the communication channel used for sending doesn't align with your threat model. For instance, maybe there are multiple parties at the other end of the channel, but you only trust one of them. Then you'd need to do something like encrypt the message with that person's key.<p>But in the use-case you mentioned (not wanting to publicly post a log file), I don't see why that reason would hold; surely the people who would send you logs can trust Signal every bit as easily as PGP. Share your Signal username over your existing channel (the mailing list), thereby allowing these people to effectively "upgrade" their channel with you.
Sticking to the use case of serving that 0.1% of users, why can’t a service or other encrypted transport be a solution? Why doesn’t Signal fit the bill for instance?
The so-called web of trust is meaningless security theatre.<p>>They urgently need to make a "modern version" of GPG.<p>Absolutely not.
> The so-called web of trust is meaningless security theatre.<p>Ignoring your comment’s lack of constructive criticism, I’m going to post this meaningful implementation that an excellent cryptographer, Soatok Dreamseeker, is working on: [1].<p>You may also search for his posts in this HN thread, his nickname is “some_furry”.<p>[1]: <a href="https://github.com/fedi-e2ee/public-key-directory-specification" rel="nofollow">https://github.com/fedi-e2ee/public-key-directory-specificat...</a>
I wasn’t aware of the efail disclosure timeline. Apparently Koch responds to the report by noting that GPG prints an error when MDC is stripped, which has eerie parallels to the justification behind the recent gpg.fail WONTFIX response (see <a href="https://news.ycombinator.com/item?id=46403200">https://news.ycombinator.com/item?id=46403200</a>)
I think the two cases are different. The EFAIL researchers were suggesting that the PGP code (whatever implementation) should throw an error on an MDC integrity error and then stop. The idea was that this would be a fix for EFAIL in that the modified message would not be passed on to the rest of the system and thus was failsafe. The rest of the system could not pass the modified message along to the HTML interpreter.<p>In the gpg.fail case the researchers suggested that GPG should, instead of returning the actual message structure error (a compression error in their case), return an MDC integrity error instead. I am not entirely clear why they thought this would help. I am also not sure if they intended all message structure errors to be remapped in this way or just the single error. A message structure error means that all bets are off so they are in a sense more serious than a MDC integrity error. So the suggestion here seems to be to downgrade the seriousness of the error. Again, not sure how that would help.<p>In both cases the researchers entirely ignored regular PGP authentication. You know, the thing that specifically is intended to address these sorts of things. The MDC was added as an afterthought to support anonymous messages. I have come to suspect that people are actually thinking of things in terms of how more popular systems like TLS work. So I recently wrote an article based on that idea:<p>* <a href="https://articles.59.ca/doku.php?id=pgpfan:pgpauth" rel="nofollow">https://articles.59.ca/doku.php?id=pgpfan:pgpauth</a><p>It's occurred to me that it is possible that the GnuPG people are being unfairly criticized because of their greater understanding of how PGP actually works. They have been doing this stuff forever. Presumably they are quite aware of the tradeoffs.
The biggest issue with PGP/gpg is the difficulty of getting rid of it. If you work on big distros, or know someone who works on big distros, please (start asking them to) add <a href="https://github.com/jedisct1/minisign" rel="nofollow">https://github.com/jedisct1/minisign</a> to the pre-installed packages to facilitate the transition. It's almost a chicken-and-egg problem, but the sad thing is that no project wants to swap its signing tool for a better one until everyone can verify the new signatures.
I agree that age + minisign comprise a much neater stack that does basically everything I would need to use PGP for.<p>Neither of them supports hardware keys though, as far as I could see. OTOH ssh and GnuPG do support hardware keys, like smart cards or Yubikey-like devices. I suppose by the same token (not a pun, sadly) they don't support the various software keychains provided by OSes either, since they don't support external PKCS#11 providers (the way ssh does).<p>This may reduce the attack needed to steal a private key to simple unprivileged infiltration, e.g. via code run during installation of a compromised npm package, or similar.
BTW, apparently age has plugins that allow using FIDO2 and TPM for cryptography.
> Neither of them supports hardware keys though, as much as I could see.<p><a href="https://github.com/str4d/age-plugin-yubikey" rel="nofollow">https://github.com/str4d/age-plugin-yubikey</a>
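For context, a rough sketch of how that plugin slots into the normal age workflow (the plugin flags are from memory and the recipient string is a placeholder, so double-check against the plugin's README):<p><pre><code># enroll a key slot on the YubiKey (interactive setup)
age-plugin-yubikey

# print recipients / write an identity stub used for decryption
age-plugin-yubikey --list
age-plugin-yubikey --identity > yubikey-identity.txt

# encrypt to the printed age1yubikey1... recipient, decrypt via the stub
age -r age1yubikey1... -o secrets.tar.age secrets.tar
age -d -i yubikey-identity.txt -o secrets.tar secrets.tar.age
</code></pre>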
Probably resurfacing, because we have some new attacks thanks to CCC. [0]<p>[0] <a href="https://news.ycombinator.com/item?id=46453461">https://news.ycombinator.com/item?id=46453461</a>
Worth noting: minisign and age were also affected by a couple of things here.<p>GnuPG has decided a couple of things are out of scope and fixed a couple of others. Not all of it is in distro packages yet.<p>age didn't have the clearest way to report things - Discord is apparently the point of contact. That will probably improve soon.<p>minisign was affected by most of what GnuPG was, but had a faster turnaround to patching.
The minisign bug was much less severe than the (insane) GPG signing bugs, and the age bug wasn't a cryptographic thing at all, just a dumb path sanitization thing. Minisign was <i>not</i> in fact affected by most everything GPG was. The GnuPG team <i>wontfixed</i> one of the most significant bugs!
The mark of good security is not "has no bugs". It's how the maintainers respond to security-relevant bugs.
Indeed, I saw it linked to in that thread, read it and thought it'd be worth resurfacing.
After reading the PyCon 2016 presentation about wormhole, and assuming my understanding of channels is correct (that is, each session on the same wireless network constitutes a channel): what's stopping a hostile third party, who wishes to stop a file transfer from happening, from spamming every channel with random codes?
Recently, this opinionated list of PGP alternatives went around:<p><a href="https://soatok.blog/2024/11/15/what-to-use-instead-of-pgp/" rel="nofollow">https://soatok.blog/2024/11/15/what-to-use-instead-of-pgp/</a><p>One use case I've not seen covered is sending blobs asynchronously with forward secrecy. Wormhole requires synchronously communicating the password somehow, and Signal requires reasonable buy-in by the recipient.<p>Basically, I'd like to just email sensitive banking and customer data in an encrypted attachment without needing to trust that the recipient will never accidentally leak their encryption key.
Tall order.
One of the projects I alluded to in that post makes a technological solution to what you want <i>easy</i> to build, but the harder problem to solve is societal (i.e., getting it adopted).<p><a href="https://github.com/fedi-e2ee/public-key-directory-specification/blob/main/Specification.md#auxiliary-data" rel="nofollow">https://github.com/fedi-e2ee/public-key-directory-specificat...</a><p>My current project aims to bring Key Transparency to the Fediverse for building E2EE on ActivityPub so you can have DMs that are private even against instance moderators.<p>One of the things I added to this design was the idea of "Auxiliary Data" which would be included in the transparency log. Each AuxData has a type identifier (e.g. "ssh-v2", "age-v1", "minisign-v0", but on the client-side, you can have friendly aliases like just "ssh" or "age"). The type identifier tells the server (and other clients) which "extension" to use to validate that the data is valid. (This is to minimize the risk of abuse.)<p>As this project matures, it will be increasingly easy to do this:<p><pre><code> // @var pkdClient -- A thin client-side library that queries the Public Key Directory
// @var age -- An implementation of age
async function forwardSecureEncrypt(file, identity) {
  const agePKs = await pkdClient.FetchAuxData(identity, "age");
  if (agePKs.length === 0) {
    throw new Error("No age public keys found");
  }
  return age.Encrypt(file, agePKs[0]);
}
</code></pre>
And then you can send the encrypted file in an email <i>without a meaningful subject line</i> and you'll have met your stated requirements.<p>(The degree of "forward secure" here depends on how often your recipient adds a new age key and revokes their old one. Revocation is also published through the transparency log.)<p>However, email encryption is a mess in ways most people don't quite appreciate, so I'm blogging about that <i>right now</i>. :)<p>Also, Filippo just created a transparency-based keyserver for age, fwiw: <a href="https://words.filippo.io/keyserver-tlog/" rel="nofollow">https://words.filippo.io/keyserver-tlog/</a>
My comments on The PGP Problem:<p>* <a href="https://articles.59.ca/doku.php?id=pgpfan:tpp" rel="nofollow">https://articles.59.ca/doku.php?id=pgpfan:tpp</a>
I'm curious. What's the advantage of using signify/minisign instead of good old PGP/GPG?
PGP/GPG is a complicated mess designed in the 1990's and only incrementally updated to add more complexity and cover more use-cases, most of which you'll never need. Part of PGP/GPG is supporting a large swath of algorithms (from DSA to RSA to ECDSA to EdDSA to whatever post-quantum abomination they'll cook up next).<p>Signify/Minisign is Ed25519. Boring, simple, fit-for-purpose.<p>You can write an implementation of Minisign in most languages with little effort. I did in PHP years ago. <a href="https://github.com/soatok/minisign-php" rel="nofollow">https://github.com/soatok/minisign-php</a><p>Complexity is the enemy of security.
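For a sense of how small that surface is, the whole workflow is roughly the following (the file name is a placeholder; double-check the flags and default key locations against the README):<p><pre><code># generate a keypair (public key goes to minisign.pub)
minisign -G

# sign a file (writes file.tar.gz.minisig next to it)
minisign -Sm file.tar.gz

# verify against the public key file
minisign -Vm file.tar.gz -p minisign.pub
</code></pre>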
PGP is horrible and way overly complicated but this article concludes by trading that for a long list of piecemeal solutions, some of which are cloud based and semi or fully proprietary.<p>PGP has hung on for a long time because it “works” and is a standard. The same can be said for Unix, which is not actually a great OS. A modern green field OS designed by experienced people with an eye to simplicity and consistency would almost certainly be better. But who’s going to use it?
Is anyone else unable to read the report on mobile? Completely broken styling for me.
Anyone know why GitHub doesn't support signing commits with signify/minisign?
GitHub is not git and does not control what features get added to git.<p>It looks like there are some wrapper scripts to make git sign commits with other tools using the GPG cli interface but nothing official.
My first guess is, "Not enough people have asked for it."<p>So let's get the party started: <a href="https://github.com/orgs/community/discussions/183391" rel="nofollow">https://github.com/orgs/community/discussions/183391</a>
How does this help people who are not following this issue regularly? gpg protected Snowden, and this article promotes tools by one of the cryptographers who promoted non-hybrid encryption:<p><a href="https://blog.cr.yp.to/20251004-weakened.html#agreement" rel="nofollow">https://blog.cr.yp.to/20251004-weakened.html#agreement</a><p>So what to do? PGP, by the way, never claimed to prevent traffic analysis; mixmaster was the layer that somehow got dropped, unlike Tor.
GPG, like OpenSSL, is too huge and complex to use on a daily basis.<p>OpenBSD has signify, which works fine. But I wouldn't mind something like a cleaned-up age(1), without the mentioned issues.<p>GNU tends to stack features like crazy. That made sense over the limited Unix tools in the 90's, but nowadays 'ls -F', oksh with completion and the like make them decent enough while respecting your freedom and not being overfeatured.<p>LibreSSL did the same over OpenSSL.
i like the approach by the bsd people: shut the f* up and code.<p>as long as there are no (audited and verified) replacements for each niche, we still have to use it.<p>sadly even gpg (because of all this fud'ing around) now falls from grace and tries to say "well, not THAT application, only THAT".. sigh.
Can the link be updated to not be to the end of the page?
I think it is fair to say that usability is fairly bad for most end-to-end encrypted systems ... and usability and security are intertwined for E2EE. Here are my comments on Signalgate 1.0:<p>* <a href="https://articles.59.ca/doku.php?id=em:sg" rel="nofollow">https://articles.59.ca/doku.php?id=em:sg</a>
> Of course, people here who have recommended Signal are silent about these issues and rather continue to bash gpg.<p>I've reviewed Signal extensively on my blog. <a href="https://soatok.blog/2025/02/18/reviewing-the-cryptography-used-by-signal/" rel="nofollow">https://soatok.blog/2025/02/18/reviewing-the-cryptography-us...</a><p>I analyze cryptosystems based on what an attacker can do, given sufficient capabilities.<p>"The user adds the wrong person to a group chat" is not a cryptographic weakness, nor a particularly interesting one. Why would I have <i>anything</i> to say about it?<p>We aren't "silent" about your pet peeves. We just have lives and more interesting things to talk about.<p>> EDIT: tptacek enters the chat, my messages are downvoted. This is how he convinces people to use Signal.<p>This kind of comment gets people banned from Hacker News. Please stop that.
on another note: it's so funny that this says email should not be used, when the whole world uses email. it's so far detached from reality...
I feel like I'm taking pills, but hear me out.<p>If there's one thing we learned from the Snowden leaks, it's that the NSA can't break GPG.<p>Look at it from the POV of someone who, like me, isn't an expert: on the one hand I have ivory tower researchers telling me that GPG is "bad". On the other hand I have the fact that the most advanced intelligence agency in the world can't break it. My personal conclusion is that GPG is actually fucking awesome.<p>What am I missing?
My impression is that GPG, when used correctly, is secure. But there are so many problems with it that the chances of shooting yourself with one of the footguns are too high for it to be a reliable solution.<p>The alternatives support newer encryption methods, but nothing has fundamentally changed; that doesn't make them less secure, and they have fewer footguns to worry about.<p>The weakest link in cryptography is always people.
The NSA can't break GPG assuming everything is working properly. This blog post (which, to be fair, I only skimmed) explains that GPG is a mess, which can lead to things not working properly, and it also gives real-life examples. You may also want to see <a href="https://gpg.fail" rel="nofollow">https://gpg.fail</a> (you can tell they're from the ivory tower by the cat ears). The blog post also mentions bad UX, which you and I can directly appreciate (if anything, I might expect ivory tower types to dismiss UX issues).
> If you’d like empirical data of your own to back this up, here’s an experiment you can run: find an immigration lawyer and talk them through the process of getting Signal working on their phone.<p>> Long term keys are almost never what you want. If you keep using a key, it eventually gets exposed.<p>Having a sentence praising Signal followed by a sentence explaining the main critique of Signal (requiring a mobile number) makes me question the credibility of the whole post.