> SMTP “didn’t win because it was ‘better,’” he argued, but “just because it was easier to implement.”<p>Yes - and this is actually really important! It's true of most of the important early internet technologies. It's the entire reason "internet" standards won over "telco" (in this case ITU) standards - the latter could only be deployed by big coordinated efforts, while internet standards let individual decentralized admins hook their sites together.<p>Did <i>any</i> of the ITU standards win? In the end, the internet swallowed telephones and everything is now VOIP. I think the last of the X standards left is X.509?
> It's the entire reason "internet" standards won over "telco" (in this case ITU) standards - the latter could only be deployed by big coordinated efforts,<p>Anyone remember the promise of ATM networking in the 90's? It was telecom-grade networking which used circuit-switched networking that would handle voice, video and data down one pipe. Instead of carelessly flinging packets into the ether like a savage, you had a deterministic network of pipes. You called a computer as if it were a telephone (or maybe that was Datakit?) and ATM handed the user a byte stream like TCP. Imagine never needing an IP stack or setting traffic priority because the network already handles the QoS. Was it simple to deploy? No. Was it cheap? Nooohooohooohooo. Was Ethernet any of those? YES AND YES. ATM was superior but lost to the simpler and cheaper Ethernet, which was pretty crappy in its early days (thinnet, thicknet, terminators, vampire taps, AUI, etc.) but good enough.<p>The funny part is this had the unintended consequence of needing to reinvent the wheel once you get to the point where you need telecom-sized/like infrastructure. Ethernet had to adapt to deterministic real-time needs, so various hacks and standards have been developed to paper over these deficiencies, which is what TSN is - reinventing ATM's determinism. In addition we also now have OTN, yet another protocol to further paper over the various other protocols to mux everything down a big fat pipe to the other end, which allows Ethernet (and IP/ATM/etc.) to ride deterministically between data centers.
> Ethernet had to adapt to deterministic real-time needs<p>Without being able to get too into the telco detail, I think the lesson was that hard realtime is both much harder to achieve and not actually <i>needed</i>. People will happily chat over nondeterministic Zoom and Discord.<p>It's both psychological and slightly paradoxical. Once you let go of saying "the system MUST GUARANTEE this property", you get a much cheaper, better, more versatile and higher bandwidth system that ends up meeting the property anyway.
Pretty sure TSN is unrelated to ATM determinism, and comes from a completely separate area (replacing custom field buses where timing and contention are more important than bandwidth). Some of ATM's complexity came from wanting to deliver the same quality of experience as plesiochronous networks provided for voice (that's how it got the weird cell size).<p>Once those requirements dropped away (partially because people just started to accept weird echo), the replacement became MPLS and whatever you can send IP over, where Ethernet sometimes shows up as packaging around the IP frame but has little relation to Ethernet otherwise.
ATM was superior in the context of a bill-by-the-byte telco-style network where oversubscribed links could be carefully planned. The "impedance mismatch" of IP's unreliable datagram delivery with ATM's guaranteed cell delivery created situations where ATM switches could effectively need unlimited buffer RAM to make their delivery guarantees, even when the cells contained IP datagrams that could just be discarded with no ill consequences.<p>There's likely an element of the "layering TCP on TCP" problem going on, too.<p>The classic popular treatment of the subject is: <a href="https://www.wired.com/1996/10/atm-3/" rel="nofollow">https://www.wired.com/1996/10/atm-3/</a>
WebPKI is derived from X.509, but I don't think X.509 lives on anymore. X.500 was stripped down to form LDAP, which is still in very heavy use today. There are still some X.400 systems in existence. I think some of the early cellphone generations may have used the ITU standards in the physical layer?<p>Of course, the biggest--and weirdest--success of the ITU standards is that the OSI model is still frequently the way networking stacks are described in educational materials, despite the fact that it bears no relation to how any of the networking stack was developed or is used. If you really dig into how the OSI model is supposed to work, one of the layers described only matters for teletypes--which were a dying, if not dead, technology when the model was developed in the first place.
I'll note that while X.509 certificates are deployed widely on the Internet, they are not deployed in the manner the ITU intended. There is no global X.500 directory and Distinguished Names are just opaque identifiers that are used to help find issuers during chain building. That hardly counts as a win for the ITU in my book.
And you could add any number of the big standards-group-based standards that a great deal of blood, sweat, and tears was poured into. Not universally the case, but more true than false.
The critical part of that quote: "Like a car with no brakes or seatbelts."
It doesn't seem to have worked out like that? You might as well say "like a car without a man walking in front of it with a red flag" <a href="https://en.wikipedia.org/wiki/Red_flag_traffic_laws" rel="nofollow">https://en.wikipedia.org/wiki/Red_flag_traffic_laws</a>
That's a partisan framing. Another framing could be that SMTP is the golf cart SMBs were asking for, not the car they were being sold.
A lot of the IETF standards winning came down to vendors avoiding work, even when paid for it.<p>Another was NIH in several important places.<p>Yet another was that ITU standards promoted use of compilers generating serialization code from schemas, and that required having the compiler. One common issue I found out from trying to rescue some old Unix OSI code was that the most popular option in use at many universities was apparently total crap.<p>In comparison, you could plop a grad student in front of telnet to experiment with SMTP. Nobody cared that it was shitty, because it was not supposed to be used long. And then nobody wanted to invest in better.
As X.509 goes, I doubt many could explain it offhand, with BER, DER, and the rest being encoding rules for ASN.1, plus other obscura.<p>I’ve never been a fan
> If the history of email had gone somewhat differently, the last email you sent could have been rescinded or superseded by a newer version when you accidentally wrote the wrong thing. It could have auto-destructed if not read by midnight.<p>Immutability is one of the best things about email.
> C=no; ADMD=; PRMD=uninett; O=uninett; S=alvestrand; G=harald<p>That would be a very annoying way to write an e-mail address, and no less prone to typosquatting (if anything, more).<p>Both standards lacked the hindsight we have today, but X.400 would just be added complexity (as years of tacked-on extensions built upon it) that makes error-free parsing harder
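For a feel of the complexity, here is a minimal sketch of parsing the address quoted above. This handles only the simple "KEY=value; KEY=value" form from the article's example; real X.400 O/R addresses have many more optional attributes (organizational units, domain-defined attributes, and so on), so treat this as an illustration, not an implementation.

```python
# Naive parse of the X.400-style O/R address quoted above.
# Real O/R addresses carry many more optional attributes; this only
# handles the flat "KEY=value; KEY=value" form from the article's example.
def parse_or_address(addr: str) -> dict:
    fields = {}
    for part in addr.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        fields[key.strip().upper()] = value.strip()
    return fields

print(parse_or_address("C=no; ADMD=; PRMD=uninett; O=uninett; S=alvestrand; G=harald"))

# Compare with an SMTP address, which needs exactly one split:
local, _, domain = "harald@uninett.no".partition("@")
```

Even the naive version needs a loop, key normalization, and a decision about empty fields (note ADMD is present but blank); the SMTP address needs one `partition` call.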
Plus, having to change email addresses when you physically move, in addition to when you change providers, would be immensely annoying.
X.400 is still in use today for things like sending invoices and orders through EDI.<p>Yes, it is a pain to manage. Yes, it is all still mostly running on 20+-year-old hardware and software.<p>It is slightly ironic that the main way we communicate X.400 addresses between parties is through modern email.
SMTP won because it was simpler, but it's probably good to look at why it was simpler.<p>SMTP handled routing by piggybacking on DNS. When an email arrives the SMTP server looks at the domain part of the address, does an MX query, and then attempts to transfer it to the results of that query.<p>Very simple. And, it turns out, immensely scalable.<p>You don't need to maintain any routing information unless you're overriding DNS for some reason - perhaps an internal secure mail transfer method between companies that are close partners, or are in a merger process.<p>By contrast X.400 requires your mail infrastructure to have defined routes for other organisations. No route? No transfer.<p>I remember setting up X.400 connectors for both Lotus Notes/Domino and for Microsoft Exchange in the mid to late 90s, but I didn't do it very often - because SMTP took over incredibly quickly.<p>An X.400 infrastructure would gain new routes slowly and methodically. That was a barrier to expanding the use of email.<p>Often X.400 was just a temporary patch during a mail migration - you'd create an artificial split in the X.400 infrastructure between the two mail systems, with the old product on one side and the new target platform on the other. That would allow you to route mail within the same organisation whilst you were in the migration period. You got rid of that the very moment your last mailbox was moved, as it was often a fragile thing...<p>The only thing worse than X.400 for email was the "workgroup" level of mail servers like MS Mail/cc:Mail. If I recall correctly they could sometimes be set up so your email address was effectively a list of hops on the route. This was because there was no centralised infrastructure to speak of - every mail server was just its own little island.
It might have connections to other mail servers, but there was no overarching directory or configuration infrastructure shared by all servers.<p>If that was the case then your email address would be "johnsmith @ hop1 @ hop2 @ hop3" on one mail server, but for someone on the mail server at hop1 your email address would be "johnsmith @ hop2 @ hop3", and so on. It was an absolute nightmare for big companies, and one of the many reasons that those products were killed off in favour of their bigger siblings.
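The SMTP routing step described above is short enough to sketch. The DNS resolver here is stubbed with made-up records (a real MTA would issue an actual MX query, e.g. via a DNS library); the point is the logic: split on `@`, sort MX records by preference, try hosts in order, fall back to the domain itself if no MX exists.

```python
# Sketch of SMTP's routing logic: domain part -> MX records -> ordered host list.
# The resolver is stubbed with fake records; a real MTA would do a DNS MX query.
def mx_lookup(domain: str) -> list[tuple[int, str]]:
    # Stub: pretend DNS returned these (preference, hostname) records.
    fake_dns = {
        "example.com": [(20, "backup-mx.example.com"), (10, "mx1.example.com")],
    }
    return fake_dns.get(domain, [])

def route(address: str) -> list[str]:
    _, _, domain = address.partition("@")
    records = sorted(mx_lookup(domain))               # lowest preference number first
    return [host for _, host in records] or [domain]  # no MX? fall back to the domain

print(route("alice@example.com"))  # ['mx1.example.com', 'backup-mx.example.com']
```

No per-organisation route tables anywhere: every MTA on the internet can reach every other one with nothing but this lookup, which is the scalability point made above.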
> ... why it was simpler.<p>In the early 90s I implemented a gateway between Novell email and X.400. What amused me the most was X.400 specified an exclusive enumerated list of reasons why email couldn't be delivered, including "recipient is dead". At the X.400 protocol level this was a binary number. SMTP uses a 3 digit number for general category, followed by a free form line of text. Many other Internet standards including HTTP use the same pattern.<p>It was already obvious at the time that the X.400 field was insufficient, yet also impractical for mail administrators to ensure was complete and correct.<p>That was the underlying problem with the X.400 and similar where they covered everything in advance as part of the spec, while Internet standards were more pragmatic.
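The SMTP pattern described above - a 3-digit code whose first digit gives the general category, followed by free-form text - is trivially extensible, which is exactly what the fixed X.400 enumeration wasn't. A minimal sketch (the category names follow the usual SMTP convention; the parsing is simplified and ignores multiline replies):

```python
# The SMTP reply pattern: 3-digit code (first digit = general category)
# plus a free-form text line. Simplified: multiline replies are ignored.
def parse_reply(line: str) -> tuple[int, str, str]:
    code, _, text = line.partition(" ")
    category = {"2": "success", "4": "transient failure", "5": "permanent failure"}
    return int(code), category.get(code[0], "unknown"), text

print(parse_reply("550 No such user here"))
# (550, 'permanent failure', 'No such user here')
```

A server can say anything it likes in the text ("recipient is dead", even) without any client needing a spec update - clients only have to understand the first digit. HTTP status lines reuse the same idea.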
My first business card when I was working for a tech company had an X.400 address on it. Nobody was memorising that. Or writing it down quickly.
The X.400 world would have had different spam economics because metered usage by your telco (who would be acting as a "Value Added Network" provider and delivering your X.400 mail) would likely have been the norm. As other comments have pointed out, this is still A Thing today with X.400 VANs being used for EDI.
Working, free implementations are better than a perfect specification supported only incompletely by closed, expensive implementations.
This is an example of how simplicity won over features.<p>Not even then, when people with access to computers probably numbered in the thousands, would anyone have liked to type "C=no; ADMD=; PRMD=uninett; O=uninett; S=alvestrand; G=harald", as in the example from the article.
You were not supposed to type it out, you looked it up using your X.500 directory.
Is this an example of simplicity winning over features, or an example of features that are advertised but don't exist failing to win over the competition?<p>Some examples from the article:<p>> You could have messaged an entire organization or department<p>This is a mailing list.<p>> So it was possible, say, for one implementation of X.400 to offer X.400 features like recalling a message, in theory at least, when such guarantees would fail as soon as messages left their walled garden. But “they couldn't buck the rules of physics,” Borenstein concluded. Once a message reached another server, the X.400 implementations could <i>say</i> that an email was recalled or permanently deleted, but there was no way to prove that it hadn’t been backed up surreptitiously.<p>This is a feature that (1) is in the spec, and also (2) is impossible to implement. That's not a real feature. It's a bug in the spec.<p>> You don’t email with X.400 today. That is, unless you work in aviation, where AMHS communications for sharing flight plans and more are still based on X.400 standards (which enables, among other things, prioritizing messages and sending them to the tower at an airport instead of a specific individual).<p>This is... also a mailing list. There's nothing difficult about having an email address for the tower. That email could go to one person, or many people. What's the difference supposed to be? What "feature" are we saying X.400 has that email didn't start with?
More like X.400 times convoluted
Gall's Law:<p>"A complex system that works is invariably found to have evolved from a simple system that worked."<p><a href="https://lawsofsoftwareengineering.com/laws/galls-law/" rel="nofollow">https://lawsofsoftwareengineering.com/laws/galls-law/</a><p>In my naive youth I always thought top-down design was the sensible way to build systems. But after witnessing so many of them fail miserably, I now agree with Gall.
i once did a contract for a company that built a product around connectors for legacy lan e-mail products and an x.400 mta. it was a gigantic steaming pile of shit and made me appreciate the simple internet protocols so much more than i already did.
> You could have been notified when the message was read a full 15 years before email had something similar tacked on.<p>Thanks to email security scanners this feature is largely broken.<p>And so are single click to unsubscribe links. So much so that we have to put our unsubscribe page behind a captcha.<p>rant over
Not trying to be rude, but if you put your unsubscribe page behind a captcha I am going to mark you as spam and move on.
> <i>You could have been notified when the message was read a full 15 years before email had something similar tacked on.</i><p>Which spammers and marketers would have <i>loved</i>.<p>I have "load remote content" disabled on my e-mail client so that tracking graphics/pixels do not leak such information to the sender.
> I have "load remote content" disabled on my e-mail client so that tracking graphics/pixels do not leak such information to the sender.<p>Often that's meaningless, as email scanner software will load and inspect all links and images regardless of the human's email client preferences. It basically comes down to whether Constant Contact, or similar, can detect if a link was clicked by security software or an actual human. And security software wants to look like an actual human, because if security software looks like security software it's very easy for bad actors to serve safe payloads to security software and malware payloads to human actors.
Are you saying that email scanners were not only fetching the unsubscribe link but also submitting the “unsubscribe” button/form on the page?<p>I find this hard to believe since everyone else seems to manage this without a Captcha.
I think you're referring to things like tracking pixels, whereas the author was likely referring to _actual_ email read receipts, where the sender can request a read receipt, and the receiver's MUA will prompt them to send one.
No, it’s largely broken because of spam. I don’t want to be signed up to your useless email marketing list, and I want to use an email client that makes unsubscribing as easy as possible.
> Thanks to email security scanners this feature is largely broken.<p>One person's feature is another's anti-feature. I'm glad it's dead.
If I cannot just click a button and unsubscribe, guess what, you are malicious spam.
waiting for the inevitable "gmail bad, why does it spam my emails so much" rant