13 comments

  • dextercd 4 hours ago
    You need external monitoring of certificate validity. Your ACME client might not be sending failure notifications properly (like happened to Bazel here). The client could also think everything is OK because it acquired a new cert, meanwhile the certificate isn't installed properly (e.g., not reloading a service so it keeps using the old cert).

    I have a simple Python script that runs every day and checks the certificates of multiple sites.

    One time this script signaled that a cert was close to expiring even though I saw a newer cert in my browser. It turned out that I had accidentally launched another reverse proxy instance which was stuck on the old cert. Requests were randomly passed to either instance. The script helped me correct this mistake before it caused issues.
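    A minimal sketch of that kind of daily check (the hostnames, port, and 14-day threshold are placeholders, not details from the comment): connect to each site, read the certificate it actually serves, and warn if it expires soon.

        #!/usr/bin/env python3
        import socket
        import ssl
        import time

        HOSTS = ["example.com", "example.org"]   # sites to check (placeholders)
        THRESHOLD_DAYS = 14                      # warn if less than two weeks left

        def days_until_expiry(host: str, port: int = 443) -> float:
            ctx = ssl.create_default_context()
            with socket.create_connection((host, port), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()     # the cert served on this connection
            expires = ssl.cert_time_to_seconds(cert["notAfter"])
            return (expires - time.time()) / 86400

        for host in HOSTS:
            remaining = days_until_expiry(host)
            if remaining < THRESHOLD_DAYS:
                print(f"WARNING: {host} certificate expires in {remaining:.1f} days")

    Because it opens a real TLS connection, a check like this also catches the "renewed but never reloaded" case the comment describes.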
    • compumike 3 hours ago
      100%, I've run into this too. I wrote some minimal scripts in Bash, Python, Ruby, Node.js (JavaScript), Go, and PowerShell to send a request and alert if the expiration is less than 14 days from now: https://heyoncall.com/blog/barebone-scripts-to-check-ssl-certificate-expiration because anyone who's operating a TLS-secured website (which is... basically anyone with a website) should have at least that level of automated sanity check. We're talking about ~10 lines of Python!
    • firesteelrain 4 hours ago
      There is a Prometheus exporter called ssl_exporter that lets Grafana display a dashboard of all of your certs and their expirations. But the trick is that you need to know where all your certs are located. We were using Venafi to do auto discovery, but a simple script to basically nmap your network provides the same functionality.
  • dvratil 4 hours ago
    Happened on the first day of my first on-call rotation - a cert for one of the key services expired. Autorenew failed, because one of the subdomains on the cert no longer resolved.

    The main lesson we took from this was: you absolutely need monitoring for cert expiration, with an alert when (valid_to - now) becomes less than the typical refresh window.

    It's easy to forget this, especially when it's not strictly part of your app, but it's essential nonetheless.
  • firesteelrain 4 hours ago
    Operationally, the issue is rooted in simple monitoring and accurate inventory. The article is apt: “With SSL certificates, you usually don’t have the opportunity to build up operational experience working with them, unless something goes wrong.”

    You can prepare for renewal by appending the new cert block to the same file as the old cert. But you also need to know where all your certificates are located. We were using Venafi for the auto discovery and email notifications. Prometheus ssl_exporter with Grafana integration and email alerts works the same. The problem is knowing where all the hosts, containers, and systems that have certs are located. A simple nmap-style scan of all endpoints can help. But you might also have containers with certs, or certs baked into VM images. Sure, there are all sorts of things like storing the cert in a CI/CD global variable, bind-mounting secrets, Vault Secret Injector, etc.

    But it’s all rooted in maintaining a valid, up-to-date TLS inventory. And that’s hard. As the article states: “There’s no natural signal back to the operators that the SSL certificate is getting close to expiry. To make things worse, there’s no staging of the change that triggers the expiration, because the change is time, and time marches on for everyone. You can’t set the SSL certificate expiration so it kicks in at different times for different cohorts of users.”

    Every time this happens you whack-a-mole a change. You get better at it, but not before you lose some credibility.
    • renewiltord 15 minutes ago
      Can do with any weighted LB, right? E.g. Route53 or Cloudflare LB. But even manually you just need k IPs (perhaps even 2) and have host k1 and host k2 report different (overlappingly valid) certs. Then (1/k) of users will see the bad cert. Your usual hosts will have near zero failures, but the canary will have 100% failures.

      I’ve always used the calendar event before expiry and then the manual renew option, but I wonder why I didn’t do this. It’s trivial to roll out. With Route53 just make one canary LB and balance 1% of traffic to it. Can be entirely automated.
  • loloquwowndueo 5 hours ago
    There are plenty of other technologies whose failure mode is a total outage; it’s not exclusive to a failed certificate renewal.

    A certificate renewal process has several points at which failure can be detected and action taken, and it sounds like this team was relying only on a “failed to renew” alert/monitor.

    A broken alerting system is mentioned: “didn’t alert for whatever reason”.

    If this certificate is so critical, they should also have something that alerts if you’re still serving a certificate with less than 2 weeks validity - by that time you should have already obtained and rotated in a new certificate. This gives plenty of time for someone to manually inspect and fix.

    Sounds like a case of a “nothing in this automated process can fail, so we only need this one trivial monitor which also can’t fail, so meh” attitude.
    • SoftTalker 22 minutes ago
      Wait until they start expiring 47 days from issue (coming soon). Though maybe this will actually help, because it will happen often enough that you (a) won't completely forget how to deal with it and (b) have more motivation to be proactive.
    • yearolinuxdsktp 5 hours ago
      Additionally, warnings can be built into the clients themselves. If you connect to a host whose cert expires in less than 2 weeks, print a warning in your client. That would be further incentive not to let certs go unrenewed.
  • 1970-01-01 3 hours ago
    I agree with this. Certs are designed to function as a digital cliff. They will either be accepted or they won't, with no safe middle ground. Therefore all certs in a chain can only be as reliable as the least understood cert in your certificate management.
  • gmuslera 4 hours ago
    If you think SSL certificates are dangerous, try seeing the dangers of NOT using them, especially for a service that is a central repository of artifacts meant to be automatically deployed.

    It is not about encryption (for that, a self-signed certificate lasting till 2035 would suffice), but verification: who am I talking with? Reaching the right server can be messed up by DNS or routing, among other things. Yes, that adds complexity, but we are talking more about trust than technology.

    And once you recognize that it is essential to have a trusted service, then give it the proper instrumentation to ensure that it works properly, including monitoring and expiration alerts, and documentation about it, rather than just "it works" and dismissing it.

    May we retitle the post as "The dangers of not understanding SSL Certificates"?
    • duufuvkfmc 4 hours ago
      Debian's apt does not use SSL as far as I know, and I am not aware of any serious security disaster. Their packages are signed and the content is not considered confidential.
      • tuetuopay 3 hours ago
        Debian 13 uses https://deb.debian.org by default. Even the upgrade docs from 12 to 13 mention the https variant. They were quite hostile to https for a while, but now it seems they bit the bullet.
      • crote 3 hours ago
        If I'm not mistaken, apt repositories have very similar failure modes - just using PGP certs instead of SSL certs. The repository signing key can still expire or get revoked, and you'll have an *even harder* time getting every client to install a new one...
      • gmuslera 2 hours ago
        Debian has multiple mirrors, and some distributions even promote having local mirrors. The model is different: as you say, the packages are signed, so you know who made them, wherever you got them from.

        And as I said above, SSL is about more than encryption; it's also about knowing that you are connecting to the right party. Maybe for a repository with multiple mirrors, dns aliases, and a layer of "knowing whom this came from", it is not that essential, but for most of the rest, even if the information is public, knowing that it comes from the authoritative source, or really from who you think it comes from, is important.
      • direwolf20 4 hours ago
        The selection of packages installed on a server should be treated as confidential, but you could probably infer it from file sizes.
  • Spivak 34 minutes ago
    Infra person here: you will need external monitoring at some point, because checking that your site is up all over the world isn't something you want to do in house. Not because you couldn't, but because their outages are likely to be uncorrelated with yours—AWS notwithstanding.

    You'll have one of these things anyway, and I haven't seen one yet that doesn't let you monitor your cert and send you expiration notices in advance.
  • flowerlad 5 hours ago
    We need a way to set multiple SSL certificates with overlapping durations, so if one certificate expires the backup certificate becomes active. If the overlap is a couple of months, then you have plenty of time to detect and fix the issue.

    Having only one SSL certificate is a single point of failure, and we have eliminated single points of failure almost everywhere else.
    • woodruffw 5 hours ago
      You can do this pretty easily with Let’s Encrypt, to my knowledge. You can request reissuance every 30 days, for example, which would give you a ladder of three 90-day certificates.

      Edit: but to be clear, I don’t understand *why* you’d want this. If you’re worried about your CA going offline, you should shorten your renewal period instead.
      • flowerlad 5 hours ago
        Do services such as K8S ingress and Azure web apps allow you to specify multiple certificates?

        Update: looks like the answer is yes. So then the issue is people not taking advantage of this technique.
        • woodruffw 4 hours ago
          I don’t think there’s a ton of benefit to the technique. If you’re worried about getting too close to your certificate expiry via automation, the solution is to renew earlier rather than complicate things with a ladder of valid certs.
          • kees99 4 hours ago
            Exactly. It's not like the backup certificate has validity starting at a future date.
            • flowerlad 4 hours ago
              Yes, the backup certificate can have validity starting at a future date. You just need to wait till that future date to create it.
    • throw0101c 4 hours ago
      > *We need a way to set multiple SSL certificates with overlapping duration.*

      Both Apache (SSLCertificateFile) and nginx (ssl_certificate) allow for multiple files, though they cannot be of the same algorithm: you can have one RSA, one ECC, *etc*, but not (say) an ECC and another ECC. (This may be a limitation of OpenSSL.)

      So if the RSA expires on Feb 1, you can have the ECC expire on Feb 14 or Mar 1.
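      For illustration, a rough sketch of the nginx variant (assuming nginx 1.11.0+, which accepts one certificate per key type; the paths and dates are placeholders):

          server {
              listen 443 ssl;
              server_name example.com;

              # RSA certificate, e.g. expiring Feb 1
              ssl_certificate     /etc/nginx/tls/example-rsa.pem;
              ssl_certificate_key /etc/nginx/tls/example-rsa.key;

              # ECDSA certificate, e.g. expiring Mar 1
              ssl_certificate     /etc/nginx/tls/example-ecdsa.pem;
              ssl_certificate_key /etc/nginx/tls/example-ecdsa.key;
          }

      Note that which certificate a given client receives typically depends on the signature algorithms it negotiates, so this gives overlap for redundancy rather than a per-cohort rollout.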
    • deIeted 3 hours ago
      That's a lot of words coming from people who were against this very idea not that long ago. Before Let's Encrypt existed, 90% of you were violently against the idea. "No, that's not how it's supposed to work." That's how it was.
  • 0x073 4 hours ago
    And it gets worse, as the maximum validity is dropping to 47 days in 2029.
    • JoshTriplett 4 hours ago
      On the other hand, as the lifetime gets shorter, it'll become less likely that something will go undetected for a long time.
  • superkuh 5 hours ago
    For corporations, institutions, and for-profits this matters, and there's no real good solution.

    But for human persons and personal websites, HTTP+HTTPS fixes this easily and completely. You get the best of both worlds: fragile, short-lifetime pseudo-privacy if you want it (HTTPS), and long-term stable access no matter what via HTTP. HTTPS-only does more harm than good. HTTP+HTTPS is far better than either alone.
    • deIeted 3 hours ago
      I think your only defense would be to pretend to be a bot at this point, because what you just said was completely ridiculous and embarrassing. You realize it's not a requirement that you have to post a comment when you have no idea what to say?
  • throw20251220 4 hours ago
    TLS certificates… SSL is some old Java anachronism.

    > There’s no natural signal back to the operators that the SSL certificate is getting close to expiry.

    There is. The notAfter is right there in the certificate itself. Just look at it with openssl x509 -text and set yourself up some alerts… it’s so frustrating having to refute such random bs every time when talking to clients, because some guy on the internet has no idea but blogs about their own inefficiencies.

    Furthermore, their autorenew should have been failing loud and clear; everyone should know from metrics or logs… but nobody noticed anything.
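    For what it's worth, a minimal sketch of that kind of alert against a local PEM file (the path, threshold, and alerting hook are placeholders); openssl's -checkend flag exits non-zero if the certificate expires within the given number of seconds:

        #!/usr/bin/env python3
        import subprocess
        import sys

        CERT_PATH = "/etc/ssl/certs/myservice.pem"   # hypothetical path
        THRESHOLD = 14 * 24 * 3600                   # alert two weeks out

        result = subprocess.run(
            ["openssl", "x509", "-in", CERT_PATH, "-noout",
             "-checkend", str(THRESHOLD)],
            capture_output=True, text=True,
        )
        print(result.stdout.strip())  # "Certificate will expire" / "will not expire"
        if result.returncode != 0:
            # wire this up to whatever pages you: email, Slack, PagerDuty, ...
            sys.exit(f"ALERT: {CERT_PATH} expires within {THRESHOLD // 86400} days")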
    • ronsor 3 hours ago
      > TLS certificates… SSL is some old Java anachronism.

      OpenSSL is still called OpenSSL. Despite "SSL" not being the proper name anymore, people are still going to use it.

      By the way, TLS 1.3 is actually SSL v3.4 :)
      • throw20251220 3 hours ago
        You are so confused, it's not funny. There is no such thing as SSL 3.4. OpenSSL is not SSL. There were 3 SSL versions: 1.0, 2.0, 3.0. After 3.0, the protocol was renamed to TLS. As of 2025, all versions of SSL (1.0, 2.0, 3.0) and early versions of TLS (1.0, 1.1) are considered insecure and have been deprecated by major browsers and the IETF. Modern secure communications rely exclusively on TLS 1.2 and TLS 1.3.
        • RijilV 2 hours ago
          Except, of course, on the wire, where it's a wild mess.

          The TLS 1.3 version in the record header is 3.1 (that used by TLS 1.0), and later the client version is 3.3 (that used by TLS 1.2). Neither is correct; they should be 3.4, or 4.0, or something incrementally larger than 3.1 and 3.3.

          This number basically corresponds to the SSL 3.x branch from which TLS descended. There's a good website which visually explains this:

          https://tls13.xargs.org/#client-hello/annotated

          As for whether someone is correct for calling TLS 1.x SSL 3.(x+1), IDK how much it really matters. Maybe they're correct in some nerdy way, like I could have called Solaris 3 SunOS 6 and maybe there were some artifacts in the OS to justify my feelings about that. It's certainly more proper to call things by their marketing name, but it's also interesting to note how they behave on the wire.
          • throw20251220 2 hours ago
            It could be called bunny17.1 on the wire and it would change nothing: https://datatracker.ietf.org/doc/rfc8446/
    • toast0 3 hours ago
      If we're being picky, they're X.509 certificates, not TLS or SSL.
    • tomas789 4 hours ago
      I don’t think this is as simple as it seems. For example, we have our own CA and issue several mTLS certificates, with hundreds of them currently in use across our machines. We need to check every single one (which we don’t do yet) because there is an additional distribution step that might fail selectively. And that’s not even touching on expiring CAs, which is a total nightmare.
      • viraptor 3 hours ago
        If you have your own CA, you can log every certificate with its expiry details. It's easier compared to an external CA, because you automatically get the full asset list, as long as you care to preserve it.
        • SoftTalker 17 minutes ago
          When I ran my own CA I issued certificates with 99-year expiry dates, and I never worried about them again.
      • throw20251220 3 hours ago
        Why would it be difficult? You have a single CA, so a single place where certs are issued. That means there’s a single place with the knowledge of what certs are issued for which identity, how long they are valid for, and whether a new cert has been issued for that identity prior to the previous cert’s expiration. Could not be simpler, in fact.
    • riffic 3 hours ago
      X.509 certificates
      • themafia 26 minutes ago
        They specified a lot of stuff that ultimately didn't get used, but the ITU is still my favorite standards organization.
  • deIeted 3 hours ago
    Nobody to blame but yourselves.

    How long did it take for us to get to a "letsencrypt" setup? And exactly 100ms before that existed, you (meaning 90% of you) mocked and derided that very idea.
  • thecosmicfrog 3 hours ago
    > the failure mode is the opposite of graceful degradation. It’s not like there’s an increasing percentage of requests that fail as you get closer to the deadline. Instead, in one minute, everything’s working just fine, and in the next minute, every http request fails.

    This has given me some interesting food for thought. I wonder how feasible it would be to create a toy webserver that did exactly this (failing an increasing percentage of requests as the deadline approaches)? My thought would be to start failing some requests as the deadline approaches a point where most would consider it "far too late" (e.g. 4 hours before `notAfter`). At this point, start responding to some percentage of requests with a custom HTTP status code (599 for the sake of example).

    Probably a lot less useful than just monitoring each webserver endpoint's TLS cert using synthetics, but it's given me an idea for a fun project if nothing else.
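    A toy sketch of that idea in Python (plain HTTP for simplicity, with a hard-coded fake notAfter; a real version would read it from the certificate actually being served). The 4-hour window and the 599 status follow the comment:

        #!/usr/bin/env python3
        import random
        import time
        from http.server import BaseHTTPRequestHandler, HTTPServer

        NOT_AFTER = time.time() + 4 * 3600   # pretend the cert expires in 4 hours
        WINDOW = 4 * 3600                    # start degrading 4 hours before expiry

        class DegradingHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                remaining = NOT_AFTER - time.time()
                # failure probability ramps from 0 at the window edge to 1 at expiry
                fail_prob = min(1.0, max(0.0, 1 - remaining / WINDOW))
                if random.random() < fail_prob:
                    self.send_response(599, "Certificate Nearly Expired")
                    self.end_headers()
                    return
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"still fine\n")

        HTTPServer(("", 8080), DegradingHandler).serve_forever()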
    • loloquwowndueo 3 hours ago
      Your idea shifts monitoring to end users, which doesn’t sound awesome.

      Just check the expiration of the active certificate; if it’s under a threshold (say 1 week, assuming you auto-renew when it’s 3 weeks to expiry; still serving a cert when it’s 1 week to expiration is enough signal that something went wrong), then you alert.

      Then you just need to test that your alerting system is reliable. No need to use your users as canaries.
      • thecosmicfrog 3 hours ago
        Oh absolutely, I wouldn't use this for any production system. It would be a toy hobby project. I just find the notion of turning a no-degradation failure mode into a gradual-degradation one fascinating for some reason.
    • johannes1234321 3 hours ago
      For a fun project it certainly is a fun idea.

      In real life, I guess there are people who don't monitor at all. For them, failing requests would go unnoticed ... for the others, monitoring must be easy.

      But I think the core thing might be to make monitoring SSL lifetime the "obvious" default: all the Grafana dashboards etc. should have such an entry.

      Then as soon as I set up a monitoring stack I get that reminder as well.
    • firesteelrain 3 hours ago
      This canary is a good thought. The problem the article highlights is that people don’t practice updates enough and assume someone else or something is handling it. You only get better at it the more often it happens, which is partly why long expirations are not ideal. This is what the article is highlighting as the main issue.
      • loloquwowndueo 3 hours ago
        It’s not a good thought. Run a single client (Uptime Kuma) and ask it to alert you on expiration proximity, i.e. implement proper monitoring and alerting. No need to randomly degrade your users’ experience and hope they’ll notify you instead of shrugging and going to a site that doesn’t throw made-up HTTP errors at them randomly.
        • firesteelrain 3 hours ago
          If a “canary” is degrading users, it’s misdesigned.

          The canary narrows the blast radius and time-to-detection.
          • loloquwowndueo 2 hours ago
            Agreed. That’s exactly what the proposed canary is - misdesigned.