6 comments

  • pregnenolone2 hours ago
    They’re useful for attestation, boot measurement, and maybe passkeys, but I wouldn't trust them to securely handle FDE keys, for several reasons. Not only do you have to trust the TPM manufacturer (and there are many), but TPMs also have a bad track record (look up Chris Tarnovsky’s presentation on breaking TPM 1.x chips). Parameter encryption has either been phased out or was never used in the first place, and, even worse, cryptsetup stores the key in plaintext within the TPM; this vulnerability remains unaddressed to this day.

    https://arxiv.org/abs/2304.14717

    https://github.com/systemd/systemd/issues/37386

    https://github.com/systemd/systemd/pull/27502
    • amluto1 hour ago
      My pet peeve is that the entire TPM design assumes that, at any given time, all running software has exactly one privilege level.

      It’s not hard to protect an FDE key in a way that one must compromise both the TPM *and* the OS to recover it [0]. What is very awkward is protecting it such that a random user on the system who recovers the sealed secret (via a side channel, or simply by booting into a different OS and reading it) cannot ask the TPM to decrypt it. The same goes for protecting one user’s TPM-wrapped SSH key from another user.

      I have some kludgey ideas for how to do this, and maybe I’ll write them up some day.

      [0] Seal a random secret to the TPM and wrap the actual key, *in software*, with the sealed secret. Compromising the TPM gets the wrapping key but not the wrapped key.
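      A minimal sketch of the scheme in [0], with the TPM interaction stubbed out. All names are invented for illustration; real code would get `sealed_secret` from TPM2_Unseal and use AES-GCM from a proper crypto library rather than this dependency-free HMAC keystream.

```python
# Sketch of the "double wrap" idea: seal a random secret to the TPM, then
# wrap the real FDE key in software under a key derived from that secret.
# The TPM never sees the FDE key itself.
import hashlib
import hmac
import os

def hmac_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a keystream block by block: HMAC(key, nonce || counter).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap(sealed_secret: bytes, fde_key: bytes) -> tuple[bytes, bytes]:
    # Wrap the FDE key *in software* with a key derived from the sealed secret.
    nonce = os.urandom(16)
    wrap_key = hashlib.sha256(b"wrap-key-v1" + sealed_secret).digest()
    ct = bytes(a ^ b for a, b in zip(fde_key, hmac_keystream(wrap_key, nonce, len(fde_key))))
    return nonce, ct

def unwrap(sealed_secret: bytes, nonce: bytes, ct: bytes) -> bytes:
    wrap_key = hashlib.sha256(b"wrap-key-v1" + sealed_secret).digest()
    return bytes(a ^ b for a, b in zip(ct, hmac_keystream(wrap_key, nonce, len(ct))))

sealed_secret = os.urandom(32)   # stand-in for TPM2_Unseal output
fde_key = os.urandom(64)
nonce, ct = wrap(sealed_secret, fde_key)
assert unwrap(sealed_secret, nonce, ct) == fde_key
```

      An attacker who only compromises the TPM learns `sealed_secret` (and hence the wrapping key), but still has to obtain the ciphertext from the running OS to recover the FDE key.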
      • zauguin19 minutes ago
        Can't that just be done by sealing to PCRs? Protect the unsealing key with a PCR that depends on the OS (I usually use the Secure Boot signing-key PCRs, since they differ between systems but are stable across updates) plus a PCR that the OS extends (or, for material stored in NV, read-lock it during boot). Then any process launched later can no longer access it, and booting another OS doesn't help either.
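        A toy model of that flow, with no real TPM involved (the class and method names are invented; an actual setup would use tpm2_pcrextend and a TPM2_PolicyPCR session):

```python
# Toy model of "seal to a PCR value, then extend the PCR to lock it".
import hashlib
import os

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM PCR extend semantics: new = H(old || H(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

class ToyTPM:
    def __init__(self):
        self.pcr = b"\x00" * 32          # PCRs reset to zeros at boot
        self.store = {}                  # sealed blobs: policy value -> secret

    def seal(self, secret: bytes) -> bytes:
        self.store[self.pcr] = secret    # bind the secret to the current PCR value
        return self.pcr

    def unseal(self, policy: bytes) -> bytes:
        if self.pcr != policy:
            raise PermissionError("PCR state does not match seal-time policy")
        return self.store[policy]

tpm = ToyTPM()
tpm.pcr = extend(tpm.pcr, b"bootloader+kernel")   # measured boot
policy = tpm.seal(os.urandom(32))
tpm.unseal(policy)                                # works during early boot

# Once the OS caps the PCR with one more extend, later processes (or another
# booted OS, which produces different measurements) can no longer unseal.
tpm.pcr = extend(tpm.pcr, b"cap-after-unseal")
try:
    tpm.unseal(policy)
    raise AssertionError("unseal should have failed")
except PermissionError:
    pass
```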
    • Avamander2 hours ago
      Root-of-trust measurement (RTM) isn't foolproof either.

      https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-han.pdf
  • amluto3 hours ago
    > The key difference in threat models is that the device manufacturer often needs to protect their intellectual property (firmware, algorithms, and data) from the end-user or third parties, whereas on a PC, the end-user is the one protecting their assets.

    I would love to see more focus on device manufacturers protecting the user instead of trying to protect themselves.

    Prime example where the TPM could be fantastic: embedded devices that are centrally coordinated, such as networking equipment. Imagine if all UniFi devices performed a measured boot and attested to their PCR values before the controller would provision them. This could give a very strong degree of security, even on untrusted networks and even if devices have been previously connected and provisioned by someone else. (Yes, there's a window when you connect a device during which someone else can provision it first.)

    But instead, companies seem to obsess over protecting their IP, even when there is almost no commercial harm to them when someone inevitably recovers the decrypted firmware image.
    • direwolf201 hour ago
      Many of these companies outsource manufacturing to places with weak intellectual property protection. It would be easy for the contract manufacturer to run an extra batch and sell the units directly, and only firmware encryption prevents this. I hope that explains these companies' paranoia.
    • ls6121 hour ago
      And I’d like a pony, but we can’t get what we want, only what we can take, and asymmetric cryptography combined with Western law enables hardware manufacturers to take control of your property away from you. I’m not holding my breath for that to change anytime soon…
  • bri3d1 hour ago
    Note that it's really easy to conflate TPMs and a hardware root of trust (in part because UEFI Secure Boot was awfully named), and the two things are linked _only_ by measurements.

    What a TPM provides is a chip with some root key material (seeds) that can be extended with external data (PCRs) in a black-box way, after which that black-box state can be used to perform cryptographic operations. So essentially, it is useful only for sealing data using the PCR state or attesting that the state matches.

    This becomes an issue once you realize what's sending the PCR values: firmware, which needs its own root of trust.

    That takes you to Intel Boot Guard and AMD PSB/PSP, which implement a traditional secure-boot root of trust starting from a public-key hash fused into the platform SoC. Without these systems, there's not really much point in using a TPM, because an attacker could simply send the "correct" hashes for each PCR and reproduce the internal black-box TPM state of a "good" system.
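    The replay problem can be shown in a few lines. This is only a sketch of the extend operation; the digests and the `boot` helper are made up for illustration:

```python
# The TPM only folds whatever digests it is *sent* into a PCR
# (new = H(old || digest)). Without a hardware root of trust forcing the
# real firmware to do the measuring, malicious code can simply replay the
# known-good digests and reproduce the same PCR state.
import hashlib

def extend(pcr: bytes, digest: bytes) -> bytes:
    return hashlib.sha256(pcr + digest).digest()

def boot(measurements):
    pcr = b"\x00" * 32
    for d in measurements:
        pcr = extend(pcr, d)
    return pcr

good_digests = [hashlib.sha256(s).digest()
                for s in (b"firmware-v1", b"bootloader", b"kernel")]

honest_pcr = boot(good_digests)     # measured by trusted firmware
replayed_pcr = boot(good_digests)   # attacker sends the same digests while
                                    # actually running different code
assert honest_pcr == replayed_pcr   # the TPM cannot tell the difference
```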
  • dfajgljsldkjag3 hours ago
    It is wild that session encryption is not enabled by default on these chips. I feel like most vendors just slap a TPM on the board and assume they're safe, without actually configuring it properly. The article is right that physical access usually means game over anyway, so it seems like a lot of effort for a small gain.
    • derekerdmann3 hours ago
      If I remember correctly, it's up to the client program to set up the session; it's not something the vendor's implementation handles. It's conceptually similar to how an HTTPS client must perform a TLS handshake after opening a socket rather than just sending plaintext HTTP.
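      A toy illustration of why the session is the client's job: the client contributes the entropy, both sides derive a key, and sensitive command parameters cross the bus encrypted. All names and the KDF here are invented; real TPM 2.0 sessions use TPM2_StartAuthSession with an RSA/ECC-protected salt and the KDFa construction.

```python
# Toy model of TPM parameter encryption: the client starts the session and
# shares a salt with the chip, after which bus traffic carrying sensitive
# parameters is encrypted. The salt-transport step (normally encrypted to
# the TPM's public key) is stubbed out here.
import hashlib
import hmac
import os

def kdf(secret: bytes, label: bytes) -> bytes:
    # Simplified stand-in for the spec's KDFa.
    return hmac.new(secret, label, hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Symmetric keystream cipher: HMAC(key, counter) blocks XORed with data.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

salt = os.urandom(32)                  # client-chosen; sent protected to the TPM
client_key = kdf(salt, b"session")
tpm_key = kdf(salt, b"session")        # the TPM derives the same key

secret_param = b"disk passphrase"      # e.g. an auth value inside a command
on_the_bus = xor_stream(client_key, secret_param)

assert on_the_bus != secret_param                       # a bus sniffer sees ciphertext
assert xor_stream(tpm_key, on_the_bus) == secret_param  # the TPM recovers it
```

      If the client never starts such a session, the same parameter simply crosses the bus in plaintext, which is the default-off behavior being complained about above.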
      • bangaladore2 hours ago
        It doesn't help that the TPM spec is so full of optional features (and there are N spec versions), so it's often annoying to find out what a vendor even supports without signing an NDA, and then some.

        TPMs work great when you have a mountain of supporting libraries to abstract them away from you. Unfortunately, that's often not the case in the embedded world.
        • RedShift12 hours ago
          Even on desktop it's terrible. I wanted to protect some private keys for a Java application, but there is no way to talk to a TPM from Java, so... *shrug*.
          • Nextgrid1 hour ago
            The TPM needs a way to authenticate your Java application, since otherwise it does not know whether it's actually talking to *your* application or something pretending to be it.

            This means you generally need an authenticated boot chain (via PCR measurements) and then have your Java app "seal" the key material to that.

            It's not a problem with the TPM per se; it's no different if you were using an external smartcard or HSM. The HSM still needs to ensure it's talking to the right app and not an impersonator (and if you use keypair authentication for that, then your app must store the keypair somewhere, so you've just moved the authentication problem elsewhere).
    • bangaladore2 hours ago
      In many industries, once someone has physical access to a device, all bets are off. And when used correctly, TPMs can provide tons of value even when not encrypting the bus.
      • plagiarist39 minutes ago
        Yes, definitely. I would use a TPM on a Pi device regardless of the imperfections if I could find one for a normal price. My threat model is that I don't store anything sensitive on the device, but as a guardrail it also cannot be trivially decrypted without the hardware token.

        I am using a TPM for this on x86 machines that I want to boot headless. If I need to replace the disk, I can just do a regular wipe and feel pretty comfortable.

        I'd use a YubiKey or another security token with the Pi, but the device needs to boot without user intervention, and the decryption code I'm aware of forces user presence whether or not the YubiKey requires it.
  • ValdikSS2 hours ago
    Sigma-star publishes many very high-quality embedded blog posts, covering unpopular and rarely discussed topics in considerable depth.
  • jhallenworld2 hours ago
    Do you really need a TPM if you have something like ARM TrustZone?
    • bri3d31 minutes ago
      They're different problem spaces: TrustZone is a trusted execution environment, while TPM is an API for key storage and attestation that revolves around system state (PCRs).

      Essentially, TPM is a standardized API for implementing a few primitives over the state of PCRs. Fundamentally, TPM is just the ability to say "encrypt and store this blob in a way that it can only be recovered if all of these values were sent in the right order," or "sign this challenge with an attestation that can only be produced if these values match." You can use a TEE to implement a TPM, and on most modern x86 systems (fTPM) this is how it is done anyway.

      In some sense you don't really need an fTPM either; one could use TEE primitives to write a trusted application that performs similar tasks. However, TPM is the API through which most early-boot systems (UEFI) provide their measurements, so it's the easiest way to do system attestation on commodity hardware.
    • astrobe_1 hour ago
      I think the general problem is that SoC-based security relies on internal "fuses" that are write-once, as the name suggests, which usually means they are usable only by the manufacturer.

      TPMs can be reprogrammed by the customer. If the device needs to be returned for repairs, the customer can remove their TPM, so that even the manufacturer cannot crack open the box and get access to their secrets.

      That's only theory, though, as the box could actually be "dirty" inside; for instance, it could leak the secrets obtained from the TPM to mass storage *via* a swap partition (I don't think those are common in embedded systems, though).
    • ValdikSS2 hours ago
      Sure, why not? You have a reference implementation both for TrustZone/OP-TEE (from Microsoft!) and in the Linux kernel. No need to code anything; everything is already there, tested and ready to work.

      https://github.com/OP-TEE/optee_ftpm

      Or do you mean a dedicated TPM?
      • stefan_1 hour ago
        As I understand it, you cannot actually deploy an fTPM (in embedded and other scenarios) unless you run your own full PKI and have your CA signed off by Microsoft or some other TPM consortium member. So sure, the code exists, but it's also just a dummy implementation, and for any embedded product that is not super cost-conscious I will forever recommend just buying the $1 chip, connecting it via SPI, and living happily ever after. Check the box; in embedded, most non-technical people can't even begin to understand what FDE means anyway.

        If you don't need the TPM checkbox, most vendors have simple signing fuses that are a lot easier than going the fTPM route.
      • jhallenworld2 hours ago
        I mean a separate chip.
        • ValdikSS1 hour ago
          Well, you have much more control over the lower-level boot process on ARM chips, and each SoC manufacturer has its own implementation of trusted boot that relies on the cryptography and secrets inside the SoC rather than on a TPM, as in the x86/UEFI boot process.

          In the context of trusted boot: not much. If your specific application doesn't require TPM 2.0's advanced features, like separate NVRAM and different locality levels, then it's not worth using a dedicated chip.

          However, if you want something like PIN brute-force protection with a cooldown on a separate chip, a dTPM will do that. This is more or less exactly why Apple, Google, and other major players put their most sensitive material on a separate chip: to prevent security bypasses when the attacker has gained code execution (or some kind of reset) on the application processor.
          • bri3d1 hour ago
            > their own implementation of Trusted Boot which relies on the cryptography and secrets inside the SoC rather than TPM as in x86/UEFI boot process.

            TPM and x86 trusted boot / root of trust are completely separate things, linked _only_ by the provision of measurements from the (_presumed_!) good firmware to the TPM.

            x86 trusted boot relies on the same SoC-manufacturer machinery as in ARM land, starting with a fused public-key hash; on AMD it's driven by the PSP (which is ARM!), and on Intel it's a mix of the TXE and the ME.

            This is a common mistake and very important to point out, because using a TPM alone on x86 doesn't prove anything; unless you _also_ have a root of trust, an attacker could just be feeding the "right" hashes to the TPM and you'd never know better.
            • ValdikSS31 minutes ago
              On ARM, you control the whole boot process on many SoCs and can build your own bespoke secure/trusted/measured boot chain, from the boot ROM to the very last boot stages (given that your SoC manufacturer has a root of trust and documentation on how to use it), without a TPM.

              You more or less can't do that on x86: you have to rely on existing proprietary code to implement measured boot using the TPM (as the only method), on top of which you can implement trusted boot, using the TPM and all the measurements that proprietary code has already made into it.
              • bri3d25 minutes ago
                You can do that on x86 too; the main difference is a combination of openness and who you need to sign an NDA with (which, granted, is a big difference, since most ARM vendors are more likely to be your friend than Intel). There are a ton of x86-based arcade machines, automotive systems, and so on that have a secured root of trust and do not use UEFI at all. On Intel, you get unprovisioned chips and burn your KM hash into the PCH FPFs to advance the lifecycle state at EOM, which is basically the same thing you'd do with most ARM SoCs.
    • zorgmonkey1 hour ago
      There have been many vulnerabilities in TrustZone implementations, and both Google and Apple now use separate secure-element chips. In Apple's case they put the secure element on their main SoC, but on devices with parts not designed in house, like Intel Macs, they had the T2 security chip. On all Pixel devices I'm pretty sure the Titan has been a separate chip (at least since they started including it at all).

      So yes, incorporating a separate secure element / TPM chip into a design is probably more secure, but ultimately the right call will always depend on your threat model.