They're useful for attestation, boot measurement, and maybe passkeys, but I wouldn't trust them to securely handle FDE keys, for several reasons. Not only do you have to trust the TPM manufacturer (and there are many), but they also have a bad track record (look up Chris Tarnovsky's presentation on breaking TPM 1.x chips). And while parameter encryption has been phased out (or was never used in the first place), what's even worse is that cryptsetup stores the key in plaintext within the TPM, and this remains unaddressed to this day.

https://arxiv.org/abs/2304.14717

https://github.com/systemd/systemd/issues/37386

https://github.com/systemd/systemd/pull/27502
My pet peeve is that the entire TPM design assumes that, at any given time, all running software has exactly one privilege level.

It's not hard to protect an FDE key in a way that one must compromise both the TPM *and* the OS to recover it [0]. What is very awkward is protecting it such that a random user on the system who recovers the sealed secret (via a side channel, or simply by booting into a different OS and reading it) cannot ask the TPM to decrypt it. Or protecting one user's TPM-wrapped SSH key from another user.

I have some kludgey ideas for how to do this, and maybe I'll write them up some day.

[0] Seal a random secret to the TPM and wrap the actual key, *in software*, with the sealed secret. Compromising the TPM gets the wrapping key but not the wrapped key.
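Here's a minimal sketch of the scheme in [0] in Python, using the pyca/cryptography package. The TPM seal/unseal calls themselves are elided, and all names here are illustrative:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap_fde_key(fde_key: bytes) -> tuple[bytes, bytes]:
        # Random 256-bit wrapping secret: this is the only thing the TPM
        # ever sees (it gets sealed to the PCR state, not shown here).
        wrapping_secret = os.urandom(32)
        nonce = os.urandom(12)
        # The actual FDE key is wrapped in software, never sent to the TPM.
        blob = nonce + AESGCM(wrapping_secret).encrypt(nonce, fde_key, None)
        return wrapping_secret, blob  # seal the former; the OS keeps the latter

    def unwrap_fde_key(wrapping_secret: bytes, blob: bytes) -> bytes:
        # Run at boot, after the TPM unseals wrapping_secret.
        return AESGCM(wrapping_secret).decrypt(blob[:12], blob[12:], None)

Compromising the TPM alone yields only wrapping_secret; recovering the FDE key additionally requires the wrapped blob held on the OS side.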
Root-of-trust measurement (RTM) isn't foolproof either.

https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-han.pdf
> The key difference in threat models is that the device manufacturer often needs to protect their intellectual property (firmware, algorithms, and data) from the end-user or third parties, whereas on a PC, the end-user is the one protecting their assets.

I would love to see more focus on device manufacturers protecting the user instead of trying to protect themselves.

A prime example where the TPM could be fantastic: embedded devices that are centrally coordinated, such as networking equipment. Imagine if all UniFi devices performed a measured boot and attested to their PCR values before the controller would provision them. This could give a very strong degree of security, even on untrusted networks and even if devices had previously been connected and provisioned by someone else. (Yes, there's a window when you first connect a device during which someone else can provision it first.)

But instead, companies seem to obsess over protecting their IP, even when there is almost no commercial harm to them when someone inevitably recovers the decrypted firmware image.
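As a sketch of the controller-side gate this would enable (all names hypothetical, not a real UniFi API; verifying the quote's signature against the device's attestation key is elided):

    import hashlib, hmac

    # Placeholder: in practice this digest would come from a reference
    # device running known-good firmware.
    KNOWN_GOOD_PCR_DIGEST = hashlib.sha256(b"reference PCR composite").digest()

    def should_provision(quoted_pcr_digest: bytes,
                         quote_signature_valid: bool) -> bool:
        # Provision only if the quote genuinely came from the device's TPM
        # *and* its measured-boot state matches what we expect.
        return quote_signature_valid and hmac.compare_digest(
            quoted_pcr_digest, KNOWN_GOOD_PCR_DIGEST)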
Many of these companies outsource manufacturing to places with weak intellectual-property protection. It would be easy for the contract manufacturer to run an extra batch and sell the devices directly, and this is prevented only by firmware encryption. I hope this explains the paranoia of these companies.
And I'd like a pony, but we can't get what we want, only what we can take, and asymmetric encryption combined with Western law enables hardware manufacturers to take control of your property away from you. I'm not holding my breath for that to change anytime soon…
Note that it's really easy to conflate the TPM and the hardware root of trust (in part because UEFI Secure Boot was awfully named); the two things are linked _only_ by measurements.

What a TPM provides is a chip with some root key material (seeds) plus registers (PCRs) that can be extended with external data in a black-box way, and that black-box state can then be used to perform cryptographic operations. So essentially, it is useful only for sealing data using the PCR state or attesting that the state matches.

This becomes an issue once you realize what's sending the PCR values: firmware, which needs its own root of trust.

This takes you to Intel Boot Guard and AMD PSB/PSP, which implement traditional secure-boot root of trust starting from a public key hash fused into the platform SoC. Without these systems, there's not really much point using a TPM, because an attacker could simply send the "correct" hashes for each PCR and reproduce the internal black-box TPM state of a "good" system.
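The extend operation itself is just hash chaining, which is why the final PCR value commits to the entire ordered sequence of measurements. A minimal SHA-256 sketch:

    import hashlib

    def pcr_extend(pcr: bytes, measurement_digest: bytes) -> bytes:
        # TPM2 semantics: PCR_new = H(PCR_old || H(measured data))
        return hashlib.sha256(pcr + measurement_digest).digest()

    pcr = bytes(32)  # PCRs reset to all zeros
    for stage in [b"firmware", b"bootloader", b"kernel"]:
        pcr = pcr_extend(pcr, hashlib.sha256(stage).digest())
    # Changing or reordering any stage changes the final value; but the
    # TPM only ever sees the digests it is *sent*, which is the point above.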
It is wild that session encryption is not enabled by default on these chips. I feel like most vendors just slap a TPM on the board and think they're safe, without actually configuring it properly. The article is right that physical access usually means game over anyway, so it seems like a lot of effort for a small gain.
If I remember correctly, it's up to the client program to set up the session; it's not something the vendor's implementation controls. It's conceptually similar to how an HTTPS client has to perform a TLS handshake after opening a socket, rather than just speaking plain HTTP over it.
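To illustrate the shape of the client's job (conceptual only; real TPM2 sessions use KDFa and a specific wire format, not HKDF as here): after establishing a session secret and exchanging nonces, the client derives a symmetric key and encrypts the sensitive command parameter before it crosses the bus.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def session_key(session_secret: bytes, nonce_caller: bytes,
                    nonce_tpm: bytes) -> bytes:
        # Both sides derive the same key; fresh nonces prevent replay.
        return HKDF(algorithm=hashes.SHA256(), length=16,
                    salt=nonce_caller + nonce_tpm, info=b"CFB").derive(session_secret)

    def encrypt_param(key: bytes, param: bytes) -> bytes:
        # e.g. param = a key being sealed; it no longer transits in plaintext
        iv = os.urandom(16)
        enc = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
        return iv + enc.update(param) + enc.finalize()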
It doesn't help that the TPM spec is so full of optional features (and that there are N spec versions), so it's often annoying to find out what a vendor even supports without signing an NDA and then some.

TPMs work great when you have a mountain of supporting libraries to abstract them away from you. Unfortunately, that's often not the case in the embedded world.
In many industries, once someone has physical access to a device, all bets are off. And when used correctly, TPMs can provide tons of value even when not encrypting the bus.
Yes, definitely. I would use a TPM on a Pi device regardless of the imperfections if I could find one for a normal price. My threat model is that I don't store anything sensitive on the device, but as a guardrail it also cannot be trivially decrypted without the hardware token.

I am using a TPM for this on x86 machines that I want to boot headless. If I need to replace the disk, I can just do a regular wipe and feel pretty comfortable.

I'd use a YubiKey or another security token with the Pi, but the device needs to boot without user intervention, and the decryption code I'm aware of forces user presence whether or not the YubiKey requires it.
Sigma-star puts out many very high-quality embedded blog posts, covering unpopular and rarely discussed topics in real depth.
Do you really need a TPM if you have something like ARM TrustZone?
They're different problem spaces: TrustZone is a trusted execution environment, while TPM is an API for key storage and attestation that revolves around system state (PCRs).

Essentially, TPM is a standardized API for implementing a few primitives over the state of the PCRs. Fundamentally, TPM is just the ability to say "encrypt and store this blob in a way that it can only be recovered if all of these values were sent in the right order," or "sign this challenge with an attestation that can only be produced if these values match." You can use a TEE to implement a TPM, and on most modern x86 systems (fTPM) this is how it's done anyway.

You don't really need an fTPM either, in some sense; one could use TEE primitives to write a trusted application that performs similar tasks. However, TPM is the API by which most early-boot systems (UEFI) provide their measurements, so it's the easiest way to do system attestation on commodity hardware.
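A toy model of the first primitive (sealing), with the state-dependence made explicit. A real TPM keeps the key material inside the chip rather than deriving it like this:

    import hashlib, os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def fold(measurements):
        pcr = bytes(32)
        for m in measurements:
            pcr = hashlib.sha256(pcr + m).digest()  # PCR extend
        return pcr

    def seal(secret: bytes, expected: list) -> bytes:
        key = hashlib.sha256(fold(expected)).digest()
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, secret, None)

    def unseal(blob: bytes, observed: list) -> bytes:
        # Raises InvalidTag unless the same values arrived in the same order.
        key = hashlib.sha256(fold(observed)).digest()
        return AESGCM(key).decrypt(blob[:12], blob[12:], None)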
I think the general problem is that SoC-based security relies on internal "fuses" that are write-once, as the name suggests, which usually means they are usable by the manufacturer only.

TPMs can be reprogrammed by the customer. If the device needs to be returned for repairs, the customer can remove their TPM, so that even the manufacturer cannot crack open the box and get access to their secrets.

That's only theory, though, as the box could actually be "dirty" inside; for instance, it could leak the secrets obtained from the TPM to mass storage via a swap partition (though I don't think those are common in embedded systems).
Sure, why not? You have a reference implementation both for TrustZone OP-TEE (from Microsoft!) and in the Linux kernel. No need to code anything; everything is already there, tested and ready to work.

https://github.com/OP-TEE/optee_ftpm

Or do you mean a dedicated TPM?
As I understand it, you cannot actually deploy an fTPM (in embedded and other scenarios) unless you run your own full PKI and have your CA signed off by Microsoft or some other TPM consortium member. So sure, the code exists, but it's also just a dummy implementation, and for any embedded product that is not super cost-conscious, I will forever recommend just buying the $1 chip, connecting it via SPI, and living happily ever after. Check the box; in embedded, most non-technical people can't even begin to understand what FDE means anyway.

If you don't need the TPM checkbox, most vendors have simple signing fuses that are a lot easier than going fTPM.
I mean a separate chip.
Well, you have much more control over the lower-level boot process on ARM chips, and each SoC manufacturer has its own implementation of trusted boot that relies on cryptography and secrets inside the SoC rather than on a TPM, as in the x86/UEFI boot process.

In the context of trusted boot: not much. If your specific application doesn't require TPM 2.0's advanced features, like separate NVRAM and different locality levels, then it's not worth using a dedicated chip.

However, if you want something like PIN brute-force protection with a cooldown on a separate chip, a dTPM will do that. This is more or less exactly why Apple, Google, and other major players keep the most sensitive material on a separate chip: to prevent security bypasses when the attacker has gained code execution (or some kind of reset) on the application processor.
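A toy model of the lockout logic such a chip enforces, the crucial part being that the failure counter and clock live outside the application processor (names illustrative):

    import hmac, time

    class AntiHammer:
        def __init__(self, max_tries: int = 5, cooldown_s: float = 600.0):
            self.max_tries, self.cooldown_s = max_tries, cooldown_s
            self.failures, self.locked_until = 0, 0.0

        def check_pin(self, pin: bytes, correct: bytes) -> bool:
            # Refuse everything while in cooldown, no matter what the
            # (possibly compromised) application processor asks for.
            if time.monotonic() < self.locked_until:
                raise PermissionError("locked out; try again later")
            if hmac.compare_digest(pin, correct):
                self.failures = 0
                return True
            self.failures += 1
            if self.failures >= self.max_tries:
                self.locked_until = time.monotonic() + self.cooldown_s
            return False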
> their own implementation of Trusted Boot which relies on the cryptography and secrets inside the SoC rather than TPM as in x86/UEFI boot process.

TPM and x86 trusted boot / root of trust are completely separate things, linked _only_ by the provision of measurements from the (_presumed_!) good firmware to the TPM.

x86 trusted boot relies on the same SoC manufacturer type stuff as in ARM land, starting with a fused public key hash; on AMD it's driven by the PSP (which is ARM!) and on Intel it's a mix of the TXE and the ME.

This is a common mistake and very important to point out, because using a TPM alone on x86 doesn't prove anything; unless you _also_ have a root of trust, an attacker could just be feeding the "right" hashes to the TPM and you'd never know better.
On ARM, you control the whole boot process on many SoCs and can make your own bespoke secure/trusted/measured boot chain, from the boot ROM to the very last boot stages (given that your SoC manufacturer has a root of trust and documentation on how to use it), without a TPM.

You more or less can't do that on x86; you have to rely on existing proprietary code to implement measured boot using the TPM (as the only method), on top of which you can implement trusted boot, using the TPM and all the measurements that proprietary code already made into it.
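One link of such a bespoke chain might look like the sketch below (illustrative only): each stage carries the public key for the next image and refuses to hand over control unless the signature verifies.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def load_next_stage(image: bytes, signature: bytes,
                        pubkey_bytes: bytes) -> bytes:
        pub = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
        try:
            pub.verify(signature, image)  # raises if the image was tampered with
        except InvalidSignature:
            raise SystemExit("refusing to boot unsigned image")
        return image  # a real stage would now jump into this image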
You can do that on x86 too; the main difference is a combination of openness and who you need to sign an NDA with (which, granted, is a big difference, since most ARM vendors are more likely to be your friend than Intel). However, there are a ton of x86-based arcade machines, automotive systems, and so on that have a secured root of trust and do not use UEFI at all. On Intel, you get unprovisioned chips and burn your KM hash into the PCH FPFs to advance the lifecycle state at EOM, which is basically the same thing you'd do with most ARM SoCs.
There have been many vulnerabilities in TrustZone implementations, and both Google and Apple now use separate secure-element chips. In Apple's case, they put the secure element on their main SoC, but on devices whose silicon wasn't designed in house, like the Intel Macs, they used the T2 security chip. On all Pixel devices, I'm pretty sure the Titan has been a separate chip (at least since they started including one at all).

So yes, incorporating a separate secure element / TPM chip into a design is probably more secure, but ultimately the right call will always depend on your threat model.