Half the entropy is trying to figure out which pieces of this article's text are supposed to be the silly falsehoods being corrected, and which pieces are just the second or third paragraph of a preceding 'Fact'. Deadpool is easier to follow.
Hey, someone submitted my old article. On my birthday!<p>Oh, people hate it… and even someone I definitely look up to.<p>You're absolutely right, though, I don't remember it being that bad, and probably I just read over it when resurrecting the article, because I'm so familiar with every word.<p>I'll slap some <hr> tags on it when I'm back home from my holiday.
If it helps, it used to look like this: <a href="https://web.archive.org/web/20140309183752/http://www.2uo.de/myths-about-urandom/" rel="nofollow">https://web.archive.org/web/20140309183752/http://www.2uo.de...</a><p>Definitely a lot more readable! Something must have changed in the meantime.<p>It looks like some links have gone too, like the one in the sentence "how does /dev/random know how much entropy there is available to give out?"
Most importantly -- Happy birthday!!!
Label the myths rather than leaving them as plain statements.
Happy birthday!<p>This'll all wait for later, hope you're enjoying a nice mai tai on your holiday.
I saw a note from an earlier year's discussion saying the css has been changed over the years. Perhaps it was easier then to discern fact or myth, truth or fiction.
I pulled up a random version from 2014, and it's more readable: <a href="https://web.archive.org/web/20141023082929/https://www.2uo.de/myths-about-urandom/" rel="nofollow">https://web.archive.org/web/20141023082929/https://www.2uo.d...</a>
Glad I'm not the only one. I'm more or less baffled reading that.
the article is why you need to tell your LLM to 'make noistakes'
This is as good a place as any to ask, since last time I didn't get any answer: has there ever been a serious Linux exploit based on manipulating or predicting a bad PRNG? Apart from the Debian SSH key generation fiasco from years ago, of course.<p>Having a good entropy source makes mathematical sense, and you want something a bit more "random" than a dice roll, but I wonder at which point it becomes security theatre.<p>Of all the possible avenues an attacker might have for exploiting a modern OS, I figure kernel PRNG prediction to be very, very far down the list of things to try.
Some of the paranoia has been proven correct. For example, both Intel and AMD have had RDRAND bugs, so not relying on it as the sole source was the correct choice.
It’s both hard to attack and a heavily audited system with a lot of attention paid to it.<p>That being said, [1] from 2012. The challenge with security is that structural weaknesses can take a long time to be discovered, but once they are it’s catastrophic. Modern Linux finally switched to a CSPRNG with a proper construction and relies less on the numerology of entropy estimation it had been using (i.e., real security instead of theater). RDRAND has also been there for a long time on the x86 side, which is useful because even if it’s insecure it gets mixed with other entropy sources like instruction execution time and scheduling jitter to protect standalone servers and IoT devices.<p>Of course, you hit the nail on the head about the challenge of distinguishing security theater: you won’t know whether the hardening is useful until there’s a problem. But there are enough knowledgeable people on it that it’s less security theater than it might seem if you know what’s going on.<p>[1] <a href="https://www.usenix.org/system/files/conference/usenixsecurity12/sec12-final228.pdf" rel="nofollow">https://www.usenix.org/system/files/conference/usenixsecurit...</a>
30 years ago the BSDs already had a non-blocking /dev/random (there was no difference from /dev/urandom). OpenBSD especially wouldn’t have shipped something known to be insecure. A blocking /dev/random probably caused more issues (DoS, random hangs, etc.) than a non-blocking CSPRNG would have.
Linux did /dev/random first, so naturally it had the oldest design for a few years, without the security-expert scrutiny and experience that the other OSes had for their implementations.<p>OpenBSD didn't exist yet when /dev/random and /dev/urandom were created for Linux.
/dev/[u]random is actually a CSPRNG. it uses a cryptographic hash function to mix in every drop of randomness accessible to the kernel. predicting its output without compromising the kernel entails predicting all the randomness that went into it; past a certain point you are better off bruteforcing the internal state directly, and that's intractable.<p>the greatest danger is right after boot, when it's possible the kernel didn't have enough randomness to mix in yet. not as much of an issue on modern systems.
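To illustrate the comment above: on a modern Linux system, userland code normally just draws from the kernel CSPRNG rather than reading the device file directly. A minimal sketch using Python's standard library (not from the comment itself; `os.urandom` uses getrandom(2) or /dev/urandom under the hood):

```python
import os
import secrets

# Both of these ultimately draw from the kernel CSPRNG on Linux.
key = os.urandom(32)           # 32 raw bytes, e.g. a symmetric key
token = secrets.token_hex(16)  # 16 random bytes rendered as 32 hex chars

print(len(key), len(token))
```

The point is that "predicting" these outputs means predicting the kernel's internal state, which (after the pool is seeded) is intractable.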
I think this one is among the most significant findings:
<a href="https://factorable.net/" rel="nofollow">https://factorable.net/</a><p>I also believe there were some android ASLR issues based on the same weakness (i.e., low early boot-time entropy).<p>But this is all quite old, and there've been massive improvements. Basically, "don't use a very old linux kernel" is your mitigation for these issues.
There was a bitcoin key generation flaw on android, and AFAIK people lost money.
You can analyze it much like you'd analyze a password. If you construct a password from four words taken from a list of 1024 words, that's 40 bits of entropy. On average, a brute-force attacker would have to try 2^39 random passwords (half the possibilities) before cracking your account. You can then apply that number to the time/money required for one attempt, and see if it's sufficiently secure for your tastes. If the answer comes back as 10 minutes, maybe it's not good enough. If it's 10 quadrillion years, you're probably OK.<p>If you have a bad PRNG, you should be able to quantify it in terms of bits. The Debian bug resulted in 15 bits of randomness, since all inputs to the PRNG were erased except for the PID, which was 15 bits at the time.<p>Another real-world example, albeit not Linux. I once worked on a program that had the option of encrypting save files. The encryption was custom (not done by me!) and had a bit of an issue. The encryption itself was not bad, but the save file's master encryption key was generated from the current time. This reduced the number of bits of randomness to well within brute-force range, especially if you could guess roughly when the key was created. This was convenient for users who had lost their passwords, but somewhat less convenient for users who wanted to actually protect their data.<p>An attacker isn't going to spontaneously try breaking your PRNG, but if you do have an issue, it's a real concern. It'll be far down the list of things to try just because any modern system will hopefully have very good randomness.
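The arithmetic in the comment above can be sketched out (a hypothetical helper, not from the comment itself):

```python
import math

def entropy_bits(choices: int, picks: int) -> float:
    # Each independent pick from `choices` equally likely options
    # contributes log2(choices) bits.
    return picks * math.log2(choices)

# Four words from a 1024-word list: 4 * 10 = 40 bits.
print(entropy_bits(1024, 4))  # 40.0

# Average brute-force effort: half the keyspace, i.e. 2^39 attempts.
avg_tries = 2 ** (40 - 1)

# The Debian bug: only the 15-bit PID fed the PRNG,
# so roughly 2^15 = 32768 possible keys.
print(2 ** 15)  # 32768
```

A keyspace of 2^15 is trivially enumerable, which is why the Debian keys could be (and were) precomputed wholesale.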
Original discussion from 2014:<p>* <a href="https://news.ycombinator.com/item?id=7359992">https://news.ycombinator.com/item?id=7359992</a><p>Also:<p>2020: <a href="https://news.ycombinator.com/item?id=22683627">https://news.ycombinator.com/item?id=22683627</a><p>2018: <a href="https://news.ycombinator.com/item?id=17779657">https://news.ycombinator.com/item?id=17779657</a><p>2017: <a href="https://news.ycombinator.com/item?id=13332741">https://news.ycombinator.com/item?id=13332741</a><p>2015: <a href="https://news.ycombinator.com/item?id=10149019">https://news.ycombinator.com/item?id=10149019</a>
Back in the dinosaur days (around 2005) I was working on a PHP CMS used by a big registrar. Occasionally page loads would block for seconds. It appeared randomly (natch) and was relatively unreproducible.<p>I couldn’t find any good way to debug it and a friend suggested GDB. I had never thought of using such a low-level debugger on a scripting language, but what choice did I have? Fired it up, found a blocked process, and sure enough it was blocked on reads from /dev/random.<p>I learned two things that day: the decision to make and keep /dev/random blocking was dumb, and GDB (or lldb, or valgrind, etc.) is useful for debugging just about anything.
It was hard to tell where the additional commentary on one fact ended and the next myth began.
The CSS has broken some time in the last 12 years, people have posted archive links that make it much clearer [1].<p>The author is on holiday (and enjoying their birthday!) and will get to it when they're back home.<p>[1] <a href="https://web.archive.org/web/20140309183752/http://www.2uo.de/myths-about-urandom/" rel="nofollow">https://web.archive.org/web/20140309183752/http://www.2uo.de...</a>
I woke up around 4am, read this, and wondered if I was still in a dream state given the meandering nature of it.<p>Were the man page musings written in response to the (alleged, but... uh... NSA) kleptographic backdoor in Dual_EC_DRBG? It requires multiple successive outputs to compromise and derive internal PRNG state, if memory serves.<p>In that <i>one</i> construction, /dev/random blocking on seeding would have a mild state-hiding advantage over /dev/urandom, I imagine... but, sheesh. Nobody use that generator.
Twelve years later, if there's still so much misconception about /dev/(u)random, has the man page been fixed?<p>Edit: can't count.
Yes. It’s mentioned at <a href="https://www.thomas-huehn.com/myths-about-urandom-revisited/" rel="nofollow">https://www.thomas-huehn.com/myths-about-urandom-revisited/</a><p>Of course, when searching for man urandom you still found the old versions at the top of the search results for years and years afterwards. And the German Wikipedia page will probably never change.
(2014)