5 comments

  • woodruffw 5 hours ago
    At a very quick look, no evidence is given that the "bugs" found in requests are in fact reachable, i.e. not prevented by construction. And sure enough, the very first one is impossible because of a validating guard[1]: `address_in_network` only gets called after `is_valid_cidr`, which enforces the presence of a slash.

    I think we should hold claims about effective static analysis and/or program verification to a higher standard than this.

    [1]: https://github.com/psf/requests/blob/4bd79e397304d46dfccd76f36c07f66c0295ff82/src/requests/utils.py#L783-L784
    • JimDabell 3 hours ago
      > the very first one is impossible because of a validating guard[1]: `address_in_network` only gets called after `is_valid_cidr`, which enforces the presence of a slash.

      It's correct to flag this code. The check is performed manually, outside of the function in question. If you call the function directly, the bug surfaces.

      The function's documentation makes no mention of the validation requirement, making it easy to call incorrectly. Besides, if the validator must be called before this function, the function could simply call it itself.

      In short, it's possible to make this code safe by construction, but instead it relies on the developer making the undocumented right choice every single time it is called. I would expect something more rigorous from verified code.
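[Editor's note: the failure mode under discussion can be sketched in a few lines. This is a simplified stand-in, not requests' actual implementation; only the function names and the guard-before-call pattern are taken from the thread.]

```python
import ipaddress

def is_valid_cidr(net: str) -> bool:
    """Guard: accept only strings in CIDR notation (exactly one slash)."""
    if net.count("/") != 1:
        return False
    try:
        ipaddress.ip_network(net, strict=False)
        return True
    except ValueError:
        return False

def address_in_network(ip: str, net: str) -> bool:
    """Check whether `ip` falls inside the CIDR range `net`.

    Implicitly assumes the caller already ran is_valid_cidr(net):
    the split below raises ValueError when the slash is missing.
    """
    netaddr, bits = net.split("/")
    network = ipaddress.ip_network(f"{netaddr}/{bits}", strict=False)
    return ipaddress.ip_address(ip) in network

# Guarded call site, as in requests: the ValueError is unreachable here.
if is_valid_cidr("192.168.0.0/24"):
    print(address_in_network("192.168.0.1", "192.168.0.0/24"))  # True

# Direct, unguarded call: the reported crash surfaces.
# address_in_network("192.168.0.1", "192.168.0.0")  # raises ValueError
```

Whether that unguarded call path counts as a "real bug" is exactly the point of contention in the replies below.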
      • teraflop 2 hours ago
        That doesn't mean there's a problem with the code, only with the documentation. So the article is wrong to call it a "real bug". *At most* it's poor code style that could theoretically lead to a bug in the future.

        There's nothing inherently wrong with a function throwing an exception when it receives invalid input. The `math.sqrt` function isn't buggy because it fails if you pass it a negative argument.
        • Someone 1 hour ago
          > That doesn't mean there's a problem with the code, only with the documentation.

          I disagree. If the obvious way to use an API is the incorrect way, there is a problem with the code.

          If you must call A each time before calling B, drop A and have B do both things.

          If you must call A once before calling B, make A return a token that you then must pass to B to show you called A.

          As another example, look at https://blog.trailofbits.com/2026/02/18/carelessness-versus-craftsmanship-in-cryptography/ (HN discussion: https://news.ycombinator.com/item?id=47060334):

          *"Two popular AES libraries, aes-js and pyaes, 'helpfully' provide a default IV in their AES-CTR API, leading to a large number of key/IV reuse bugs. These bugs potentially affect thousands of downstream projects."*

          Would you call that "poor code style that could theoretically lead to a bug in the future", too?
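[Editor's note: the "token" refactoring described above can be sketched as follows. This is a hypothetical redesign of the helpers under discussion, not code from requests; all names are illustrative.]

```python
import ipaddress
from dataclasses import dataclass

@dataclass(frozen=True)
class ValidatedCidr:
    """Proof-of-validation token: only obtainable through validate()."""
    netaddr: str
    bits: int

def validate(net: str) -> ValidatedCidr:
    # The "A" step: check once, and return evidence that the check happened.
    netaddr, _, bits = net.partition("/")
    if not bits:
        raise ValueError(f"{net!r} is not in CIDR notation")
    return ValidatedCidr(netaddr, int(bits))

def address_in_network(ip: str, net: ValidatedCidr) -> bool:
    # The "B" step: by accepting only the token type, the
    # "forgot to validate first" state becomes unrepresentable.
    network = ipaddress.ip_network(f"{net.netaddr}/{net.bits}", strict=False)
    return ipaddress.ip_address(ip) in network

# The only way to reach B is through A:
print(address_in_network("10.1.2.3", validate("10.0.0.0/8")))  # True
```

The type signature now documents and enforces the precondition that the original code left implicit.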
    • seanmcdirmid 4 hours ago
      Most (all?) static analyzers are conservative, and reducing your false positive rate is always a struggle. You should never expect a false positive rate of zero (it’s probably impossible to not have false positives), but you shouldn’t be presenting your false positives as successes either.
      • woodruffw 4 hours ago
        Sure, but this one doesn't pass the sniff test. I've written plenty of static analysis tools (including ones that do symbolic execution), and one of the first things you do to ensure that your results are valid is create *some* model of tainting/reachability. Even an analysis that's 1-callsite sensitive would have caught this and discarded it as a false positive.

        (In case it isn't clear, I'm saying this is slop that someone whipped up and didn't even bother to spot-check.)
  • saithound 4 hours ago
    What if you asked your favorite AI agent to produce mathematics at the level of Vladimir Voevodsky, Fields Medal-winning, foundation-shaking work, but directed toward something the legendary Nikolaj Bjørner (co-creator of Z3) could actually use?

    Well, you'd get this embarrassing mess, apparently.
    • geraneum 10 minutes ago
      That's because they didn't add "and don't make mistakes!".

      And yes, the exclamation mark matters!
  • grey-area 3 hours ago
    I miss the days when humans submitted things they had done to this site, instead of generating long slop articles in five minutes ("LLM-based code synthesis—while mind-numbingly effective—") about slop code they generated in five minutes (or, worse, hours) with foolish prompts ("Produce mathematics at the level of Vladimir Voevodsky, Fields Medal-winning, foundation-shaking work").

    Should we even read this, or should we get an LLM to summarise it into a few bullet points again?

    This bit was interesting in illuminating the human authors' credulity (assuming they believe their own article):

    "The central move was elegant: stop asking only 'is the system safe?', start asking 'how far is it from safety?'"

    This ersatz profundity, couched in a false opposition, is common in generated text. Does it have anything at all to do with the code generated, or is it all just convincing bullshit?