23 comments

  • qubex2 hours ago
    About a month ago I had a rather annoying task to perform, and I found an NPM package that handled it. I threw “brew install NPM” or whatever onto the terminal and watched a veritable deluge of dependencies download and install. Then I typed in ‘npm ’ and my hand hovered on the keyboard after the space as I suddenly thought long and hard about where I was on the risk/benefit curve and then I backspaced and typed “brew uninstall npm” instead, and eventually strung together an oldschool unix utilities pipeline with some awk thrown in. Probably the best decision of my life, in retrospect.
    • sigmoid108 minutes ago
      This is why you want containerisation or, even better, full virtualisation. Running programs built on node, python or any other ecosystem that makes installing tons of dependencies easy (and thus frustratingly common) on your main system where you keep any unrelated data is a surefire way to get compromised by the supply chain eventually. I don't even have the interpreters for python and js on my base system anymore - just so I don't accidentally run something in the host terminal that shouldn't run there.
    • kubafu1 hour ago
      Same story from a month ago. The moment I saw the sheer number of dependencies artillery wanted to pull I gave up.
  • mikkupikku1 hour ago
    > "This creates a dangerous scenario. If GitHub mass-deletes the malware's repositories or npm bulk-revokes compromised tokens, thousands of infected systems could simultaneously destroy user data."
    Pop quiz, hot shot! A terrorist is holding user data hostage, got enough malware strapped to his chest to blow a data center in half. Now what do you do?
    Shoot the hostage.
    • hsbauauvhabzb9 minutes ago
      The hostage naively walked past all the police and into the data centre, and you're shooting them in the leg. They'll probably survive, but they knowingly or incompetently made their choice. Sucks to be them.
  • wonderfuly3 hours ago
    I'm a victim of this.
    In addition to concerns about npm, I'm now hesitant to use the GitHub CLI, which stores a highly privileged OAuth token in plain text in the HOME directory. Once an attacker accesses it, they can do almost anything on my behalf; for example, they turned many of my private repos public.
    • douglascamata1 hour ago
      Apparently, the GitHub CLI only stores its OAuth token in the HOME directory if you don't have a keyring. They also say it may not work on headless systems. See https://github.com/cli/cli/discussions/7109.
      For example, on my macOS machines the token is safely stored in the OS keyring (yes, I double checked the file where it would otherwise have been stored as plain text).
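      A quick way to check (a sketch assuming the default gh config location; hosts.yml is where the plain-text fallback ends up):

        # a hit here means the token is sitting in plain text on disk
        grep -n 'oauth_token' ~/.config/gh/hosts.yml || echo "no plain-text token found"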
    • didntcheck3 hours ago
      That's true, but the same may already be true of your browser's cookie file. I believe Chrome on macOS and Windows (unsure about Linux) now uses OS features to prevent it being read by other executables, but Firefox doesn't (yet).
      But protecting specific directories is just whack-a-mole. The real fix is to properly sandbox code - an access whitelist rather than endlessly updating a patchy blacklist.
      • mcny3 hours ago
        > But protecting specific directories is just whack-a-mole. The real fix is to properly sandbox code - an access whitelist rather than blacklist
        I believe Wayland (don't quote me on this because I know exactly zero technical details), as opposed to X, is a big step in this direction. Correct me if I am wrong, but I believe this effort alone has been ongoing for a decade. A proper sandbox will take longer and risks being co-opted by corporate drones trying to take away our right to use our computers as we see fit.
        • rkangel3 hours ago
          Wayland is a significant improvement in one specific area (and it's not this one).
          All programs in X were trusted and had access to the same drawing space. This meant that one program could see what another one was drawing; effectively, any compromised program could see your whole screen if you were using X.
          Wayland has a different architecture where programs only have access to the resources to draw their own stuff, and a separate compositor joins all the results together.
          Wayland does nothing about the REST of the application permission model - the ability to access files, send network requests, etc. For that you need more sandboxing, e.g. Flatpak, containers, VMs.
        • akshitgaur200552 minutes ago
          Maybe I am missing something but how and why would a display protocol have anything to do with file access model??
    • sierra10111 hour ago
      I'm also a victim of this. That's the last time I try to install Backstage.
      Have you wiped your laptop/infected machine? If not, I would recommend it; part of the attack created a ~/.dev-env directory which turned my laptop into a GitHub runner, allowing for remote code execution.
      I have a read-only-filesystem OS (Bluefin Linux) and I don't know quite how much this has saved me, because so much of the attack happens in the home directory.
    • febusravenga3 hours ago
      This, this, this.
      All our tokens should be in a protected keychain, and there are no proper cross-platform solutions for this. All the gcloud, aws, gh and other SDK/CLI tools just store them in dotfiles.
      And the worst thing: afaik there is no way to do it correctly on macOS, for example. I'd like to be corrected though.
      • mcny3 hours ago
        What is a proper solution for this? I don't imagine gpg can help if you encrypt it but decrypt it when you log in to GNOME, right? However, it would be too much of a hassle to have to authenticate each time you need a token. I imagine macOS people have access to the secure enclave using Touch ID, but even that is not available on all devices.
        I feel like we are barking up the wrong tree here. The plain-text token thing can't be fixed. We have to protect our computers from malware to begin with. Maybe Microsoft was right to use secure admin workstations (SAW) for privileged access, but then again it is too much of a hassle.
        • flir17 minutes ago
          It might be possible to lash up a cross-platform solution with KeePassXC. It's got an API that can be accessed from the command line (chezmoi uses it to add secrets to dotfiles). Yes, you'd be authenticating every time you need a token, but that might not be too much of a burden if you spend most of your time on a machine with a fingerprint scanner.
          otoh I wouldn't do it, because I don't believe I could implement it securely.
        • sakisv1 hour ago
          The way I solve the plain-text problem is through a combination of direnv [1] and pass [2].
          For a given project, I have a ./creds directory which is managed with pass and contains all the access tokens and API keys that are relevant to that project, one per file, for example ./creds/cloudflare/api_token. Pass encrypts all these files via gpg, for which I use a key stored on a Yubikey.
          Next to the ./creds directory, I have an .envrc which includes some lines that read the encrypted files and store their values in environment variables, like so: export CLOUDFLARE_API_TOKEN=$(pass creds/cloudflare/api_token).
          Every time I cd into that project's directory, direnv reads and executes that file (just once) and all of these are stored as environment variables, but only for that terminal/session.
          This solves the problem of plain-text files, but of course the values remain in the environment and something malicious could look for well-known variable names to extract from there. Personally I try to install things in a new termux tab every time, which is less than ideal.
          I'd like to see if and how other people solve this problem.
          [1]: https://direnv.net/ [2]: https://www.passwordstore.org/
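          A rough sketch of that layout (untested; the creds path and variable name are just the example above):

            # store the secret once, encrypted with the gpg key pass is configured for
            pass insert creds/cloudflare/api_token

            # .envrc in the project root; direnv runs it on cd
            export CLOUDFLARE_API_TOKEN=$(pass creds/cloudflare/api_token)

            # approve the .envrc once
            direnv allow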
          • gerardnico11 minutes ago
            You can even go further and delete all your secrets from your env by creating wrapper scripts.
            Example: https://github.com/combostrap/devfiles/blob/main/dev-scripts/wrappers/jreleaser
            It's not completely foolproof, but at least gpg asks for my passphrase only when I run the script.
        • L-four1 hour ago
          I think the correct solution is to use a keyring. On Linux there's GNOME Keyring, and last time I worked on an iOS app there was something similar.
          This does mean entering your keyring password a lot.
          https://en.wikipedia.org/wiki/GNOME_Keyring
          • 171862744025 minutes ago
            > This does mean entering your keyring password a lot.
            Not when you put that keyring's password into the user keyring. I think it is also cached by default.
      • 17186274402 hours ago
        This doesn't sound like a technical problem to me. Even my throw-away bash scripts call `secret-tool lookup`, since that is actually easier than implementing your own configuration.
        Also this is a complete non-issue on Unix(-like) systems, because everything is designed around passing small strings between programs. Getting a secret from another program is the same amount of code as reading it from a text file, since everything is a file.
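        For reference, a sketch of what that looks like with libsecret's secret-tool (the service/user attributes are arbitrary labels you pick yourself):

          # store a token in the desktop keyring (prompts for the secret)
          secret-tool store --label="GitHub token" service github user me

          # fetch it later from a script
          GH_TOKEN=$(secret-tool lookup service github user me)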
  • dawnerd4 hours ago
    Everyone is blaming npm, but GitHub should be put on blast too for allowing the repos to be created and not quickly flagged.
    GitHub has a massive malware problem as it is, and it doesn't get enough attention.
    • baobun2 hours ago
      I would put blame on contemporary GitHub for a few things, but this is not one of them. We need better community practices and tools. We can't expect to rely on Microsoft to content-filter.
    • princevegeta893 hours ago
      I love how GitHub, a corporate company now owned by Microsoft, is directly tied to Golang as the main repository for the vast majority of packages/dependencies.
      Imagine the number of things that can go wrong when they try to regulate or introduce restrictions on build workflows for the purpose of making some extra money... lol
      The original Java platform is a good example to think about.
      • oefrha3 hours ago
        Golang builds pulling a github.com/foo/bar/baz module don't rely on any GitHub "build workflow", so unless you mean they're going to start restricting or charging for git clones of public repos (before you mention Docker Hub, yes I know), nothing's gonna change. And even if they're crazy enough to do that, Go module downloads default to a proxy (proxy.golang.org by default; it can be configured and/or self-hosted) and only fall back to vcs if the module's not available, so a module only needs to be downloaded once from GitHub anyway. Oh, and once a module is cached in the proxy, the proxy will keep serving it even if the repo/tag is removed from GitHub.
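        For context, that proxy behaviour is visible in the default Go configuration (a sketch; the self-hosted URL is made up):

          # stock default: try the module proxy first, fall back to the VCS
          go env GOPROXY        # -> https://proxy.golang.org,direct

          # point builds at a self-hosted proxy instead
          go env -w GOPROXY=https://goproxy.example.internal,direct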
      • Cthulhu_2 hours ago
        "The original Java platform" had no package management though; that came with Maven and later Gradle, which have similar vectors for supply chain attacks (that is, nobody reviews anything before it's made available on package repositories).
        And (to put on my Go defender hat), the Go ecosystem doesn't like having many dependencies, in part because of supply chain attack vectors and the fact that Node's ecosystem went a bit overboard with libraries.
    • benatkin4 hours ago
      They're part of the same company, but that's a good point. They both have mediocre security.
    • testdelacc13 hours ago
      Wouldn’t have been that hard to write a rule that matches the repositories being created by this malware. It literally does the same thing to every victim.
  • efortis2 hours ago
    Mitigate this attack vector by adding:

      ignore-scripts=true

    to your .npmrc
    https://blog.uxtly.com/getting-rid-of-npm-scripts
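    If you prefer the CLI, a rough equivalent (the first command writes the setting to your user .npmrc; the flag covers one-off installs):

      # persist the setting for every future install
      npm config set ignore-scripts true

      # or skip lifecycle scripts for a single install
      npm install --ignore-scripts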
  • arkh3 hours ago
    Most of those attacks do the same kind of things.
    So I'm surprised to never see something akin to "our AI systems flagged a possible attack" in those posts. Or the fact that GitHub, owned by AI pusher Microsoft, does not already use their AI to find this kind of attack before it becomes a problem.
    Where is this miracle AI for cybersecurity when you need it?
    • michaelt3 hours ago
      The security product marketers ruined "a possible attack" as a brag 25 years ago. Every time a firewall blocks something, it's a *possible* attack being blocked, and imagine how often that happens.
    • nottorp3 hours ago
      Current "AI" is generative "AI". It can generate bullshit, not evaluate anything.
      Edit: see the curl posts about them being bombarded with "AI"-generated security reports that mean nothing and waste their time.
  • wiradikusuma8 hours ago
    Does anyone know why NPM seems to be the only attractive target? Python and Java are very popular, but I haven't heard anything in those ecosystems for a while. Is it because of something inherently "weak" about NPM, or simply because, like Windows or JavaScript, everyone uses it?
    • broeng3 hours ago
      Compared to the Java ecosystem, I think there are a couple of issues in the NPM ecosystem that make the situation a lot worse:
      1) The availability of the package post-install hook that can run any command after simply resolving and downloading a package [1].
      That, combined with:
      2) The culture of using version ranges for dependency resolution [2], means that any compromised package can spread with ridiculous speed (and then use the post-install hook to compromise other packages). You also have version ranges in the Java ecosystem, but in my experience it's not the norm to use them; you get new dependencies when you actively bump the dependencies you are directly using, because everything depends on specific versions. (See the sketch below the footnotes for what the difference looks like.)
      I'm no NPM expert, but those are the worst offenders from a technical perspective, in my opinion.
      [1]: I'm sure it can be disabled, and it might even be now by default - I don't know. [2]: Yes, I know you can use a lock file, but it's definitely not the norm to actively consider each upgraded version when refreshing the lockfile.
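      A sketch of the difference in a package.json (package names and versions are just illustrative): the caret range silently picks up any compatible new release on the next install, while the exact pin only changes when someone deliberately bumps it.

        "dependencies": {
          "lodash": "^4.17.21",
          "left-pad": "1.3.0"
        }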
      • Cthulhu_2 hours ago
        To add a few:
        * NPM has a culture of "many small dependencies", so there's a very long tail of small projects that are mostly below the radar and wouldn't stand out initially if they get a patch update. People don't look critically into updated versions because there are so many of them.
        * Developers have developed a culture of staying up-to-date as much as possible, so any patch release is applied as soon as possible, often automatically. This is mainly sold as a security feature, so that a vulnerability gets patched and released before disclosure happens. But it was (is?) also a thing where if you wait too long to update, updating takes more time and effort because things keep breaking.
    • kace916 hours ago
      One factor is that Node's philosophy is to have a very limited standard library and rely on community software for a ton of stuff.
      That means that not only does the average project have a ton of dependencies, but any given dependency will in turn have a ton of dependencies as well. There are multiplicative effects in play.
      • louiskottmann4 hours ago
        This is my take as well. I've never come across a JS project where only the built-in data structures were used.
        One package for lists, one for sorting, and down the rabbit hole you go.
        • sensanaty1 hour ago
          I think this is mostly historical baggage, unfortunately. In every codebase I've ever worked in there was a huge push to only use native ES6 functionality, like Sets, Maps, all the Iterable methods etc., but there was still a large chunk of files that were written before these were standardized and widely used, so you get mixes of Lodash and a bunch of other cursed shit.
          Refactoring these also isn't always trivial, so it's a long journey to fully get rid of something like Lodash from an old project.
      • rhubarbtree5 hours ago
        This is the main reason. Python's ecosystem also has silly trends and package churn, and plenty of untrained developers. It's the lack of a proper standard library. As bad a language as it may be, Java shows how to get this right.
        • palata5 hours ago
          > As bad a language as it may be, Java shows how to get this right.
          To be fair, Java has improved *a lot* over the last few years. I really have the feeling that Java is getting better, while C++ is getting worse.
        • PhilipRoman3 hours ago
          What? Python's standard library seems far more extensive than Java's.
    • parliament327 hours ago
      Larger attack surface (JS has been the #1 language on GitHub for years now) and more amateur developers (who are more likely to blindly install dependencies, not harden against dev attack vectors, etc).
      • Sophira6 hours ago
        Unfortunately, blindly installing dependencies at compile time is something that many projects will do by default nowadays. It's not just "more amateur developers" who are at risk here.
        I've even seen "setup scripts" for projects that will use root (with your permission) to install software. Such scripts are less common now with containers, but unfortunately containers aren't everything.
        • Cthulhu_2 hours ago
          Yes, exactly; I followed a GitHub course at one point and it was Strongly Recommended that you enable Dependabot for your project, which will keep your dependencies up to date. It's basically either already enabled or a one-click setup action at this point. The norm that GitHub pushes is that you should trust them to keep your stuff updated and secure.
        • 17186274402 hours ago
          > blindly installing dependencies at compile-time is something that many projects will do by default nowadays.
          I consider this to be a sign that someone is still an amateur, and it is a reason not to use the software and quickly delete it.
          If you need a dependency, you can call the OS package manager, or tell me to compile it myself. If you start a network connection, you are malware in my eyes.
      • dboreham7 hours ago
        Also: a culture of constant churn in libraries which in combination with the potential for security bugs to be fixed in any new release leads to a common practice of ingesting a continual stream of mystery meat. That makes filtering out malware very hard. Too much noise to see the signal. None of the above cultural factors is present in the other ecosystems.
    • Balinares4 hours ago
      As far as I understand, NPM packages are not self-contained like e.g. Python wheels, and can (and often need to) run scripts *on install*.
      So just installing a package can get you compromised. If the compromised box contains credentials to update your own packages on NPM, then it's an easy vector for a worm to propagate.
      • magnetometer3 hours ago
        Python wheels don't run arbitrary code on install, but source distributions do. And you can upload both to PyPI. So you would have to run

          pip install <package> --only-binary :all:

        to only install wheels and fail otherwise.
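        If you want that as a default rather than per command, pip should also pick it up from its environment (a sketch; I believe the option maps to PIP_ONLY_BINARY, worth verifying):

          # per-shell default: refuse anything that isn't a prebuilt wheel
          export PIP_ONLY_BINARY=":all:"
          pip install requests   # now fails instead of running an sdist's setup.py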
    • dtech6 hours ago
      Npm has weak security boundaries.
      Basically any dependency can (used to?) run any script with the developer's permissions on install. JVM and Python package managers don't do this.
      Of course, in all ecosystems, once you actually run the code it can do whatever it wants with the permissions of the executing program, but this is another hurdle.
      • lights01236 hours ago
        Python absolutely can run scripts on installation. Before pyproject.toml, arbitrary scripts were the *only* way to install a package. It's the reason PyPI.org doesn't show a dependency graph: dependencies are declared in the Turing-complete setup.py.
        • oefrha6 hours ago
          Wrong. Wheels were available long before pyproject.toml, and you could instruct pip to only install from wheels. setup.py was needed to build the wheels, but the build step wasn’t a necessary part of installation and could be disabled. In that sense its role is similar to that of pre-publish build step of npm packages, unless wheels aren’t available.
    • Karliss2 hours ago
      For the last 2 years PyPI (the main Python package repository) has required mandatory 2FA.
      Last time I did anything with Java, it felt like the use of multiple package repositories, including private ones, was a lot more popular.
      Although the higher branching factor for JavaScript and the potential target count are probably very important factors as well.
    • nottorp3 hours ago
      Maybe some technical reasons, but it's more the mindset of the JS "community" that if you don't have the latest version of a package 30 seconds after it's pushed, you're hopelessly behind.
      In other "communities" you upgrade dependencies when you have time to evaluate the impact.
    • Ekaros5 hours ago
      I feel that with Python the upgrade cycle is slower. I upgrade dependencies when something is broken or there is a known issue. That means any active vulnerabilities propagate more slowly. Slower propagation means lower risk. And as there are fewer upstream packages, the impact of a compromised maintainer is more limited.
  • mrklol3 hours ago
    Is there any reason to keep allowing postinstall scripts instead of e.g. asking the user? Are they even needed in most cases?
    • Cthulhu_2 hours ago
      If you ask the user "should I run this script?" after installing, they will just hit yes every time. But also, a lot (I'm confident it's "most") of npm install operations are done on a CI server, which needs to run without human interaction.
  • Aeolun6 hours ago
    I thought this was a really insightful post, until they used it to try and sell me on Gitlab’s security features.
    • norman7844 hours ago
      You are not the target then, but people using GitLab might find it insightful.
    • jaggirs6 hours ago
      Why would that make it any less insightful?
      • Aeolun2 hours ago
        It didn’t make it less insightful, but it recontextualized what was, in hindsight, a pretty strong bias towards fearmongering.
      • hu35 hours ago
        Because bias and incentives matter.
        There's a reason disclosures are obligatory in academic papers.
        • baq4 hours ago
          It’s published on gitlab.com, not arxiv
          • rockskon4 hours ago
            It's almost like the speakers are motivated by advertising a product to solve a problem in their own garden.
        • serial_dev4 hours ago
          They pulled a little sneaky on ya, mentioning GitLab security features available to GitLab users in a GitLab Security blog post with GitLab logos everywhere.
          Call me a conspiracy theorist, but I'm starting to think these people might be affiliated with GitLab.
  • thepasswordapp7 hours ago
    The credential harvesting aspect is what concerns me most for the average developer. If you've ever run `npm install` on an affected package, your environment variables, .npmrc tokens, and potentially other cached credentials may have been exfiltrated.
    The action item for anyone potentially affected: rotate your npm tokens, GitHub PATs, and any API keys that were in environment variables (a sketch of the npm side is below). And if you're like most developers and reused any of those passwords elsewhere... rotate those too.
    This is why periodic credential rotation matters - not just after a breach notification, but proactively. It reduces the window in which any stolen credential is useful.
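    For the npm part specifically, revocation can be done from the CLI (a sketch; GitHub PATs and other API keys still have to be rotated in their own dashboards):

      npm token list                # list your existing tokens and their ids
      npm token revoke <token-id>   # revoke each one, then mint fresh ones as needed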
    • Ferret74463 hours ago
      > if you're like most developers and reused any of those passwords elsewhere
      Is this true? God I hope not; if developers don't even follow basic security practices then all hope is lost.
      I'd assume this is stating the obvious, but storing credentials in environment variables or files is a big no-no. Use a security key or at the very least an encrypted file, and never reuse any credential for anything.
      • lionkor43 minutes ago
        I think so. I know too many developers who cannot be bothered to have a password manager beyond the Chrome/Firefox default one. Anything else, and even those, are usually "the standard 2-3 passwords" they use.
    • throwawayqqq112 hours ago
      To me, the worming aspect and taking developers' data hostage against infrastructure takedown is most concerning.
      Previously, you had isolated places to clean up a compromise and you were good to go again. This attack approaches the semi-distributed nature and attacks the ecosystem as a whole, and I am afraid this approach will get more sophisticated in the future. It reminds me a little of malicious transactions written into a distributed ledger.
    • Towaway695 hours ago
      > anyone potentially affected
      How does one know one is affected?
      What's the point of rotating tokens if I'm not sure that I've been affected - the new tokens will just be exfiltrated as well.
      The first step would be to identify infection, then clean up, and then rotate tokens.
      • mcintyre19944 hours ago
        The article has some indicators of compromise; the main one locally would be .truffler-cache/ in the home directory. It's more obvious for package maintainers with exposed credentials, who will have a wormed version of their own packages deployed.
        From what I've read so far (and this definitely could change), it doesn't install persistent malware, it relies on a postinstall script. So new tokens wouldn't be automatically exfiltrated, but if you npm install any of an increasing number of packages then it will happen to you again.
        • sierra10111 hour ago
          It does install a GitHub runner and registers the infected machine as a runner, so remote code execution remains possible. It might be a stretch to call it persistent but it definitely tries.
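          A quick local check for the indicators mentioned in this thread (a sketch; the paths come from the article and the comments above, not an exhaustive IoC list):

            ls -d ~/.truffler-cache ~/.dev-env 2>/dev/null && echo "possible compromise - investigate"

          It is also worth looking at your GitHub repo/org settings for self-hosted runners you did not register yourself.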
    • dawnerd4 hours ago
      Also a good reminder that you should be storing secrets in some kind of locker, not in plain text via environment variables or config files. It's impossible to get everyone on board, but if you can, you should as much as possible.
      I hate that high-profile services still default to plain text for credential storage.
    • mcintyre19944 hours ago
      Also the user data destruction if it stops being able to propagate itself.
  • ksynwa1 hour ago
    What are the "Sha1-Hulud" GitHub repositories for, exactly? I see files like secrets.json but the contents seem not to be valid JSON. Are these encrypted?
  • xomodo1 hour ago
    I think I found some repos here: https://github.com/search?q=in:description+Sha1-Hulud&type=repositories
  • Yokohiii2 hours ago
    I have a friend who starts a project next month that will rely on npm. He is quite a noob and hasn't coded in ages. He will have almost no clue how to harden against this; he will probably not even notice if he becomes a victim until something really bad happens.
    Pretty sad.
    • mkesper2 hours ago
      At least make them run pnpm instead of npm, disabling post-install scripts: https://pnpm.io/supply-chain-security
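      If I remember recent pnpm behaviour correctly (worth double-checking against that link), dependency build scripts are blocked unless explicitly allowlisted, roughly like this in package.json (esbuild is just an example of a dependency you deliberately trust to run its build script):

        "pnpm": {
          "onlyBuiltDependencies": ["esbuild"]
        }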
  • austin-cheney1 hour ago
    Are there any good alternatives to ESLint? ESLint is now my only dev dependency with hundreds of dependencies of its own.
    • tuzemec1 hour ago
      Biome: https://biomejs.dev/
      Also the whole ecosystem around oxc looks very promising: https://oxc.rs/
      • jackwilsdon1 hour ago
        Both of those have over 400 dependencies each [0] [1], just in Rust instead - there hasn't been a Rust supply chain attack yet, but is this any better? [2]
        Admittedly you're not normally downloading the dependencies to your machine, as you're often using pre-built binaries, but a malicious package could still run if a version was shipped with it.
        [0]: https://github.com/biomejs/biome/blob/93182ea8e9d479fd0187ce21ff8fdcdf143d64cf/Cargo.lock
        [1]: https://github.com/oxc-project/oxc/blob/65bd5584bfce0c7da90ff46f8e1052861e14b7eb/Cargo.lock
        [2]: https://users.rust-lang.org/t/yet-another-npm-supply-chain-attack-is-cargo-any-safer/133766
        • brabel55 minutes ago
          Wow that’s terrifying.
  • akdor11543 hours ago
    Jesus Christ, I can't even get my own package to reliably self-publish in CI without ending up with a fragile pile of twigs. I'm awed they are able to automate infection like that.
  • yupyupyups11 hours ago
    Something helpful here would be to enable developers to optionally identify themselves. Not Discord-style where only the platform knows their real identity, but publicly as well.
    • gruez9 hours ago
      So, EV code signing certificates? Windows has that, and it'll verify it right in the OS. Git, for instance, shows as being signed by:
        CN = Johannes Schindelin, O = Johannes Schindelin, S = Nordrhein-Westfalen, C = DE
      The downside is the cost. Certificates cost hundreds of dollars per year. There's probably some room to reduce cost, but not by much. You also run into issues of paying some homeless person $50 to use their identity for cyber crimes.
      • brabel53 minutes ago
        You don't need certificates, just use PGP keys like Maven.
      • mc328 hours ago
        How would the homeless chap have the creds or gravitas for people to trust him or her?
        • veeti5 hours ago
          I don't really know who Johannes Schindelin is either, but I use git quite happily.
    • dcrazy9 hours ago
      This is what macOS codesigning does. Notarization goes one step further and anchors the signature to an Apple-owned CA to attest that Apple has tied the signature to an Apple developer account.
      • laserbeam8 hours ago
        As I understand it, this attack works because the worm looks for improperly stored secrets/keys/credentials. Once it finds them, it publishes malicious versions of those packages. It hits NPM because it's an easy target... but I could easily imagine it hitting pip or the repo of some other popular language.
        In principle, what's stopping the technique from targeting macOS CI runners which improperly store keys used for notarization signing? Or... is it impossible to automate a publishing step for macOS? Does that always require a human to do a manual thing from their account to get a project published?
    • morkalork9 hours ago
      You don't think bad actors have access to entire countries' worth of stolen identities to use for supply chain attacks?
      • hirsin8 hours ago
        This was largely the reason I rejected "real name verification" ideas at GitHub after the xz attack. (Especially if they are state-sponsored,) it's not that hard for a dedicated actor (which xz certainly was) to get a quality stolen identity.
        The inevitable evolution of such a feature is a button on your repo saying "block all contributors from China, Russia, and N other countries". I personally think that's the antithesis of OSS and therefore couldn't find the value in such a thing.
        • morkalork8 hours ago
          That would be easily defeated by a VPN. The inevitable evolution would be some kind of in-person attestation of identity, backed up with some kind of insurance on the contributor's work, and, well, you're converging on the employer-employee relationship then.
          • hirsin8 hours ago
            Yep, I saw the cat and mouse game ending at ever more invasive verifications involving more parties, which could ultimately still be worked around by a state actor. We already get asked for "block access from these country IP ranges please" as a security measure despite it being trivially bypassed, so it is easy to predict a useless but strong demand for blocking users based on their verified country.
          • berdario3 hours ago
            "Defeated", yes.
            "Easily", not so much...
            As in, services can still detect if you're connecting through a VPN, and if you ever connect directly (because you forgot to enable the VPN), your real location might be detected. And the consequences there might not be "having to refresh the page with the VPN enabled", but instead "finding the whole organisation/project blocked, because of the connection of one contributor".
            This is why Comaps is using Codeberg, after its predecessor (before the fork) project got locked by GitHub:
            https://news.ycombinator.com/item?id=43525395
            https://mastodon.social/@organicmaps/114155428924741370
            Moreover, this kind of stuff is also the reason I stopped accessing Imgur:
            - if I try without a VPN, Imgur stops me, because of the UK's Online Safety Act
            - if I try with my personal VPN, I get a 403 error every single time
            I'm sure I could get around it by using a different service (e.g. Mullvad), but Imgur is just not important enough for me to bother, so I just stopped accessing it altogether.
          • 17186274402 hours ago
            So... GPG?
  • dmitrygr7 hours ago
    Lucky for us C programmers. Each distro provides its own trusted libc, and my code has no other dependencies. :)
    • john01dav3 hours ago
      Do you rewrite fundamental data structures over and over, like maps, or just not use them?
      • 17186274402 hours ago
        C (actually POSIX) has a hashmap implementation: https://man7.org/linux/man-pages/man3/hsearch.3.html
        What it doesn't have is a hashmap type, but in C types are cheap and are created on an ad-hoc basis. As long as it corresponds to the correct interface, you can declare the type any way you like.
    • TheTxT4 hours ago
      But how do you left pad a string?
      • 17186274402 hours ago
          #include <string.h>

          /* Returns a newly allocated copy of `string`, left-padded with `pad` spaces. */
          char *left_pad(const char *string, unsigned int pad)
          {
              char tmp[strlen(string) + pad + 1]; /* VLA on the stack */
              memset(tmp, ' ', pad);              /* fill the left padding */
              strcpy(tmp + pad, string);          /* copy the original string after it */
              return strdup(tmp);                 /* hand the caller a heap-allocated copy */
          }

        Doesn't sound too hard in my opinion. This only works for strings that fit on the stack, so if you want to make it robust, you should check the string size. It (like everything in C) can of course fail. Also it is a quite naive implementation, since it calculates the string size three times.
        • brabel47 minutes ago
          Not a C expert but you’re using a dynamic array right on the stack, and then returning the duplicate of that. Shouldn’t that be Malloc’ed instead?? Is it safe to return the duplicate of a stack allocated array, wouldn’t the copy be heap allocated anyway? Not to mention it blows the stack and you get segmentation fault?
          • 171862744032 minutes ago
            > and then returning the duplicate of that. Shouldn't that be Malloc'ed instead??
            Like the sibling already wrote, that's what strdup does.
            > Is it safe to return the duplicate of a stack allocated
            Yeah sure, it's a copy.
            > wouldn't the copy be heap allocated anyway?
            Yes. I wouldn't commit it like that; it is a naive implementation. But honestly I wouldn't commit leftpad at all, it doesn't sound like a sensible abstraction boundary to me.
            > Not to mention it blows the stack and you get segmentation fault?
            Yes, and I already mentioned that in my comment.
            ---
            > dynamic array right on the stack
            Nitpick: it's a variable-length array and it is auto allocated. Dynamic allocation refers to the heap or something similar, not something already done by the compiler.
          • lionkor40 minutes ago
            strdup allocates:
            https://en.cppreference.com/w/c/experimental/dynamic/strdup
    • testdelacc13 hours ago
      How do you create a hashmap?
  • AmbroseBierce3 hours ago
    Microsoft should just bite the bullet and make a huge JS standard library, then send GitHub notifications to all the project maintainers who are using anything that could be replaced by something from it, suggesting they make the replacement. This would likely significantly reduce the number of supply chain attacks on the npm ecosystem.
    • dominicrose2 hours ago
      JS also has a stability issue. The language evolved fast; the tools and the number of tools evolved fast and in different directions. The module system is a mess, and trying to make it better caused more mess. There's Node.js, TypeScript and the browser. That's a lot to handle when trying to make something "std".
      Meanwhile, I have been using Ruby for 15 years and it has evolved in a stable way without breaking everything and without having to rewrite tons of libraries. It's not as powerful in terms of performance and I/O, and it's not as far-reaching as JS because it doesn't support the browser and doesn't have a TypeScript equivalent, but it's mature and stable and its power is that it's human-friendly.
    • nottorp3 hours ago
      There's an xkcd for that :)
      The one where 14 competing standards become 15 competing standards, or something like that.
      • AmbroseBierce2 hours ago
        Pretty sure Microsoft is exponentially bigger than 99% of the library authors out there, and add to that the giant communication channel that GitHub gives it over developers, so the analogy breaks down pretty fast.
        • nottorp2 hours ago
          Or it's worse, because there's a good bunch of devs that don't trust MS by default?
      • latexr2 hours ago
        https://xkcd.com/927/
    • testdelacc13 hours ago
      This is harder than it sounds. Look at the amount of effort it took to standardise Temporal (the new time library) and then for all the runtimes to implement it. It's a lot of work.
      And what's more, people have proposed a standard library through TC39 without success: https://github.com/tc39/proposal-built-in-modules
      Of course any large company could create a massive standard library on their own without going through the standards process, but it might not be adopted by developers.
    • h4ck_th3_pl4n3t2 hours ago
      That is literally how the CycloneDX SBOM packages work, well, after the fact and after the disclosure process.
  • xyzal4 hours ago
    Okay... what best practices should I as a mere dev follow to be protected? Is the "cooldown" approach enough, or should every npm command be run in bubblewrap...?
    • mcintyre19944 hours ago
      In this narrow case, using pnpm or something similar that blocks postinstall scripts by default should be sufficient. In general, you probably want to use a container/vm/sandbox of some sort so dev stuff can't access anything else on your machine.
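      One low-effort version of the sandbox idea (a sketch; the node:22 image and bind-mount layout are just examples) is to run installs in a throwaway container that can only see the project directory, not your home directory or keyrings:

        docker run --rm -it -v "$PWD":/app -w /app node:22 npm install --ignore-scripts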
  • zx80802 hours ago
    Everyone wanted to centralise as much as possible to save every cent. No wonder it got us all into this.
    Enjoy it while saving your cent!
    • Flere-Imsaho1 hour ago
      Also layer upon layer of abstractions - to the point where no single person understands the stack from top to bottom.
      Perhaps there is a light at the end of the tunnel: with AI coding assistance, the whole application can be written from scratch (like in the old days). All the code is there, not buried deep within someone else's codebase.
  • TZubiri14 hours ago
    Not all the npm packages, but always an npm package
    • cyanydeez13 hours ago
      While you think this is a producer problem, it's simply a userland market.
      Just like in the 90s when viruses primarily went to Windows: it wasn't some magical property of Windows, it was the market of users available.
      Also, following this logic, it then becomes survivorship bias, in that the more attacks they get, the more researchers spend time looking & documenting.
      • elwebmaster8 hours ago
        While it can happen to anyone, npm does preselect the users most likely to unknowingly amplify such an attack. Just today I was working on a simple JS script while disconnected from the Internet; Qwen Coder suggested I "npm install glob", which I couldn't because there was no internet, so I asked for an alternative and sure enough the alternative solution was two lines of vanilla JS. This is just one example, but it is the modus operandi of the NPM ecosystem.
      • KevinMS10 hours ago
        > it wasn't some magical property of windows
        no, it really was windows
        • foobiekr8 hours ago
          It really wasn't. MacOS classic was full of vulnerabilities, as was OS/2 and Linux up through 2004. Windows dominated because it was the biggest ecosystem.
          • elwebmaster8 hours ago
            And had the highest proportion of ignorant users.
          • ndsipa_pomu3 hours ago
          What made Windows easy to exploit was that it enabled a bunch of network services by default. I don't know about MacOS, but Linux disabled network services by default and generally had a better grasp of network security, such as requiring authentication for services (e.g. compare telnet and ssh).
          Also, Windows had the ridiculous default of immediately running things when a user put in a CD or USB stick - that behaviour led to many infections and is obviously a stupid default option.
          I'm not even going to mention the old Windows design of everyone running with admin privileges on their desktop.
      • TZubiri12 hours ago
        Right, npm users. The extreme demand for simple packages and the absent consideration create an opportunity for attackers to insert "free" solutions. The problem is the 'npm install'-happy developers, no doubt.
  • ChrisArchitect20 hours ago
    Discussion: https://news.ycombinator.com/item?id=46032539
    • ares62313 hours ago
      Phew, thought it was another one.
      • gchamonlive13 hours ago
        > Our internal monitoring system has uncovered multiple infected packages containing what appears to be an evolved version of the "Shai-Hulud" malware.
        Although it's not entirely new, it's something else.
        • prophesi8 hours ago
          GitLab's post and the linked discussion thread are both from November 24th, 2025. I may be misreading the parent comment, but I'm personally thankful there isn't a Return of the Return of Shai-Hulud, as I assumed this was a third recent incident. For those concerned about these attacks, HelixGuard's post (from the linked discussion) lists the packages they found to be affected, while GitLab's post gives more information on how the attack works. Since it's self-propagating, though, assume the list of affected packages might be longer as more NPM tokens are compromised.
  • Incipient12 hours ago
    Surely in this day and age we can fairly trivially find out that these come from the usual suspects - China, Russia, Iran, etc. Being in such a digital age, where our economies are built on this tech... is this not effectively (economic) warfare? Why are so many governments blasé about it?
    • lionkor36 minutes ago
      I wonder that, too. Surely, this is a fantastic opportunity to claim that it comes from whoever is declared evil right now, and force a harder us-vs-them mindset. If people don't have a clearly defined "evil bad guy" that is responsible for *everything bad*, how will you get teenagers to die for your country in war?
      Or, in other words: maybe the nature of humans and the inherent pressure of our society to perform, to be rich, to be successful, drives people to do bad things without any state actor behind it?
    • bhouston6 hours ago
      The US and Israel also have advanced penetration teams. But they wouldn't be this sloppy - they want persistent advanced access. I suspect Iran, Russia and China also wouldn't be this sloppy. This is too wide-ranging, easily detectable and scattershot.
      This feels like opportunistic cyber criminals, or North Korea (which acts like cyber criminals).
      • Towaway695 hours ago
        Or anti-virus companies selling more of their wares.
        This kind of large-scale attack is perfect advertising for anyone selling protection against such attacks.
        Spy agencies have no interest in selling protection.
    • halJordan10 hours ago
      It shouldn't be a "get the foreigners!" situation. Sure, that is a method of treating the symptoms. But what you're really asking for is... a software bill of materials. Why don't we have that yet? Because it's cheaper to get ripped off than it is to pay for a BOM. That's the real problem.
      • c0balt9 hours ago
        SBOMs exist. You can get them generated for most software via package managers in standard forms like CycloneDX.
        It's just not that effective when the SBOM becomes unmanageable. For example, our JS project at $work has 2.3k dependencies just from npm. I can give you that SBOM (and even include the system deps with nix), but that won't really help you.
        They are only really effective when the size is reasonable.
      • Ekaros5 hours ago
        An SBOM really doesn't do much when the compromise happens before or while you are building it. It really is orthogonal to these types of attacks. The best you can do is find out afterwards that you were compromised.
    • Nextgrid11 hours ago
      Proving an attack is state-sponsored is difficult (any attack you attribute to a country can very well be a false-flag operation), and "state sponsorship" is itself a spectrum; for example, you could argue India's insufficient action against tech-support scammers is effectively state-sanctioned.
      This could of course be resolved, but here's the kicker: our own governments equally enjoy this ambiguity to do their own bidding, so no government truly has an incentive to actually improve cross-border identity verification and cybercrime enforcement.
      Not to mention, even besides government involvement, these malicious actors still "engage" or induce "engagement", which happens to be the de facto currency of the technology industry, so even businesses don't actually have any incentive to fight them.
      • mc328 hours ago
        A one- or two-off can be a false flag; thousands upon thousands is not going to be a false flag.
    • epolanski9 hours ago
      They aren't; in fact the very opposite happens: we are bombarded non-stop with claims that everything is the fault of actors from these countries, even when it isn't.
      We should fight this kind of behaviour (and protect our privacy) regardless of who's involved, yet our governments in the West have nurtured this narrative of always pointing at big tech and foreign actors as scapegoats for anything privacy- or hacking-related.
      Also, any cyber attack tracker will show you this is a global issue; if you think there aren't millions of attacks carried out from our own countries, you're not looking hard enough.
    • csomar9 hours ago
      We are still bound to our primal instincts. If you cut the throat of a baby in the middle of Times Square, the outrage will be insane. Yet lack of financing for hospitals can do that many times over, and people are numb to it.
      Take the Jaguar hack: the economic loss is estimated at 2.5bn. Given an average house price in the UK of $300k, that's like destroying ~8,000 homes.
      Do you think the public and international response would be the same if Russia or China leveled a small neighborhood, even with no human casualties?
    • kachapopopow7 hours ago
      The majority of these are actually North Korea, India and America. The really disappointing ones are usually India and America, and the ones that lay dormant code are usually North Korea.