Socket:
- Sep 15 (First post on breach): https://socket.dev/blog/tinycolor-supply-chain-attack-affect...
- Sep 16: https://socket.dev/blog/ongoing-supply-chain-attack-targets-...
StepSecurity - https://www.stepsecurity.io/blog/ctrl-tinycolor-and-40-npm-p...
Aikido - https://www.aikido.dev/blog/s1ngularity-nx-attackers-strike-...
Ox - https://www.ox.security/blog/npm-2-0-hack-40-npm-packages-hi...
Safety - https://www.getsafety.com/blog-posts/shai-hulud-npm-attack
Phoenix - https://phoenix.security/npm-tinycolor-compromise/
Semgrep - https://semgrep.dev/blog/2025/security-advisory-npm-packages...
These attacks may just be the final push I needed to take server rendering (without js) more seriously. The HTMX folks convinced me that I can get REALLY far without any JavaScript, and my apps will probably be faster and less janky anyway.
I see this odd take a lot - the automatic narrowing of the scope of an attack to the single ecosystem it occurred in most recently, without any real technical argument for doing so.
What's especially concerning is that I see this take in the security industry: mitigations get put in place targeting e.g. NPM, but are then completely absent for PyPI or crates.io. It's bizarre not only because it leaves those ecosystems wide open, but also because the mitigation measures would be very similar (so it would be a minimal amount of additional effort for a large benefit).
I ask because I think the directionality is backwards here: I've been involved in packaging ecosystem security for the last few years, and I'm generally of the opinion that PyPI has been ahead of the curve on implementing mitigations. Specifically, I think widespread trusted publishing adoption would have made this attack less effective, since there would be fewer credentials to steal, but npm only implemented trusted publishing recently[1]. crates.io also implemented exactly this kind of self-scoping, self-expiring credential exchange ahead of npm.
(This isn't to malign any ecosystem; I also think people are overcorrecting in treating this like a uniquely JavaScript-shaped problem.)
[1]: https://github.blog/changelog/2025-07-31-npm-trusted-publish...
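For reference, this is roughly what trusted publishing looks like from the publisher's side: a CI job mints a short-lived OIDC token and exchanges it for publish rights, so there is no long-lived API token sitting around to steal. The workflow below is only an illustrative sketch, and assumes the project has registered this repository/workflow as a trusted publisher on PyPI:

    # .github/workflows/release.yml (excerpt, illustrative)
    jobs:
      publish:
        runs-on: ubuntu-latest
        permissions:
          id-token: write          # lets the job request an OIDC token
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: "3.12"
          - run: python -m pip install build && python -m build
          - uses: pypa/gh-action-pypi-publish@release/v1   # exchanges the OIDC token; no stored secret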
But don't brush off the "special status" of NPM here. It is unique in that, with JS being the language of both the front end and the back end, it is much easier for crooks to sneak in malware that ends up running in a visitor's browser and affecting them directly. And that makes it a uniquely attractive target.
As far as I know crates.io has everything that npm has, plus
- strictly immutable versions[1]
- fully automated, no-human-in-the-loop, perpetual yanking
- no deletions, ever
- a public, append-only index
Go modules go even further and add automatic checksum verification by default, plus a cryptographic transparency log.
Contrast this with Docker Hub, for example, where not even npm's basic properties hold.
So, it is more like:
Docker Hub ⊂ npm ⊂ crates.io ⊂ Go modules
[1] Nowadays npm has this arguably too
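To make the checksum point concrete, both ecosystems pin a hash for every dependency, so a tampered re-upload of an existing version fails verification (the hashes below are placeholders, not real values):

    # Cargo.lock (excerpt)
    [[package]]
    name = "serde"
    version = "1.0.200"
    source = "registry+https://github.com/rust-lang/crates.io-index"
    checksum = "<sha256 of the .crate file>"

    # go.sum (excerpt); checked against the sum.golang.org transparency log by default
    github.com/pkg/errors v0.9.1 h1:<hash of module contents>
    github.com/pkg/errors v0.9.1/go.mod h1:<hash of go.mod>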
1. thoroughly and fully analyze any dependency tree you plan to include
2. immediately freeze all its versions
3. never update without very good reason, or without repeating 1 and 2
In other words: simply be professional, and face the logical consequences if you aren't. If you think one package manager is "safer" than others for magic reasons, odds are you'll find out the hard way sooner or later.
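With npm specifically, step 2 is mostly a matter of configuration; a minimal sketch:

    # record exact versions instead of ^ranges from now on
    npm config set save-exact true

    # install exactly what the committed lockfile says; fail if it
    # disagrees with package.json instead of silently re-resolving
    npm ci

    # review what an update would actually change before doing it
    npm outdated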
But NPM has a much, much bigger problem on the client side that makes many of these mitigations almost moot: `npm install` will upgrade every single package you depend on to the latest version that matches your declared dependency range, and in JS land almost everyone uses lax dependency declarations.
So, an attacker who simply publishes a new patch version of a package they have gained access to will likely poison a good chunk of that package's users in a relatively short amount of time. Even if the projects using it are careful and use `npm ci` instead of `npm install` for their CI builds, it will still easily get developers to download and run the malicious new version.
Most other ecosystems don't have this unsafe-by-default behavior, so deploying a new malicious version of a previously safe package is not as major a risk as it is in NPM.
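The "lax declarations" part is the default semver ranges that end up in package.json (names and versions here are just examples):

    "dependencies": {
      "tinycolor2": "^1.6.0",
      "left-pad": "~1.3.0",
      "lodash": "4.17.21"
    }

`^1.6.0` accepts anything from 1.6.0 up to (but not including) 2.0.0, so a freshly published malicious 1.6.1 qualifies; `~1.3.0` accepts any 1.3.x; only the exact `4.17.21` pin refuses new versions outright. A committed lockfile plus `npm ci` freezes even the ranged entries.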
I'm not referring to mitigations in public repositories (which you're right, are varied, but that's a separate topic). I'm purely referring to internal mitigations in companies leveraging open-source dependencies in their software products.
These come in many forms, everything from developer education initiatives to hiring commercial SCA vendors, & many other things in between like custom CI automations. Ultimately, while many of these measures are done broadly for all ecosystems when targeting general dependency vulnerabilities (CVEs from accidental bugs), all of the supply-chain-attack motivated initiatives I've seen companies engage in are single-ecosystem. Which seems wasteful.
NPM is special in the same way as Windows is special when it comes to malware: it's a more lucrative target.
However, the issue here is that - unlike Windows - targeting NPM alone does not incur significantly less overhead than targeting software registries more broadly. The extra cost of covering a lot of popular languages instead of focusing purely on NPM isn't high, & imo focusing on NPM alone isn't a worthwhile trade-off.
They do, BUT.
Dependency versioning schemes are much more strictly adhered to within JS land than in other ecosystems. PyPI is a mishmash of PEP 440, SemVer, and packages incorrectly using one in the format of the other, & none of the three necessarily adhering to the standard they've chosen. Other ecosystems are even worse.
Also - some ecosystems (PyPI again) are committing far worse offences than lax versioning: versionless dependency declaration. Heavy reliance on requirements.txt without lockfiles, where half the time the version isn't even specified at all. Astral/Poetry are improving the situation here, but things are still bad.
Maven land is full of plugins with automated pom.xml version templating that has effectively the same effect as lax versioning, but without any strict adherence to any kind of standard like semver.
Yes, the situation in JS land isn't great, but there are much worse offenders out there.
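For comparison, the gap being described in Python land is between a file like the first one below and a fully pinned, hash-checked file such as pip-compile can generate (package names, versions and hashes are illustrative):

    # requirements.txt as commonly seen in the wild
    requests
    flask>=2

    # vs. the output of e.g. `pip-compile --generate-hashes`
    requests==2.32.3 \
        --hash=sha256:<hash of the wheel> \
        --hash=sha256:<hash of the sdist>

With hashes present, pip switches to hash-checking mode and refuses anything that doesn't match.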
You'd have to go out of your way to make your project as bad as you're describing.
- Axios and Jest have "native" options now (fetch and node --test). fetch is especially nice because it is the same API in the browser and in Node (and Deno and Bun).
- Redux is self-contained.
- React itself is sort of self-contained; it's the massive ecosystem (the thing that makes React so appealing) that starts to drive dependency bloat. I can't speak to Svelte.
I'm in the C ecosystem mostly. Is one NPM package the equivalent of one object file? Can NPM packages call internal functions for their dependencies instead of relying so heavily on bringing in so many external ones? I guess it's a problem either way, internal dependencies having bugs vs supply chain attacks like these. Doesn't bringing in so many dependencies lead to a lot of dead code and much larger codebases than necessary?
Overall, publishing a new malicious version of a package is a much smaller problem in virtually any ecosystem other than NPM; in NPM, it's almost an automatic remote code execution vulnerability for every NPM dev, and a persistent threat for many NPM packages even without this.
Please elaborate on this. I'm a long-time Java developer and have never once seen something akin to what you're describing here. Maven has support for version ranges but in practice it's very rarely used. I can expect a project to build with the exact same dependencies resolved today and in six months or a year from now.
> In short, the main differences between using npm install and npm ci are:
> The project must have an existing package-lock.json or npm-shrinkwrap.json.
> If dependencies in the package lock do not match those in package.json, npm ci will exit with an error, instead of updating the package lock.
By default npm will create a lock file and give you the exact same version every time, unless you manually initiate an upgrade. Additionally, you could even remove the package-lock.json and do a fresh npm install, and it still wouldn't upgrade the package if it already exists in your node_modules directory.
The only time that would happen is if you manually bump the version to something that is incompatible, or remove both the package-lock.json and your node_modules folder.
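An easy way to check the difference, assuming a project with a committed package-lock.json:

    # reproducible install: reads only the lockfile, never rewrites it,
    # and errors out if it disagrees with package.json
    npm ci

    # regular install: respects the lockfile for already-resolved packages,
    # and only re-resolves (and records) things you changed in package.json
    npm install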
Cargo lockfiles contain checksums, and Cargo has used these for automatic verification since time immemorial, well before Go implemented their current packaging system. In addition, Go doesn't enforce the use of go.sum files; it's just an optional recommendation: https://go.dev/wiki/Modules#should-i-commit-my-gosum-file-as... I'm not aware of any mechanism which would place Go's packaging system at the forefront of mitigation implementations as suggested here.
Indeed, crates.io implemented PyPI's trusted publishing and explicitly called out PyPI as their inspiration: https://blog.rust-lang.org/2025/07/11/crates-io-development-...
No. The closest thing to a package (in almost every other language) is an entire library.
> Can NPM packages call internal functions for their dependencies instead of relying so heavily on bringing in so many external ones?
Yes, they can. They just don't do it.
> Doesn't bringing in so many dependencies lead to a lot of dead code and much larger codebases than necessary?
There aren't many unnecessary dependencies, because the number of direct dependencies of each package is reasonable (on the order of 10). And you don't get a lot of unnecessary code, because the point of tiny libraries is to only import what you need.
Dead code is not the problem; the JS mentality evolved that way precisely to minimize dead code. The real issue is that dead code actually isn't much of a problem, but dependency management is.
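That "only import what you need" mentality is visible in the package layout itself: instead of one big utility belt, people pull single-function packages. A small illustration (both packages are real; the choice here is just for example's sake):

    // pulls in the whole utility library as one dependency
    import _ from "lodash";

    // the tiny-package style: one function, one package -- less unused code,
    // but one more entry in the dependency tree that has to be trusted
    import debounce from "lodash.debounce";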
I'm going to assume this is you running this locally to generate releases, presumably for personal projects?
If you're building your projects in CI you're not pulling in the same version without a lockfile in place.
- NPM Install without modifying the package-lock.json https://www.mikestreety.co.uk/blog/npm-install-without-modif...
- Why does "npm install" rewrite package-lock.json? https://stackoverflow.com/questions/45022048/why-does-npm-in...
- npm - How to actually use package-lock.json for installing based on locked versions? https://stackoverflow.com/questions/47480617/npm-how-to-actu...
I know this is possible with custom plugins, but I've mainly just seen it done using the Maven wrapper & user properties.
> The point is still different. In PyPI, if I put `requests` in my requirements.txt, and I run `pip install -r requirements.txt` every time I do `make build`, I will still only get one version of requests - the latest available the first time I installed it.
Only because your `make build` is a custom process that doesn't use build isolation and relies on manually invoking pip in an existing environment.
Ecosystem standard build tools (including pip itself, using `pip wheel` — which really isn't meant for distribution, but some people seem to use it anyway) default to setting up a new virtual environment to build your code (and also for each transitive dependency that requires building — to make sure that your dependencies' build tools aren't mutually incompatible, or broken by other things in the environment). They will read `requests` from `[project.dependencies]` in your pyproject.toml file and dump the latest version in that new environment, unless you use tool-specific configuration (or of course a better specification in pyproject.toml) to prevent that. And if your dependencies were only available as sdists, the build tool would even automatically, recursively attempt to build those, potentially running arbitrary code from the package in the process.
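In other words, the constraints have to live in pyproject.toml itself to survive that process; a minimal illustrative fragment (names and versions are made up):

    [project]
    name = "myapp"
    version = "0.1.0"
    dependencies = [
        "requests",            # unconstrained: resolves to whatever is latest at build/install time
        "urllib3>=2.2,<3",     # at least bounded to a known range
    ]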
I would actually say ripgrep is not especially typical here. I put a lot of energy into keeping my dependency tree slim. Many Rust applications have hundreds of dependencies.
We aren't quite at thousands of dependencies yet though.
Personally I keep dependencies to a minimum and am very picky with them, partly because of point 1, but also as a general principle. Of course, if people happily suck in entire trees without supervision just to print ANSI colors on the terminal or, as in this case, use fancy aliases for colors, then bad things are bound to happen. (TBF tinycolor has one single devDependency, shim-deno-test, which only requires TypeScript. That should be manageable.)
I'll grant you that the JS ecosystem is special, partly because the business has traditionally reinforced the notion of it being accessory, superficial and not "serious" development. Well, that's just naivety; it is as critical a component as any other. Ideally you should even have a security department vetting the dependencies for you.
Funny that a text editor is being presented here as some kind of behemoth, not representative of typical software written in Rust. I guess typical would be the 1234th JSON serialization library.
I bet it's hundreds.
Jest alone adds 300 packages.
Consequently I doubt that you in fact "thoroughly and fully" analyzed all your dependencies.
Unless what you're shipping is something other than a feature-rich app, what you proposed seems entirely unrealistic.
Not sure how he actually got the number; this was just a frustrated Slack message like 4 years ago
A sibling comment mentions we could have been using Cargo workspaces wrong... So, maybe?
But you definitely aren't finding hundreds of versions of `regex` in the same dependency tree.
Thank you for your honesty, and as you and I both said, you put a lot of energy into keeping the dependency tree slim. That is not as common as one would like to believe.
Hundreds? Yes, absolutely. That's common.
In retrospect, I should have kept a list of these projects. I probably haven't deleted the directories, though, so I could still put together a list of at least some of them.
(ETA: Also, you may not need much for a mocks library because JS' Proxy meta-object isn't that hard to work with directly.)
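For what it's worth, a bare-bones Proxy-based mock really is only a few lines. This is a generic sketch, not any particular library's API:

    // every property access yields a function that simply records its calls
    function makeMock() {
      const calls = {};
      const mock = new Proxy({}, {
        get(_target, prop) {
          return (...args) => {
            (calls[prop] ||= []).push(args);
          };
        },
      });
      return { mock, calls };
    }

    const { mock, calls } = makeMock();
    mock.sendEmail("alice@example.com");
    console.log(calls.sendEmail); // [ [ 'alice@example.com' ] ]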
As well, both Dependabot and Renovate run in isolated environments without secrets or privileges, need to be manually approved, and can enforce minimum publication ages before recommending a package update, to prevent basic supply chain attacks or lockfile corruption from a pinned package version being de-published (up to a 3-day window on NPM).
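For example, on the Renovate side that "minimum publication age" idea is a single setting; a renovate.json fragment along these lines (illustrative, not a complete config):

    {
      "packageRules": [
        {
          "matchDatasources": ["npm"],
          "minimumReleaseAge": "3 days"
        }
      ]
    }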
The way you describe it working doesn't even pass a basic "common sense" check. Just think about what you're saying: despite having a `package-lock.json`, every developer who works on a project will get every dependency updated every time they clone it and get to work?
The entire point of the lockfile is that installations respect it, to keep environments in agreement. The only difference with `clean install` is that it removes `node_modules` (no potential cache poisoning) and exits non-zero if there is a conflict between `package.json` and `package-lock.json`.
`install` will only update the lockfile where the lockfile conflicts with `package.json`, to allow you to make changes to that file manually (instead of via `npm` commands).
2&3. NPM 5 had a bug almost a decade ago. They literally link to it on both of those pages. Here[^1] is a developer repeating how I've said it's supposed to work.
It would have taken you less work to just try this in a terminal than to search for those "citations".
[^1]: https://github.com/npm/npm/issues/17979#issuecomment-3327012...