Socket:
- Sep 15 (First post on breach): https://socket.dev/blog/tinycolor-supply-chain-attack-affect...
- Sep 16: https://socket.dev/blog/ongoing-supply-chain-attack-targets-...
StepSecurity - https://www.stepsecurity.io/blog/ctrl-tinycolor-and-40-npm-p...
Aikido - https://www.aikido.dev/blog/s1ngularity-nx-attackers-strike-...
Ox - https://www.ox.security/blog/npm-2-0-hack-40-npm-packages-hi...
Safety - https://www.getsafety.com/blog-posts/shai-hulud-npm-attack
Phoenix - https://phoenix.security/npm-tinycolor-compromise/
Semgrep - https://semgrep.dev/blog/2025/security-advisory-npm-packages...
These attacks may just be the final push I needed to take server rendering (without js) more seriously. The HTMX folks convinced me that I can get REALLY far without any JavaScript, and my apps will probably be faster and less janky anyway.
There is a difference, but it's not an order of magnitude and neither is a true island.
Granted, deciding not to use JS on the server is reasonable in the context of this article, but for the client htmx is as much a js lib with (dev) dependencies as any other.
https://github.com/bigskysoftware/htmx/blob/master/package.j...
Unless this lack of scrutiny is exclusive to the JavaScript ecosystem, this attack could just as well have happened in Rust or Golang.
Supply chain attacks happen at every layer where there is package management or a vector onto the machine or into the code.
What NPM should do if they really give a shit is start requiring 2FA to publish. Require a scan prior to publish. Sign packages with hardware keys and verify that all installed packages match their signatures. Semver matching isn’t enough. CRC checks aren’t enough. This has to be baked into packages and package management.
Languages without package managers have a lot more friction to pull in dependencies. You usually rely on the operating system and its package-manager-humans to provide your dependencies; or on primitive OSes like Windows or macOS, you package the dependencies with your application, which involves integrating them into your build and distribution systems. Both of those involve a lot of manual, human effort, which reduces the total number of dependencies (attack points), and makes supply-chain issues like this more likely to be noticed.
The language package managers make it trivial to pull in dozens or hundreds of dependencies, straight from some random source code repository. Your dependencies can add their own dependencies, without you ever knowing. When you have dozens or hundreds of unvetted dependencies, it becomes trivial for an attacker to inject code they control into just one of those dependencies, and then it's game over for every project that includes that one dependency anywhere in their chain.
It's not impossible to do that in the OS-provided or self-managed dependency scenario, but it's much more difficult and will have a much narrower impact.
I see this odd take a lot - the automatic narrowing of the scope of an attack to the single ecosystem it occurred in most recently, without any real technical argument for doing so.
What's especially concerning is that I see this take in the security industry: mitigations are put in place to target e.g. NPM, but are then completely absent for PyPI or Crates. It's bizarre not only because it leaves those ecosystems wide open, but also because the mitigation measures would be very similar (so it would be a minimal amount of additional effort for a large benefit).
> Server-side-rendering without JavaScript is just back to the stuff Perl and PHP give you.
As well as Ruby, Python, Go, etc.
While technically true, I have yet to see Go projects importing thousands of dependencies. They may certainly exist, but are absolutely not the rule. JS projects, however...
We have to realize that while supply chain attacks can happen everywhere, the best mitigations are development culture and a solid standard library - looking at you, cargo.
I am a JS developer by trade and I think that this ecosystem is doomed. I absolutely avoid even installing node on my private machine.
I just went to crates.io and picked a random newly updated crate, which happened to be pixelfix, which fixes transparent pixels in pngs.
It has six dependencies and hundreds of transitive dependencies, many of which appear to be small and highly specific a la left-pad.
https://crates.io/crates/pixelfix/0.1.1/dependencies
Maybe this package isn't representative, but it feels pretty identical to the JS ecosystem.
I ask because I think the directionality is backwards here: I’ve been involved in packaging ecosystem security for the last few years, and I’m generally of the opinion that PyPI has been ahead of the curve on implementing mitigations. Specifically, I think widespread trusted publishing adoption would have made this attack less effective since there would be fewer credentials to steal, but npm only implemented trusted publishing recently[1]. Crates also implemented exactly this kind of self-scoping, self-expiring credential exchange ahead of npm.
(This isn’t to malign any ecosystem; I think people are also overcorrecting in treating this like a uniquely JavaScript-shaped problem.)
[1]: https://github.blog/changelog/2025-07-31-npm-trusted-publish...
But don't brush off the "special status" of NPM here. It is unique in that, JS being the language of both front-end and back-end, it is much easier for the crooks to sneak in malware that will end up running in a visitor's browser and affect them directly. And that makes it a uniquely more attractive target.
As far as I know crates.io has everything that npm has, plus
- strictly immutable versions[1]
- fully automated, no-human-in-the-loop, perpetual yanking
- no deletions ever
- a public and append only index
Go modules go even further and add automatic checksum verification by default and a cryptographic transparency log.
Contrast this with docker hub for example, where not even npm's basic properties hold.
So, it is more like
docker hub ⊂ npm ⊂ crates.io ⊂ Go modules
[1] Nowadays npm arguably has this too
At some point people need to realize this and go back to writing vanilla JS, which will be very hard.
The rust ecosystem is also the same. Too much dependence on packages.
An example of doing it right is golang.
Just defending Rust.
> 5 remaining dependencies have lots of dependencies of their own.
Mostly well-known crates like rayon, crossbeam, tracing, etc.
NPM has also been sending out nag emails for the last 2+ years about 2FA. If anything, that constituted an assist in the attack on the Junon account that we saw a couple weeks ago.
That is, if some attacker creates some trivial but convenient dummy package and 2 years later half the package hub somehow depends on it, the attacker will just use their legit credentials to pwn everyone and their dog. This is not even about stealing credentials. It’s a cultural issue of blind trust: a blank check without even an expiry date.
Assuming 'go mod tidy' is periodically run go.mod should contain all dependencies (which in this case seems to be shy of 300, still a lot).
1. thoroughly and fully analyze any dependency tree you plan to include
2. immediately freeze all its versions
3. never update without very good reason, or without repeating 1 and 2
in other words: simply be professional, and face the logical consequences if you aren't. if you think one package manager is "safer" than others because of magic reasons, odds are you'll find out the hard way sooner or later.
However, processes and practices around NodeJS and npm are in dire need of a security overhaul. leftpad is a cultural problem that needs to be addressed. To start with, snippets don't need to be on npm.
That's really the core issue. Developer-signed packages (npm's current attack model is "Eve doing a man-in-the-middle attack between npm and you," which is not exactly the most common threat here) and a transparent key registry should be minimal kit for any package manager, even though all, or at least practically all, the ecosystems are bereft of that. Hardening API surfaces with additional MFA isn't enough; you have to divorce "API authentication" from "cryptographic authentication" so that compromising one doesn't affect the other.
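To make the "cryptographic authentication" half concrete, here is a minimal sketch of what client-side verification of a developer-signed tarball could look like in Node. Nothing here is an existing npm feature; the file names, the Ed25519 key, and the idea of a registry distributing detached signatures are all assumptions.

```js
// Hypothetical sketch: verify a developer's detached signature over a package
// tarball before extracting it. Assumes an Ed25519 public key in PEM form,
// obtained out of band (e.g. from a transparent key registry); none of this
// is part of npm today.
import { verify } from "node:crypto";
import { readFileSync } from "node:fs";

function verifyTarball(tarballPath, signaturePath, publisherPublicKeyPem) {
  const tarball = readFileSync(tarballPath);      // the exact bytes to be installed
  const signature = readFileSync(signaturePath);  // detached signature from the publisher

  // With Ed25519, crypto.verify hashes internally, so the algorithm is null.
  // The signing key stays with the developer (ideally on hardware), so stealing
  // an npm API token alone would not let an attacker forge this.
  const ok = verify(null, tarball, publisherPublicKeyPem, signature);
  if (!ok) {
    throw new Error(`signature verification failed for ${tarballPath}`);
  }
}

// Illustrative file names only.
verifyTarball(
  "left-pad-1.3.0.tgz",
  "left-pad-1.3.0.tgz.sig",
  readFileSync("publisher.pem", "utf8")
);
```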
If you can sneak malware into a JavaScript application that runs in millions of browsers, that's a lot more useful than getting some number of servers running a module as part of a script, whose environment is a bit unknown.
Javascript really could do with a standard library.
The isolated context is gone and a single instance of code talking to an individual client has access to your entire database. It’s a completely different threat model.
What I'm wondering is whether it would help the ecosystem if you were able to load raw snippets into your codebase and source control, as opposed to having them as dependencies.
So e.g. shadcn component pasting approach.
For things like leftPad, cli colors and others you would just load raw TypeScript code from a source, and there you would immediately notice something malicious, or catch it during code review.
You would reserve actual npm packages for frameworks and larger packages where this approach doesn't make sense, and expect higher scrutiny and multiple approvals of releases there.
<https://docs.npmjs.com/configuring-two-factor-authentication...>
> You can't expect people to re-write everything over and over.
That’s the excuse everyone is giving, then you see thousands of terminal libraries and calendar pickers.
Any Rust project I have ever compiled pulled in over 1000 dependencies. Recently it was Zed with its >2000 dependencies.
While npm is a huge and easy target, the general problem exists for all package repositories. Hopefully a supply chain attack mitigation strategy can be better than hoping attackers target package repositories you aren't using.
While there's a culture prevalent in Javascript development to ignore the costs of piling abstractions on top of abstractions, you don't have to buy into it. Probably the easiest thing to do is count transitive dependencies.
This works for deps of deps as well, so anything in your node_modules has access to this hook.
It's a terrible idea and something that ought to be removed or replaced by something much safer.
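For anyone unfamiliar with the hook being discussed, below is a rough, deliberately inert sketch of why `postinstall` is such a large attack surface. The file name is made up; the only real mitigation shown is npm's `--ignore-scripts` flag / `ignore-scripts` config.

```js
// evil-postinstall.js (hypothetical name), wired up via a package's
//   "scripts": { "postinstall": "node evil-postinstall.js" }
// npm runs this automatically on `npm install` -- for direct AND transitive
// dependencies -- with the full privileges of the developer or CI job.
import { existsSync, readFileSync } from "node:fs";
import { homedir } from "node:os";

// Anything the current user can read is in reach: ~/.npmrc tokens, SSH keys,
// cloud credentials, environment variables...
const npmrcPath = `${homedir()}/.npmrc`;
const npmrc = existsSync(npmrcPath) ? readFileSync(npmrcPath, "utf8") : "";
const secretLookingEnv = Object.keys(process.env)
  .filter((name) => /TOKEN|SECRET|KEY|PASSWORD/i.test(name));

// A real attack exfiltrates these; this sketch only counts them.
console.log(`postinstall could read ${npmrc.length} bytes of .npmrc`);
console.log(`and ${secretLookingEnv.length} secret-looking env vars`);

// Mitigation: `npm install --ignore-scripts`, or `ignore-scripts=true` in
// .npmrc, skips lifecycle scripts entirely (at the cost of breaking packages
// that genuinely need a build step on install).
```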
You all really need to stop using this term when it comes to OSS. "Supply chain" implies a relationship; none of these companies or developers have a relationship with the creators other than including their packages.
Call it something like "free code attacks" or "hobbyist code attacks."
This said, fewer potential vendors supplying packages 'may' reduce exposure, but doesn't remove it.
Either way, not running the bleeding edge packages unless it's a known security fix seems like a good idea.
I'm not quite sure what that would mean, but if it solves the problem for browsers, why not for server?
“code I somehow took a dependency on when copying bits of someone’s package.json file”
“code which showed up in my lock file and I still don’t know how it got there”
Lists of things that won't happen. Companies are filled with node_modules importers these days.
Even worse, now you have to check for security flaws in that JS that's been written by node_modules importers.
That, or someone could write a standard library for JS?
As another subthread mentioned (https://news.ycombinator.com/item?id=45261303), there is something which can be done: auditing of new packages or versions, by a third party, before they're used. Even doing a simple diff between the previous version and the current version before running anything within the package would already help.
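As a rough illustration of that diff-before-you-adopt idea, something along these lines works with nothing but npm and standard tools. The package name and versions below are placeholders, and `npm pack` here just downloads the published tarball without installing it or running its scripts.

```js
// Sketch: download two published versions of a package and diff what would
// actually land in node_modules, before upgrading.
import { execSync } from "node:child_process";
import { mkdirSync, mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

function diffVersions(pkg, oldVersion, newVersion) {
  const work = mkdtempSync(join(tmpdir(), "pkg-audit-"));
  for (const version of [oldVersion, newVersion]) {
    const dir = join(work, version);
    mkdirSync(dir, { recursive: true });
    execSync(`npm pack ${pkg}@${version}`, { cwd: dir, stdio: "ignore" }); // fetch tarball only
    execSync(`tar -xzf *.tgz`, { cwd: dir });                              // unpacks into ./package
  }
  // `diff` exits 1 when files differ, so swallow that with `|| true`.
  return execSync(
    `diff -ru ${join(work, oldVersion, "package")} ${join(work, newVersion, "package")} || true`
  ).toString();
}

// Read this (or feed it to reviewers/tooling) before bumping the dependency.
console.log(diffVersions("left-pad", "1.2.0", "1.3.0"));
```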
I would bet that you'll find a third party `leftpad` implementation in org.apache.commons or in Spring or in some other collection of utils in Java. The difference isn't the need for 3rd party software to fix gaps in the standard library - it's the preference for hundreds of small dependencies instead of one or two larger ones.
And even on client side, the sandboxing helps isolate any malicious webpage, even ones that are accidentally malicious, from other webpages and from the rest of your machine.
If malicious actors could get gmail.com to run their malicious JS on the client side through this type of supply-chain attack, they could very very easily steal all of your emails. The browser sandbox doesn't offer any protection from 1st party javascript.
I’d never worked in any other ecosystem, and I wish I’d realized that advice was specific to JS culture.
How does 2FA prevent malware? Anyone can get a phone number to receive a text or add an authenticator to their phone.
I would argue a subscription model for 1 EUR/month would be better. The money received could pay for certification of packages, and the credit card on file can leverage the security of the payments system.
But NPM has a much, much bigger problem on the client side that makes many of these mitigations almost moot. And that is that `npm install` will upgrade every single package you depend on to its latest version that matches your declared dependency, and in JS land almost everyone uses lax dependency declarations.
So, an attacker who simply publishes a new patch version of a package they have gained access to will likely poison a good chunk of all of the users of that package in a relatively short amount of time. Even if the projects using this are careful and use `npm ci` instead of `npm install` for their CI builds, it will still easily get developers to download and run the malicious new version.
Most other ecosystems don't have this unsafe-by-default behavior, so deploying a new malicious version of a previously safe package is not such a major risk as it is in NPM.
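A small sketch of what guarding against that default could look like on the consumer side: flag any range in your own package.json that is not an exact pin. The regex and the choice of sections are simplifications, not a complete semver parser.

```js
// Sketch: warn about loose semver ranges in package.json. An exact pin like
// "4.17.21" only changes when you change it; "^4.17.21" or "~4.17.21" lets a
// freshly published (possibly malicious) patch or minor release in on the
// next install that doesn't go through the lockfile.
import { readFileSync } from "node:fs";

const manifest = JSON.parse(readFileSync("package.json", "utf8"));
const sections = ["dependencies", "devDependencies", "optionalDependencies"];

for (const section of sections) {
  for (const [name, range] of Object.entries(manifest[section] ?? {})) {
    // Exact versions look like 1.2.3, optionally with a pre-release tag.
    const exact = /^\d+\.\d+\.\d+(-[\w.]+)?$/.test(range);
    if (!exact) {
      console.warn(`${section}: ${name} uses loose range "${range}"`);
    }
  }
}
```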
I'm not referring to mitigations in public repositories (which you're right, are varied, but that's a separate topic). I'm purely referring to internal mitigations in companies leveraging open-source dependencies in their software products.
These come in many forms, everything from developer education initiatives to hiring commercial SCA vendors, & many other things in between like custom CI automations. Ultimately, while many of these measures are done broadly for all ecosystems when targeting general dependency vulnerabilities (CVEs from accidental bugs), all of the supply-chain-attack motivated initiatives I've seen companies engage in are single-ecosystem. Which seems wasteful.
NPM is special in the same way as Windows is special when it comes to malware: it's a more lucrative target.
However, the issue here is that - unlike Windows - targeting NPM alone does not incur significantly less overhead than targeting software registries more broadly. The cost difference between focusing purely on NPM & covering a lot of popular languages isn't high, & imo the narrow focus isn't a worthwhile trade-off.
They do, BUT.
Dependency versioning schemes are much more strictly adhered to within JS land than in other ecosystems. PyPI is a mishmash of PEP 440, SemVer, and packages incorrectly using one in the format of the other, with none of the three necessarily adhering to the standard they've chosen. Other ecosystems are even worse.
Also - some ecosystems (PyPI again) are committing far worse offences than lax versioning: versionless dependency declaration. Heavy reliance on requirements.txt without lockfiles, where half the time the version isn't even specified at all. Astral/Poetry are improving the situation here but things are still bad.
Maven land is full of plugins with automated pom.xml version templating that has effectively the same effect as lax versioning, but without any strict adherence to any kind of standard like semver.
Yes, the situation in JS land isn't great, but there are much worse offenders out there.
You'd have to go out of your way to make your project as bad as you're describing.
- Axios and Jest have "native" options now (fetch and node --test); see the sketch after this list. fetch is especially nice because it is the same API in the browser and in Node (and Deno and Bun).
- Redux is self-contained.
- React itself is sort of self-contained, it's the massive ecosystem that makes React the most appealing that starts to drive dependency bloat. I can't speak to Svelte.
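To the first bullet's point, a minimal sketch of what the built-in replacements look like. The file name and the local server are arbitrary choices; run it with `node --test`.

```js
// example.test.mjs: Node's built-in test runner plus the global fetch API,
// with no Jest or Axios involved.
import { test } from "node:test";
import assert from "node:assert/strict";
import { createServer } from "node:http";

test("fetch talks to a local server", async () => {
  // Stand up a throwaway HTTP server so the test is self-contained.
  const server = createServer((req, res) => {
    res.setHeader("content-type", "application/json");
    res.end(JSON.stringify({ ok: true }));
  });
  await new Promise((resolve) => server.listen(0, resolve));
  const { port } = server.address();

  const res = await fetch(`http://127.0.0.1:${port}/health`);
  assert.equal(res.status, 200);
  assert.deepEqual(await res.json(), { ok: true });

  server.close();
});
```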
In other languages, you'd have a few dependencies on larger libraries providing related functionality, where the Javascript culture is to use a bunch of tiny libraries to give the same functionality.
I'm in the C ecosystem mostly. Is one NPM package the equivalent of one object file? Can NPM packages call internal functions for their dependencies instead of relying so heavily on bringing in so many external ones? I guess it's a problem either way, internal dependencies having bugs vs supply chain attacks like these. Doesn't bringing in so many dependencies lead to a lot of dead code and much larger codebases than necessary?
> However, processes and practices around NodeJS and npm are in dire need of a security overhaul. leftpad is a cultural problem that needs to be addressed. To start with, snippets don't need to be on npm.
Traditional JS is the reason we have all of these problems around NodeJS and npm. It's a lot better than it was, but a lot of JS tooling came up in the time when ES5 and older were the standard, and to call those versions of the language lacking is... charitable. There were tons of things that you simply couldn't count on the language or its standard library to do right, so a culture of hacks and bandaids grew up around it. Browser disparities didn't help either.
Then people said, "Well, why don't we all share these hacks and bandaids so that we don't have to constantly reinvent the wheel?", and that's sort of how npm got its start. And of course, it was the freewheeling days of the late 00s/early 10s, when you were supposed to "move fast and break things" as a developer, so you didn't have time to really check if any of this was secure or made any sense. The business side wanted the feature and they wanted it now.
The ultimate solution would be to stop slapping bandaids and hacks on the JS ecosystem by making a better language but no one's got the resolve to do that.
Overall, publishing a new malicious version of a package is a much lesser problem in virtually any ecosystem other than NPM; in NPM, it's almost an automatic remote code execution vulnerability for every NPM dev, and a persistent threat for many NPM packages even without this.
Please elaborate on this. I'm a long-time Java developer and have never once seen something akin to what you're describing here. Maven has support for version ranges but in practice it's very rarely used. I can expect a project to build with the exact same dependencies resolved today and in six months or a year from now.
> In short, the main differences between using npm install and npm ci are:
> The project must have an existing package-lock.json or npm-shrinkwrap.json.
> If dependencies in the package lock do not match those in package.json, npm ci will exit with an error, instead of updating the package lock.
Edit: Ghostty is a good counter-example that is open source. https://github.com/ghostty-org/ghostty/tree/main/pkg
Call me crazy but I think agentic coding tools may soon make it practical for people to not be bogged down by the tedium of implementing the same basic crap over and over again, without having to resort to third party dependencies.
I have a little pavucontrol replacement I'm walking Claude Code through. It wanted to use pulsectl but, to see what it could do, I told it no. Write your own bindings to libpulse instead. A few minutes later it had that working. It can definitely write crap like leftpad.
By default npm will create a lock file and give you the exact same version every time unless you manually initiate an upgrade. Additionally you could even remove the package-lock.json and do a new npm install and it still wouldn't upgrade the package if it already exists in your node_modules directory.
The only time this would be true is if you manually bump the version to something that is incompatible, or remove both the package-lock.json and your node_modules folder.
https://github.blog/changelog/2022-11-01-high-impact-package...
Cargo lockfiles contain checksums and Cargo has used these for automatic verification since time immemorial, well before Go implemented their current packaging system. In addition, Go doesn't enforce the use of go.sum files, it's just an optional recommendation: https://go.dev/wiki/Modules#should-i-commit-my-gosum-file-as... I'm not aware of any mechanism which would place Go's packaging system at the forefront of mitigation implementations as suggested here.
Indeed, crates.io implemented PyPI's trusted publishing and explicitly called out PyPI as their inspiration: https://blog.rust-lang.org/2025/07/11/crates-io-development-...
No. The closest thing to a package (in almost every language) is an entire library.
> Can NPM packages call internal functions for their dependencies instead of relying so heavily on bringing in so many external ones?
Yes, they can. They just don't do it.
> Doesn't bringing in so many dependencies lead to a lot of dead code and much larger codebases than necessary?
There aren't many unnecessary dependencies, because the number of direct dependencies of each package is reasonable (on the order of 10). And you don't get a lot of unnecessary code because the point of tiny libraries is to only import what you need.
Dead code is not the problem, instead the JS mentality evolved that way to minimize dead code. The problem is that dead code is actually not that much of an issue, but dependency management is.
- npm should require 2FA and disallow tokens for publishing. This is an option, but it should be a requirement.
- npm should require using a trusted publisher and provenance for packages with over 100k downloads a week and their dependencies.
- Github should require a 2FA step for automated publishing
- npm should add a cool down period where it won't install brand new packages without a flag (see the sketch after this list)
- npm should stop running postinstall scripts.
- npm should have an option to not install packages without provenance.
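On the cool down point specifically, a rough sketch of what such a check could look like done client-side against the public registry metadata. The 7-day threshold and the standalone-script form are assumptions, not an existing npm flag.

```js
// Sketch of a cool-down check: refuse versions published less than N days ago,
// based on the registry's per-version publish timestamps.
const MIN_AGE_DAYS = 7;

async function assertCooledDown(pkg, version) {
  const res = await fetch(`https://registry.npmjs.org/${pkg}`);
  if (!res.ok) throw new Error(`registry lookup failed for ${pkg}`);
  const meta = await res.json();

  // The registry metadata maps each version to its publication time.
  const publishedAt = meta.time?.[version];
  if (!publishedAt) throw new Error(`${pkg}@${version} not found`);

  const ageDays = (Date.now() - Date.parse(publishedAt)) / 86_400_000;
  if (ageDays < MIN_AGE_DAYS) {
    throw new Error(
      `${pkg}@${version} is only ${ageDays.toFixed(1)} days old; refusing to install`
    );
  }
}

await assertCooledDown("left-pad", "1.3.0");
```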
In a hypothetical scenario where npm supports signed packages, let's say the user is in the middle of installing the latest signed left-pad. Suddenly, npm prints a warning that says the identity used to sign the package is not in the user's local database of trusted identities.
What exactly is the user supposed to do in response to this warning?
I'm going to assume this is you running this locally to generate releases, presumably for personal projects?
If you're building your projects in CI you're not pulling in the same version without a lockfile in place.
- NPM Install without modifying the package-lock.json https://www.mikestreety.co.uk/blog/npm-install-without-modif...
- Why does "npm install" rewrite package-lock.json? https://stackoverflow.com/questions/45022048/why-does-npm-in...
- npm - How to actually use package-lock.json for installing based on locked versions? https://stackoverflow.com/questions/47480617/npm-how-to-actu...
I know this is possible with custom plugins but I've mainly just seen it using maven wrapper & user properties.
JS library authors in general could decide to write their own (or carefully copy-paste from libraries) utility functions for things rather than depend on a huge mess of packages. This isn't always a great path; obviously reinventing the wheel can come with its own problems.
So yes, I'd agree that the ecosystem encourages JS/TS developers to make use of the existing set of libraries and packages with deep dependency trees, but no one is holding a gun to anyone's head. There are other ways to do it.
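For the trivial end of the spectrum the trade-off is easy to see: a hand-rolled left-pad is a few lines you review once, versus a published package you implicitly re-trust on every release, and in this particular case the platform has since absorbed the functionality anyway.

```js
// A hand-rolled left-pad: a few lines you own and review once.
function leftPad(value, length, padChar = " ") {
  const str = String(value);
  return str.length >= length
    ? str
    : padChar.repeat(length - str.length) + str;
}

console.log(leftPad(42, 5, "0"));    // "00042"
console.log("42".padStart(5, "0"));  // same result with the built-in String method
```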
Sure, there will be a step/stage that will require access to NPM publish credentials to publish to NPM. But why does this stage need to execute any code except a very small footprint of vetted code? It should just pick up a packaged, signed binary and move it to NPM.
The compilation/packaging step on the other hand doesn't need publishing rights to NPM. Ideally, it should only get a filesystem with the sources, dependencies and a few shared libraries and /sys or /proc dependencies it may need to function. Why does some dependency downloading need access to your entire filesystem? Maybe it needs some allowed secrets, but eh.
It's certainly a lot of change into existing pipelines and ideas, and it's certainly possible to poke holes into there if you want things to be easy. But it'd raise the bar quite a bit.
> The point is still different. In PyPI, if I put `requests` in my requirements.txt, and I run `pip install -r requirements.txt` every time I do `make build`, I will still only get one version of requests - the latest available the first time I installed it.
Only because your `make build` is a custom process that doesn't use build isolation and relies on manually invoking pip in an existing environment.
Ecosystem standard build tools (including pip itself, using `pip wheel` — which really isn't meant for distribution, but some people seem to use it anyway) default to setting up a new virtual environment to build your code (and also for each transitive dependency that requires building — to make sure that your dependencies' build tools aren't mutually incompatible, or broken by other things in the environment). They will read `requests` from `[project.dependencies]` in your pyproject.toml file and dump the latest version in that new environment, unless you use tool-specific configuration (or of course a better specification in pyproject.toml) to prevent that. And if your dependencies were only available as sdists, the build tool would even automatically, recursively attempt to build those, potentially running arbitrary code from the package in the process.
Honestly I wish Python worked this way too. The reason people use Requests so much is because urllib is so painful. Changes to a first-party standard library have to be very conservative, which ends up leaving stuff in place that nobody wants to use any more because they have higher standards now. It'd be better to keep the standard library to a minimum needed more or less just to make the REPL work, and have all of that be "builtin" the way that `sys` is; then have the rest available from the developers (including a default "full-fat" distribution), but in a few separately-obtainable pieces and independently versioned from the interpreter.
And possibly maintained by a third party like Boost, yeah. I don't know how important that is or isn't.
JS apps, despite the HN narrative, have a much stronger incentive to reduce bundle/“executable” size compared to most other software, because the expectation is for your web app to “download” nearly instantly for every new user. (Compare to nearly any other type of software, client or server, where that’s not an expectation.)
JS comes with exactly zero tools out of the box to make that happen. You have to go out of your way to find a modern toolchain that will properly strip out dead code and create optimized scripts that are as small as possible.
This means the “massive JS library which includes everything” also depends on having a strong toolchain for compiling code. And while many professional web projects have that, the basic script tag approach is still the default and easiest way to get started… and pulling in a massive std library through that is just a bad idea.
This baseline — the web just simply having different requirements around runtime execution — is part of where the culture comes from.
And because the web browser traditionally didn’t include enough of a standard library for making apps, there’s a strong culture of making libraries and frameworks to solve that. Compare to native apps, where there’s always an official sdk or similar for building apps, and libraries like boost are more about specific “lower level” language features (algorithms, concurrency, data structures, etc) and less about building different types of software like full-blown interactive applications and backend services.
There are attempts to solve this (Deno is probably the best example), but buy-in at a professional level requires a huge commitment to migrate and change things, so there’s a lot of momentum working against projects like that.
Have fun, seems like a misguided reason to do that though.
A. A package hosted somewhere using a language was compromised!
B. I am not going to program in the language anymore!
I don't see how B follows A.
I would actually say ripgrep is not especially typical here. I put a lot of energy into keeping my dependency tree slim. Many Rust applications have hundreds of dependencies.
We aren't quite at thousands of dependencies yet though.
E.g. there is a built-in function that takes elements pairwise from a list! That level of minutia being included feels nuts having come from other languages.
personally i keep dependencies at a minimum and am very picky with them, partly because of nr1, but as a general principle. of course if people happily suck in entire trees without supervision just to print ansi colors on the terminal or, as in this case, use fancy aliases for colors then bad things are bound to happen. (tbf tinycolor has one single devDependency, shim-deno-test, which only requires typescript. that should be manageable)
i'll grant you that the js ecosystem is special, partly because the business has traditionally reinforced the notion of it being accessory, superficial and not "serious" development. well, that's just naivety, it is as critical a component as any other. ideally you should even have a security department vetting the dependencies for you.
Funny that a text editor is being presented here as some kind of behemoth, not representative of typical software written in Rust. I guess typical would be the 1234th JSON serialization library.
So yes, the sprawling deps tree and culture is the problem. We would need to start reducing dependencies of the basic tools first. Otherwise it seems rather pointless to bother app developers with reducing dependencies.
I bet it's hundreds.
Jest alone adds 300 packages.
Consequently I doubt that you in fact "thoroughly and fully" analyzed all your dependencies.
Unless what you're shipping isn't a feature rich app, what you proposed seems entirely unrealistic.
Many who claim to fully analyze all dependencies are probably lying. I did not see anyone in the comments sharing their actual dependency count.
Even if you depend only on Jest - Meta's popular test runner - you add 300 packages.
Unless your setup is truly minimalistic, you probably have hundreds of dependencies already, which makes obsessing over some more rather pointless.
Not sure how he actually got the number; this was just a frustrated Slack message like 4 years ago
A sibling comment mentions we could have been using Cargo workspaces wrong... So, maybe?
But you definitely aren't finding hundreds of versions of `regex` in the same dependency tree.
Thank you for your honesty, and like you and I said, you put a lot of energy into keeping the dependency tree slim. This is not as common as one would like to believe.
Hundreds? Yes, absolutely. That's common.
In retrospect, I should have kept a list of these projects. I probably have not deleted these directories though, so I probably still could make a list of some of these projects.
(ETA: Also, you may not need much for a mocks library because JS' Proxy meta-object isn't that hard to work with directly.)
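A rough sketch of that point: a call-recording stub built directly on Proxy, with no mocking library. The `createMock` name and the shape of the recorded calls are just illustrative choices.

```js
// Minimal call-recording mock using the built-in Proxy meta-object.
function createMock() {
  const calls = {};
  return new Proxy({}, {
    get(_target, prop) {
      if (prop === "calls") return calls;
      // Every other property access yields a stub that records its arguments.
      return (...args) => {
        (calls[prop] ??= []).push(args);
        return undefined;
      };
    },
  });
}

// Usage: pass the mock wherever a collaborator is expected, then inspect calls.
const logger = createMock();
logger.info("starting", { id: 1 });
logger.error("boom");
console.log(logger.calls); // { info: [["starting", { id: 1 }]], error: [["boom"]] }
```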
Now imagine you're another developer who needs to install a specific NPM package published by someone overseas who has zero vouches by anyone in your web of trust. What exactly are you going to do?
In reality, forcing package publishers to sign packages would achieve absolutely nothing. 99.99% of package consumers would not even bother to begin building a web of trust, and would just blindly trust any signature.
The remaining 0.01 % who actually try are either going to fail to gain any meaningful access to a WoT, or they're going to learn that most identities of package publishers are completely unreachable via any WoT whatsoever.
As well, both Dependabot and Renovate run in isolated environments without secrets or privileges, need to be manually approved, and have minimum publication ages before recommending a package update, to prevent basic supply chain attacks or lockfile corruption from a pinned package version being de-published (up to a 3-day window on NPM).
The way you describe it working doesn't even pass a basic "common sense" check. Just think about what you're saying: despite having a `package-lock.json`, every developer who works on a project will get every dependency updated every time they clone it and get to work?
The entire point of the lockfile is that installations respect it to keep environments consistent. The only difference with `clean install` is that it removes `node_modules` (no potential cache poisoning) and exits non-zero if there is a conflict between `package.json` and `package-lock.json`.
`install` will only update the lockfile where the lockfile conflicts with the `package.json` to allow you to make changes to that file manually (instead of via `npm` commands).
2&3. NPM 5 had a bug almost a decade ago. They literally link to it in both of those pages. Here[^1] is a developer repeating how I've said it's supposed to work.
It would have taken you less work to just try this in a terminal than search for those "citations".
[^1]: https://github.com/npm/npm/issues/17979#issuecomment-3327012...