1208 points jamesberthoty | 63 comments
codemonkey-zeta ◴[] No.45261026[source]
I'm coming to the unfortunate realization that supply chain attacks like this are simply baked into the modern JavaScript ecosystem. Vendoring can mitigate your immediate exposure, but does not solve this problem.

These attacks may just be the final push I needed to take server rendering (without js) more seriously. The HTMX folks convinced me that I can get REALLY far without any JavaScript, and my apps will probably be faster and less janky anyway.

replies(18): >>45261086 #>>45261121 #>>45261140 #>>45261165 #>>45261220 #>>45261265 #>>45261285 #>>45261457 #>>45261571 #>>45261702 #>>45261970 #>>45262601 #>>45262619 #>>45262851 #>>45267210 #>>45268405 #>>45269073 #>>45273081 #
1. lucideer ◴[] No.45261265[source]
> I'm coming to the unfortunate realization that supply chain attacks like this are simply baked into the modern JavaScript ecosystem.

I see this odd take a lot - the automatic narrowing of the scope of an attack to the single ecosystem it occurred in most recently, without any real technical argument for doing so.

What's especially concerning is that I see this take in the security industry: mitigations are put in place to target e.g. NPM, but are then completely absent for PyPI or Crates. It's bizarre not only because it leaves those ecosystems wide open, but also because the mitigation measures would be very similar (so it would be a minimal amount of additional effort for a large benefit).

replies(7): >>45261389 #>>45261408 #>>45261464 #>>45262010 #>>45263376 #>>45266913 #>>45270888 #
2. woodruffw ◴[] No.45261389[source]
Could you say more about what mitigations you’re thinking of?

I ask because I think the directionality is backwards here: I’ve been involved in packaging ecosystem security for the last few years, and I’m generally of the opinion that PyPI has been ahead of the curve on implementing mitigations. Specifically, I think widespread trusted publishing adoption would have made this attack less effective since there would be fewer credentials to steal, but npm only implemented trusted publishing recently[1]. Crates also implemented exactly this kind of self-scoping, self-expiring credential exchange ahead of npm.

(This isn’t to malign any ecosystem; I think people are also overcorrecting in treating this like a uniquely JavaScript-shaped problem.)

[1]: https://github.blog/changelog/2025-07-31-npm-trusted-publish...
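For readers unfamiliar with trusted publishing, it looks roughly like this in practice; a sketch of a GitHub Actions release workflow (the workflow name and build steps are illustrative, `pypa/gh-action-pypi-publish` is the real publishing action):

```yaml
# Sketch of PyPI trusted publishing from GitHub Actions. No long-lived API
# token is stored as a repository secret; PyPI accepts a short-lived OIDC
# token minted for this specific repo/workflow, so there is no credential
# sitting around for malware to exfiltrate.
name: release
on:
  release:
    types: [published]
jobs:
  pypi:
    runs-on: ubuntu-latest
    permissions:
      id-token: write        # allows the job to request an OIDC token
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
```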

replies(1): >>45265093 #
3. kees99 ◴[] No.45261408[source]
I agree other repos deserve a good look for potential mitigations as well (PyPI too, has a history of publishing malicious packages).

But don't brush off the "special status" of NPM here. It is unique in that, with JS being the language of both front-end and back-end, it is much easier for crooks to sneak in malware that ends up running in visitors' browsers and affecting them directly. And that makes it a uniquely attractive target.

replies(1): >>45261968 #
4. weinzierl ◴[] No.45261464[source]
Which mitigations specifically are in npm but not in crates.io?

As far as I know crates.io has everything that npm has, plus

- strictly immutable versions[1]

- fully automated and no human in the loop perpetual yanking

- no deletions ever

- a public and append only index

Go modules go even further and add automatic checksum verification per default and a cryptographic transparency log.
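For illustration, a go.sum file records one hash per module version (the hashes below are placeholders, not real digests); `go build` verifies every download against them, and by default also against the sum.golang.org transparency log:

```text
# go.sum (placeholder hashes): one line for the module, one for its go.mod
github.com/example/dep v1.2.3 h1:PLACEHOLDERbase64digestPLACEHOLDER=
github.com/example/dep v1.2.3/go.mod h1:PLACEHOLDERbase64digestPLACEHOLDER=
```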

Contrast this with docker hub for example, where not even npm's basic properties hold.

So, it is more like

docker hub ⊂ npm ⊂ crates.io ⊂ Go modules

[1] Nowadays npm has this arguably too

replies(2): >>45263419 #>>45265052 #
5. znort_ ◴[] No.45261968[source]
npm in itself isn't special at all, maybe the userbase is but that's irrelevant because the mitigation is pretty easy and 99.9999% effective, works for every package manager and boils down to:

1- thoroughly and fully analyze any dependency tree you plan to include
2- immediately freeze all its versions
3- never update without very good reason or without repeating 1 and 2

in other words: simply be professional, face logical consequences if you aren't. if you think one package manager is "safer" than others because magic reasons odds are you'll find out the hard way sooner or later.
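for what it's worth, step 2 can be partially enforced with npm configuration alone; a sketch (`save-exact` is a real npm option):

```ini
# .npmrc — record exact versions (no ^ or ~ ranges) whenever a dep is added
save-exact=true
```

combined with committing the lockfile and installing via `npm ci` in CI, a newly published malicious patch release can't slip in through a routine install.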

replies(3): >>45262164 #>>45262676 #>>45274141 #
6. WD-42 ◴[] No.45262010[source]
I mostly agree. But NPM is special, in that the exposure is so much higher. The hypothetical python+htmx web app might have 10s of dependencies (including transitive) whereas your typical Javascript/React will have 1000s. All an attacker needs to do is find one of many packages like TinyColor or Leftpad or whatever and now loads of projects are compromised.
replies(3): >>45262394 #>>45262453 #>>45263490 #
7. tbrownaw ◴[] No.45262164{3}[source]
Your item #1 there may be simple, but that's not the same as being easy.
replies(1): >>45270695 #
8. skydhash ◴[] No.45262394[source]
Stuff like Babel, React, Svelte, Axios, Redux, Jest… should be self-contained and not depend on anything other than peer dependencies. They are core technological choices that happen early in the project and are hard or impossible to replace afterwards.
replies(1): >>45263639 #
9. johnisgood ◴[] No.45262453[source]
Well, your typical Rust project has over 1000 dependencies, too. Zed has over 2000 in release mode.
replies(2): >>45263514 #>>45265047 #
10. moi2388 ◴[] No.45262676{3}[source]
Good luck with nr 1 in the js ecosystem and its 30k dependencies 50 branches deep per package
replies(2): >>45264021 #>>45270837 #
11. simiones ◴[] No.45263376[source]
Most people have addressed the package registry side of NPM.

But NPM has a much, much bigger problem on the client side, one that makes many of these mitigations almost moot: `npm install` will upgrade every single package you depend on to its latest version that matches your declared dependency, and in JS land almost everyone uses lax dependency declarations.

So, an attacker who simply publishes a new patch version of a package they have gained access to will likely poison a good chunk of all of the users of that package in a relatively short amount of time. Even if the projects using this are careful and use `npm ci` instead of `npm install` for their CI builds, it will still easily get developers to download and run the malicious new version.

Most other ecosystems don't have this unsafe-by-default behavior, so deploying a new malicious version of a previously safe package is not such a major risk as it is in NPM.
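The lax-versus-pinned distinction looks like this in a package.json (package names and versions here are illustrative):

```json
{
  "dependencies": {
    "some-utility": "^4.1.0",
    "other-utility": "4.1.0"
  }
}
```

The caret range `^4.1.0` accepts any 4.x.y at or above 4.1.0, so a fresh resolve with no lockfile present will happily pick up a freshly published malicious 4.1.1; the exact pin `4.1.0` will not.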

replies(2): >>45263567 #>>45263619 #
12. lucideer ◴[] No.45263419[source]
To clarify (a lot of sibling commenters misinterpreted this too so probably my fault - can't edit my comment now):

I'm not referring to mitigations in public repositories (which you're right, are varied, but that's a separate topic). I'm purely referring to internal mitigations in companies leveraging open-source dependencies in their software products.

These come in many forms, everything from developer education initiatives to hiring commercial SCA vendors, & many other things in between like custom CI automations. Ultimately, while many of these measures are done broadly for all ecosystems when targeting general dependency vulnerabilities (CVEs from accidental bugs), all of the supply-chain-attack motivated initiatives I've seen companies engage in are single-ecosystem. Which seems wasteful.

13. lucideer ◴[] No.45263490[source]
> NPM is special, in that the exposure is so much higher.

NPM is special in the same way as Windows is special when it comes to malware: it's a more lucrative target.

However, the issue here is that - unlike with Windows - targeting NPM alone does not incur significantly less overhead than targeting software registries more broadly. The cost difference between focusing purely on NPM & covering a lot of popular languages is small, & imo the narrower focus isn't a worthwhile trade-off.

14. spoiler ◴[] No.45263514{3}[source]
Not saying this in defence of Rust or Cargo, but oftentimes those dependencies are just different versions of the same thing. In a project at one of my previous companies, a colleague noticed we had LOADS of `regex` crate versions. I forget the number but it was well over 100
replies(3): >>45263778 #>>45263895 #>>45269137 #
15. lucideer ◴[] No.45263567[source]
> in JS land almost everyone uses lax dependency declarations

They do, BUT.

Dependency versioning schemes are much more strictly adhered to within JS land than in other ecosystems. PyPI is a mishmash of PEP 440, SemVer, some packages incorrectly using one in the format of the other, & none of them necessarily adhering to the standard they've chosen. Other ecosystems are even worse.

Also - some ecosystems (PyPI again) are committing far worse offences than lax versioning: versionless dependency declaration. Heavy reliance on requirements.txt without lockfiles, where half the time the version isn't specified at all. Astral/Poetry are improving the situation here but things are still bad.
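In concrete terms (the package name is just an example, the hash is a placeholder):

```text
# requirements.txt, unpinned: resolves to whatever is newest at install time
requests

# pinned + hashed, as produced by pip-compile or uv: reproducible and
# tamper-evident when installed with `pip install --require-hashes`
requests==2.31.0 \
    --hash=sha256:placeholder00000000000000000000000000000000000000000000000000
```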

Maven land is full of plugins with automated pom.xml version templating that has effectively the same effect as lax versioning, but without any strict adherence to any kind of standard like semver.

Yes, the situation in JS land isn't great, but there are much worse offenders out there.

replies(2): >>45264065 #>>45264222 #
16. Tadpole9181 ◴[] No.45263619[source]
`npm install` uses a lockfile by default and will not change versions. No, not transitives either. You would have to either manually change `package.json` or call `npm update`.

You'd have to go out of your way to make your project as bad as you're describing.

replies(2): >>45263716 #>>45264258 #
17. WorldMaker ◴[] No.45263639{3}[source]
- I feel that you are unlikely to need Babel in 2025, most things it historically transpiled are Baseline Widely Available now (and most of the things it polyfilled weren't actually Babel's but brought in from other dependencies like core-js, which you probably don't need either in 2025). For the rest of the things it still transpiles (pretty much just JSX) there are cheaper/faster transpilers with fewer external dependencies and runtime dependencies (Typescript, esbuild). It should not be hard to replace Babel in your stack: if you've got a complex webpack solution (say from CRA reasons) consider esbuild or similar.

- Axios and Jest have "native" options now (fetch and node --test). fetch is especially nice because it is the same API in the browser and in Node (and Deno and Bun).

- Redux is self-contained.

- React itself is sort of self-contained, it's the massive ecosystem that makes React the most appealing that starts to drive dependency bloat. I can't speak to Svelte.

replies(2): >>45271031 #>>45274150 #
18. lucideer ◴[] No.45263716{3}[source]
A lot of people use tools like Dependabot which automates updates to the lockfile.
replies(1): >>45284666 #
19. treyd ◴[] No.45263778{4}[source]
That seems like a failure in workspace management. The most duplicates I've seen was 3, with crates like url or uuid, even in projects with 1000+ distinct deps.
20. ◴[] No.45263895{4}[source]
21. godshatter ◴[] No.45264021{4}[source]
As an outsider looking in, as I don't deal with NPM on a daily basis, the 30k dependencies going 50 branches deep seems to be the real problem here. Code reuse is an admirable goal but this seems absurd. I have no idea if these numbers are correct or exaggerations, but from my limited time working with NPM a year or two ago it seems like a definite problem.

I'm in the C ecosystem mostly. Is one NPM package the equivalent of one object file? Can NPM packages call internal functions for their dependencies instead of relying so heavily on bringing in so many external ones? I guess it's a problem either way, internal dependencies having bugs vs supply chain attacks like these. Doesn't bringing in so many dependencies lead to a lot of dead code and much larger codebases than necessary?

replies(1): >>45265158 #
22. simiones ◴[] No.45264065{3}[source]
The point is still different. In PyPI, if I put `requests` in my requirements.txt, and I run `pip install -r requirements.txt` every time I do `make build`, I will still only get one version of requests - the latest available the first time I installed it. This severely reduces the attack radius compared to NPM's default, where I would get the latest (patch) version of my dependency every day. And the ecosystem being committed to respecting semver is entirely irrelevant to supply chain security. Malicious actors don't care about semver.

Overall, publishing a new malicious version of a package is a much lesser problem in virtually any ecosystem other than NPM; in NPM, it's almost an automatic remote code execution vulnerability for every NPM dev, and a persistent threat for many NPM packages even without this.

replies(3): >>45264671 #>>45266678 #>>45267852 #
23. Yeroc ◴[] No.45264222{3}[source]
> Maven land is full of plugins with automated pom.xml version templating that has effectively the same effect as lax versioning, but without any strict adherence to any kind of standard like semver.

Please elaborate on this. I'm a long-time Java developer and have never once seen something akin to what you're describing here. Maven has support for version ranges but in practice it's very rarely used. I can expect a project to build with the exact same dependencies resolved today and in six months or a year from now.

replies(1): >>45266860 #
24. simiones ◴[] No.45264258{3}[source]
No, this is just wrong. It might indeed use package-lock.json if it matches your node_modules (so that running `npm install` multiple times won't download new versions). But if you're cloning a repo off of GitHub and running npm install for the first time (which a CI setup might do), it will take the latest deps from package.json and update the package-lock.json - at least this is what I've found many responses online claim. The docs for `npm ci` also suggest that it behaves differently from `npm install` in this exact respect:

> In short, the main differences between using npm install and npm ci are:

> The project must have an existing package-lock.json or npm-shrinkwrap.json.

> If dependencies in the package lock do not match those in package.json, npm ci will exit with an error, instead of updating the package lock.

replies(3): >>45264422 #>>45267351 #>>45284723 #
25. Rockslide ◴[] No.45264422{4}[source]
Well but the docs you cited don't match what you stated. You can delete node_modules and reinstall, it will never update the package-lock.json, you will always end up with the exact same versions as before. The package-lock updating happens when you change version numbers in the package.json file, but that is very much expected! So no, running npm install will not pull in new versions randomly.
replies(1): >>45266848 #
26. debazel ◴[] No.45264671{4}[source]
> This severely reduces the attack radius compared to NPM's default, where I would get the latest (patch) version of my dependency every day.

By default npm will create a lock file and give you the exact same version every time unless you manually initiate an upgrade. Additionally you could even remove the package-lock.json and do a new npm install and it still wouldn't upgrade the package if it already exists in your node_modules directory.

Only time this would be true is if you manually bump the version to something that is incompatible, or remove both the package-lock.json and your node_modules folder.

replies(1): >>45271428 #
27. Klonoar ◴[] No.45265047{3}[source]
Your typical Rust project does not have over 1000 dependencies.

Zed is not a typical Rust project; it's a full fledged editor that includes a significant array of features and its own homegrown UI framework.

replies(2): >>45267992 #>>45273067 #
28. kibwen ◴[] No.45265052[source]
> Go modules go even further and add automatic checksum verification per default

Cargo lockfiles contain checksums and Cargo has used these for automatic verification since time immemorial, well before Go implemented their current packaging system. In addition, Go doesn't enforce the use of go.sum files, it's just an optional recommendation: https://go.dev/wiki/Modules#should-i-commit-my-gosum-file-as... I'm not aware of any mechanism which would place Go's packaging system at the forefront of mitigation implementations as suggested here.
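For illustration, each Cargo.lock entry carries the checksum that Cargo verifies on every download (the hash below is a placeholder, not the real digest):

```toml
[[package]]
name = "regex"
version = "1.10.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0000000000000000000000000000000000000000000000000000000000000000"
```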

29. kibwen ◴[] No.45265093[source]
> PyPI has been ahead of the curve on implementing mitigations

Indeed, crates.io implemented PyPI's trusted publishing and explicitly called out PyPI as their inspiration: https://blog.rust-lang.org/2025/07/11/crates-io-development-...

30. marcosdumay ◴[] No.45265158{5}[source]
> Is one NPM package the equivalent of one object file?

No. The closest thing to a package (on almost every language) is an entire library.

> Can NPM packages call internal functions for their dependencies instead of relying so heavily on bringing in so many external ones?

Yes, they can. They just don't do it.

> Doesn't bringing in so many dependencies lead to a lot of dead code and much larger codebases than necessary?

There aren't many unnecessary dependencies, because the number of direct dependencies of each package is reasonable (on the order of 10). And you don't get a lot of unnecessary code, because the point of tiny libraries is to only import what you need.

Dead code is not the problem; the JS mentality evolved that way precisely to minimize dead code. The problem is that dead code turns out not to be much of an issue, while dependency management is.

31. lucideer ◴[] No.45266678{4}[source]
> every time I do `make build`

I'm going to assume this is you running this locally to generate releases, presumably for personal projects?

If you're building your projects in CI you're not pulling in the same version without a lockfile in place.

32. 0cf8612b2e1e ◴[] No.45266848{5}[source]
The internet disagrees. NPM will gladly ignore and update lock files. There may exist a way to actually respect lock files, but the default mode of operation does not work as you would naively expect.

- NPM Install without modifying the package-lock.json https://www.mikestreety.co.uk/blog/npm-install-without-modif...

- Why does "npm install" rewrite package-lock.json? https://stackoverflow.com/questions/45022048/why-does-npm-in...

- npm - How to actually use package-lock.json for installing based on locked versions? https://stackoverflow.com/questions/47480617/npm-how-to-actu...

replies(2): >>45268083 #>>45284785 #
33. lucideer ◴[] No.45266860{4}[source]
I'm not a Java (nor Kotlin) developer - I've only done a little Java project maintenance & even less Kotlin - I've mainly come at this as a tooling developer for dependency management & vulnerability remediation. But I have seen a LOT of varied Maven-managed repos in that line of work (100s), and the approaches vary widely.

I know this is possible with custom plugins but I've mainly just seen it using maven wrapper & user properties.

replies(1): >>45267584 #
34. jacques_chester ◴[] No.45266913[source]
Most of the biggest repositories already cooperate through the OpenSSF[0]. Last time I was involved in it, there were representatives from npm, PyPI, Maven Central, Crates and RubyGems. There's also been funding through OpenSSF's Alpha-Omega program for a bunch of work across multiple ecosystems[1], including repos.

[0] https://github.com/ossf/wg-securing-software-repos

[1] https://alpha-omega.dev/grants/grantrecipients/

35. typpilol ◴[] No.45267351{4}[source]
That's not true. Ci will never take new versions from your lock file.
36. Yeroc ◴[] No.45267584{5}[source]
There are things that are potentially possible, such as templating pom.xml build files or adjusting dependencies based on user properties (is that what you're suggesting?), but what you're describing is definitely not normal or best practice in the ecosystem, and shouldn't be presented as if it were normal practice.
replies(1): >>45268891 #
37. zahlman ◴[] No.45267852{4}[source]
Generally you have the right of it, but a word of caution for Pythonistas:

> The point is still different. In PyPI, if I put `requests` in my requirements.txt, and I run `pip install -r requirements.txt` every time I do `make build`, I will still only get one version of requests - the latest available the first time I installed it.

Only because your `make build` is a custom process that doesn't use build isolation and relies on manually invoking pip in an existing environment.

Ecosystem standard build tools (including pip itself, using `pip wheel` — which really isn't meant for distribution, but some people seem to use it anyway) default to setting up a new virtual environment to build your code (and also for each transitive dependency that requires building — to make sure that your dependencies' build tools aren't mutually incompatible, or broken by other things in the environment). They will read `requests` from `[project.dependencies]` in your pyproject.toml file and dump the latest version in that new environment, unless you use tool-specific configuration (or of course a better specification in pyproject.toml) to prevent that. And if your dependencies were only available as sdists, the build tool would even automatically, recursively attempt to build those, potentially running arbitrary code from the package in the process.
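Concretely, an isolated build reads dependencies from pyproject.toml rather than from any pinned environment (the project name here is illustrative):

```toml
# pyproject.toml — with a lax specifier like this, each isolated build
# environment resolves the newest matching release, lockfile or not.
[project]
name = "example-app"
version = "0.1.0"
dependencies = [
  "requests>=2.31",
]
```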

38. worik ◴[] No.45267992{4}[source]
What is a "typical Rust project", I wonder?
replies(1): >>45268783 #
39. Rockslide ◴[] No.45268083{6}[source]
Those stackoverflow posts are ancient and many major npm releases old, so in other words: irrelevant. That blog post is somewhat up to date but also very vague about the circumstances which would update the lockfile. Which certainly isn't that npm install updates dependencies to newer versions within the semver range, because it absolutely does not.
40. cesarb ◴[] No.45268783{5}[source]
One famous example is ripgrep (https://github.com/BurntSushi/ripgrep). Its Cargo.lock (which contains all direct and indirect dependencies) lists 65 dependencies (it has 66 entries, but one of them is for itself).
replies(2): >>45269118 #>>45277060 #
41. lucideer ◴[] No.45268891{6}[source]
Attackers don't need these practices to be normal, they just need them to be common enough (a significant minority).
replies(1): >>45271145 #
42. burntsushi ◴[] No.45269118{6}[source]
Also, that lock file includes development dependencies and dependencies for opt-in features like PCRE2. A normal `cargo build` will use quite a bit fewer than 65 dependencies.

I would actually say ripgrep is not especially typical here. I put a lot of energy into keeping my dependency tree slim. Many Rust applications have hundreds of dependencies.

We aren't quite at thousands of dependencies yet though.

replies(1): >>45277112 #
43. burntsushi ◴[] No.45269137{4}[source]
That doesn't make sense. The most it could be is 3: regex 0.1.x, regex 0.2.y and regex 1.a.b. You can't have more because Cargo unifies on semver compatible versions and regex only has 3 semver incompatible releases. Plus, regex 1.0 has been out for eons. Pretty much everyone has moved off of 0.1 and 0.2.
replies(1): >>45274201 #
44. znort_ ◴[] No.45270695{4}[source]
agreed, bad wording. it so happens though that sw development includes many problems and practices that aren't easy and are still part of the job.
45. znort_ ◴[] No.45270837{4}[source]
there are indeed monster packages but you should ask yourself if you need them at all, because if you really do there is no way around performing nr1. you get the code, you own it. you propagate malware by negligence, you're finished as a sw engineer. simple as that.

personally i keep dependencies at a minimum and am very picky with them, partly because of nr1, but as a general principle. of course if people happily suck in entire trees without supervision just to print ansi colors on the terminal or, as in this case, use fancy aliases for colors then bad things are bound to happen. (tbf tinycolor has one single devDependency, shim-deno-test, which only requires typescript. that should be manageable)

i'll grant you that the js ecosystem is special, partly because the business has traditionally reinforced the notion of it being accessory, superficial and not "serious" development. well, that's just naivety, it is as critical a component as any other. ideally you should even have a security department vetting the dependencies for you.

46. worik ◴[] No.45270888[source]
The Rust folks are in denial about this
47. johnny22 ◴[] No.45271031{4}[source]
Last I checked, React's new compiler still depends on Babel! :(
replies(1): >>45278070 #
48. Yeroc ◴[] No.45271145{7}[source]
You're talking about things that aren't in the significant minority here.
49. k3vinw ◴[] No.45271428{5}[source]
Ahh this might explain the behavior I observed when running npm install from a freshly checked out project where it basically ignored the lock file. If I recall in that situation the solution was to run an npm clean install or npm ci and then it would use the lock file.
50. wolvesechoes ◴[] No.45273067{4}[source]
> Zed is not a typical Rust project; it's a full fledged editor

Funny that a text editor is being presented here as some kind of behemoth, not representative of typical software written in Rust. I guess typical would be the 1234th JSON serialization library.

51. user34283 ◴[] No.45274141{3}[source]
Do tell: how many packages are in your dependency graph?

I bet it's hundreds.

Jest alone adds 300 packages.

Consequently I doubt that you in fact "thoroughly and fully" analyzed all your dependencies.

Unless what you're shipping isn't a feature rich app, what you proposed seems entirely unrealistic.

52. user34283 ◴[] No.45274150{4}[source]
Jest alone adds 300 packages by the way.
replies(1): >>45277996 #
53. spoiler ◴[] No.45274201{5}[source]
The reason he went down this rabbit hole was because he was chronically running low on disk space, and his target dir was one of the largest contributors.

Not sure how he actually got the number; this was just a frustrated Slack message like 4 years ago

A sibling comment mentions we could have been using Cargo workspaces wrong... So, maybe?

replies(1): >>45275133 #
54. burntsushi ◴[] No.45275133{6}[source]
He probably just needed to run `cargo clean` occasionally.

But you definitely aren't finding hundreds of versions of `regex` in the same dependency tree.

55. johnisgood ◴[] No.45277060{6}[source]
Not quite. He is a better developer than most and happens to minimize dependencies, but in my experience that is not as common as you would like to believe. Do I really need to make a list of all the Rust projects I have compiled that pulled in over 1000 dependencies? If I need to do it to convince you, I will, as my time allows.
56. johnisgood ◴[] No.45277112{7}[source]
> I would actually say ripgrep is not especially typical here. I put a lot of energy into keeping my dependency tree slim. Many Rust applications have hundreds of dependencies.

Thank you for your honesty. As you said, you put a lot of energy into keeping the dependency tree slim. This is not as common as one would like to believe.

replies(1): >>45277366 #
57. burntsushi ◴[] No.45277366{8}[source]
I agree it's not common. But neither are Rust applications with 1000+ dependencies. I don't think I've ever compiled a Rust project with over 1,000 dependencies.

Hundreds? Yes, absolutely. That's common.

replies(1): >>45277672 #
58. johnisgood ◴[] No.45277672{9}[source]
Maybe I am just unlucky enough to always run into Rust projects that pull in over 1000 dependencies. :D

In retrospect, I should have kept a list of these projects. I probably have not deleted these directories though, so I probably still could make a list of some of these projects.

59. WorldMaker ◴[] No.45277996{5}[source]
Yep, which is part of why it feels real good to delete Jest and switch to `node --test`. I realize for a lot of projects that is easier said than done because Jest isn't just the test harness but the assertions framework (`node:assert/strict` isn't terrible; Chai is still a good low-dependency option for fancier styles of assertions) and mocks/substitutes framework (I'm sure there are options there; I never liked Jest's style of mocks so I don't have a recommendation handy).

(ETA: Also, you may not need much for a mocks library because JS' Proxy meta-object isn't that hard to work with directly.)
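A sketch of that Proxy approach (names like `makeStub` are made up here for illustration, not a library API):

```javascript
// A tiny call-recording stub built on Proxy: every method returns a canned
// value and its invocation is logged, which covers a lot of what a mock
// library typically gets used for.
function makeStub(returnValue) {
  const calls = [];
  return new Proxy({}, {
    get(_target, prop) {
      if (prop === 'calls') return calls;   // escape hatch to inspect calls
      return (...args) => {
        calls.push({ method: prop, args });
        return returnValue;
      };
    },
  });
}

// Usage: hand the stub to code expecting a collaborator object.
const db = makeStub(42);
db.query('SELECT 1');   // recorded; returns 42
console.log(db.calls);  // [ { method: 'query', args: [ 'SELECT 1' ] } ]
```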

60. WorldMaker ◴[] No.45278070{5}[source]
Yeah, I still don't understand a lot of the architecture choices behind the new compiler, including why the new compiler isn't mostly just a set of eslint suggestions with auto-fixes. I've seen the blog posts trying to explain it, but they don't seem to answer my questions. But then I also haven't done enough direct React work recently enough to have need of or actually tried to use the new compiler, so maybe I am just asking the wrong questions.
61. Tadpole9181 ◴[] No.45284666{4}[source]
That's unrelated to this.

As well, both Dependabot and Renovate run in isolated environments without secrets or privileges, need to be manually approved, and have minimum publication ages before recommending a package update, to prevent basic supply chain attacks or lockfile corruption from a pinned package version being de-published (up to a 3 day window on NPM).

62. Tadpole9181 ◴[] No.45284723{4}[source]
Literally just try it yourself?

The way you describe it working doesn't even pass a basic "common sense" check. Just think about what you're saying: despite having a `package-lock.json`, every developer who works on a project will get every dependency updated every time they clone it and get to work?

The entire point of the lockfile is that installations respect it to keep environments agreed. The only difference with `clean install` is that it removes `node_modules` (no potential cache poisoning) and non-zero exits if there is a conflict between `package.json` and `package-lock.json`.

`install` will only update the lockfile where the lockfile conflicts with the `package.json` to allow you to make changes to that file manually (instead of via `npm` commands).

63. Tadpole9181 ◴[] No.45284785{6}[source]
1. This guy clearly doesn't know how NPM works. Don't use `--no-save` regularly or you'll be intentionally desyncing your lockfile from reality.

2&3. NPM 5 had a bug almost a decade ago. They literally link to it in both of those pages. Here[^1] is a developer repeating how I've said it's supposed to work.

It would have taken you less work to just try this in a terminal than search for those "citations".

[^1]: https://github.com/npm/npm/issues/17979#issuecomment-3327012...