Is anyone doing this in a "security as a service" fashion for JavaScript packages? I imagine a kind of package escrow/repository that only serves known secure packages, and actively removes known vulnerable ones.
Socket:
- Sep 15 (First post on breach): https://socket.dev/blog/tinycolor-supply-chain-attack-affect...
- Sep 16: https://socket.dev/blog/ongoing-supply-chain-attack-targets-...
StepSecurity - https://www.stepsecurity.io/blog/ctrl-tinycolor-and-40-npm-p...
Aikido - https://www.aikido.dev/blog/s1ngularity-nx-attackers-strike-...
Ox - https://www.ox.security/blog/npm-2-0-hack-40-npm-packages-hi...
Safety - https://www.getsafety.com/blog-posts/shai-hulud-npm-attack
Phoenix - https://phoenix.security/npm-tinycolor-compromise/
Semgrep - https://semgrep.dev/blog/2025/security-advisory-npm-packages...
> Is anyone doing this in a "security as a service" fashion for JavaScript packages? I imagine a kind of package escrow/repository that only serves known secure packages, and actively removes known vulnerable ones.
But what you describe is an interesting idea I hadn't encountered before! I assume such a thing would have lower adoption within a relatively fast-moving ecosystem like Node.js though.
The closest thing I can think of (and this isn't strictly what you described) is reliance on Dependabot, Snyk, CodeQL, etc., which, if anything, probably contributes to change-management fatigue that erodes careful review.
This is why package malware makes the news while enterprises that mirror package registries go largely unaffected. Building a mirroring solution will be pricey, though, mainly due to the high egress bandwidth costs from cloud providers.
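For what it's worth, the client side of such a mirror is trivial to wire up: npm reads the registry URL from `.npmrc`, so pointing a whole org at an internal mirror is a one-line override (the hostname below is a placeholder, not a real service):

```ini
; .npmrc — route all installs through the vetted internal mirror
registry=https://npm-mirror.internal.example.com/
; optionally block direct fallback to the public registry at the
; network level, so quarantined packages can't sneak in
```

The hard (and expensive) part is the server side: keeping the mirror in sync, scanning, and deciding what to quarantine.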
Some other vendors do AI-based scanning.
I doubt anyone would want to touch JS packages with manual review.
I think the bare minimum is heavy use of auditjs (or Snyk, or anything else that works this way), plus maybe a mandatory waiting period (2-4 weeks?) before allowing new packages in. That should stave off the brunt of package churn and give auditjs enough time to catch up on newly disclosed vulnerabilities. The key is not to wait so long that folks can't address CVEs in their software, but also not to sit 100% at the bleeding edge.
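The waiting-period idea is easy to sketch. The npm registry's package metadata exposes per-version publish timestamps under its "time" field; a mirror could filter that map and only serve versions older than some cutoff. A minimal sketch (the 21-day cutoff and the mirror itself are assumptions, not anything npm provides):

```javascript
// Sketch of a "cooling-off period" gate: a version is only served
// once it has been public for at least minAgeDays.
const MIN_AGE_DAYS = 21; // hypothetical 3-week quarantine

// True if a version published at `publishedAt` has aged past the cutoff.
function isVersionAllowed(publishedAt, now = new Date(), minAgeDays = MIN_AGE_DAYS) {
  const ageMs = now.getTime() - new Date(publishedAt).getTime();
  return ageMs >= minAgeDays * 24 * 60 * 60 * 1000;
}

// Filter a registry-style "time" map (version -> publish timestamp,
// plus bookkeeping keys) down to versions past the waiting period.
function allowedVersions(timeMap, now = new Date()) {
  return Object.entries(timeMap)
    .filter(([version]) => version !== 'created' && version !== 'modified')
    .filter(([, publishedAt]) => isVersionAllowed(publishedAt, now))
    .map(([version]) => version);
}

// Example: 1.0.1 was published 4 days before "now", so it stays quarantined.
const now = new Date('2025-09-20T00:00:00Z');
const timeMap = {
  created: '2025-01-01T00:00:00Z',
  '1.0.0': '2025-01-01T00:00:00Z',
  '1.0.1': '2025-09-16T00:00:00Z',
};
console.log(allowedVersions(timeMap, now)); // → [ '1.0.0' ]
```

A real mirror would also need an allowlist escape hatch, since a hard quarantine would delay legitimate security patches by the same window.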
> The closest thing I can think of (and this isn't strictly what you described) is reliance on Dependabot, Snyk, CodeQL, etc., which, if anything, probably contributes to change-management fatigue that erodes careful review.
It's not glamorous work, that's for sure. And yes, it would have to rely heavily on automated scanning to close the gap on the absolutely monstrous scale that npmjs.org operates at. Such a team would be the Internet's DevOps in this one specific way, with all the slog and grind that comes with that. But not all heroes wear capes.