
1208 points jamesberthoty | 5 comments
pragma_x No.45267685
So, other packaging ecosystems have a tendency to slow the rate of change that reaches the user's system — partly through the labor of re-packaging other people's software, but also as a deliberate policy. Ubuntu and Red Hat, for instance.

Is anyone doing this in a "security as a service" fashion for JavaScript packages? I imagine a kind of package escrow/repository that only serves known secure packages, and actively removes known vulnerable ones.

replies(2): >>45267827 #>>45273776 #
1. kilobaud No.45267827
I've worked in companies that do this internally, e.g., managed pull-through caches implemented via tools like Artifactory, or home-grown "trusted supply chain" automation, i.e., policy enforcement during CI/CD prior to actually consuming a third-party dependency.
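That kind of CI/CD policy enforcement can be as simple as scanning the lockfile against a denylist before `npm ci` runs. A minimal sketch (the denylist, lockfile shape, and package-lock v3 key layout here are illustrative, not any particular tool's format):

```typescript
// Hypothetical CI policy gate: fail the build if any locked dependency
// matches a denylist entry. All names below are for illustration.

type Denylist = Record<string, string[]>; // package name -> blocked versions

interface LockPackages {
  // package-lock.json v3 "packages" keys look like "node_modules/foo"
  [path: string]: { version?: string };
}

function findBlocked(packages: LockPackages, denylist: Denylist): string[] {
  const hits: string[] = [];
  for (const [path, meta] of Object.entries(packages)) {
    if (!meta.version) continue;
    // Take the last path segment after "node_modules/" as the package name
    const name = path.split("node_modules/").pop() ?? path;
    const blocked = denylist[name];
    if (blocked && blocked.includes(meta.version)) {
      hits.push(`${name}@${meta.version}`);
    }
  }
  return hits;
}

// Example: flag the famously compromised event-stream release
const denylist: Denylist = { "event-stream": ["3.3.6"] };
const lock: LockPackages = {
  "": { version: "1.0.0" },
  "node_modules/event-stream": { version: "3.3.6" },
};
const hits = findBlocked(lock, denylist); // flags event-stream@3.3.6
```

In a pipeline, a non-empty `hits` would exit non-zero before any dependency is actually installed.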

But what you describe is an interesting idea I hadn't encountered before! I assume such a thing would have lower adoption within a relatively fast-moving ecosystem like Node.js though.

The closest thing I can think of (and this isn't strictly what you described) is reliance on Dependabot, Snyk, CodeQL, etc., which, if anything, probably contributes to change-management fatigue that erodes careful review.

replies(3): >>45268007 #>>45270099 #>>45278533 #
2. kjok No.45268007
> managed pull-through caches implemented via tools like Artifactory

This is why package malware makes the news while enterprises that mirror package registries are unaffected. Building a mirroring solution is pricey, though, mainly due to high egress bandwidth costs from cloud providers.

3. tom1337 No.45270099
How does a pull-through cache prevent this issue? Wouldn’t it also just pull the infected version from the upstream registry?
replies(1): >>45278395 #
4. pragma_x No.45278395
I think it's implied that packages can be blocked and/or evicted from said cache administratively. This deliberately breaks builds, and forces engineers to upgrade/downgrade away from bad packages as needed.
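The serve-or-refuse logic is small. A sketch of that behavior, with an in-memory map standing in for a real artifact store and a callback standing in for the upstream registry (both are assumptions, not any real cache's API):

```typescript
// Minimal sketch of a pull-through cache with administrative eviction.
// An evicted version is refused even if it still exists upstream,
// which deliberately breaks any build that requests it.

class PullThroughCache {
  private store = new Map<string, string>(); // "name@version" -> tarball ref
  private evicted = new Set<string>();       // admin blocklist

  constructor(private upstream: (key: string) => string | undefined) {}

  evict(key: string): void {
    this.evicted.add(key);
    this.store.delete(key); // also purge any already-cached copy
  }

  fetch(key: string): string {
    if (this.evicted.has(key)) {
      // Blocked: the consuming build fails loudly here.
      throw new Error(`blocked by policy: ${key}`);
    }
    const cached = this.store.get(key);
    if (cached !== undefined) return cached;
    const fresh = this.upstream(key);
    if (fresh === undefined) throw new Error(`not found upstream: ${key}`);
    this.store.set(key, fresh); // cache on first pull
    return fresh;
  }
}

// Example: a version is cached on first pull, then blocked by an admin.
const cache = new PullThroughCache((k) => `tarball-for-${k}`);
cache.fetch("some-pkg@1.2.3");    // pulled through and cached
cache.evict("some-pkg@1.2.3");    // admin blocks the bad release
// cache.fetch("some-pkg@1.2.3"); // would now throw "blocked by policy"
```

The important property is that eviction wins over both the cache and the upstream, so engineers can't quietly keep pulling a known-bad version.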
5. pragma_x No.45278533
Exactly. Everyone is doing this, maybe well, maybe poorly. Consider Sonatype Nexus and its "repository firewall" product. Their business model _depends_ on everyone not cooperating, so there are likely a ton of folks who would love to pay less to get the same results.

> The closest thing I can think of (and this isn't strictly what you described) is reliance on Dependabot, Snyk, CodeQL, etc., which, if anything, probably contributes to change-management fatigue that erodes careful review.

It's not glamorous work, that's for sure. And yes, it would have to rely heavily on automated scanning to close the gap on the absolutely monstrous scale that npmjs.org operates at. Such a team would be the Internet's DevOps in this one specific way, with all the slog and grind that comes with that. But not all heroes wear capes.