
1369 points by universesquid | 5 comments
junon
Hi, yep I got pwned. Sorry everyone, very embarrassing.

More info:

- https://github.com/chalk/chalk/issues/656

- https://github.com/debug-js/debug/issues/1005#issuecomment-3...

Affected packages (at least the ones I know of; see the lockfile check sketch after the list):

- ansi-styles@6.2.2

- debug@4.4.2 (appears to have been yanked as of 8 Sep 18:09 CEST)

- chalk@5.6.1

- supports-color@10.2.1

- strip-ansi@7.1.1

- ansi-regex@6.2.1

- wrap-ansi@9.0.1

- color-convert@3.1.1

- color-name@2.0.1

- is-arrayish@0.3.3

- slice-ansi@7.1.1

- color@5.0.1

- color-string@2.1.1

- simple-swizzle@0.2.3

- supports-hyperlinks@4.1.1

- has-ansi@6.0.1

- chalk-template@1.1.1

- backslash@0.2.1
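
If you want to check whether a project pulled any of these in, one quick way is to scan the lockfile. Here's a minimal sketch (assuming an npm lockfileVersion 2/3 package-lock.json with a top-level "packages" map; run from the project root, e.g. with ts-node):

    // check-lockfile.ts: scan package-lock.json for the compromised versions above.
    import { readFileSync } from "node:fs";

    const BAD = new Set([
      "ansi-styles@6.2.2", "debug@4.4.2", "chalk@5.6.1",
      "supports-color@10.2.1", "strip-ansi@7.1.1", "ansi-regex@6.2.1",
      "wrap-ansi@9.0.1", "color-convert@3.1.1", "color-name@2.0.1",
      "is-arrayish@0.3.3", "slice-ansi@7.1.1", "color@5.0.1",
      "color-string@2.1.1", "simple-swizzle@0.2.3",
      "supports-hyperlinks@4.1.1", "has-ansi@6.0.1",
      "chalk-template@1.1.1", "backslash@0.2.1",
    ]);

    const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
    // Keys look like "node_modules/chalk" or "node_modules/a/node_modules/chalk";
    // the package name is everything after the last "node_modules/".
    for (const [path, meta] of Object.entries<{ version?: string }>(lock.packages ?? {})) {
      const name = path.split("node_modules/").pop();
      if (name && meta.version && BAD.has(`${name}@${meta.version}`)) {
        console.log(`compromised: ${name}@${meta.version} (${path})`);
      }
    }

(Plain `npm ls chalk` and friends also work per package, but the lockfile scan catches transitive pins in one pass.)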

It looks and feels a bit like a targeted attack.

Will try to keep this comment updated as long as I can before the edit expires.

---

Chalk has been published over. The others remain compromised (8 Sep 17:50 CEST).

NPM has yet to get back to me. My NPM account is entirely unreachable; the forgot-password flow does not work. I have no recourse right now but to wait.

The email came from support at npmjs dot help.

Looked legitimate at first glance. Not making excuses; I'd had a long week and a panicky morning and was just trying to knock something off my to-do list. Made the mistake of clicking the link instead of going directly to the site like I normally would (since I was on mobile).

Only NPM is affected. Updates will be posted to the `/debug-js` link above.

Again, I'm so sorry.

33a
We also caught this right away at Socket:

https://socket.dev/blog/npm-author-qix-compromised-in-major-...

While it sucks that this happened, the good thing is that the ecosystem mobilized quickly. I think these sorts of incidents really show why package scanning is essential for securing open source package repositories.

Yoric
So how do you detect these attacks?
33a
We use a mix of static analysis and AI. Flagged packages are escalated to a human review team. If we catch a malicious package, we notify our users, block installation and report them to the upstream package registries. Suspected malicious packages that have not yet been reviewed by a human are blocked for our users, but we don't try to get them removed until after they have been triaged by a human.
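
Roughly, the flow looks like this (a simplified sketch, not our production code; the names and the threshold are made up for illustration):

    // Sketch: static rules plus an LLM risk score feed a human review queue.
    // A flagged package is blocked for our users immediately; only a human
    // verdict leads to a report to the upstream registry.
    type Verdict = "clean" | "suspected";

    interface PackageVersion { name: string; version: string; source: string; }

    // Static rules: install scripts, dynamic eval, obfuscation, exfil patterns...
    function staticFindings(pkg: PackageVersion): string[] {
      const findings: string[] = [];
      if (/\bpostinstall\b/.test(pkg.source)) findings.push("install-script");
      if (/\beval\(|new Function\(/.test(pkg.source)) findings.push("dynamic-eval");
      return findings;
    }

    // Placeholder for the AI pass; a real system would prompt a model over the
    // published code/diff and parse a structured risk score out of the reply.
    async function llmRiskScore(pkg: PackageVersion): Promise<number> {
      return 0;
    }

    const humanReviewQueue: { pkg: PackageVersion; evidence: string[] }[] = [];

    async function triage(pkg: PackageVersion): Promise<Verdict> {
      const evidence = staticFindings(pkg);
      const risk = await llmRiskScore(pkg);
      if (risk > 0.5) evidence.push(`llm-risk=${risk.toFixed(2)}`);
      if (evidence.length === 0) return "clean";
      humanReviewQueue.push({ pkg, evidence });
      return "suspected";
    }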

In this incident, we detected the packages quickly, reported them, and they were taken down shortly after. Given how high-profile the attack was, we also published an analysis soon after, as did others in the ecosystem.

We try to be transparent about how Socket works. We've published the details of our systems in several papers, and I've also given a few talks on how our malware scanner works at various conferences:

* https://arxiv.org/html/2403.12196v2

* https://www.youtube.com/watch?v=cxJPiMwoIyY

ATechGuy
You rely on LLMs riddled with hallucinations for malware detection?
Culonavirus
He literally said "Flagged packages are escalated to a human review team" in the second sentence. Wtf is the problem here?
ATechGuy
What about packages that are not flagged? There could be hallucinations in the decision of whether or not to flag a package.
orbital-decay
> What about packages that are not flagged?

You can't catch everything with normal static analysis either. The LLM just produces an additional signal in this case; false negatives can be tolerated.
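
Put differently, if the LLM's output is only ever allowed to add candidates for human review, a hallucination costs a reviewer a few minutes rather than letting malware through. A toy illustration of that one-way combination (names made up):

    // Signals combine with OR: the LLM can add a package to the review
    // queue but can never veto a static-analysis hit. A hallucination is
    // a false positive for a human to dismiss, not a silent false negative.
    function needsHumanReview(staticHit: boolean, llmSuspicion: boolean): boolean {
      return staticHit || llmSuspicion;
    }

    console.log(needsHumanReview(false, true));  // true: LLM-only flag gets a human look
    console.log(needsHumanReview(true, false));  // true: static hits are always reviewed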

ATechGuy
Static analysis DOES NOT hallucinate.
Twirrim
So what? They're not replacing standard tooling like static analysis with it. As they mention, it's being used as an additional signal alongside static analysis.

There are cases an LLM may be able to catch that their static analysis currently can't. Should they just ignore those scenarios, thereby doing the worst thing by their customers, just to stay purist?

What is the worst-case scenario you're envisioning from an LLM hallucinating in this use case? To me the worst case is that it incorrectly flags a package as malicious, which, given they do a human review anyway, isn't the end of the world. On the flip side, the LLM catches cases not yet recognized by static analysis, which can then be accounted for in the future.

If they were just using an LLM, I might share similar concerns, but they're not.

tripzilch
Well, you've never had a non-spam email end up in your spam folder? Or the other way around?

When static analysis does it, it's called a "misclassification".