114 points by elchief
elchief ◴[] No.40941810[source]
The JFrog Security Research team has recently discovered and reported a leaked access token with administrator access to Python’s, PyPI’s and Python Software Foundation’s GitHub repositories, which was leaked in a public Docker container hosted on Docker Hub.

As a community service, the JFrog Security Research team continuously scans public repositories such as Docker Hub, NPM, and PyPI to identify malicious packages and leaked secrets. The team reports any findings to the relevant maintainers before attackers can take advantage of them. Although we encounter many secrets that are leaked in the same manner, this case was exceptional because it is difficult to overestimate the potential consequences if it had fallen into the wrong hands – one could supposedly inject malicious code into PyPI packages (imagine replacing all Python packages with malicious ones), and even into the Python language itself!

The JFrog Security Research team identified the leaked secret and immediately reported it to PyPI’s security team, who revoked the token within a mere 17 minutes!

This post will explain how we found a GitHub PAT that provided access to the entire Python infrastructure and prevented a supply chain disaster. Using this case, we will discuss the importance of (also) shifting right in secrets detection – searching for secrets in binaries and production artifacts, not just in source code.
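
The "shift right" point is easy to demo in a few lines. Here is a minimal sketch of binary secret scanning, assuming the well-known GitHub token prefixes (illustrative only, not JFrog's actual scanner):

    import re
    import sys
    from pathlib import Path

    # Assumed token shapes: classic GitHub PATs start with "ghp_",
    # fine-grained ones with "github_pat_". Real scanners match many more formats.
    TOKEN_RE = re.compile(rb"ghp_[A-Za-z0-9]{36}|github_pat_[A-Za-z0-9_]{20,}")

    def scan(path: Path) -> None:
        # Read raw bytes so .pyc files and other binaries get scanned, not just text.
        for m in TOKEN_RE.finditer(path.read_bytes()):
            print(f"{path}: possible token {m.group()[:10].decode()}...")

    if __name__ == "__main__":
        root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
        for p in root.rglob("*"):
            if p.is_file():
                scan(p)

The point is simply that scanning operates on bytes, so compiled artifacts are covered the same as source files.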

replies(1): >>40944046 #
sans_souse ◴[] No.40944046[source]
How were these leaked in this specific example? Am I right in assuming it was more user error than malicious intent, and, if so, shouldn't this be a resolvable problem on the back end, as far as coding practices around public vs. private keys and such go? (Note: I am by no means an expert on coding or security.)
replies(2): >>40944799 #>>40944800 #
ketch ◴[] No.40944800[source]
The article has answers:

> It seems that the original author

> - Briefly added the authorization token to their source code

> - Ran the source code (Python script), which got compiled into a .pyc binary with the auth token

> - Removed the authorization token from the source code, but didn’t clean the .pyc

> - Pushed both the clean source code and the unclean .pyc binary into the docker image
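
If you want to see that failure mode yourself, here's a minimal repro (the token value is fake, obviously):

    # leak.py -- step 1: the token briefly lives in source
    TOKEN = "ghp_fake_example_token_do_not_use_000000"

    # Step 2: importing the module compiles it to __pycache__/leak.cpython-*.pyc,
    # with the string constant baked into the bytecode:
    #
    #   $ python -c "import leak"
    #   $ strings __pycache__/leak.cpython-*.pyc | grep ghp_
    #   ghp_fake_example_token_do_not_use_000000
    #
    # Step 3: deleting TOKEN from leak.py does NOT touch the stale .pyc.
    # Step 4: a Dockerfile doing "COPY . /app" ships __pycache__/ along with
    # the now-clean source.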

replies(4): >>40945411 #>>40946267 #>>40948927 #>>40961218 #
bheadmaster ◴[] No.40945411[source]
> - Pushed both the clean source code and the unclean .pyc binary into the docker image

Oof.

Honestly, I can't blame the guy for a mistake like this; it's just so easy to make. But then again, deploying images built on a development laptop is generally an error-prone activity. This is why build and deployment servers exist.
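
Even on a laptop, a couple of lines of build hygiene would have kept the stale bytecode out of the image. A sketch (illustrative; we don't know the incident's actual Dockerfile):

    # .dockerignore -- keep compiled bytecode out of the build context entirely
    **/__pycache__
    **/*.pyc

    # Dockerfile fragment -- belt and braces: don't write .pyc inside the image either
    ENV PYTHONDONTWRITEBYTECODE=1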

replies(1): >>40945505 #
sitkack ◴[] No.40945505[source]
Why does one person have admin on all those repos?

Why is Python still using any classic access tokens?

Why is the access token EVER in the source code?

What other stuff is “running on their laptop”?

No pass!

People with access to the repos shouldn't also have access to push bits to the world. It puts the people with that access in grave physical danger.
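
On the "EVER in the source code" question, the boring standard fix, sketched (the variable name is hypothetical):

    import os

    def github_token() -> str:
        # Pull the PAT from the environment (injected by CI or a secrets manager)
        # so a token-shaped string never becomes a constant in source or bytecode.
        try:
            return os.environ["GITHUB_TOKEN"]
        except KeyError:
            raise RuntimeError("GITHUB_TOKEN is not set; refusing to run") from None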

Edit: https://blog.pypi.org/posts/2024-07-08-incident-report-leake...

This token has been in the wild for 15 months! The JFrog post cannot say that disaster was averted because we do not know.

replies(3): >>40946106 #>>40947616 #>>40948391 #
belter ◴[] No.40947616[source]
> This token has been in the wild for 15 months! The JFrog post cannot say that disaster was averted because we do not know.

"Binary secret scanning helped us prevent (what might have been) the worst supply chain attack you can imagine"

The above claim from them sounds as weird as the whole ecosystem's security being run off a developer's laptop...