PyPI's user docs[1] have some more details, as does the underlying PEP[2].
(That issue will still be great to resolve, but just to avoid any confusion about its scope.)
[1]: https://docs.github.com/en/actions/security-for-github-actio...
The corresponding ToB blog post says the following:
> Longer term, we can do even better: doing “one off” verifications means that the client has no recollection of which identities should be trusted for which distributions. To address this, installation tools need a notion of “trust on first use” for signing identities, meaning that subsequent installations can be halted and inspected by a user if the attesting identity changes (or the package becomes unattested between versions).
Agree: signing is only as good as verification. However, trust-on-first-use (TOFU) is not the most secure way to map packages to attestations, because nothing stops attackers who have taken over PyPI from tampering with the _unsigned_ mapping of identities to attestations in package lockfiles for new clients (certainly in containerized environments, where everything can look new), and even for just new versions of packages.
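To make the failure mode concrete, here is a minimal sketch of TOFU identity pinning; every name here is hypothetical, and no real installer works exactly like this:

```python
# Illustrative sketch of trust-on-first-use (TOFU) identity pinning, and where
# it fails: the very first observation is trusted blindly, so a fresh client
# (or fresh container) has nothing to compare against.
import json
from pathlib import Path

PIN_FILE = Path("identity-pins.json")  # hypothetical local pin store

def check_identity(package: str, attested_identity: str) -> None:
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    pinned = pins.get(package)
    if pinned is None:
        # First use: pin whatever we saw. A compromised index gets to choose
        # this value for every client that looks "new".
        pins[package] = attested_identity
        PIN_FILE.write_text(json.dumps(pins, indent=2))
    elif pinned != attested_identity:
        raise RuntimeError(
            f"{package}: attesting identity changed "
            f"({pinned!r} -> {attested_identity!r}); halt and inspect"
        )
```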
Although [PEP 458](https://peps.python.org/pep-0458/) is about signing the Python package index, it sets the foundation for being able to securely map packages to signed in-toto _policies_, which would in turn securely map identities to attestations. I think it is worth noting how these different PEPs can work together :)
https://news.ycombinator.com/item?id=40941809
JFrog also discovered multiple malicious package exploits later.
Now we get a GitHub-centric new buzzword that could be replaced by trusted SHA256 sums. Python is also big on business speak like SBOM. The above key leak of course occurred after all these new security "experts" manifested themselves out of nowhere.
The procedure remains the same. Download a package from the original creators, audit it, use a local repo and block PyPI.
> The generation and publication of attestations happens by default, and no changes are necessary for projects that meet all of these conditions: publish from GitHub Actions; via Trusted Publishing; and use the pypa/gh-action-pypi-publish action to publish.
If you then click on "The manual way" it adds a big disclaimer:
> STOP! You probably don't need this section; it exists only to provide some internal details about how attestation generation and uploading work. If you're an ordinary user, it is strongly recommended that you use one of the official workflows described above.
Where the only official workflow is "Use GitHub Actions".
I guess I am an idealist but as a maintainer this falls short of my expectations for the openness of Python and PyPI.
The standard behind this (PEP 740) supports anything that can be used with Trusted Publishing[1]. That includes GitLab, Google Cloud, ActiveState, and can include any other OIDC IdP if people make a good case for including it.
It's not tied to Microsoft or GitHub in any particular way. The only reason it emphasizes GitHub Actions is because that's where the overwhelming majority of automatic publishing traffic comes from, and because it follows a similar enablement pattern as Trusted Publishing did (where we did GitHub first, followed by GitLab and other providers).
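For the curious, the Trusted Publishing flow is essentially an exchange of a CI-issued OIDC token for a short-lived PyPI upload token. A rough sketch of the client side, with the endpoint and payload shapes following PyPI's documented flow; treat the details as illustrative, not a stable API:

```python
# Rough sketch of Trusted Publishing's client side: trade a CI-issued OIDC
# token for a short-lived, project-scoped PyPI upload token. The endpoint
# path and payload shape are based on PyPI's documented flow; illustrative.
import json
import urllib.request

def mint_pypi_token(oidc_token: str) -> str:
    req = urllib.request.Request(
        "https://pypi.org/_/oidc/mint-token",
        data=json.dumps({"token": oidc_token}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]  # ephemeral API token, usable by twine
```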
> STOP! You probably don't need this section;
In https://docs.pypi.org/attestations/producing-attestations/#t...
Perhaps also add a few of the providers you listed as well?
> The only reason it emphasizes GitHub Actions is because that's where the overwhelming majority of automatic publishing traffic comes from
GitHub being popular is a self-reinforcing process: if GitHub is your first-class citizen for something as crucial as trusted publishing, then projects on GitHub will see higher adoption and become the de facto "secure choice".
1. The primary benefit of Trusted Publishing over a manual API token is knowing that the underlying OIDC IdP has an on-call staff, proper key management and rotation policies, etc. These can be guaranteed for GitHub, GitLab, etc., but they're harder to prove for one-off self-hosted CI setups. For the latter case, the user is no better off than they would be with a manual API token, which is still (and will always be) supported.
2. If the overwhelming majority of traffic comes from a single CI/CD provider, adding more code to support generic OIDC IdPs increases PyPI's attack surface for only marginal user benefit.
There also is no "open interface" for PyPI to really use here: this is all built on OIDC, but each OIDC provider needs to have its unique claims mapped to something intelligible by PyPI. That step requires thoughtful, manual, per-IdP consideration to avoid security issues.
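To illustrate why that mapping is per-IdP and security-sensitive, here is a simplified sketch using GitHub's OIDC claim names; the Publisher type and matching rules are hypothetical simplifications, not PyPI's actual code:

```python
# Hypothetical sketch of per-IdP claim mapping: GitHub's OIDC claims checked
# against a registered Trusted Publisher. The claim names ("repository",
# "workflow_ref", "environment") are real GitHub OIDC claims; everything else
# is simplified for illustration.
from dataclasses import dataclass

@dataclass
class GitHubPublisher:
    repository: str          # e.g. "pypa/sampleproject"
    workflow_filename: str   # e.g. "release.yml"
    environment: str | None = None

def claims_match(claims: dict, pub: GitHubPublisher) -> bool:
    if claims.get("repository") != pub.repository:
        return False
    # workflow_ref looks like "owner/repo/.github/workflows/release.yml@refs/..."
    prefix = f"{pub.repository}/.github/workflows/{pub.workflow_filename}@"
    if not str(claims.get("workflow_ref", "")).startswith(prefix):
        return False
    if pub.environment is not None and claims.get("environment") != pub.environment:
        return False
    return True
```

Every IdP names and formats its claims differently, which is why each integration needs its own carefully reviewed matching logic rather than a generic plug-in interface.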
I can soften it, but I think you're reading it excessively negatively: that warning is there to make sure people don't try to do the fiddly, error-prone cryptographic bits if they don't need to. It's a numerical fact that most project owners don't need that section, since most are either using manual API tokens or are publishing via GitHub Actions.
> Perhaps also add a few of the providers you listed as well?
They'll be added when they're enabled. Like I said in the original comment, we're using a similar enablement pattern as happened with Trusted Publishing: GitHub was enabled first because it represents the majority of publishing traffic, followed by GitLab and the others.
> GitHub being popular is a self-reinforcing process: if GitHub is your first-class citizen for something as crucial as trusted publishing, then projects on GitHub will see higher adoption and become the de facto "secure choice".
I agree, but I don't think this is PyPI's problem to solve. From a security perspective, PyPI should prioritize the platforms where the traffic is.
(I'll note that GitLab has been supported by Trusted Publishing for a while now, and they could make the publishing workflow more of a first class citizen, the way it is on GHA.)
https://docs.pypi.org/trusted-publishers/creating-a-project-...
> Support for automatic attestation generation and publication from other Trusted Publisher environments is planned. While not recommended, maintainers can also manually generate and publish attestations.
Good security doesn't demand perfection; that's why security is both prevention and preparedness. The response from our admin was in every way beyond what you'd expect from many other orgs: prompt response (on the order of minutes), full audit of activity for the credential (none found), and full public disclosure ahead of the security researcher's report.
> JFrog also discovered multiple malicious package exploits later.
If you're referencing malicious packages on PyPI then yes! We want to keep PyPI freely open to all to use, and that has negative knock-on effects. Turns out that exploiting public good code repositories is quite popular, but I maintain that the impact is quite low and that our ability to respond to these sorts of attacks is also very good due to our network of security engineers and volunteers who are triaging their reports. Thanks to the work of Mike Fiedler (quarantining packages, API for reporting malicious packages, better UI for triagers) our ability to respond to malicious packages will become even better.
> Now we get a Github centric new buzzword that could be replaced by trusted SHA256 sums.
In a way, this feature is what you're describing, but it is easier to automate (therefore: good for you as a user) and is more likely to be correct because every attestation is verified by PyPI before it's made available to others (which is also good for users). The focus on GitHub Actions is because this is where many Python projects publish from; there is little incentive to create a feature that no one will use.
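For comparison, the "trusted SHA256 sums" approach is easy to sketch; the hard part is distributing and updating the pins, which is exactly what doesn't automate well:

```python
# Minimal sketch of manual "trusted SHA256 sums": verify a downloaded artifact
# against a digest pinned out-of-band. Simple, but the pin list itself still
# has to be distributed and updated through some trusted channel.
import hashlib

def verify_artifact(path: str, pinned_sha256: str) -> None:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != pinned_sha256:
        raise RuntimeError(f"{path}: digest mismatch (got {digest})")
```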
> Python is also big on business speak like SBOM.
Indeed, there is legislation in many places that will require SBOMs for all software placed in their markets, so there is plenty of interest in these standards. I'm working on this myself, trying to do the most we can for users while minimizing the impact this will have on upstream open source project maintainers.
> Where the only official workflow is "Use GitHub Actions".

Well, you can do it manually with other solutions... as long as they are one of the four trusted publishers (see "Producing attestations manually does not bypass (...) restrictions on (...) Trusted Publishers"):
https://docs.pypi.org/trusted-publishers/adding-a-publisher/...
This means that you literally can't do it manually; you have to rely on one of:
* GitHub
* Google Cloud
* ActiveState (I'm not familiar with it)
* GitLab.com (not just GitLab, only that one instance)
Really surprising development, IMO.
To me that's a bit of a weird statement. PyPI is part of the Python Software Foundation; making sure that the project remains true to its open-source nature is reasonable?
My concern is that these types of things ultimately play out as "we are doing the right thing to limit supply chain attacks", which is good and defensible, but in ~5 years PyPI will have an announcement that they are sunsetting PyPI package upload in favor of the trusted provider system. pip (or other tooling) will add warnings whenever I install a package that is not "trusted". Maybe I am simply pessimistic.
That being said we can agree to disagree, I am not part of the PSF and I did preface my first comment with "I guess I am an idealist".
What about this, in your estimation, undermines the open-source nature of PyPI? Nothing about this is proprietary, and I can't think of any sane definition of OSS in which PyPI choosing to verify OIDC tokens from GitHub (among other IdPs!) meaningfully subverts PyPI's OSS commitment.
> PyPI package upload in favor of the trusted provider system. pip (or other tooling) will add warnings whenever I install a package that is not "trusted". Maybe I am simply pessimistic.
Let me put it this way: if PyPI disables API tokens in favor of mandatory Trusted Publishing, I will eat my shoe on a livestream.
(I was one of the engineers for both API tokens and Trusted Publishing on PyPI. They're complementary, and neither can replace the other.)
It kinda feels like that right now.
This is good to know. I did not see related statements in any of the documents linked in this discussion, though.
I am in no hurry to be pushed into using GitHub, GitLab, or whatever else. Programmers' open source code has been monetized by these companies to feed AI LLM beasties, and it's fundamentally objectionable to me. I self-host my code using Gitea for that reason.
This is exactly the same as Trusted Publishing, where people accused the feature of being a MSFT trojan horse because GitHub was enabled first. I think it would behoove everybody to assume the best intentions here and remember that the goal is to secure the most people by default.
This page clearly explains how this uses OIDC, and uses GitHub Actions as an example. At no point in my read did I feel like this was shilling me a microsoft product.
> Attestations are currently only supported when uploading with Trusted Publishing, and currently only with GitHub-based Trusted Publishers. Support for other Trusted Publishers is planned. See #17001 for additional information.
[1]: https://docs.pypi.org/attestations/producing-attestations/
IMO the launch should have been delayed until there was more than one trusted publisher.
Even if you standardized the more technical parts like OIDC claim metadata (which is 100% provider specific), it wouldn't really change the thrust of any of this — PyPI is literally trusting the publisher in a social sense, not in some "They comply with RFC standards and therefore I can plug in my random favorite thing" sense.
This whole discussion is basically a non-issue, IMO. If you want to publish stuff from your super-duper-secret underground airgapped base buried a mile underneath the Himalayas, you can use an API token like you have been able to. It will be far less hassle than running your own OIDC solution for this stuff.
On the other hand, considering all of that, you can see why Python would arrive at this design. They are the only other people besides NPM who regularly have supply chain attack problems. They are seemingly completely unopinionated about how to fix their supply chain problems, while being opinionated about the lack of opinions about packaging in general. What is the end goal? Presumably one of the giant media companies, Meta, Google, or Microsoft maybe, has to take over the development of the runtime and the PEP process, but does that even sound good?
If PyPI accepts two more likely ones, a full 2/3rds will be unrelated to GitHub.
[1]: https://docs.pypi.org/trusted-publishers/adding-a-publisher/
[1]: https://blog.yossarian.net/2023/05/21/PGP-signatures-on-PyPI...
PyPI will accept any key bound to an identity, provided we know how to verify that identity. Right now that means we accept Trusted Publishing identities, and GitHub identities in particular, since that's where the overwhelming majority of Python package publishing traffic comes from. Like what happened with Trusted Publishing, this will be expanded to other identities (like GitLab repositories) as we roll it out.
I guess I'll have to update my slides :D
But, well, as a Debian developer my advice is to just use Debian and completely ignore PyPI, so I might be slightly biased.
Absence of support for self-hosting, in the spirit of freedom 0 = OSD 5&6? Or, for that matter, for any provider whose code is fully open source?
This has nothing to do with self-hosting, whatsoever. You can upload to PyPI with an API token; that will always work and will not do anything related to Trusted Publishing, which exists entirely because it makes sense for large services.
PyPI isn't required to federate with the server in my basement through OpenID Connect to be considered open source.
The burden to ban Russians/Koreans/Iranians will be on those big companies, and PyPI will be able to claim it respects the rules without itself having the resources to ban accounts from sanctioned countries.
So, in my opinion, future tooling will have some option to check whether the software was uploaded via Microsoft et al. and can be considered unsanctioned in the USA.
Or they might just say that's how you upload now and that's it.
You are talking about guys who can barely figure out how to structure a Python package repository. They are not doing this kind of 4D chess.
And anyway, none of that makes any sense.
Never mind all the low-level details of the temporary keys and hashes and all of that. This is a high-level comment, not a university book about security.
(I'm disappointed by the degree to which people seem to gleefully conspiracize about Python packaging, and consequently uncritically accept the worst possible interpretation of anything done to try and improve the ecosystem.)
And this is in no way a consequence of PyPI no longer hosting public keys, right? Tell the whole story at least… Say that there used to be a way to verify the signatures, but you dropped it years ago, and since then the signatures have been useless.
Putting aside the fact that the mechanisms underpinning GitHub Actions are a mystery black box, the vast, vast majority of GitHub workflows are not built in a reproducible way. It's not even something that's encouraged by GitHub Actions' architecture, which emphasises Actions' container images that are little more than packaged installer scripts, fetching dependencies from random parts of the internet at runtime. An "attestation" makes no guarantee that one of these randomly fetched dependencies hasn't been usurped.
This is not to go into the poor security record of Github Actions' permissions model, which has brought us all a number of "oh shit" moments.
(But also: having PyPI be the keyserver defeats the point, since PyPI could then trivially replace my package's key. If that's the "whole story," it's not a very good one.)
If the devs' claim lower in the comments, that other support is soon to be added, doesn't pan out, then sure, this is a valid argument; but it feels very premature and ignores the reality of launch expectations for most agile-based teams.
There's no hard technical reason for that. It's mostly that PyPI only want to trust certain issuers who they think will look after their signing keys responsibly.
I think there's a separate question around trust. But I think blocking non-trusted publishers from using a more secure form of authentication isn't the answer. Instead I think it makes more sense to use nudges in the PyPI UI and eventually of consumers (e.g. pip) to indicate that packages have come from non-trusted publishers.
[1] https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_pr... [2] https://learn.microsoft.com/en-us/graph/api/resources/federa... [3] https://cloud.google.com/identity-platform/docs/web/oidc [4] https://developer.hashicorp.com/vault/docs/auth/jwt
https://bootstrappable.org/ https://reproducible-builds.org/
It comes across like the goal of this system is to prove to my users a) who I am; b) that I am working in cooperation with some legitimate big business. I don't understand why I should want to prove such things, nor why it's desirable for b) to even be true, nor why my users should particularly care about a) at all, whether I can prove it or not.
I think my users should only care that the actual contents of the sdist match, and the built contents of the wheel correspond to, the source available at the location described in the README.
Yes, byte-for-byte reproducible builds would be a problem, for those who don't develop pure Python. Part of why sdists exist is so that people who can't trust the wheel, like distro maintainers, can build things themselves. And really - why would a distro maintainer take "I can cryptographically prove the source came from zahlman, and trust me, this wheel corresponds to the source" from $big_company any more seriously than "trust me, this wheel corresponds to the source" directly from me?
If I have to route my code through one of these companies ("Don't worry, you can work with Google instead!" is not a good response to people who don't want to work with Microsoft) to gain a compliance checkmark, and figure out how someone else's CI system works (for many years it was perfectly possible for me to write thousands of lines of code and never even have an awareness that there is such a thing as CI) so that I can actually use the publishing system, and learn what things like "OIDC IdP" are so that I can talk to the people in charge...
... then I might as well learn the internal details of how it works.
And, in fact, if those people are suggesting that I shouldn't worry about such details, I count that as a red flag.
I thought open source was about transparency.
Please reconsider this position with respect to Debian and Python packaging.
I sympathize, but there are obvious reasons for this: low trust in the Python packaging system generally (both because of recent PSF activity and because of the history of complications and long-standing problems), and the fact that many similar things have been seen in the Linux and open-source worlds lately (for one example, Linux Mint's new stance on Flatpak and how they present Flatpak options in the Software Manager). (Edit: it also kinda rhymes with the whole "Secure Boot" thing and how much trouble it causes for Linux installation.)
(Also, now I'm trying to wrap my head around the fact that there's such a thing as "Docker Hub" in the first place, and that people feel comfortable using it.)
How long should I expect it to take until I can automatically generate an attestation from `twine`? Or does someone else have to sign off on it through some OpenID mumbo-jumbo before I can qualify as "trusted"?
Automating the creation of SBOMs sounds even further out, since we're still struggling with actually just building sdists in the first place.
We know the first falls apart because the Web of Trust fell apart. When the UI asks people "hey, this is new, here's the fingerprint, do you trust it?", nobody actually conducts review; they just mindlessly click "yes" and move on. Very, very, very few people will actually conduct a full review of all changes. Even massive organizations that want to lie to themselves that they review the changes end up just outsourcing the responsibility to someone who is willing to legally accept the risk, and then they go ahead and just mindlessly click "yes" anyway. There are just too many changes to keep track of and too many ways to obscure malicious code.
We know the second falls apart because human beings are both the inevitable and ultimate weakest link in even the most robust security architectures. Human beings will be compromised, and the blast radius in these highly centralized architectures will be immense.
Pick your poison.
Don't download packages from PyPI; go upstream to the actual source code on GitHub, audit that, build locally, verify your build is the same as the PyPI one, check the audits people have posted using crev, decide if you trust any of them, and upload your audit to crev too.
https://reproducible-builds.org/ https://github.com/crev-dev/
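The "verify your build is the same as the PyPI one" step can be partially automated against PyPI's JSON API, which publishes per-file digests. A minimal sketch; note this only means something if the build is actually reproducible in the first place:

```python
# Sketch: compare a locally built artifact against the sha256 that PyPI
# publishes for the same filename, via the JSON API.
import hashlib
import json
import urllib.request

def pypi_sha256(project: str, version: str, filename: str) -> str | None:
    url = f"https://pypi.org/pypi/{project}/{version}/json"
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    return next(
        (f["digests"]["sha256"] for f in meta["urls"] if f["filename"] == filename),
        None,
    )

def matches_pypi(local_path: str, project: str, version: str, filename: str) -> bool:
    with open(local_path, "rb") as f:
        local = hashlib.sha256(f.read()).hexdigest()
    return local == pypi_sha256(project, version, filename)
```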
This is probably deserving a criminal investigation since it appears the funds were probably misused?
Well done guys! Good job!
Some might even expect PyPI to build everything going into its repo, similar to a distro building for its repos. As far as I can tell, all this 'attestation' is just PyPI making Microsoft pay for the compute time, with extra steps.
> Let me put it this way: if PyPI disables API tokens in favor of mandatory Trusted Publishing, I will eat my shoe on a livestream.
Yeah, sure. But the parent poster also mentioned pip changes. Will you commit to eating a shoe if pip verifies attestations? Of course not: you know better than I do that those changes to pip are in the works, with experimental implementations already available. You must have forgotten to mention that. PyPI doesn't have to be the bad guy in making sure it's a PITA to use those packages. Insert Futurama "technically correct, the best kind of correct" meme.
> Reliability & notability: The effort necessary to integrate with a new Trusted Publisher is not exceptional, but not trivial either. In the interest of making the best use of PyPI's finite resources, we only plan to support platforms that have a reasonable level of usage among PyPI users for publishing. Additionally, we have high standards for overall reliability and security in the operation of a supported Identity Provider: in practice, this means that a home-grown or personal use IdP will not be eligible.
From what I can tell, this means using self-hosted Github/Gitlab/Gitea/whatever instances is explicitly not supported for most projects. I can't really tell what "a reasonable level of usage among PyPI users" means (does the invent.kde.org qualify? gitlab.gnome.org? gitlab.postmarketos.org?) but I feel like this means "Gitlab.com and maybe gitea.com" based on my original reading.
Of course by definition PyPI is already a massive single point of failure, but the focus on a few (corporate) partners to allow secure publishing feels like a mistake to me. I'm not sure what part of the cryptographic verification process restricts this API to the use of a few specific cloud providers.
Unless you build all of your images `FROM scratch` by default (or use in-house registries or quay or whatnot for all of your base images), you've used Docker Hub too...
A poorly maintained issuer could leak their secret keys, allowing anyone to impersonate any package from their service.
I disagree with your assessment that provisioning API tokens is better than being able to authenticate with a JWT. It makes managing credentials in an organisation much easier as far fewer people need access to the signing key compared to who would need access to the API token. Using asymmetric keys also means there's less opportunity for keys to be leaked.
Unpaid developers from Europe (and Asia) just serve as disposable work horses for the U.S. "elites".
Before, I could memorize my password and type it into twine. If my machine was stolen, no upload to PyPI was possible.
Now it's a token file on my disk, so if my machine is stolen, the token can be used to publish.
Using GitHub to publish doesn't change anything: if my machine is stolen, the token needed to publish is still there; but instead of going directly to PyPI, it will need to go via GitHub first.
If I don't want to use GitHub, let alone GitHub Actions, I am now effectively excluded from publishing my work on PyPI: quite unacceptable.
And maybe that's a good thing? I'm not against security, and supply chain attacks are real. But it's still kind of sad that the amazing machines we all own are more and more just portals to the 'trusted' corporate clouds. And I think there are things that could be done to improve security with local uploads, but all the effort seems to go into the cloud path.
Without trusted publishers a nefarious actor could use a leaked PyPI API key to upload from anywhere. If the only authorised location is actions on a specific Github repo then it makes a supply chain attack much trickier and much more visible.
With the new attestations it's now possible for package consumers to verify where the package came from too.
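Concretely, consumers can fetch the PEP 740 provenance object PyPI stores for each uploaded file. The URL shape below follows PyPI's integrity API documentation, but treat it as an assumption rather than a stable contract, and use a real verifier (e.g. the pypi-attestations library) rather than eyeballing the JSON:

```python
# Sketch of the consumer side: fetch the PEP 740 provenance object for a
# distribution file via PyPI's integrity API. Endpoint shape is an assumption
# based on PyPI's docs; signature verification is out of scope here.
import json
import urllib.request

def fetch_provenance(project: str, version: str, filename: str) -> dict:
    url = f"https://pypi.org/integrity/{project}/{version}/{filename}/provenance"
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Each attestation bundle in the response names the publisher identity
# (e.g. a GitHub repository and workflow) that PyPI verified at upload time.
```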
You can still, and will always be able to use API tokens.
The only reason I started using PyPI was because I had a package on my website that someone else uploaded to PyPI, and I started getting support questions about it. The person did transfer control over to me - he was just trying to be helpful.
I stopped caring about PyPI with the 2FA requirement since I only have one device - my laptop - while they seem to expect that everyone is willing to buy a hardware device or has a smartphone, and I frankly don't care enough to figure it out since I didn't want to be there in the first place and no one paid me enough to care.
Which means there is a security issue whenever I make a new package available only on my website should someone decide to upload it to PyPI, perhaps along with a certain something extra, since people seem to think PyPI is authoritative and doesn't need checking.
Insinuating dishonesty is rude, especially when it’s baseless: the post linked in this one is explicit about experimenting with verification. There’s no point in signing without verification.
pip may or may not verify attestations; I don’t control what they do. But even if they do, they will never be able to mandate them for every project: as I have repeatedly said, there is no plan (much less desire) to mandate attestation generation.
Edit: woodruffw is sitting here and thoughtfully commenting and answering people despite how hostile some of the comments are (someone even said "This is probably deserving a criminal investigation"! and it has more upvotes than this comment). I think that should be appreciated, even if you don't like Python.
Your self-signed app analogy is apt: self-signing without a strong claimant identity proof is a solution, but a much weaker one than we wanted.
The gp was pointing out that the docs heavily emphasise (& therefore encourage) GHA usage & suggested language changes.
If people are confused about what they need to use Trusted Publishing & you're suggesting (begging) a re-read as the solution, that seems evidence enough that the gp is correct about the docs needing a reword.
It seems reasonable to suggest that advertising a solution for public use at a point in time when support is at <2 systems is not an ideal way to encourage an open ecosystem.
This is an unfortunate double effect, and one that I’m aware of. That’s why the emphasis has been on enabling them by default for as many people as possible.
I also agree about the need for a local/self-hosted story. We’ve been thinking about how to enable similar attestations with email and domain identities, since PyPI does or could have the ability to verify both.
If you work in a large organization that has the ability to maintain a PKI for OIDC, you should open an issue on PyPI discussing its possible inclusion as a supported provider. The docs[1] explain the process and requirements.
With an attestation from GitHub I know at least that the workflow that ran it and the artifacts it produced will be 100% traceable and verifiable. This doesn't mean the code was not malicious, but, for example, it will rule out that someone did the build at home and attached an alternative version of an artifact to a GitHub release, like how that was done with the xz project.
It is fine to not like GitHub, but I think that means we need more trusted builders. Developers cannot be pushed toward just GitHub.
It always starts small: "Oh, we just want everyone to be nice, so we have this super nice CoC document. Anyone who distrusts us is malicious."
Ten years later you have a corporate-backed inner circle that abuses the CoC to silence people, extend their power and earning potential and resort to defamation and libel.
It is possible that the people here who defend this corporate attestation-of-provenance-preferably-on-our-infrastructure scheme think that nothing nefarious will happen in the future. Well, given the repressive history of Python they are naive then.
It just takes pip to add a flag to enable "non-attested" packages. And of course they'll name it something like --allow-insecure-potentially-malicious.
As long as those packages get digital attestation, perhaps attested by PyPI itself post-upload, or from a well-known user-provided key, similar to how GPG worked but managed by PyPI this time.
Surely you see how this is creating two classes of packages, where the favored one requires you use a blessed 3rd party?
Yes, agreed. This is why the docs explicitly say that we’re planning on enabling support for other publisher providers, like GitLab.
Again, I need people to let this sink in: Trusted Publishing is not tied to GitHub. You can use Trusted Publishing with GitLab, and other providers too. You are not required to produce attestations, even if you use Trusted Publishing. Existing GitLab workflows that do Trusted Publishing are not changed or broken in any way by this feature, and will be given attestation support in the near future. This is all documented.
Nobody is suggesting that. There's an annotation on the internals documentation to make sure that people land on the happy path when it makes sense for them. If you want to learn about the internals of how this works, I heartily encourage you to. I'd even be happy to discuss it over a call, or chat, or any other medium you'd like: none of this is intended to be cloak-and-dagger.
(You can reach this conclusion abductively: if any of this was intended to be opaque, why would we write a public standard, or public documentation, or a bunch of high-profile public announcements for it?)
The strongest possible version of this is that projects that do provide attestations will be checked by installers for changes in identity (or regression in attestations). In other words, this feature will only affect packages that opt to provide attestations. And even that is uncertain, since Python packaging is devolved and neither I (nor anybody else involved in this effort) has literally any ability to force installers to do anything.
* An extremely complex, byzantine packet format with security problems of its own.
* Decades of backwards compatibility, which also harms security.
* Extreme unfriendliness towards automation.
* Way too many features.
* Encouragement of bad security practices like extremely long lived keys.
* Moribund and flawed ecosystem.
Lots of cryptographers agree that PGP has outlived its usefulness and it's time to put it out of its misery.
And really there's little need for GPG when package signing can be done more reliably and with less work without it.
I was a fan of PGP since the early days, but I agree that at this point it's best to abandon it.
Any pathway to provide trusted attestations for random individual Hacker News users like yourself will, in fact, require a different design.
Maybe there is a case to be made for STF to fund making Codeberg (a German-headquartered organization) one of the PyPI trusted hosts. If Codeberg were supported, that would go a long way to addressing fears. And conversely, if Codeberg can't meet PyPI's bar, that suggests complete commercial capture.
Let this sink in: if it is possible to check attestations, attestations will be checked and required by users, and PyPI packages without them will be used less. Whether PyPI requires attestations is unimportant.
I still have to understand how the github credentials stored on my computer are harder to steal than the pypi credentials stored on the very same computer.
If you can explain this convincingly, maybe I'll start to believe some of the things you claim.
This points to a basic contradiction in how people are approaching open source: if you want your project to be popular on a massive scale, then you should expect those people to have opinions about how you're producing that project. That should not come as a surprise.
If, on the other hand, you want to run a project that isn't seeking popularity, then you have a reasonable expectation that people won't ask you for these things, and you shouldn't want your packages downloaded from PyPI as much as possible. When people do bug you for those things, explicitly rejecting them is (1) acceptable, and (2) should reduce the relative popularity of your project.
The combination of these things ("no social expectations and a high degree of implicit social trust/criticality") is incoherent and, more importantly, doesn't reflect observed behavior (people who do OSS as a hobby - like myself - do try and do the more secure things because there's a common acknowledgement of responsibility for important projects).
If you draw an ugly cat, and someone tells you it's ugly, it doesn't matter how much you insist that it isn't, and the same is true for docs. It doesn't matter what your intention was: if people keep falling over the same phrasing, just rephrase it. You're not your writing, it's just text to help support your product, and if that text is causing problems just change it (with the help of some reviewers, because it's clear you think this is phrased well enough, but you're not the target audience for this document, and the target audience is complaining).
I believe you would have received much less pushback if you hadn't been coy about the obviously impending quiet deprecation (for lack of a better phrase) you finally acknowledged elsewhere in the thread.
This is not, and has never been, the argument for Trusted Publishing. The argument is that temporary, automatically scoped credentials are less dangerous than permanent, user scoped credentials, and that an attacker who does steal them will struggle to maintain persistence or pivot to other scoped projects.
The documentation is explicit[1] about needing to secure your CI workflows, and treating them as equivalent to long-lived API tokens in terms of security practices. Using a Trusted Publisher does not absolve you of your security obligations; it only reduces the scope of failures within those obligations.
[1]: https://docs.pypi.org/trusted-publishers/security-model/
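To make "temporary, automatically scoped" concrete: the credential a workflow presents is a signed JWT whose claims pin it to one repository, workflow, and run, with an expiry minutes away. A small sketch that decodes (without verifying!) a token's payload, for inspection only:

```python
# Sketch: peek at the (unverified!) payload of a CI-issued OIDC token to see
# why it is "temporary" and "scoped". Never skip signature verification in
# real code; this is for inspection only.
import base64
import json

def peek_claims(jwt: str) -> dict:
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Typical GitHub Actions claims include "repository", "workflow_ref", and
# "run_id", plus an "exp" only minutes in the future -- unlike a long-lived
# API token, there is nothing durable to steal.
```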
I tell people to "pip install -i $MY_SITE $MY_PACKAGE". I can tell from my download logs that this is open to dependency confusion attacks as I can see all the 404s from attempts to, for example, install NumPy from my server. To be clear, the switch to 2FA was only the triggering straw - I was already migrating my packages off of PyPI.
Finally, I sell a source license for a commercial product (which is not the one which got me started with PyPI). My customers install it via their internally-hosted PyPI mirrors.
I provide a binary package with a license manager for evaluation purposes, and as a marketing promotion. As such, I really want them to come to my web site, see the documentation and licensing options, and contact me. I think making it easier to express as a dependency via PyPI does not help my sales, and actually believe the extra intermediation likely hinders my sales.
[1] I dislike dependencies so much that I figured out how to make a PEP 517 compatible version that doesn't need to contact PyPI simply to install a local package. Clearly I will not become a Rust developer.
[2] PyPI support depends on GitHub issues. I regard Microsoft as a deeply immoral company, and a threat to personal and national data sovereignty, which means I will not sign up for a GitHub account. When MS provides IT support for the upcoming forced mass deportations, I will have already walked away from Omelas.
The "reasonability" of this is dependent on your goals. If an open ecosystem isn't a priority, then your statement is indeed correct.
As a package consumer, I agree with what you've said. I would have a preference for packages that are built by a large, trusted provider. However, if I'm a package developer, the idea worries me a bit. I think the concerns others are raising are pragmatic because once a majority of developers start taking the easy path by choosing (ex) GitHub Actions, that becomes the de-facto standard and your options as a developer are to participate or be left out.
The problem for me is that I've seen the same scenario play out many times. No one is "forced" to use the options controlled by corporate interests, but that's where all the development effort is allocated and, as time goes on, the open source and independent options will simply disappear due the waning popularity that's caused by being more complex than the easier, corporate backed options.
At that point, you're picking platform winners because distribution by any other means becomes untenable or, even worse, forbidden if you decide that only attested packages are trustworthy and drop support for other means of publishing. Those platforms will end up with enormous control over what type of development is allowed. We have good examples of how it's bad for both developers and consumers too. Apple's App Store is the obvious one, but uBlock Origin is even better. In my opinion, Google changed their platform (Chrome) to break ad blockers.
I worry that future maintainers aren't guaranteed to share your ideals. How open is OpenSolaris these days? MySQL? OpenOffice?
I think the development community would end up in a much stronger position if all of these systems started with an option for self-hosted, domain based attestations. What's more trustworthy in your mind; 1) this package was built and published by ublockorigin.com or 2) this package was built and published by GitHub Actions?
Can an impersonator gain trust by publishing via GitHub actions? What do the uninformed masses trust more? 1) an un-attested package from gorhill/uBlock, which is a user without a verified URL, etc. or 2) an attested package from ublockofficial/ublockofficial, which could be set up as an organization with ublockofficial.com as a verified URL?
I know uBlock Origin has nothing to do with PyPI, but it's the best example to make my point. The point being that attesting to a build toolchain that happens to be run by a non-verifying identity provider doesn't solve all the problems related to identity, impersonation, etc. At worst, it provides a false sense of trust, because an attested package sounds like it's trustworthy, but it doesn't do anything to verify the trustworthiness of the source, does it?
I guess I think the term "Trusted Publisher" is wrong. Who's the publisher of uBlock Origin? Is it GitHub Actions or gorhill or Raymond Hill or ublockorigin.com? As a user, I would prefer to see an attestation from ublockorigin.com if I'm concerned about trustworthiness and only get to see one attestation. I know who that is, I trust them, and I don't care as much about the technology they're using behind the scenes to deliver the product because they have a proven track record of being trustworthy.
That said, I do agree with your point about gaining popularity and compromises that developers without an existing reputation may need to make. In those cases, I like the idea of having the option of getting a platform attestation since it adds some trustworthiness to the supply chain, but I don't think it should be labelled as more than that and I think it works better as one of many attestations where additional attestations could be used to provide better guarantees around identity.
Skimming the provenance link [1] in the docs, it says:
> It’s the verifiable information about software artifacts describing where, when and how something was produced.
Isn't who is responsible for an artifact the most important thing? Bad actors can use the same platforms and tooling as everyone else, so, while I agree that platform attestations are useful, I don't understand how they're much more than a verified (ex) "Built using GitHub" stamp.
To be clear, I think it's useful, but I hope it doesn't get mistakenly used as a way of automatically assuming project owners are trustworthy. It's also possible I've completely misunderstood the goals since I usually do better at evaluating things if I can try them and I don't publish anything to PyPI.
That's what I think will happen.
> And maybe that's a good thing? I'm not against security, and supply chain attacks are real.
The problem is the attestation is only for part of the supply chain. You can say "this artifact was built with GitHub Actions" and that's it.
If I'm using Gitea and Drone or self-hosted GitLab, I'm not going to get trusted publisher attestations even though I stick to best practices everywhere.
Contrast that with someone that runs as admin on the same PC they use for pirating software, has a passwordless GPG key that signs all their commits, and pushes to GitHub (Actions) for builds and deployments. That person will have more "verified" badges than me and, because of that, would out-compete me if we had similar looking projects.
The point being that knowing how part of the supply chain works isn't sufficient. Security considerations need to start the second your finger touches the power button on your PC. The build tool at the end of the development process is the tip of the iceberg and shouldn't be relied on as a primary indicator of trust. It can definitely be part of it, but only a small part IMO.
The only way a trusted publisher (aka platform) can reliably attest to the security of the supply chain is if they have complete control over your development environment which would include a boot-locked PC without admin rights, forced MFA with a trustworthy (aka their) authenticator, and development happening 100% on their cloud platform or with tools that come off a safe-list.
Even if everyone gets onboard with that idea it's not going to stop bad actors. It'll be exactly the same as bad actors setting up companies and buying EV code signing certificates. Anyone with enough money to buy into the platform will immediately be viewed with a baseline of trust that isn't justified.
That's not everything, but it is a pretty big step. I don't love the way it reinforces dependence on a few big platforms, but I also don't have a great alternative to suggest.
It seems hard to justify "we want to ensure the build is reproducible" when there is no system to attempt the build on PyPI's end and the default installer is perfectly happy to attempt the build (when no prebuilt version is available) on the user's machine - orchestrated by arbitrary code, and without confirmation as a default - without a chance to review the downloaded source first.
It seems hard to justify "we want to limit the scope of the authentication" when "a few minutes" is more than enough time for someone who somehow MITMed a major package to insert pre-prepared malware, and access to a single package from a major project affects far more machines than access to every repo of some random unimportant developer.
(The wheel format has hashes, but the sdist format doesn't. If these sorts of attacks aren't meaningfully mitigated by wheel hashes, what is the RECORD file actually accomplishing? If they are, shouldn't sdists have them too?)
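For reference, what RECORD does buy you is per-file integrity within the wheel; a sketch of checking it per the wheel spec, where each line is `path,sha256=<urlsafe-b64, unpadded>,size`:

```python
# Sketch of what a wheel's RECORD file lets you check: per-file digests inside
# the archive. This is integrity within the wheel, not provenance of the
# wheel itself.
import base64
import csv
import hashlib
import io
import zipfile

def verify_record(wheel_path: str) -> None:
    with zipfile.ZipFile(wheel_path) as wf:
        record_name = next(n for n in wf.namelist() if n.endswith(".dist-info/RECORD"))
        for path, digest, _size in csv.reader(io.TextIOWrapper(wf.open(record_name))):
            if not digest:  # the RECORD entry for itself carries no digest
                continue
            algo, _, b64 = digest.partition("=")
            expected = base64.urlsafe_b64decode(b64 + "=" * (-len(b64) % 4))
            if hashlib.new(algo, wf.read(path)).digest() != expected:
                raise RuntimeError(f"{path}: {algo} mismatch")
```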
I’m a big fan of this style [1] of building base containers and think that keeping the container where you’ve stacked 4 layers (up to resources) makes sense. Call it a build container and keep it forever.
In short, use "backend-path" to include a subdirectory which contains your local copies of setuptools, wheel, etc. Create a file with the build hooks appropriate for "backend-path". Have that those hooks import the actual hooks in setuptools. Finally, set your "requires" to [].
Doing this means taking on a support burden of maintaining setuptools, wheels, etc. yourself. You'll also need to include their copyright statements in any distribution, even though the installed code doesn't use them.
(As I recall, that "etc" is hiding some effort to track down and install the full list of packages dragged in, but right now I don't have ready access to that code base.)
I understand you're not deprecating or recommending anything, and have learned in the meantime just how ugly python infighting is in and around package tooling. I can see the motivation to keep external messaging limited to saying a feature was added and everything else remains constant.
I work with a bunch of normies who think Python packaging starts and ends at "pip install"-ing systemwide on Windows. One day in the future the maintainers of the packages they use will likely be encouraged to use this optional new feature (and/or they already have been publicly). Even later in the future those end users themselves might be encouraged by warnings or errors not to install packages without this extra feature.
PyPI has deprecated nothing. Bureaucrat Conrad, you are technically correct. The best kind of correct.
You're insinuating an ulterior motive. Please don't do that; assume good faith[1].
(You can find trivial counterexamples that falsify this: PyPI was loud and explicit about deprecating and disabling PGP; there's no reason to believe it would be any less loud and explicit about a change here.)
Not the way they implement things. When they started forcing people to use 2FA, Google also made a Titan key giveaway. But you set it up on your account, generate a token, and that's it.
Identical situation on GitHub. Set up 2FA with a hardware key, then generate a token and never use the hardware key ever again.
But they didn't touch twine at all. They just made me create a token and save it in a .txt file. That's it.