
2024 points | randlet | 2 comments
bla2 ◴[] No.17515883[source]
> I don't ever want to have to fight so hard for a PEP and find that so many people despise my decisions.

Leading a large open source project must be terrible in this age of constant outrage :-(

replies(9): >>17515955 #>>17515972 #>>17516193 #>>17516427 #>>17516776 #>>17516884 #>>17517282 #>>17517716 #>>17517821 #
symmitchry ◴[] No.17515972[source]
I'm a little confused, though, by his feelings here. Why did he feel the need to "fight so hard for a PEP" if it was so controversial and everyone was outraged?

I do understand people's points about "the age of outrage" and "internet 2018", but still: the PEP wasn't generally accepted as a fantastic improvement, so why did he feel the need to fight so hard for it?

replies(5): >>17516128 #>>17516129 #>>17516223 #>>17516774 #>>17519017 #
jnwatson ◴[] No.17516128[source]
It was controversial syntax: inline assignment as an expression (PEP 572, the := "walrus" operator). There's always a tension between "keep it simple, stupid" and "let's make it better", especially when a large share of Python's user base are non-professional programmers.
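
A minimal sketch of the proposed syntax (assignment as an expression via the := operator):

    data = [1, 2, 3, 4, 5]

    # Today: bind first, then test (or call len() twice).
    n = len(data)
    if n > 3:
        print("too long:", n)

    # With PEP 572, the binding happens inline in the expression.
    if (n := len(data)) > 3:
        print("too long:", n)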

Interestingly, C++ is going through the same process: lots of great ideas are being proposed, but their sum total is an even more complicated language (on top of what is probably the most complicated language already).

Python has been successful, IMHO, because Guido has made several brave, controversial calls. Python 3 breakage and async turned out to be prescient, fantastic decisions.

replies(6): >>17516204 #>>17516226 #>>17516681 #>>17517178 #>>17517212 #>>17533584 #
coldtea[dead post] ◴[] No.17516204[source]
>Python 3 breakage and async turned out to be prescient, fantastic decisions.

Async, maybe. Python 3 breakage? Did you forget the /s tag?

ben509 ◴[] No.17516612[source]
No, the bad decision was treating bytes and strings interchangeably in the first place. 99% of the hardest-to-fix breakage was due to that, and it was the right call to pay that price all at once.
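
A minimal Python 3 sketch of what "not interchangeable" means (Python 2 would have coerced via the ASCII codec and only failed at runtime on non-ASCII data):

    # Python 3 refuses to mix bytes and str implicitly.
    try:
        b"caf\xc3\xa9" + "s"
    except TypeError as exc:
        print(exc)    # can't concat str to bytes

    # The conversion has to be explicit, with an encoding.
    print(b"caf\xc3\xa9".decode("utf-8") + "s")    # -> cafés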
replies(6): >>17516995 #>>17517386 #>>17517687 #>>17518069 #>>17520956 #>>17528411 #
jessaustin ◴[] No.17516995[source]
The API in Python 2 is not optimal, but they fixed it the wrong way. As you know, some operations make sense with bytes, and some make sense with character strings. The operations that make sense with character strings would also make sense with bytes when an encoding is specified. Therefore, there should just be a way of annotating bytes with a suggested encoding. Then byte-oriented packages (e.g. those that deal with data sent over an interface like a socket or pipe) could simply ignore the issue of encoding. Whole classes of errors would just disappear for many Python coders. Other coders, who do care about encodings and non-ASCII characters, would still get those errors, but that would be OK because they would know how to fix them.

So yes some breaking change was indicated, but the particular change that was made was the wrong one.
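
Something like this hypothetical wrapper is what I have in mind (EncodedBytes is a made-up name, not an existing type):

    # Hypothetical: bytes carrying a *suggested* encoding alongside the data.
    class EncodedBytes(bytes):
        def __new__(cls, data, encoding=None):
            obj = super().__new__(cls, data)
            obj.encoding = encoding    # annotation only; nothing is decoded here
            return obj

    payload = EncodedBytes(b"caf\xc3\xa9", encoding="utf-8")

    # Byte-oriented code (sockets, pipes) treats it as plain bytes.
    print(len(payload))

    # Text-oriented code can consult the annotation when it needs characters.
    print(payload.decode(payload.encoding or "ascii"))    # -> café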

replies(4): >>17517441 #>>17517444 #>>17517465 #>>17522768 #
1. tialaramex ◴[] No.17517444[source]
Implicitly converting strings into bytes, or vice versa, means that all your APIs now grow an exception ("invalid encoding" / "can't encode this" / "can't decode this" / etcetera) that you need to deal with.

Making people actually do the conversion has the advantage that when writing their string conversion code they might actually do something with the exception beyond maybe logging it and then pressing on anyway. It also gives you the opportunity to explicitly offer them alternatives like treating everything we can't encode as some sort of replacement character (works well for Unicode, not so much for ASCII), which is way too much to ask of every single function that takes bytes.
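
Concretely, a small sketch using the standard errors= handlers on bytes.decode:

    raw = b"caf\xe9"    # Latin-1 bytes, not valid UTF-8

    # Explicit conversion: the error surfaces exactly where you decode...
    try:
        raw.decode("utf-8")
    except UnicodeDecodeError as exc:
        print("handle it here:", exc)

    # ...and the caller can opt into an explicit fallback, e.g. U+FFFD.
    print(raw.decode("utf-8", errors="replace"))    # -> caf�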

replies(1): >>17517968 #
2. jessaustin ◴[] No.17517968[source]
> ...which is way too much to ask of every single function that takes bytes.

Sorry if I wasn't clear. I meant to suggest that the byte-functions wouldn't know or do anything about encodings; they just work with bytes. It's the other functions, the ones that take the encoding-annotated bytes (or optionally a "pure" unicode type), that would care about encodings.
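
A rough sketch of that split (frame and render are made-up names):

    # Byte-oriented code: never looks at encodings, just moves bytes around.
    def frame(payload):
        return len(payload).to_bytes(4, "big") + payload

    # Text-oriented code: the only place the suggested encoding is consulted.
    def render(payload, suggested_encoding=None):
        return payload.decode(suggested_encoding or "ascii")

    data = "café".encode("utf-8")
    print(frame(data))              # raw bytes in, raw bytes out
    print(render(data, "utf-8"))    # -> café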