2024 points by randlet | 12 comments
bla2 No.17515883
> I don't ever want to have to fight so hard for a PEP and find that so many people despise my decisions.

Leading a large open source project must be terrible in this age of constant outrage :-(

replies(9): >>17515955 #>>17515972 #>>17516193 #>>17516427 #>>17516776 #>>17516884 #>>17517282 #>>17517716 #>>17517821 #
symmitchry No.17515972
I'm a little confused, though, by his feelings here. Why did he feel the need to "fight so hard for a PEP" if it was so controversial and everyone was outraged?

I do understand people's points about "the age of outrage" and "internet 2018" but still: the PEP wasn't generally accepted as being a fantastic improvement, so why did he feel the need to fight so hard for it?

replies(5): >>17516128 #>>17516129 #>>17516223 #>>17516774 #>>17519017 #
jnwatson No.17516128
It was controversial syntax: inline assignment-as-expression. There's always a tension between "keep it simple, stupid" and "let's make it better", especially when a large demographic of Python users are non-professional programmers.
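
For context, the controversial syntax is PEP 572's assignment expressions, the ":=" ("walrus") operator that eventually shipped in Python 3.8. A minimal sketch of what it buys you:

    import re

    line = "error: disk full"

    # With PEP 572: bind a name and test it in a single expression
    if (match := re.search(r"error: (.+)", line)) is not None:
        print(match.group(1))   # "disk full"

    # Without it: a separate assignment statement is needed first
    match = re.search(r"error: (.+)", line)
    if match is not None:
        print(match.group(1))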

Interestingly, C++ is going through the same process: lots of great ideas are being proposed, but their sum total is an even more complicated language (on top of what is already probably the most complicated language).

Python has been successful, IMHO, because Guido has made several brave, controversial calls. Python 3 breakage and async turned out to be prescient, fantastic decisions.

replies(6): >>17516204 #>>17516226 #>>17516681 #>>17517178 #>>17517212 #>>17533584 #
coldtea [dead post] No.17516204
>Python 3 breakage and async turned out to be prescient, fantastic decisions.

Async maybe. Python 3 breakage? Did you forget the /s tag?

ben509 No.17516612
No, the bad decision was treating bytes and strings interchangeably in the first place. 99% of the hardest-to-fix breakage was due to that, and it was the right call to pay that price all at once.
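
For context, this is the Python 2 behaviour being described: str (bytes) and unicode mixed freely as long as the data happened to be ASCII, so the failure only surfaced once non-ASCII data showed up.

    >>> "hello " + u"world"      # Python 2: the byte string is silently decoded as ASCII
    u'hello world'
    >>> "caf\xc3\xa9" + u"!"     # the same implicit decode blows up only on non-ASCII data
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3: ordinal not in range(128)
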
replies(6): >>17516995 #>>17517386 #>>17517687 #>>17518069 #>>17520956 #>>17528411 #
1. jessaustin No.17516995
The API in Python 2 is not optimal, but they fixed it the wrong way. As you know, some operations make sense with bytes, and some make sense with character strings. The operations that make sense with character strings would also make sense with bytes when an encoding is specified. Therefore, there should just be a way of annotating bytes with a suggested encoding. Then byte-oriented packages (e.g. those that deal with data sent over an interface like a socket or pipe) could simply ignore the issue of encoding. Whole classes of errors would just disappear for many Python coders. Other coders, who do care about encodings and non-ASCII characters, would still get those errors, but that would be OK, because they would know how to fix them.

So yes, some breaking change was indicated, but the particular change that was made was the wrong one.
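
A purely hypothetical sketch of what that annotation could look like; "EncodedBytes" is an invented name for illustration, not an actual Python type:

    # Hypothetical sketch only: bytes carrying a *suggested* encoding.
    class EncodedBytes(bytes):
        def __new__(cls, data, suggested_encoding="utf-8"):
            obj = super().__new__(cls, data)
            obj.suggested_encoding = suggested_encoding
            return obj

        def as_text(self):
            # The only place a decoding error can surface; byte-oriented
            # code that never calls this never has to care about encodings.
            return self.decode(self.suggested_encoding)

    payload = EncodedBytes(b"caf\xc3\xa9", "utf-8")
    print(len(payload))        # 5: the byte-level view, no decoding involved
    print(payload.as_text())   # "café": decoding happens only on demand

Byte-oriented code could slice, split, and forward such an object without ever touching the encoding; only code that asks for the text view would see encoding errors.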

replies(4): >>17517441 #>>17517444 #>>17517465 #>>17522768 #
2. ubernostrum No.17517441
> Then byte-oriented packages (e.g. those that deal with data sent over an interface like a socket or pipe) could simply ignore the issue of encoding.

Long and bitter experience has shown that people who think they can "simply ignore" the "issue" of encoding actually can't. That mindset is mostly a more polite way of saying "people who assume everything is ASCII all the time, or at most an encoding that always has one byte == one character". Those assumptions break sooner or later. I prefer having them break sooner, because I've been the person who had to clean up the mess at an unpleasant time when it was kicked to "later".

Which in turn means Python 3 made the right choice: text is text and bytes are bytes, and you should never ever pretend bytes are text no matter how much you think you'll never run into a case where the assumption fails.
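
A small illustration of the "break sooner" behaviour Python 3 gives you at the str/bytes boundary:

    data = b"caf\xe9"                  # Latin-1-encoded bytes from some external source

    try:
        data.decode("utf-8")           # the wrong assumption fails right here, not later
    except UnicodeDecodeError as exc:
        print("broke sooner:", exc)

    print(data.decode("latin-1"))      # 'café' once the real encoding is known

    # And the types never blur: "café" + b"caf\xe9" raises
    # TypeError: can only concatenate str (not "bytes") to str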

replies(2): >>17517640 #>>17522848 #
3. tialaramex No.17517444
Implicitly converting strings into bytes, or vice versa, means all your APIs now grow an exception ("invalid encoding" / "can't encode this" / "can't decode this" / etc.) that you need to deal with.

Making people do the conversion explicitly has the advantage that, when writing their string-conversion code, they might actually do something with the exception beyond logging it and pressing on anyway. It also gives you the opportunity to explicitly offer them alternatives, like treating everything that can't be encoded as some sort of replacement character (which works well for Unicode, not so much for ASCII); that is far too much to ask of every single function that takes bytes.
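
In Python 3 terms, those alternatives live at the one explicit conversion point, and the replacement-character option really does work better for Unicode than for ASCII:

    raw = b"na\xefve bytes"                           # not valid UTF-8

    print(raw.decode("utf-8", errors="replace"))      # 'na\ufffdve bytes' (U+FFFD replacement character)
    print(raw.decode("utf-8", errors="ignore"))       # 'nave bytes' (bad bytes silently dropped)
    print("naïve".encode("ascii", errors="replace"))  # b'na?ve' (ASCII has nothing better than '?')

    # raw.decode("utf-8")  # with no policy chosen, this is where UnicodeDecodeError is raised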

replies(1): >>17517968 #
4. jrochkind1 No.17517465
> The operations that make sense with character strings would also make sense with bytes when an encoding is specified. Therefore, there should just be a way of annotating bytes with a suggested encoding.

That's the Ruby solution. I like the Ruby API here. When Ruby introduced it in 1.9, it did cause similar upgrade pain, since you weren't used to having your strings tagged with an encoding, and suddenly they all kind of needed to be if you wanted to do anything with them as strings-not-bytes.

As someone else noted would happen, the result was indeed lots of "incompatible encoding" exceptions.

I think Ruby actually has a pretty reasonable API here, but several years on there are still _plenty_ of developers who don't understand it.

replies(1): >>17518080 #
5. No.17517640
6. jessaustin No.17517968
> ...which is way too much to ask of every single function that takes bytes.

Sorry if I wasn't clear. I meant to suggest that the byte-functions wouldn't know or do anything about encodings; they just work with bytes. It's the other functions, the ones that take the encoding-annotated bytes (or optionally a "pure" unicode type), that would care about encodings.

7. jessaustin No.17518080
Sure, there are growing pains with any breaking change. Do you think the Ruby 1.8 -> 1.9 transition went more smoothly than the Python 2.7 -> 3.x... transition?
replies(2): >>17518536 #>>17518878 #
8. jrochkind1 No.17518536
My impression is that it did, yes.
replies(1): >>17520279 #
9. coldtea No.17518878
Sure it did. It barely took a few years, and it also came with a large performance boost (whereas Python 3 initially had regressions until 3.4 or so).

Plus, basic things like Rails were working from the start.

10. jrochkind1 No.17520279
But on the other hand, Python's overall popularity has grown (mostly on the back of 'data science', I think) while Ruby's has shrunk, and Python is definitely more popular than Ruby at the moment... so Python's lesser success at that transition didn't actually matter much in the end?
11. kqr No.17522768
Encodings are related to storage and transport, not business logic. You should not have to deal with encodings inside your application, so forcing programmers to handle them at the point where it matters, when text enters and exits the application, is sound.

If you truly don't care whether or not the text is decodable (which is sometimes the case), then don't read it as a string; read it as a byte array.

You can still have methods that are generic over byte arrays and text, since that is an orthogonal concern.
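
A sketch of that boundary discipline in Python; the file names are just placeholders:

    # Decode once, where text enters the application...
    with open("input.txt", "rb") as f:
        text = f.read().decode("utf-8")

    # ...work purely with str inside the business logic...
    report = text.upper()

    # ...and encode once, where it leaves again.
    with open("output.txt", "wb") as f:
        f.write(report.encode("utf-8"))

    # If you genuinely don't care whether it decodes, keep it as bytes:
    with open("input.txt", "rb") as f:
        blob = f.read()                # raw bytes, no encoding assumption made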

12. jessaustin No.17522848
For lots of applications it doesn't matter how many bytes are in a character, because characters don't matter. Even within a particular application, it's common for characters to matter only in a few particular places. It would still have been a win for Python to make such applications easier to write and maintain.