Leading a large open source project must be terrible in this age of constant outrage :-(
I do understand people's points about "the age of outrage" and "internet 2018" but still: the PEP wasn't generally accepted as being a fantastic improvement, so why did he feel the need to fight so hard for it?
Interestingly, C++ is going through the same process: lots of great ideas are being proposed, but their sum total is an even more complicated language (on top of what is probably already the most complicated language out there).
Python has been successful, IMHO, because Guido has made several brave, controversial calls. Python 3 breakage and async turned out to be prescient, fantastic decisions.
So yes some breaking change was indicated, but the particular change that was made was the wrong one.
Until way too recently, essentially all Python code couldn't handle basic SSL/TLS certificate validation for Internationalized Domain Names. Once you understand what's going on, the situation is a no-brainer: in order to connect to a machine named X, we must already have turned X into DNS A-labels that we could look up, and we can treat those as bytes. The certificate must have SAN dnsNames matching the machine's names, and those too are written as A-labels. So we can almost just compare the literal bytes (A-labels are "case-insensitive", so we need to handle that, plus the asterisk wildcard).
But Python, in its wisdom, defined this API to take a string, and strings, as you observe, aren't bytes. So instead of the above unarguable approach they wasted precious months trying to figure out how best to turn the A-labels from a SAN dnsName into Unicode strings, which isn't even the right problem to solve.
Eventually sanity prevailed: https://bugs.python.org/issue28414
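For what it's worth, here's roughly what that "compare the literal bytes" approach looks like; a minimal sketch, assuming Python's built-in (IDNA 2003) codec, a hypothetical dns_names_match helper, and a made-up SAN list. Real matching per RFC 6125 has more rules than this.

    def dns_names_match(reference: bytes, presented: bytes) -> bool:
        """Compare two DNS names as A-labels (bytes): case-insensitive,
        with minimal support for a left-most "*." wildcard label."""
        ref, pres = reference.lower(), presented.lower()
        if pres.startswith(b"*."):
            # A wildcard only stands in for a single left-most label.
            return b"." in ref and ref.split(b".", 1)[1] == pres[2:]
        return ref == pres

    # Hostname we want to connect to, converted to A-labels via IDNA.
    host = "bücher.example".encode("idna")  # b'xn--bcher-kva.example'

    # Hypothetical SAN dNSName entries from a certificate, already A-labels.
    san_dns_names = [b"*.example", b"xn--bcher-kva.example"]

    print(any(dns_names_match(host, san) for san in san_dns_names))  # True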
Long and bitter experience has shown that people who think they can "simply ignore" the "issue" of encoding actually can't. That mindset is mostly a more polite way of saying "people who assume everything is ASCII all the time, or at most an encoding that always has one byte == one character". Those assumptions break sooner or later. I prefer having them break sooner, because I've been the person who had to clean up the mess at an unpleasant time when it was kicked to "later".
Which in turn means Python 3 made the right choice: text is text and bytes are bytes, and you should never ever pretend bytes are text no matter how much you think you'll never run into a case where the assumption fails.
Making people actually do the conversion has the advantage that, when writing their conversion code, they might actually do something with the exception beyond logging it and pressing on anyway. It also gives you the opportunity to explicitly offer alternatives, like turning everything that can't be converted into some sort of replacement character (which works well for Unicode, not so much for ASCII), something that would be way too much to ask of every single function that takes bytes.
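To make that concrete on the decode side, a quick sketch of the difference between "blow up so the caller has to decide" and "explicitly opt into replacement characters", assuming UTF-8 is the expected encoding:

    data = b"caf\xe9"  # Latin-1 bytes, not valid UTF-8

    try:
        text = data.decode("utf-8")  # strict by default: forces a decision
    except UnicodeDecodeError:
        # Explicit, opt-in lossiness: undecodable bytes become U+FFFD.
        text = data.decode("utf-8", errors="replace")

    print(text)  # 'caf\ufffd'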
That's the ruby solution. I like the ruby API here. When ruby introduced it in 1.9, it did cause similar upgrade pain, since you weren't used to having your strings tagged with an encoding, and suddenly they all kind of needed to be if you wanted to do anything with them as strings-not-bytes.
As someone else noted would be the result, indeed the result was lots of "incompatible encoding" exceptions.
I think ruby actually has a pretty reasonable API here, but several years on, there are still _plenty_ of developers who don't understand it.
Show me the 1995 software that gets this right, and I'll show you proof that time travel is possible, by simply showing you that software right back again.
Abstractly, yes, it's true. Concretely, though, it's not a particularly valid criticism.
Sorry if I wasn't clear. I meant to suggest that the byte-functions wouldn't know or do anything about encodings. They just work with bytes. It's the other functions, that take the encoding-annotated bytes (or optionally a "pure" unicode type), that would care about encodings.
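Something like this toy sketch, where the EncodedText class is entirely hypothetical and just illustrates the split: byte-oriented code only ever touches .raw and never consults the tag, while text-oriented code decodes explicitly.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class EncodedText:
        """Hypothetical 'bytes plus encoding tag' type, roughly the Ruby 1.9 model."""
        raw: bytes
        encoding: str

        def to_unicode(self) -> str:
            # Only code that actually needs text ever decodes.
            return self.raw.decode(self.encoding)

    payload = EncodedText(b"gr\xfc\xdfe", "latin-1")
    print(len(payload.raw))      # 5 -- pure byte work, encoding never consulted
    print(payload.to_unicode())  # 'grüße' -- text work decodes explicitly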
There are so many languages in which the type "string" is not the one to use for strings...
No, they weren't. Impossible as it seems, we had encodings (including multi-byte encodings) for decades before UTF.
Python couldn't use UTF-8 in 1991, but it could very well tag strings with a specific encoding, instead of treating them as a bucket of bytes C-style.
> Java did do it right but by 1995 the industry had already seen the problems of differing character sets.
We had seen the problems of "differing character sets" for decades already (Windows vs DOS version of the same language encodings was a classic example for most users, but the problems go back to EBCDIC and so on).
Java just did something closer to right, but there's still a need for generic string types that can handle multiple encodings, know what they contain, and know how to convert from one to another.
If you truly don't care whether or not the text is decodable (which is sometimes the case), then don't read it as a string; read it as a byte array.
You can still have methods that are generic over byte arrays and text, since that is an orthogonal concern.
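For example, a sketch in Python terms (the file names are placeholders): the same function accepts either bytes or str, and only the caller that opened the file in text mode ever pays for decoding.

    from typing import AnyStr

    def count_lines(data: AnyStr) -> int:
        """Works on both bytes and str; the caller decides whether decoding happens."""
        sep = b"\n" if isinstance(data, bytes) else "\n"
        return data.count(sep)

    with open("dump.bin", "rb") as f:               # bytes in, no decode errors possible
        print(count_lines(f.read()))

    with open("notes.txt", encoding="utf-8") as f:  # explicit, strict decoding
        print(count_lines(f.read()))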
Python preceded Unicode, though.