
Go is still not good

(blog.habets.se)
644 points by ustad
blixt No.44983245
I've been using Go more or less in every full-time job I've had since pre-1.0. It's simple for people on the team to pick up the basics, it generally chugs along (I'm rarely worried about updating to latest version of Go), it has most useful things built in, it compiles fast. Concurrency is tricky but if you spend some time with it, it's nice to express data flow in Go. The type system is most of the time very convenient, if sometimes a bit verbose. Just all-around a trusty tool in the belt.

But I can't help but agree with a lot of points in this article. Go was designed by some old-school folks who maybe stuck a bit too hard to their principles, losing sight of practical conveniences. That said, it's a _feeling_ I have, and maybe Go would be much worse if it had solved all these quirks. To be fair, I've seen more willingness to fix quirks in the last few years; at some point I didn't think we'd ever see generics, or custom iterators, etc.

The points about RAM and portability seem mostly like personal grievances, though. If it were better, that would be nice, of course. But the GC in Go is very unlikely to cause issues in most programs, even at very large scale, and it's not that hard to debug. And Go runs on most platforms anyone could ever wish to ship their software on.

But yeah, the whole error/nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.
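
You can sketch something along those lines with generics these days. This is a hypothetical toy (nothing like it is in the stdlib), and without sum types the compiler never forces you to check which case you got:

  package main

  import "fmt"

  // Result is a sketch of the type I keep wishing for: either a
  // value or an error, never meaningfully both.
  type Result[T any] struct {
      val T
      err error
  }

  func Ok[T any](v T) Result[T]      { return Result[T]{val: v} }
  func Err[T any](e error) Result[T] { return Result[T]{err: e} }

  func (r Result[T]) Unpack() (T, error) { return r.val, r.err }

  func divide(a, b int) Result[int] {
      if b == 0 {
          return Err[int](fmt.Errorf("division by zero"))
      }
      return Ok(a / b)
  }

  func main() {
      if v, err := divide(10, 2).Unpack(); err == nil {
          fmt.Println(v) // 5
      }
  }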

xyzzyz No.44983427
> Go was designed by some old-school folks who maybe stuck a bit too hard to their principles, losing sight of practical conveniences.

I'd say it's entirely the other way around: they stuck to the practical convenience of solving the problem they had in front of them, quickly, instead of analyzing the problem from first principles and solving it correctly (or using a solution that was Not Invented Here).

Go's filesystem API is the perfect example. You need to open files? Great, we'll create a

  func Open(name string) (*File, error)
function; you can open files now, done. What if the file name isn't valid UTF-8, though? Who cares? It hasn't happened to me in the first 5 years I used Go.
nasretdinov No.44983477
Note that Go strings can contain invalid UTF-8; they dropped panicking on encountering an invalid UTF-8 string before 1.0, I think.
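
For instance (a quick sketch of what that looks like in practice):

  package main

  import (
      "fmt"
      "unicode/utf8"
  )

  func main() {
      // A lone continuation byte: not valid UTF-8, yet Go constructs
      // and stores the string without complaint.
      s := "abc\x80def"

      fmt.Println(len(s))              // 7 (length in bytes, not runes)
      fmt.Println(utf8.ValidString(s)) // false: validity is checked only when you ask
  }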
xyzzyz No.44983502
This also epitomizes the issue. What's the point of having a `string` type at all if it doesn't let you make any extra assumptions about the contents beyond `[]byte`? The answer is that they planned to make conversion to `string` error out on invalid UTF-8, and then assume that `string`s are valid UTF-8, but that caused problems elsewhere, so they dropped it for immediate practical convenience.
assbuttbuttass No.44983745
string is just an immutable []byte. It's actually one of my favorite things about Go that strings can contain invalid UTF-8, so you don't end up with the Rust mess of String vs OsString vs PathBuf vs Vec<u8>. It's all just string.
zozbot234 No.44984167
Rust &str and String are specifically intended for UTF-8 valid text. If you're working with arbitrary byte sequences, that's what &[u8] and Vec<u8> are for in Rust. It's not a "mess", it's just different from what Golang does.
maxdamantus No.44985383
It's never been clear to me where such a type is actually useful. In what cases do you really need to restrict it to valid UTF-8?

You should always be able to iterate the code points of a string, whether or not it's valid Unicode. The iterator can either silently replace any errors with replacement characters or denote them by returning e.g. `Result<char, Utf8Error>`, depending on the use case.
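
Go's for-range loop over a string is an example of the first option; a quick sketch:

  package main

  import "fmt"

  func main() {
      s := "a\x80b" // an encoding error between two valid bytes

      // Ranging decodes rune by rune; the invalid byte comes through
      // as U+FFFD and iteration simply continues.
      for i, r := range s {
          fmt.Printf("%d: %q\n", i, r)
      }
      // Output:
      // 0: 'a'
      // 1: '�'
      // 2: 'b'
  }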

All languages that have tried restricting strings to valid Unicode have, afaik, ended up adding workarounds for the fact that real-world "text" sometimes has encoding errors, and it's often better to preserve those errors than to corrupt the data with replacement characters, or to refuse some inputs and crash the program.

In Rust there's bstr/ByteStr (currently being added to std); it's awkward having to decide which string type to use.

In Python there's PEP 383's "surrogateescape", which works because Python strings are not guaranteed valid (they're potentially ill-formed UTF-32 sequences, with a range restriction). It's awkward figuring out when to actually use it.

In Raku there's UTF8-C8, which is probably the weirdest workaround of all (left as an exercise for the reader to try to understand ... oh, and it also interferes with valid Unicode that's not normalized, because that's another stupid restriction).

Meanwhile the Unicode standard itself specifies Unicode strings as being sequences of code units [0][1], so Go is one of the few modern languages that actually implements Unicode (8-bit) strings. Note that at least two out of the three inventors of Go also basically invented UTF-8.

[0] https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...

> Unicode string: A code unit sequence containing code units of a particular Unicode encoding form.

[1] https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...

> Unicode strings need not contain well-formed code unit sequences under all conditions. This is equivalent to saying that a particular Unicode string need not be in a Unicode encoding form.

xyzzyz No.44986105
The way Rust handles this is perfectly fine. The String type promises its contents are valid UTF-8. When you create it from an array of bytes, you have three options: 1) ::from_utf8, which forces you to handle the invalid-UTF-8 error; 2) ::from_utf8_lossy, which replaces invalid sequences with the replacement character; and 3) ::from_utf8_unchecked, which skips the validity check and is explicitly marked as unsafe.
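
For contrast, a sketch of the Go side of the same conversion: []byte-to-string never errors, and the rough equivalents of 1) and 2) are separate, optional calls:

  package main

  import (
      "fmt"
      "strings"
      "unicode/utf8"
  )

  func main() {
      b := []byte{'h', 'i', 0xFF}

      s := string(b) // always succeeds, invalid bytes and all

      // Roughly option 1: an explicit validity check, decoupled
      // from the conversion itself.
      fmt.Println(utf8.Valid(b)) // false

      // Roughly option 2: lossy cleanup, invalid bytes -> U+FFFD.
      fmt.Println(strings.ToValidUTF8(s, "\uFFFD")) // "hi�"
  }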
maxdamantus No.44986537
But there's no option to just construct the string with the invalid bytes. 3) is not for this purpose; it's for when you already know the input is valid.

If you use 3) to create a &str/String from invalid bytes, you can't safely use that string as the standard library is unfortunately designed around the assumption that only valid UTF-8 is stored.

https://doc.rust-lang.org/std/primitive.str.html#invariant

> Constructing a non-UTF-8 string slice is not immediate undefined behavior, but any function called on a string slice may assume that it is valid UTF-8, which means that a non-UTF-8 string slice can lead to undefined behavior down the road.

gf000 No.44987186
How could any library function work with completely random bytes? Like, how would it iterate over code points? It may want to assume UTF-8's standard rules and, e.g., know that after this byte prefix, the next byte is also part of the same code point (excuse me if I'm using the wrong terminology). But now you need complex error handling at every single line, which would be unnecessary if you just made your type represent only valid instances.

Again, it's the same trade-off between a simplistic abstraction and just the right one; this just smudges the complexity over a much larger surface area.

If you have a byte array that is not UTF-8 encoded, then just... use a byte array.

kragen No.44988899
There are a lot of operations that are valid and well-defined on binary strings, such as sorting them, hashing them, writing them to files, measuring their lengths, indexing a trie with them, splitting them on delimiter bytes or substrings, concatenating them, substring-searching them, posting them to ZMQ as messages, subscribing to them as ZMQ prefixes, using them as keys or values in LevelDB, and so on. For binary strings that don't contain null bytes, we can add passing them as command-line arguments and using them as filenames.

The entire point of UTF-8 (designed, by the way, by the group that designed Go) is to encode Unicode in such a way that these byte string operations perform the corresponding Unicode operations, precisely so that you don't have to care whether your string is Unicode or just plain ASCII, so you don't need any error handling, except for the rare case where you want to do something related to the text that the string semantically represents. The only operation that doesn't really map is measuring the length.
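
To make that concrete, a sketch in Go: the same byte-level calls behave identically whether or not the bytes happen to be valid UTF-8:

  package main

  import (
      "fmt"
      "strings"
  )

  func main() {
      // One valid string, one with encoding errors.
      for _, s := range []string{"héllo,wörld", "h\xffllo,w\xfeld"} {
          fmt.Println(strings.Split(s, ","))      // splitting on a delimiter
          fmt.Println(strings.Contains(s, "llo")) // substring search
          fmt.Println(len(s + "!"))               // concatenation, byte length
      }
  }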

xyzzyz No.44989099
> There are a lot of operations that are valid and well-defined on binary strings, such as (...), and so on.

Every single thing you listed here is supported by the &[u8] type. That's the point: if you want to operate on data without assuming it's valid UTF-8, you just use &[u8] (or the allocating Vec<u8>), and the standard library offers what you'd typically want, except for the functions that assume the string is valid UTF-8 (e.g. iterating over code points). If you want those, you need to convert your &[u8] to &str, and the conversion forces you to check for errors.

maxdamantus No.44991335
The problem is that there are so many functions that unnecessarily take `&str` rather than `&[u8]` because the expectation is that textual things should use `&str`.

So you naturally write another one of these functions that takes a `&str` so that it can pass to another function that only accepts `&str`.

Fundamentally, no one actually requires validation (i.e., walking over the string an extra time up front); we're just making it part of the contract because something else has made it part of the contract.

kragen No.44991769
It's much worse than that—in many cases, such as passing a filename to a program on the Linux command line, correct behavior requires not validating, so erroring out when validation fails introduces bugs. I've explained this in more detail in https://news.ycombinator.com/item?id=44991638.
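
A minimal sketch of the Go behavior (assuming Linux, where argv is raw bytes): the argument passes from argv to open(2) untouched, whether or not it's valid UTF-8:

  package main

  import (
      "fmt"
      "os"
  )

  func main() {
      if len(os.Args) < 2 {
          fmt.Fprintln(os.Stderr, "usage: open <file>")
          os.Exit(2)
      }
      // os.Args[1] is whatever bytes the kernel handed over; Go never
      // validates them, so a non-UTF-8 filename survives the round trip.
      f, err := os.Open(os.Args[1])
      if err != nil {
          fmt.Fprintln(os.Stderr, err)
          os.Exit(1)
      }
      defer f.Close()
      fmt.Println("opened:", f.Name())
  }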