
Go is still not good

(blog.habets.se)
644 points ustad | 7 comments
the_duke ◴[] No.44983331[source]
I personally don't like Go, and it has many shortcomings, but there is a reason it is popular regardless:

Go is a reasonably performant language that makes it pretty straightforward to write reliable, highly concurrent services without relying on heavyweight OS threads - all thanks to the goroutine model.

There really was no other reasonably popular, static, compiled language around when Go came out.

And there still barely is - the only real competitor that sits in a similar space is Java with the new virtual threads.

Languages with async/await promise something similar, but in practice are burdened with a lot of complexity (avoiding blocking in async tasks, function colouring, ...)

I'm not counting Erlang here, because it is a very different type of language...

So I'd say Go is popular despite the myriad of shortcomings, thanks to goroutines and the Google project street cred.

replies(7): >>44983372 #>>44983413 #>>44983414 #>>44983469 #>>44983501 #>>44983524 #>>44983597 #
zwnow ◴[] No.44983372[source]
What modern language is a better fit for new projects in your opinion?
replies(5): >>44983386 #>>44983445 #>>44985494 #>>44989834 #>>45025592 #
jiehong ◴[] No.44989834[source]
Maybe weirdly I’d put swift in there.
replies(1): >>44990171 #
1. vips7L ◴[] No.44990171[source]
Swift is my hope for the next big server language. Great type system, great error handling.
replies(2): >>44994176 #>>44996044 #
2. gf000 ◴[] No.44994176[source]
I haven't followed swift too closely, but ref counting is not a good fit for typical server applications. Sure, value types and such take a lot of load off the GC (yes, ref counting is a GC), but still, tracing GCs have much better performance on server workloads. (Reference counting when an object is shared between multiple cores requires atomic increments/decrements, and that is very expensive.)
replies(2): >>44994279 #>>44995105 #
3. jiehong ◴[] No.44994279[source]
Tracing GCs and their pauses on server workloads are another tradeoff. Every approach has one. You make a fair point.
replies(1): >>44994301 #
4. gf000 ◴[] No.44994301{3}[source]
Sure, though RC can't get away from pauses either - ever seen a C++ program seemingly hang at termination? That's a large object graph recursively running its destructors. And the worst thing is that it runs on the mutator thread (the thread doing the actual work).

Also, Java has ZGC that basically solved the pause time issue, though it does come at the expense of some throughput (compared to their default GC).
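For reference, ZGC has been production-ready since JDK 15 and is opt-in via a single flag (the jar name here is a placeholder):

```shell
java -XX:+UseZGC -jar app.jar
```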

5. zozbot234 ◴[] No.44995105[source]
> but still, tracing GCs have much better performance on server workloads

Good performance with traditional tracing GC's involves a lot of memory overhead. Golang improves on this quite a bit with its concurrent GC, and maybe Java will achieve similarly in the future with ZGC, but reference counting has very little memory overhead in most cases.

> Reference counting when an object is shared between multiple cores require atomic increments/decrements and that is very expensive

Reference counting with a language like Rust only requires atomic inc/dec when independently "owning" references (i.e. references that can keep the object around and extend its lifecycle) are added or removed, which should be a rare operation. It's not really performing an atomic op on every access.

replies(1): >>44995820 #
6. gf000 ◴[] No.44995820{3}[source]
And memory is cheap, especially when we talk about backend workloads.

A tracing GC can do the job concurrently, without slowing down the actual work-bearing threads, so throughput will be much better.

> Golang improves on this quite a bit with its concurrent GC

Not sure what that has to do with memory overhead. Java's GCs are at least a generation ahead on every count; Go can just get away with a slower GC due to value types.

7. Degorath ◴[] No.44996044[source]
In my opinion they need to invest a lot more time and money into it for that. The development experience on VSCode was pretty bad (I think the LSP has a memory leak), and some important (for me) libraries aren't tuned very well yet (a Vapor webserver can sit around 100 MiB of memory, whereas putting a bunch of load on the grpc implementation balloons the memory usage to >1 GiB).