I say switching to Go is like a different kind of Zen. It takes time, to settle in and get in the flow of Go... Unlike the others, the LSP is fast, the developer, not so much. Once you've lost all will to live you become quite proficient at it. /s
ISTG if I get downvoted for sharing my opinion I will give up on life.
It just feels sloppy and I'm worried I'm going to make a mistake.
yeah no. You need an acyclic structure to maybe guarantee this, in CPython. Other Python implementations are more normal, in that you shouldn't rely on finalizers at all.
Some critique is definitely valid, but some of it just sounds like they didn't take the time to grasp the language. It's trade-offs all the way down. For example, there is a lot I like about Rust, but it's still not my favorite language.
But I can't help but agree with a lot of points in this article. Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences. That said, it's a _feeling_ I have, and maybe Go would be much worse if it had solved all these quirks. To be fair, I see more leniency in fixing quirks in the last few years, like at some point I didn't think we'd ever see generics, or custom iterators, etc.
The points about RAM and portability seem mostly like personal grievances though. If it was better, that would be nice, of course. But the GC in Go is very unlikely to cause issues in most programs even at very large scale, and it's not that hard to debug. And Go runs on most platforms anyone could ever wish to ship their software on.
But yeah the whole error / nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.
In fact, this was so surprising to me that I only found out about it when I wrote code that processed files in a loop, and it started crashing once the list of files got too big, because defer didn't close the handles until the function returned.
When I asked some other Go programmers, they told me to wrap the loop body in an anonymous func and invoke that.
Other than that (and some other niggles), I find Go a pleasant, compact language with an efficient syntax that kind of doesn't encourage people to be cute. I started my Go journey rewriting a fairly substantial C# project, and was surprised to learn that despite it having like 10% of the features of C#, the code ended up being smaller. It also encourages performant defaults, like not forcing GC allocation at every turn, very good built-in support for codegen for stuff like serialization, and no insistence on 'eating the world' like C# does with ORMs that showcase you can write C# instead of SQL for an RDBMS, or gRPC done by annotating C# objects. In Go, you do SQL by writing SQL, and you do gRPC by writing protobuf specs.
I chuckled
Go is a case of the emperor having no clothes. Telling people that they just don’t get it or that it’s a different way of doing things just doesn’t convince me. The only thing it has going for it is a simple dev experience.
> It is possible (though not recommended!) for the __del__() method to postpone destruction of the instance by creating a new reference to it. This is called object resurrection.
[0]: https://docs.python.org/3/reference/datamodel.html#object.__...
Go is a reasonably performant language that makes it pretty straightforward to write reliable, highly concurrent services that don't rely on heavy multithreading - all thanks to the goroutine model.
There really was no other reasonably popular, static, compiled language around when Go came out.
And there still barely is - the only real competitor that sits in a similar space is Java with the new virtual threads.
Languages with async/await promise something similar, but in practice are burdened with a lot of complexity (avoiding blocking in async tasks, function colouring, ...)
I'm not counting Erlang here, because it is a very different type of language...
So I'd say Go is popular despite the myriad of shortcomings, thanks to goroutines and the Google project street cred.
Go has a good-enough standard library, and Go can support a "pile-of-if-statements" architecture. This is all you need.
Most enterprise environments are not handled with enough care to move beyond "pile-of-if-statements". Sure, maybe when the code was new it had a decent architecture, but soon the original developers left and then the next wave came in and they had different ideas and dreamed of a "rewrite", which they sneakily started but never finished, then they left, and the 3rd wave of developers came in and by that point the code was a mess and so now they just throw if-statements onto the pile until the Jira tickets are closed, and the company chugs along with its shitty software, and if the company ever leaks the personal data of 100 million people, they aren't financially liable.
Right now it's function scope; if you need it lexical scope, you can wrap it in a function.
Suppose it were lexical scope and you needed it function scope. Then what do you do?
First time it's assigned nil, second time it's overwritten in case there's an error in the 2nd function. I don't see the author's issue? It's very explicit.
That said, I really wish there were a revamp where they did things right in terms of nil, scoping rules, etc. However, they've committed to never breaking existing programs (honorable, understandable), so the design space is extremely limited. I prefer dealing with local awkwardness and even excessive verbosity over systemic issues any day.
I don't think the article sounds like someone didn't take the time to grasp the language. It sounds like it's talking about the kind of thing that really only grates on you after you've seriously used the language for a while.
bar, err := foo()
if err != nil {
return err
}
if err := foo2(); err != nil {
return err
}
The above (which declares a new value of err scoped to the second if statement) should compile, right? What is it that he's complaining about?

EDIT: OK, I think I understand; there's no easy way to have `bar` be function-scoped and `err` be if-scoped.
I mean, I'm with him on the interfaces. But the "append" thing just seems like ranting to me. In his example, `a` is a local variable; why would assigning a local variable be expected to change the value in the caller? Would you expect the following to work?
func f(a *MyStruct) {
    a = &MyStruct{...} // reassigns the local copy of the pointer only
}
If not, why would you expect `a = append(a, ...)` to work?

I would criticize Go from the point of view of more modern languages that have powerful type systems, like the ML family, Erlang/Elixir, or even the up-and-coming Gleam. These languages succeed in providing powerful primitives and models for creating good, encapsulating abstractions. ML languages can help one entirely avoid certain errors and understand exactly where a change to code affects other parts of the code, while languages like Erlang provide interesting patterns for handling runtime errors without extensive boilerplate like Go's.
It’s a language that hobbles developers under the aegis of “simplicity.” Certainly, there are languages like Python which give too much freedom — and those that are too complex like Rust IMO, but Go is at best a step sideways from such languages. If people have fun or get mileage out of it, that’s fine, but we cannot pretend that it’s really this great tool.
The language sits in an awkward space between rust and python where one of them would almost always be a better choice.
But, google rose colored specs...
> The standard library does that. fmt.Print when calling .String(), and the standard library HTTP server does that, for exceptions in the HTTP handlers.
Apart from this, most of it doesn't seem like that big of a deal, except for `append`, which is truly bad syntax. If you're doing an in-place append, don't return the value.
(I realise this isn’t who is hiring, but email in bio)
My experience is mostly with C#, but async/await works very well there in my experience. You do need to know some basics to avoid problems, but that's the case for essentially every kind of concurrency. They all have footguns.
The other jarring example of this kind of deferring logical thinking to big corps was people defending Apple's soldering of memory and SSDs, especially on this site, until some Chinese lad proved that all the imagined reasons why Apple had to do such and such were BS post hoc rationalisation.
The same goes for Go, but if you spend enough time, every little while you see the disillusionment of some hardcore fans, even from Go's core team, and they start asking questions, but always starting with things like "I know this is Go and holy reasons exist and I am committing a sin by questioning, but why X or Y". It is comedy.
I'd say that it's entirely the other way around: they stuck to the practical convenience of solving the problem that they had in front of them, quickly, instead of analyzing the problem from the first principles, and solving the problem correctly (or using a solution that was Not Invented Here).
Go's filesystem API is the perfect example. You need to open files? Great, we'll create
func Open(name string) (*File, error)
function, you can open files now, done. What if the file name is not valid UTF-8, though? Who cares; it hasn't happened to me in the first 5 years I used Go.

If someone really doesn't like the reuse of err, there's no reason why they couldn't create separate variables, e.g. err_foo and err_foo2. There's no requirement to reuse err.
It is infuriating because it is close to being good, but it will never get there - now due to backwards compatibility.
Also Rob Pike quote about Go's origins is spot on.
For ML/data: python
For backend/general purpose software: Java
The only silver bullet we know of is building on existing libraries. These are also non-accidentally the top 3 most popular languages according to any ranking worthy of consideration.
Talk about hyperbole.
Is it the best or most robust or can you do fancy shit with it? No
But it works well enough to release reliable software along with the massive linter framework that's built on top of Go.
I agree.
The Go std-lib is fantastic.
Also no dependency-hell with Go, unlike with Python. Just ship an oven-ready binary.
And what's the alternative ?
Java ? Licensing sagas requiring the use of divergent forks. Plus Go is easier to work with, perhaps especially for server-side deployments.
Zig ? Rust ? Complex learning curve. And having to choose e.g. Rust crates re-introduces dependency hell and the potential for supply-chain attacks.
The change from Java 8 to 25 is night and day. And the future looks bright. Java is slowly bringing in more language features that make it quite ergonomic to work with.
You can just introduce a new scope wherever you want with {} in sane languages, to control the required behavior as you wish.
The code was on the hot path of their central routing server, handling billions (with a B) of messages per second or something crazy like that.
You're not building Discord, the GC will most likely never be even a blip in your metrics. The GC is just fine.
bar, err := foo()
if err != nil {
return err
}
if err = foo2(); err != nil {
return err
}
and bar, err := foo()
if err != nil {
return err
}
if err := foo2(); err != nil {
return err
}
He even says as much:

> Even if we change that to :=, we’re left to wonder why err is in scope for (potentially) the rest of the function. Why? Is it read later?
My initial reaction was: "The first `err` is function-scope because the programmer made it function-scope; he clearly knows you can make them local to the if, so what's he on about?"
It was only when I tried to rewrite the code to make the first `err` if-scope that I realized the problem I guess he has: OK, how do you make both `err` variable if-scope while making `bar` function-scope? You'd have to do something like this:
var bar MyType
if lbar, err := foo(); err != nil {
return err
} else {
bar = lbar
}
Which is a lot of cruft to add just to restrict the scope of `err`.

Nothing? Neither Go nor the OS require file names to be UTF-8, I believe.
you can go `uv run script.py` and it'll automatically fetch the libraries and run the script in a virtual environment.
Still no match for Go though, shipping a single cross-compiled binary is a joy. And with a bit of trickery you can even bundle in your whole static website in it :) Works great when you're building business logic with a simple UI on top.
I have no desire to go back to Java no matter how much the language has evolved.
For me C# has filled the void of Java in enterprise/gaming environments.
It can’t match it for performance. There’s no mutable array, almost everything is a linked list, and message passing is the only way to share data.
I primarily use Elixir in my day job, but I just had to write high performance tool for data migration and I used Go for that.
Every single piece of Go 1.x code scraped from the internet and baked in to the models is still perfectly valid and compiles with the latest version.
The issue is that it was a bit outdated in the choice of _which_ things to pick as the one Go way. People expect a map/filter method rather than a loop with off-by-one risks, a type system with the smartness of TypeScript (if less featured and more heavily enforced); error handling is annoying, and so on.
I get that it’s tough to implement some of those features without opening the way to a lot of “creativity” in the bad sense. But I feel like go is sometimes a hard sell for this reason, for young devs whose mother language is JavaScript and not C.
Nonetheless, Java has eased the psvm requirements, you don't even have to explicitly declare a class and a void main method is enough. [1] Not that it would matter for any non-script code.
----- https://openjdk.org/jeps/512 -----
First, we allow main methods to omit the infamous boilerplate of public static void main(String[] args), which simplifies the Hello, World! program to:
class HelloWorld {
void main() {
System.out.println("Hello, World!");
}
}
Second, we introduce a compact form of source file that lets developers get straight to the code, without a superfluous class declaration:

void main() {
System.out.println("Hello, World!");
}
Third, we add a new class in the java.lang package that provides basic line-oriented I/O methods for beginners, thereby replacing the mysterious System.out.println with a simpler form:

void main() {
IO.println("Hello, World!");
}
Yes, My favourite is the `time` package. It's just so elegant how it's just a number under there, the nominal type system truly shines. And using it is a treat. What do you mean I can do `+= 8*time.Hour` :D
I don't really care if you want that. Everyone should know that that's just the way slices work. Nothing more nothing less.
I really don't give a damn about that; I just know how slices behave, because I learned the language. That's what you should do when you're programming with it (professionally).
Go _excels_ at API glue. Get JSON as a string, unmarshal it into a struct, apply business logic, send JSON to a different API.
Everything for that is built in to the standard library and by default performant up to levels where you really don't need to worry about it before your API glue SaaS is making actual money.
Every piece of code looks the same and can be automatically, neutrally, analysed for issues.
And lists are slower than arrays, even if they provide functional guarantees (everything is a tradeoff…)
That said, pretty much everything else about it is amazing though IMHO and it has unique features you won’t find almost anywhere else
I can still check out the code to any of them, open it and it'll look the same as modern code. I can also compile all of them with the latest compiler (1.25?) and it'll just work.
No need to investigate 5 years of package manager changes and new frameworks.
Here's the accompanying playground: https://go.dev/play/p/Kt93xQGAiHK
If you run the code, you will see that calling read() on ControlMessage causes a panic even though there is a nil check. However, it doesn't happen for Message. See the read() implementation for Message: we need to have a nil check inside the pointer-receiver struct methods. This is the simplest solution. We have a linter for this. The ecosystem also helps, e.g protobuf generated code also has nil checks inside pointer receivers.
Yeah, these are sagas only, because there is basically one, single, completely free implementation anyone uses on the server-side and it's OpenJDK, which was made 100% open-source and the reference implementation by Oracle. Basically all of Corretto, AdoptOpenJDK, etc are just builds of the exact same repository.
People bringing this whole license topic up can't be taken seriously; it's like saying that Linux is proprietary because you can pay Red Hat for support.
I've said this before, but much of Go's design looks like it's imitating the C++ style at Google. The comments where I see people saying they like something about Go it's often an idiom that showed up first in the C++ macros or tooling.
I used to check this before I left Google, and I'm sure it's becoming less true over time. But to me it looks like the idea of Go was basically "what if we created a Python-like compiled language that was easier to onboard than C++ but which still had our C++ ergonomics?"
?
It turns out to be nothing but a misunderstanding of what the fmt.Println() statement is actually doing. If we use a more advanced print statement then everything becomes extremely clear:
package main
import (
"fmt"
"github.com/k0kubun/pp/v3"
)
type I interface{}
type S struct{}
func main() {
var i I
var s *S
pp.Println(s, i) // (*main.S)(nil) nil
fmt.Println(s == nil, i == nil, s == i) // true true false
i = s
pp.Println(s, i) // (*main.S)(nil) (*main.S)(nil)
fmt.Println(s == nil, i == nil, s == i) // true false true
}
The author of this post has noted a convenience feature, namely that fmt.Println() tells you the state of the thing in the interface and not the state of the interface, mistaken it for a fundamental design issue, and written a screed about a language issue that literally doesn't exist.

Being charitable, I guess the author could actually be complaining that putting a nil pointer inside a nil interface is confusing. It is indeed confusing, but it doesn't mean there are "two types" of nil. Nil just means empty.
I got insta-rejected in an interview when I said this in response to the interview panel's question about 'thoughts on Golang'.
Like, they said 'the interview is over' and showed me the (virtual) door. I was stunned lol. This was during peak Golang mania. Not sure what happened to rancherlabs.
It's fast enough, easy enough (being very similar now to TypeScript), versatile enough, well-documented (so LLMs do a great job), broad and well-maintained first party libraries, and the team has over time really focused on improving terseness of the language (pattern matching and switch expressions are really one thing I miss a lot when switching between C# and TS).
EF Core is also easily one of the best ORMs: super mature, stable, well-documented, performant, easy to use, and expressive. Having been in the Node ecosystem for the past year, there's really no comparison for building fast with less papercuts (Prisma, Drizzle, etc. all abound with papercuts).
It's too bad that it seems that many folks I've chatted with have a bad taste from .NET Framework (legacy, Windows only) and may have previously worked in C# when it was Windows only and never gave it another look.
Rust simply doesn’t cut it for me. I’m hoping Roc might become this, but I’m not holding my breath.
For anyone interested, this article explains the fundamentals very well, imo: https://go.dev/blog/slices-intro
If you prefer, I can provide the same example in C, C++, D, Java, C#, Scala, Kotlin, Swift, Rust, Nim, Zig, Odin.
Never had any problems with Go as it makes me millions each year.
But everyone knows in their heart of hearts that a few small language warts definitely don't outweigh Go's simplicity and convenience. Do I wish it had algebraic data types, sure, sure. Is that a deal-breaker, nah. It's the perfect example of something that's popular for a reason.
It is easily one of the most productive languages. No fuss, no muss, just getting stuff done.
You really come to appreciate when these batteries are included with the language itself. That Go binary will _always_ run but that Python project won't build in a few years.
So you mean all those universities and other places that have been forced to spend $$$ on licenses under the new regime also can't be taken seriously ? Are you saying none of them took advice and had nobody on staff to tell them OpenJDK exists ?
Regarding your Linux comment, some of us are old enough to remember the SCO saga.
Sadly Oracle have deeper pockets to pay more lawyers than SCO ever did ....
You really don't see why people would point a definition that changes underneath you out as a bad definition? They're not arguing the documentation is wrong.
Go, with all its faults, tries very hard to shun complexity, which I've found over the years to be the most important quality a language can have. I don't want a language with many features. I want a language with the bare essentials that are robust and well designed, a certain degree of flexibility, and for it to get out of my way. Go does this better than any language I've ever used.
You can do something like WTF-8 (not a misspelling, alas) to make it bidirectional. Rust does this under the hood but doesn’t expose the internal representation.
Do they? After too many functional battles I started practicing what I jokingly call "Debugging-Driven Development". Just like TDD keeps design decisions in mind to allow for testability from the get-go, this makes me write code that will be trivially easy to debug (especially printf-guided debugging and step-by-step execution debugging).
Like, adding a printf in the middle of a for loop, without even needing to understand the logic of the loop. Just make a new line and write a printf. I grew tired of all those tight chains of code that iterate beautifully but later when in a hurry at 3am on a Sunday are hell to decompose and debug.
Just like every PHP coder should know that the ternary operator associativity is backwards compared to every other language.
If you code in a language, then you should know what's bad about that language. That doesn't make those aspects not bad.
myfunc(arg: string): Value | Err
I really try not to throw anymore with typescript, I do error checking like in Go. When used with a Go backend, it makes context switching really easy...
Google's networking services keep being writen in Java/Kotlin, C++, and nowadays Rust.
PHP's frameworks are fantastic and they hide a lot from an otherwise minefield of a language (though steadily improved over the years).
Both are decent choices if this is what you/your developers know.
But they wouldn't be my personal first choice.
Yeah, but you still have to install `uv` as a pre-requisite.
And you still end up with a virtual environment full of dependency hell.
And then of course we all remember that whole messy era when Python 2 transitioned to Python 3, and then deferred it, and deferred it again....
You make a fair point, of course it is technically possible to make it (slightly) "cleaner". But I'll still take the Go binary thanks. ;-)
It's simplistic and that's nice for small tools or scripts, but at scale it becomes really brittle since none of the edge cases are handled
It can be also a source of bugs where you hang onto something for longer than intended - considering there's no indication of something that might block in Go, you can acquire a mutex, defer the release, and be surprised when some function call ends up blocking, and your whole program hangs for a second.
It's annoying to need to think about whether I'm working with an interface type or a concrete type.
And if you use pointers everywhere, why not make them the default?
It's a shame because it is just as effective as pissing in the wind.
:Error variable Scope -> Yes, it can be confusing at the beginning, but with some experience it doesn't really matter. Would it be cool to scope it down? Sure, but this feels like something blown up into an "issue" when I'd see other things as a lot more important for the Go team to revisit. Regarding error handling in Go, some hate it, some love it; I personally like it (yes, I really do), so I think it's more a preference than a "bad" thing.
:Two types of nil -> Funny, I never encountered this in >10 years of Go with a LOT of pointer juggling, so I wonder in which reality this hits you where it can't be avoided. Though it is confusing, I admit.
:It’s not portable -> I have no opinion here since I work on Unix systems only and have my platform-specific compiled binaries, shrug. I don't see any issue here either.
:append with no defined ownership -> I mean... seriously? Your test case, while the results may be unexpected, is a super weird one. Why would you append to the middle of a slice? If you think about what these functions do under the hood, your attempt actually feels like you WANT to produce strange behaviour, and things like that can be done in any language.
:defer is dumb -> Here I 100% agree. From my POV it leads to massive resource wasting, and in certain situations it can also create strange errors, but I'm not motivated to explain this. I'll just say that defer, while it seems useful, is from my POV a bad thing and should not be used.
:The standard library swallows exceptions, so all hope is lost -> "So all hope is lost"... I mean, you already left the realm of objectivity long before, but this really tops it. I wrote some quite big Go applications and I never had a situation where I could not handle an exception simply by adjusting my code to prevent it from even happening. Again, I feel like someone is just in search of things to complain about that could simply be avoided. (Also, in case someone comes up with a super specific, one-in-a-million case: always keep in mind that language design doesn't orient itself around the rarest occurrence.)
:Sometimes things aren’t UTF-8 -> I won't bother to read another whole article; if it's important, include an example. I have dealt with different encodings (web crawler) and I could handle all of them.
:Memory use -> What you describe is one of the design decisions I'm not absolutely happy with: the memory handling. But then, one of my Golang projects is an in-memory graph storage/database, which in one case ran for ~2 years without restart and held about 18GB of data. It has a lot of mutex handling (regarding your earlier complaint about exceptions: never had one), and it ran as the backend of an internet-facing service, so it wasn't just fed internal data.
--------------------
Finally, I want to say: often things come down to personal preference. I could spend days raging about JavaScript, Java, C++ or some other languages, but what for? Pick the language that fits your use case and your liking; don't pick one that doesn't and then complain about it.
Also, just to show I'm not just a big "Golang is the best" fanboy (because it isn't the best): there are things to criticize, like the previously mentioned memory handling.
While I still think you just created memory leaks in your app, Golang had this idea of "arenas", which would enable code to partly manage memory itself and therefore allow much more memory-efficient applications. This has stalled lately, and I REALLY hope the Go team will pick it up again and make it a stable thing to use. I would probably update all of my bigger codebases to use it.
Also, and that's something that annoys me a LOT because it cost me a lot of hours: the Golang plugin system. I wrote an architecture to orchestrate processing, and for certain reasons I wanted to implement the orchestrated "things" as plugins. But the plugin system as it is right now can only be described as the torments of hell. I messed with it for about 3 years until I recently dropped the plugin functionality and added the stuff directly. Plugins are a very powerful thing and a good plugin system could be a great thing, but in its current state I would recommend no one touch it.
These are just two points; I could list some more. But the point I want to get to is: there are real things you can criticize, instead of problems you create yourself or language design decisions that you just don't like. I'm not sure if such articles are the rage of someone who is just bored, or ragebait to make people read them. Either way, it's not helping anyone.
First one: you have an address to a struct, you pass it, all good.
Second case: you set the address of the struct to "nil". What is nil? It's an address like any other. Maybe it's 0x000000 or something else. At this point, from a memory perspective, it exists, but the OS will prevent you from touching anything that the NULL pointer points at.
Because you don't touch ANYTHING, nothing fails. It's like a deadly poison in a box you don't open.
The third example is the same as the second one. You have an IMessage, but it points to NULL (instead of NULL pointing to deadly poison).
And in the fourth, you finally open the box.
Is it magic knowledge? I don't think so, but then I'm also not surprised by how you can modify data through slice passing.
IMO the biggest Go shortcoming is selling itself as a high-level language while it touches more bare metal than people are used to touching.
Could you quote which paragraph you're talking about?
AFAIK, interoperability with C++ code is just one of their explicit goals; they only place that as the last item in the "Language Goals" section.
And even without types (which are coming and are looking good), Elixir's pattern matching is a thousand times better than the horror of Go error handling.
I don't know which university you're talking about, but I'm sure they were also "forced to pay $$$" for their water bills and whatnot. If they decided to go with paid support, then... you have to pay for it. In exchange you can: a) point your finger at a third party if something goes wrong (which governments love doing, and is often legally necessary), and b) get actual live support on Christmas Eve if needed.
It was actually really good for the time and lightyears ahead of whatever Flash was doing.
But people would rather use all kinds of hacks to get Flash working on Linux and OS X than use Moonlight.
What are you trying to clarify by printing the types? I know what the types are, and that's why I could provide the succinct weird example. I know what the result of the comparisons are, and why.
And the "why" is "because there are two types of nil, because it's a bad language choice".
I've seen this in real code. Someone compares a variable to nil, it's not, and then they call a method (receiver), and it crashes with nil dereference.
Edit: according to this comment, the two-types-of-nil thing bites other people in production: https://news.ycombinator.com/item?id=44983576
This info is actually quite surprising to me, never heard of it since everywhere I know switched to OpenJDK-based alternatives from the get-go. There was no reason to keep on the Oracle one after the licencing shenanigans they tried to play.
Why did these places keep the Oracle JDK and end up paying for it? OpenJDK was a drop-in replacement; nothing of value is lost by switching...
It's far better to get some � when working with messy data than applications refusing to work and erroring out left and right.
I think it's a bad trade-off, most languages out there are moving away from it
Which means if you write C#, you'll encounter a ton of devs who come from an enterprise, banking or govt background, who think a 4-layer enterprise architecture with DTOs and 5-line classes is the only way to write a CRUD app, and worst of all, you'll see a ton of people who learned C# in college a decade ago and refuse to learn anything else.
EF is great, but most people use it because they don't have to learn SQL and databases.
Blazor is great, but most people use it because they don't want to learn Frontend dev, and JS frameworks.
"Consistent" is necessary but not sufficient for "good".
I know it's mostly a matter of taste, but darn, it feels horrible. And there are no default parameter values, and the error handling smells bad, and there's no real stack trace in production. And the "object orientation" syntax, adding some ugly reference to each function. And the pointers...
It took me back to my C/C++ days. Like programming with 25 year old technology from back when I was in university in 1999.
After checking for nil, there's no reason `err` should still be in scope. That's why it's recommended to write `if err := foo(); err != nil`, because after that, one cannot even accidentally refer to `err`.
I'm giving examples where Go syntactically does not allow you to limit the lifetime of the variable. The variable, not its value.
You are describing what happens. I have no problem with what happens, but with the language.
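The `if err := foo(); err != nil` idiom mentioned above, as a runnable sketch (`doThing` is a made-up function):

```go
package main

import (
	"errors"
	"fmt"
)

func doThing() error { return errors.New("nope") }

func run() error {
	// err is scoped to the if statement: it cannot leak into later code.
	if err := doThing(); err != nil {
		return fmt.Errorf("run: %w", err)
	}
	// Referring to err here would be a compile error ("undefined: err").
	return nil
}

func main() {
	fmt.Println(run()) // run: nope
}
```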
> Go, with all its faults, tries very hard to shun complexity
The whole field is about managing complexity. You don't shun complexity, you give tools to people to be able to manage it.
And Go sits at the low end of the spectrum, not giving enough features to manage that complexity -- it's simplistic, not simple.
I think the optimum is actually at Java - it is a very easy language with not much going on (compared to, say, Scala), but with just enough expressivity that you can have efficient and comfortable-to-use libraries for all kinds of stuff (e.g. a completely type-safe SQL DSL).
I think you may need to re-read. My point is that it DOES change the value in the caller. (well, sometimes) That's the problem.
Yes, yes. Hence the words "almost" and "pretty much". For exactly this reason.
It's not about Printf. It's about how these two different kinds of nil values sometimes compare equal to nil, sometimes compare equal to each other, and sometimes not.
Yes there is a real internal difference between the two that you can print. But that is the point the author is making.
Of course, by your reasoning this also means you yourself have designed a language.
I'll leave out repeating your colorful language if you haven't done any of these things.
"Modern C#" (if we can differentiate that) has a lot of nice amenities for modeling like immutable `record` types and named tuples. I think where EF really shines is that it allows you to model the domain with persistence easily and then use DTOs purely as projections (which is how I use DTOs) into views (e.g. REST API endpoints).
I can't say for the broader ecosystem, but at least in my own use cases, EFC is primarily used for write scenarios and some basic read scenarios. But in almost all of my projects, I end up using CQRS with Dapper on the read side for more complex queries. So I don't think that it's people avoiding SQL; rather it's teams focused on productivity first.
WRT to Blazor, I would not recommend it in place of JS except for internal tooling (tried it at one startup and switched to Vue + Vite). But to be fair, modern FE development in JS is an absolute cluster of complexity.
> Over the decades I have lost data to tools skipping non-UTF-8 filenames. I should not be blamed for having files that were named before UTF-8 existed.
Umm.. why blame Go for that?
"They are likely the two most difficult parts of any design for parametric polymorphism. In retrospect, we were biased too much by experience with C++ without concepts and Java generics. We would have been well-served to spend more time with CLU and C++ concepts earlier."
https://go.googlesource.com/proposal/+/master/design/go2draf...
Many compiled languages are very slow to compile, however, especially for large projects; C++ and Rust are the usual examples.
In Go, `int * Duration = error`, but `Duration * Duration = Duration`!
It's just that a ridiculous amount of steps in real world problems can be summarised as 'reshape this data', 'give me a subset of this set', or 'aggregate this data by this field'.
Loops are, IMO, very bad at expressing those common concepts briefly and clearly. They take a lot of screen space, usually accessory variables, and it isn't immediately clear from just seeing a for block what you're about to do - "I'm about to iterate" isn't useful information to me as a reader. Are you transforming data, selecting it, aggregating it?
The consequence is that you usually end up with tons of lines like
userIds = getIdsfromUsers(users);
where the function is just burying a loop. Compare to:
userIds = users.pluck('id')
and you save the buried utility function somewhere else.
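Go has no built-in `pluck`, but since generics landed, a sketch like this recovers most of the brevity (the `Map` helper and `User` type here are made up for illustration):

```go
package main

import "fmt"

// Map buries the loop once, instead of once per call site.
func Map[T, U any](in []T, f func(T) U) []U {
	out := make([]U, len(in))
	for i, v := range in {
		out[i] = f(v)
	}
	return out
}

type User struct{ ID int }

func main() {
	users := []User{{1}, {2}, {3}}
	userIds := Map(users, func(u User) int { return u.ID })
	fmt.Println(userIds) // [1 2 3]
}
```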
What has been an issue for me, though, is working with private repositories outside GitHub (and I have to clarify that, because working with private repositories on GitHub is different, because Go has hardcoded settings specifically to make GitHub work).
I had hopes for the GOAUTH environment variable, but either (1) I'm more dumb and blind than I thought I already was, or (2) there's still no way to force Go to fetch a module using SSH without trying an HTTPS request first. And no, `GOPRIVATE="mymodule"` and `GOPROXY="direct"` don't do the trick, not even combined with Git's `insteadOf`.
I really like Go. It scratches every itch that I have. Is it the language for your problems? I don't know, but very possibly that answer is "no".
Go is easy to learn, very simple (this is a strong feature, for me) and if you want something more, you can code that up pretty quickly.
The blog article author lost me completely when they said this:
> Why do I care about memory use? RAM is cheap.
That is something that only the inexperienced say. At scale, nothing is cheap; there is no cheap resource if you are writing software for scale or for customers. Often, single bytes count. RAM usage counts. CPU cycles count. Allocations count. People want to pretend they don't matter because it makes their job easier, but if you want to write performant software, you'd better have those CPU cache lines in mind, and if you have those in mind, you have the memory usage of your types in mind.
:Two types of nil
Other commenters have. I have. Not everyone will. Doesn't make it good.
:append with no defined ownership
I've seen it. Of course one can just "not do that", but wouldn't it be nice if it were syntactically prevented?
:It’s not portable ("just Unix")
I also only work on Unix systems. But if you only work on amd64 Linux, then portability is not a concern. Supporting BSD and Linux is where I encounter this mess.
:All hope is lost
All hope is lost specifically on the idea of not needing to write exception-safe code. If panics always crashed the program, that'd be fine. But no coding standard can save you from the standard library, so yes, all hope of being able to pretend panic exits the program is lost.
You don't need to read my blog posts. Looking forward to reading your, much better, critique.
I quite often see devs introducing them in other languages like TypeScript, but it just doesn't work as well when it's introduced in userland (usually you just end up with a small island of the codebase following this standard).
Golang's biggest shortcoming is that the fact it touches bare metal isn't made clearly visible. It provides many high-level features, which creates this ambience of "we got you", but it fails to properly educate its users that they are going to get dirt on their hands.
Take a slice, for example: even its name means "part of", but in reality it's closer to a "box full of pointers". What happens when you modify pointer+1? Or "two types of nil": there is a difference between having two bytes (a simplification), one of a struct type and the other an address to that struct, and having just a NULL. It's the same as knowing the house doesn't exist, versus being confident the house exists and saying it's in the middle of a volcano beneath the ocean.
The Foo99 critique is another example. If you had not 99 loops but 10 billion, each deferring a mere 10 bytes, you'd need 100 GiB of memory just to exit. If you reused the address block, you'd only use... 10 bytes.
I also recommend trying to implement lexical scope defer in C and putting them in threads. That's a big bottle of fun.
I think it ultimately boils down to what kind of engineer one wants to be. I don't like hand-holding and would rather be left on my own with a rain of unit tests following my code, so Go, Zig, and C (among low-level languages) just work for me. Some prefer Rust or high-level abstractions. That's also fine.
But IMO poking at Go because it doesn't hide abstractions is like making fun of football for being child's play because not only does it lack horses, it also has players using legs instead of mallets.
Quote from this article:[1]
*He told The Register that Oracle is "putting specific Java sales teams in country, and then identifying those companies that appear to be downloading and... then going in and requesting to [do] audits. That recipe appears to be playing out truly globally at this point."*
[1] https://www.theregister.com/2025/06/13/jisc_java_oracle/

See link/quote in my earlier reply above.
There aren't two types of nil. Would you call an empty bucket and an empty cup "two types of empty"?
There is one nil, which means different things in different contexts. You're muddying the waters and making something which is actually quite straightforward (an interface can contain other things, including things that are themselves empty) seem complicated.
> I've seen this in real code. Someone compares a variable to nil, it's not, and then they call a method (receiver), and it crashes with nil dereference.
Sure, I've seen pointer-to-pointer dereferences fail for the same reason in C. It's not particularly different.
What I intended to say with this is that ignoring the problem of invalid UTF-8 (which could be valid ISO 8859-1) with no error handling, or the other way around, has lost me data in the past.
Compare this to Rust, where a path name is of a different type than a mere string. And if you need to treat it like a string and you don't care if it's "a bit wrong" (because it's for being shown to the user), then you can call `.to_string_lossy()`. But it's harder to accidentally fail to handle that case when an exact name match does matter.
When exactness matters, `.to_str()` returns `Option<&str>`, so the caller is forced to deal with the situation that the file name may not be UTF-8.
Being sloppy with file name encodings is how data is lost. Go is sloppy with strings of all kinds, file names included.
The example from the blog post would fail, because `return err` referred to an `err` that was no longer in scope. It would syntactically prevent accidentally writing `foo99()` instead of `err := foo99()`.
Go had some poor design features, many of which have now been fixed, some of which can't be fixed. It's fine to warn people about those. But inventing intentionally confusing examples and then complaining about them is pretty close to strawmanning.
Also, as another topic, Oracle does audits specifically because their software doesn't phone home to check licenses and the like - which is a crucial requirement for their intended target demographics: big government organizations, safety-critical systems, etc. A whole country's healthcare system, or a nuclear power plant, can't just stop because someone forgot to pay the bill.
So instead Oracle visits companies that have a license with them and checks what is being used, to determine whether it accords with the existing contract. And yeah, in this respect I've also heard a couple of stories where a company was not using the software per the letter of the contract, e.g. accidentally enabling this or that, and at the audit the Oracle salesman said they would ignore the mistake if the company subscribed to some larger package - which most managers will gladly accept, as they can avoid the blame. That's a questionable business practice, but it still doesn't have anything to do with OpenJDK.
Usually, as here, objections to Go take the form of technically-correct-but-ultimately-pedantic arguments.
The positives of go are so overwhelmingly high magnitude that all those small things basically don’t matter enough to abandon the language.
Go is good enough to justify using it now while waiting for the slow-but-steady stream of improvements from version to version to make life better.
It's sort of a known sharp edge that people occasionally cut themselves on. No language is perfect, but when people run into them they rightfully complain about it
Local environments are not tied to IDEs at all, but you are doing yourself a disservice if you don't use a decent IDE irrespective of language - they are a huge productivity boost.
And are you stuck in the XML times or what? Spring Boot is insanely productive - as a matter of fact, Go is significantly more verbose than Java, with all the unnecessary if errs.
Cargo is amazing, and you can do amazing things with it, I wish Go would invest in this area more.
Also funny you mention Python, a LOT of Go devs are former Python devs, especially in the early days.
Complexity exists in all layers of computing, from the silicon up. While we can't avoid complexity of real world problems, we can certainly minimize the complexity required for their solutions. There are an infinite amount of problems caused primarily by the self-induced complexity of our software stacks and the hardware it runs on. Choosing a high-level language that deliberately tries to avoid these problems is about the only say I have in this matter, since I don't have the skill nor patience to redo decades of difficult work smarter people than me have done.
Just because a language embraces simplicity doesn't mean that it doesn't provide the tools to solve real world problems. Go authors have done a great job of choosing the right set of trade-offs, unlike most other language authors. Most of the time. I still think generics were a mistake.
Score another for Rust's Safety Culture. It would be convenient to just have &str as an alias for &[u8] but if that mistake had been allowed all the safety checking that Rust now does centrally has to be owned by every single user forever. Instead of a few dozen checks overseen by experts there'd be myriad sprinkled across every project and always ready to bite you.
https://github.com/pannous/goo/
• errors handled by truthy if or try syntax
• all 0s and nils are falsey
• #if PORTABLE put(";}") #end
• modifying! methods like "hi".reverse!()
• GC can be paused/disabled
• many more ease of use QoL enhancements
There is a difference between "small" and Rust's which is for all intents and purposes, non-existent.
I mean, in 2025, not having crypto in the stdlib when every man and his dog is using crypto? Or HTTP when every man and his dog are calling REST APIs?
As the other person who replied to you said. Go just allows you to hit the ground running and get on with it.
Having to navigate the world of crates, unofficially "blessed" or not, is just a bit of a reinventing-the-wheel scenario, really...
P.S. The Go stdlib is also well maintained, so I don't really buy the specific "dead batteries" claim either.
2. mechanic is tied to call stack / stack unwinding
3. it feels natural when you're coming from C with `goto fail`
(yes it annoys me when I want to defer in a loop & now that loop body needs to be a function)
If you don't think that exists in Java, spend some time in the Maven or Spring documentation: https://docs.spring.io/spring-framework/reference/index.html https://maven.apache.org/guides/getting-started/ Then imagine yourself as a beginner to programming trying to make sense of that documentation.
You try to keep the easy things easy and simple, and try to make the hard things easier and simpler, if possible. Simple ain't easy.
I don't hate Java (anymore); it has plenty of utility (like, say, Jira). But when I'm writing Go I pretty much never think "oh, I wish I was writing Java right now." No thanks.
P.S. Swift, anyone?
Most users writing basic async CRUD servers won't notice, but you very much do if you write complex, highly concurrent servers.
That can be a viable tradeoff, and is for many, but it's far from being as fool-proof as Go.
And sure, it is welcome from a dev POV on one hand, though from an ecosystem perspective, more languages are not necessarily good as it multiplies the effort required.
It’s part trying to keep a common direction and part fear that dislike of their tech risks the hire not staying for long.
I don’t agree with this approach, don’t get me wrong, but I’ve seen it done and it might explain your experience.
Modern Java communities are slowly adopting the common FP practice "making illegal states unrepresentable" and call it "data oriented programming". Which is nice for those of us who actively use ADT. I no longer need to repeatedly explain "what is Option<?>?" or "why ADT?" whenever I use them; I could just point them to those new resources.
Hopefully, this shift will steer the Java community toward a saner direction than the current cargo cult which believed mutable C-struct (under guise of "anemic domain model") + Garbage Collector was OOP.
Without it, you either write that complexity yourself or fail to even recognize why it is necessary in the first place, e.g. failing to realize the existence of SQL injections, Cross-Site Scripting, etc. Backends have some common requirements, and it is pretty rare that your problem wouldn't need these primitives, so as a beginner, I would advise learning the framework as well, the same way you would learn how to fly a plane before attempting it.
For other stuff, there is no requirement to use Spring - vanilla java has a bunch of tools and feel free to hack whatever you want!
They could support passing the filename as `string | []byte`. But wait, Go does not even have union types.
Go’s more chaotic approach to allow strings to have non-Unicode contents is IMO more ergonomic. You validate that strings are UTF-8 at the place where you care that they are UTF-8. (So I’m agreeing.)
[1] ZGC has basically decoupled the heap size from the pause time, at that point you get longer pauses from the OS scheduler than from GC.
EVERY language has certain pitfalls like this. Back when I wrote PHP for 20+ years I had a Google doc full of every stupid PHP pitfall I came across.
And they were almost always a combination of something silly in the language and horrible design by the developer, or trying to take a shortcut and losing the plot.
These sorts of articles have been commonplace since even before Go released 1.0 in 2012. In fact, most (if not all) of these complaints could have been written identically back then. The only thing missing from this post that could make me believe it truly was written back then would be a complaint about Go not having generics, which were added a few years ago.
People on HN have been complaining about Go since Go was a weird side-project tucked away at Google that even Google itself didn't care about and didn't bother to dedicate any resources to. Meanwhile, people still keep using it and finding it useful.
Java is great if you stick to a recent version and update on a regular basis. But a lot of companies hate their own developers.
It feels often like the two principles they stuck/stick to are "what makes writing the compiler easier" and "what makes compilation fast". And those are good goals, but they're only barely developer-oriented.
```
machine {server}  # e.g. gitlab.com
login {username}
password {read_only_api_key}  # Must be actual key and not an ENV var
```
Worked consistently, but not a solution we were thrilled with.
In general, Windows filenames are Unicode and you can always express those filenames by using the -W APIs (like CreateFileW()).
Internally time.Duration is a single 64bit count, while time.Time is two more complicated 64bit fields plus a location
I'm not and I'm glad the core team doesn't have to maintain an http server and can spend time on the low level features I chose Rust for.
Author here.
No, this is not where it comes from. I've been coding C for more than 30 years, Go for maybe 12-15, and currently prefer Rust. I enjoy C++ (yes, really) and getting all those handle-less knives to fit together.
No, my critique of Go is that it did not take the lessons learned from decades of theory, what worked and didn't work.
I don't fault Go for its leaky abstractions in slices, for example. I do fault it for creating bad abstraction APIs in the first place, handing out footguns when they are avoidable. I know to avoid the footgun of appending to slices while other slices of the same array may still be accessible elsewhere. But I think it's indefensible to have created that footgun in the year Go was created.
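For readers who haven't hit that footgun: whether two slices alias after `append` depends on spare capacity, so writes can silently clobber each other. A minimal reproduction:

```go
package main

import "fmt"

func main() {
	a := make([]int, 3, 4) // len 3, cap 4: one free slot in the backing array
	b := append(a, 10)     // fits in capacity: b shares a's backing array
	c := append(a, 20)     // also fits: overwrites the slot b just wrote
	fmt.Println(b[3], c[3]) // 20 20 — b was silently clobbered
	// With cap 3 instead, each append would reallocate and print 10 20.
}
```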
Live long enough, and anybody will make a silly mistake. "Just don't make a mistake" is not an option. That's why programming language APIs and syntax matters.
As for bare metal: Go manages neither to get the full benefits of being high level, nor to be suitable for bare metal.
It's a missed opportunity. Because yes, in 2007 it's not like I could have pointed to something that was strictly better for some target use cases.
You should always be able to iterate the code points of a string, whether or not it's valid Unicode. The iterator can either silently replace any errors with replacement characters, or denote the errors by returning eg, `Result<char, Utf8Error>`, depending on the use case.
All languages that have tried restricting Unicode afaik have ended up adding workarounds for the fact that real world "text" sometimes has encoding errors and it's often better to just preserve the errors instead of corrupting the data through replacement characters, or just refusing to accept some inputs and crashing the program.
In Rust there's bstr/ByteStr (currently being added to std), awkward having to decide which string type to use.
In Python there's PEP-383/"surrogateescape", which works because Python strings are not guaranteed valid (they're potentially ill-formed UTF-32 sequences, with a range restriction). Awkward figuring out when to actually use it.
In Raku there's UTF8-C8, which is probably the weirdest workaround of all (left as an exercise for the reader to try to understand .. oh, and it also interferes with valid Unicode that's not normalized, because that's another stupid restriction).
Meanwhile the Unicode standard itself specifies Unicode strings as being sequences of code units [0][1], so Go is one of the few modern languages that actually implements Unicode (8-bit) strings. Note that at least two out of the three inventors of Go also basically invented UTF-8.
[0] https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...
> Unicode string: A code unit sequence containing code units of a particular Unicode encoding form.
[1] https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...
> Unicode strings need not contain well-formed code unit sequences under all conditions. This is equivalent to saying that a particular Unicode string need not be in a Unicode encoding form.
Green threads are very interesting: you can create thousands of them at low cost, and that makes different designs possible.
I think this complaining about defer is a bit trivial. The actual major problem for me is the way imports work: the fact that it knows about GitHub, and how difficult it is to replace a dependency there with some other one, including a local one. The forced layout of files, cmd directories, etc, etc.
I can live with it all but modules are the things which I have wasted the most time and struggled the most.
We pre-allocate a bunch of static buffers and re-use them. But that leads to a ton of ownership issues, like the append footgun mentioned in the article. We've even had to re-implement portions of the standard library because they allocate. And I get that we have a non-standard use case, and most programmers don't need to be this anal about memory usage. But we do, and it would be really nice to not feel like we're fighting the language.
Like, yes, those ideas have frequently been driven too far and have led to their own pain points. But people also seem to frequently rediscover that removing them entirety will lead to pain, too.
It breaks. Which is weird because you can create a string which isn't valid UTF-8 (eg "\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98") and print it out with no trouble; you just can't pass it to e.g. `os.Create` or `os.Open`.
(Bash and a variety of other utils will also complain about it not being valid UTF-8; neovim won't save a file under that name; etc.)
The article tries very hard to draw a connection between the licensing costs for the universities and Oracle auditing random java downloads, but nobody actually says that this is what happened.
The waiver of historic fees goes back to the last licensing change where Oracle changed how licensing fees would be calculated. So it seems reasonable that Oracle went after them because they were paying customers that failed to pay the inflated fees.
August 22, 2025.
Local environments are not literally tied to IDEs, but they effectively are in any non-trivially sized project. And the reason is because most Java shops really do believe "you are doing yourself a disservice if you don't use a decent IDE irrespective of language." I get along fine with a text editor + CLI tools in Deno, Lua, and Zig. Only when I enter Java world do the wisest of the wise say "yeah there is a CLI, but I don't really know it. I recommend you download IntelliJ and run these configs instead."
Yes Spring Boot is productive. So is Ruby on Rails or Laravel.
So for a large loop the code like
for i, value := range source { result[i] = value * 2 + 1 }
Would be 2x faster than a loop like
for i, value := range source { intermediate[i] = value * 2 }
for i, value := range intermediate { result[i] = value + 1 }
Actually I think that's a reasonable argument. I've not designed a language myself (other than toy experiments) so I'm hesitant to denigrate other people's design choices because even with my limited experience I'm aware that there are always compromises.
Similarly, I'm not impressed by literary critics whose own writing is unimpressive.
This means that if you do:
func (f Foo) String() string {
some.Lock()
t := get_something()
some.Unlock()
t = transform_it(t)
return t
}
and `get_something()` panics, then the program continues with a locked mutex. There are more dangerous things than a deadlocked program, of course.

It's non-optional to use defer, and thus to write exception-safe code, even if you never use exceptions.
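A runnable sketch of the defer-based fix (the names mirror the hypothetical example above; `getSomething` stands in for any panicking callee):

```go
package main

import (
	"fmt"
	"sync"
)

var mu sync.Mutex

func getSomething() string { panic("boom") } // stand-in for a panicking callee

func read() (s string) {
	defer func() { recover() }() // simulate a caller higher up that survives the panic
	mu.Lock()
	defer mu.Unlock() // runs during unwinding, so the mutex is released anyway
	return getSomething()
}

func main() {
	read()
	if mu.TryLock() { // Go 1.18+; succeeds only because the defer released the lock
		fmt.Println("mutex released despite the panic")
		mu.Unlock()
	}
}
```

With a plain `mu.Unlock()` after the call instead of the `defer`, the panic would skip it and the mutex would stay held.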
A big chunk of their strategy at the time was around how to completely own the web. I celebrated every time their attempts failed.
It's a good language for teams, for sure, though.
But I do definitely agree that the dynamic nature of defer and it not being block-scoped is probably not the best
I'm afraid I can only consider that a taste thing.
EDIT: One thing I don't consider a taste thing is the lack of the equivalent of a "const *". The problem with the slice thing is that you can sort of sometimes change things but not really. It would be nice if you could be forced to pass either a pointer to a slice (such that you can actually allocate a new backing array and point to it), or a non-modifiable slice (such that you know the function isn't going to change the slice behind your back).
func (f *Foo) foo() {
// critical section
{
f.mutex.Lock()
defer f.mutex.Unlock()
// something with the shared resource
}
// The lock is still held here, but you probably didn't want that
}
You can call Unlock directly, but then if there's a panic it won't be unlocked like it would be in the above. That can be an issue if something higher in the call stack prevents the panic from crashing the entire program; it would leave your system in a bad state. This is the key problem with defer: it operates a lot like a finally block, but only on function exit, which means it's not actually suited to the task.
And as the sibling pointed out, you could use an anonymous function that's immediately called, but that's just awkward, even if it has become idiomatic.
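For completeness, that awkward-but-idiomatic closure version looks like this: the deferred unlock fires when the closure returns, not at the end of the outer function (`Foo` here is a minimal made-up type):

```go
package main

import (
	"fmt"
	"sync"
)

type Foo struct{ mu sync.Mutex }

func (f *Foo) foo() {
	func() {
		f.mu.Lock()
		defer f.mu.Unlock() // scoped to the closure, not to foo
		// critical section
	}()
	// The lock is already released here.
	if f.mu.TryLock() {
		fmt.Println("lock released before foo returned")
		f.mu.Unlock()
	}
}

func main() { (&Foo{}).foo() }
```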
But forcing all strings to be UTF-8 does not magically help with the issue you described. In practice I've often seen the opposite: Now you have to write two code paths, one for UTF-8 and one for everything else. And the second one is ignored in practice because it is annoying to write. For example, I built the web server project in your other submission (very cool!) and gave it a tar file that has a non-UTF-8 name. There is no special handling happening, I simply get "error: invalid UTF-8 was detected in one or more arguments" and the application exits. It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
Forcing UTF-8 does not "fix" compatibility in strange edge cases, it just breaks them all. The best approach is to treat data as opaque bytes unless there is a good reason not to. Which is what Go does, so I think it is unfair to blame Go for this particular reason instead of the backup applications.
Then they obviously don't know their tooling well, and I would hesitate to call a jr 'the wisest of the wise'
This tends to be true for most languages, even the ones with easier concurrency support. Using it correctly is the tricky part.
I have no real problem with the portability. The area I see Go shining in is stuff like AWS Lambda where you want fast execution and aren't distributing the code to user systems.
Also, as mentioned by another comment, an HTTP or crypto library can become obsolete _fast_. What about HTTP/3? What about post-quantum crypto? What about security fixes? The stdlib is tied to the language version, thus to a language release. Keeping such code independent allows it to evolve much faster, be leaner, and be more composable. So yes, the library is well maintained, but it's tied to the Go version.
Also, it enables breaking API changes if absolutely needed. I can name two precedents:
- in rust, time APIs in chrono had to be changed a few times, and the Rust maintainers were thankful it was not part of the stdlib, as it allowed massive changes
- otoh, in Go, it was found that net.IP has an absolutely atrocious design (it's just an alias for []byte). Tailscale wrote a replacement that's now in the net/netip subpackage, but the old net.IP is set in stone. (https://tailscale.com/blog/netaddr-new-ip-type-for-go)
I really don’t like your logic. I’m not a Michelin chef, but I’m qualified to say that a restaurant ruined my dessert. While I probably couldn’t make a crème brûlée any better than theirs, I can still tell that they screwed it up compared to their competitor next door.
For example, I love Python, but it’s going to be inherently slow in places because `sum(list)` has to check the type of every single item to see what __add__ function to call. Doesn’t matter if they’re all integers; there’s no way to prove to the interpreter that a string couldn’t have sneaked in there, so the interpreter has to check each and every time.
See? I’ve never written a language, let alone one as popular as Python, but I’m still qualified to point out its shortcomings compared to other languages.
I think there is little to no chance it can hold on to its central vision as the creators "age out" of the project, which will make the language worse (and render the tradeoffs pointless).
I think allowing it to become pigeon holed as "a language for writing servers" has cost and will continue to cost important mindshare that instead jumps to Rust or remains in Python or etc.
Maybe it's just fun, like harping on about how bad Visual Basic was, which was true but irrelevant, as the people who needed to do the things it did well got on with doing so.
For example, Rust iterators are lazily evaluated with early-exits (when filtering data), thus it's your first form but as optimized as possible. OTOH python's map/filter/etc may very well return a full list each time, like with your intermediate. [EDIT] python returns generators, so it's sane.
I would say that any sane language allowing functional-style data manipulation will have them as fast as manual for-loops. (that's why Rust bugs you with .iter()/.collect())
Well, so long as you don't care about compatibility with the broad ecosystem, you can write a perfectly fine Optional yourself:
type Optional[Value any] struct {
	value  Value
	exists bool
}
// New empty.
func New[Value any]() Optional[Value] { return Optional[Value]{} }
// New of value.
func Of[Value any](value Value) Optional[Value] { return Optional[Value]{value: value, exists: true} }
// New of pointer (empty if nil).
func OfPointer[Value any](value *Value) Optional[Value] {
	if value == nil {
		return New[Value]()
	}
	return Of(*value)
}
// Only general way to get the value.
func (o Optional[Value]) Get() (Value, bool) { return o.value, o.exists }
// Get value or panic.
func (o Optional[Value]) MustGet() Value {
	v, ok := o.Get()
	if !ok {
		panic("optional: empty")
	}
	return v
}
// Get value or default.
func (o Optional[Value]) GetOrElse(defaultValue Value) Value {
	if v, ok := o.Get(); ok {
		return v
	}
	return defaultValue
}
// JSON support (encoding/json; bodies elided).
func (o Optional[Value]) MarshalJSON() ([]byte, error) { ... }
func (o *Optional[Value]) UnmarshalJSON(data []byte) error { ... }
// DB support (database/sql/driver; bodies elided).
func (o *Optional[Value]) Scan(value any) error { ... }
func (o Optional[Value]) Value() (driver.Value, error) { ... }
But you probably do care about compatibility with everyone else, so... yeah, it really sucks that the Go way of dealing with optionality is slinging pointers around.

It's faster than Node or Python, with a better type system than either. It's got a much easier learning curve than Rust. It has a good stdlib and tooling. Simple syntax with usually only one way to do things. Error handling has its problems but I still prefer it over Node, where a catch clause might receive just about anything as an "error".
Am I missing a language that does this too or more? I’m not a Go fanatic at all, mostly written Node for backends in my career, but I’ve been exploring Go lately.
I always encounter these upsides once every few years when preparing leetcode interviews, where this kind of optimization is needed for achieving acceptable results.
In daily life, however, most of these chunks of data to transform fall in one of these categories:
- small size, where readability and maintainability matters much more than performance
- living in a db, and being filtered/reshaped by the query rather than code
- being chunked for atomic processing in a queue or similar (usually when importing a big chunk of data)
- the operation itself is a standard algorithm that you just consume from a standard library that handles the loop internally.
Much like trees and recursion, most of us don't flex that muscle often. Your mileage may vary depending on the domain, of course.
I think they only work if the language is built around it. In Rust, it works, because you just can't deref an Optional type without matching it, and the matching mechanism is much more general than that. But in other languages, it just becomes a wart.
As I said, some kind of type annotation would be most go-like, e.g.
func f(ptr PtrToData?) int { ... }
You would only be allowed to touch *ptr inside an if ptr != nil { ... } block. There's a linter from Uber (nilaway) that works like that, minus the type annotation. That proposal would break existing code, so perhaps something like an explicit marker for non-nil pointers is needed instead (but that's not very ergonomic, alas).

Go has chosen explicit over implicit everywhere except initialization: the one place where I really needed "explicit."
Now I always use pointers consistently for the readability.
The problem with this, as with any lack of static typing, is that you now have to validate at _every_ place that cares, or to carefully track whether a value has already been validated, instead of validating once and letting the compiler check that it happened.
I feel like I could write this same paragraph about Java or C#.
$ cat main.go
package main

import (
	"log"
	"os"
)

func main() {
	f, err := os.Create("\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98")
	if err != nil {
		log.Fatalf("create: %v", err)
	}
	_ = f
}
$ go run .
$ ls -1
''$'\275\262''='$'\274'' ⌘'
go.mod
main.go
WTF-8 has some inconvenient properties. Concatenating two strings requires special handling. Rust's opaque types can patch over this but I bet Go's WTF-8 handling exposes some unintuitive behavior.
There is a desire to add a normal string API to OsStr but the details aren't settled. For example: should it be possible to split an OsStr on an OsStr needle? This can be implemented but it'd require switching to OMG-WTF-8 (https://rust-lang.github.io/rfcs/2295-os-str-pattern.html), an encoding with even more special cases. (I've thrown my own hat into this ring with OsStr::slice_encoded_bytes().)
The current state is pretty sad yeah. If you're OK with losing portability you can use the OsStrExt extension traits.
I can't recall ever needing that (but that might just be because I'm used to lexical scoping for defer-type constructs / RAII).
I don't think you need two code paths. Maybe your program can live its entire life never converting away from the original form. Say you read from disk, pick out just the filename, and give to an archive library.
There's no need to ever convert that to a "string". Yes, it could have been a byte array, but taking out the file name (or maybe final dir plus file name) are string operations, just not necessarily on UTF-8 strings.
And like I said, for all use cases where it just needs to be shown to users, the "lossy" version is fine.
> I simply get "error: invalid UTF-8 was detected in one or more arguments" and the application exits. It just refuses to work with non-UTF-8 files at all -- is this less sloppy?
Haha, touché. But yes, it's less sloppy. Would you prefer that the files were silently skipped? You've created your archive, you've started the webserver, but you just can't get it to deliver the page you want.
In order for tarweb to support non-UTF-8 in filenames, the programmer has to actually think about what that means. I don't think it means doing a lossy conversion, because that's not what the file name was, and it's not merely for human display. And it should probably not be the bytes either, because tools will likely want to send UTF-8 encoded.
Or they don't. In either case unless that's designed, implemented, and tested, non-UTF-8 in filenames should probably be seen as malformed input. For something that uses a tarfile for the duration of the process's life, that probably means rejecting it, and asking the user to roll back to a previous working version or something.
> Forcing UTF-8 does not "fix" compatibility in strange edge cases
Yup. Still better than silently corrupting.
Compare this to how for Rust kernel work they apparently had to implement a new Vec equivalent, because dealing with allocation failures is a different thing in user and kernel space[1], and Vec push can't fail.
Similarly, Go string operations cannot fail. Memory allocation, on the other hand, can fail for reasons that string operations simply don't have.
[1] A big separate topic. Almost nobody runs with overcommit off.
If you have an int variable hours := 8, you have to convert it before multiplying by a float.
This is also true for simple int and float operations.
f := 2.0
3 * f
is valid, but with x := 3 you'd need float64(x)*f to be valid. The same is true for addition etc.

There are languages with fewer warts, but they're usually more complicated (e.g. Rust), because most of Go's problems are caused by its creators' fixation with simplicity at all costs.
package main

import "fmt"

type Foo struct {
	Name string
}

func (f *Foo) Kaboom() {
	fmt.Printf("hello from Kaboom, f=%s\n", f.Name)
}

func NewKaboom() interface{ Kaboom() } {
	var p *Foo = nil
	return p
}

func main() {
	obj := NewKaboom()
	fmt.Printf("obj == nil? %v\n", obj == nil)
	// The next line panics: the method runs with a nil *Foo receiver and derefs it
	obj.Kaboom()
}
go run fred.go
obj == nil? false
panic: runtime error: invalid memory address or nil pointer dereference
The go language and its runtime is the only system I know that is able to handle concurrency with multicore cpus seamlessly within the language, using the CSP-like (goroutine/channel) formalism which is easy to reason with.
Python is a mess with the GIL and async libraries that are hard to reason with. C, C++, Java, etc. need external libraries to implement threading, which can't be reasoned with in the context of the language itself.
So, go is a perfect fit for the http server (or service) usecase and in my experience there is no parallel.
Because 99.999% of the time you want it to be valid and would like an error if it isn't? If you want to work with invalid UTF-8, that should be a deliberate choice.
It's confusing enough that it has an FAQ entry and that people tried to get it changed for Go 2. Evidently people are running in to this. (I for sure did)
But certainly, anyone will bring their previous experience to the project, so there must be some Plan9 influence in there somewhere
IMO the differences with Windows are such that I’m much more unhappy with WTF-8. There’s a lot that sucks about C++ but at least I can do something like
#if _WIN32
using pathchar = wchar_t;
constexpr pathchar sep = L'\\';
#else
using pathchar = char;
constexpr pathchar sep = '/';
#endif
using pathstring = std::basic_string<pathchar>;
Mind you this sucks for a lot of reasons, one big reason being that you're directly exposed to the differences between path representations on different operating systems. Despite all the ways that this (above) sucks, I still generally prefer it over the approaches of Go or Rust.

No Turing-complete language will ever prevent people from being idiots.
It's not only programming language API and syntax. It's a conceptual complexity, which Go has very low. It's a remodeling difficulty which Rust has very high. It's implicit behavior that you get from high stack of JS/TS libraries stitched together. It's accessibility of tooling, size of the ecosystem and availability of APIs. And Golang crosses many of those checkboxes.
All the examples you've shown in your article were "huh? isn't this obvious?" to me. With your experience in C, I have no idea why you wouldn't want to reuse the same allocation multiple times, instead of keeping them all separately while reserving allocation space for possibly less than you need.
Even if you assumed all of this should be on the stack, you would still crash or bleed memory through implicit allocations that escape the stack.
Add 200 goroutines and how does that (pun intended) stack?
Is fixing those perceived footguns really a missed opportunity? Go is getting stronger every year and while it's hated by some (and I get it, some people like Rust approach better it's _fine_) it's used more and more as a mature and stable language.
Many applications don't even worry about GC. And if you're developing some critical application, pair it with Zig and enjoy cross-compilation sweetness with as bare metal as possible with all the pipes that are needed.
Validation is nice but Rust’s principled approach leaves me high and dry sometimes. Maybe Rust will finish figuring out the OsString interface and at that point we can say Rust has “won” the conversation, but it’s not there yet, and it’s been years.
Consider:
s := string([]byte{226, 150, 136, 226, 150, 136})
for i, chr := range s {
	fmt.Printf("%d = %v\n", i, chr)
	// note: s[i] (a byte) != chr (a rune)
}
How many times does that loop over 6 bytes iterate? The answer is that it iterates twice, with i=0 and i=3.

There are also quite a few standard APIs that behave weirdly if a string is not valid UTF-8, which wouldn't be the case if it was just a bag of bytes.
These days, it seems like languages keep chasing paradigms and over adapt to moving targets.
Look at what Rust and Swift have become. C# has stayed relatively sane somehow, but it's not what I'd call independent.
There's no single 'best language', and it depends on what your use-cases are. But I'd say that for many typical backend tasks, Go is a choice you won't really regret, even if you have some gripes with the language.
I don't have a lot of experience with the malloc languages at scale, but I do know that heap fragmentation and GC fragmentation are very similar problems.
There are techniques in GC languages to avoid GC like arena allocation and stuff like that, generally considered non-idiomatic.
So that means that for 99% of scenarios, the difference between char[] and a proper utf8 string is none. They have the same data representation and memory layout.
The problem comes in when people start using string like they use string in PHP. They just use it to store random bytes or other binary data.
This makes no sense with the string type. String is text, but now we don't have text. That's a problem.
We should use byte[] or something for this instead of string. That's an abuse of string. I don't think allowing strings to not be text is too constraining - that's what a string is!
(Similar to how Python is finally getting its act together with the uv tool.)
> having to write a for loop to get the list of keys of a map
We now have the stdlib "maps" package, you can do:
keys := slices.Collect(maps.Keys(someMap))
With the wonder of generics, it's finally possible to implement that.

Now if only Go were consistent about methods vs functions, maybe then we could have "keys := someMap.Keys()" instead of it being a weird mix like `http.Request.Header.Set("key", "value")` but `map["key"] = "value"`.
Or 'close(ch)' but 'file.Close()', etc etc.
I'd love to see a list of these, with any references you can provide.
Except when it doesn’t and then you have to deal with fucking Cthulhu because everyone thought they could just make incorrect assumptions that aren’t actually enforced anywhere because “oh that never happens”.
That isn’t engineering. It’s programming by coincidence.
> Maybe Rust will finish figuring out the OsString interface
The entire reason OsString is painful to use is because those problems exist and are real. Golang drops them on the floor and forces you pick up the mess on the random day when an unlucky end user loses data. Rust forces you to confront them, as unfortunate as they are. It's painful once, and then the problem is solved for the indefinite future.
Rust also provides OsStrExt if you don’t care about portability, which greatly removes many of these issues.
I don’t know how that’s not ideal: mistakes are hard, but you can opt into better ergonomics if you don’t need the portability. If you end up needing portability later, the compiler will tell you that you can’t use the shortcuts you opted into.
What I would absolutely love is a compacting garbage collector, but my understanding is Go can’t add that without breaking backwards compatibility, and so likely will never do that.
Happy to not be in that community, happy to not have to write (or read) Go these days.
And frankly, most of the time I see people gushing about Go, it's for features that trivially exist in most languages that aren't C, or are entirely subjective like "it's easy" (while ignoring, you know, reality).
But there is no silent corruption when you pass the data as opaque bytes, you just get some placeholder symbols when displayed. This is how I see the file in my terminal and I can rm it just fine.
And yes, question marks in the terminal are way better than applications not working at all.
The case of non-UTF-8 being skipped is usually a characteristic of applications written in languages that don't use bytes for their default string type, not the other way around. This has bitten me multiple times with Python2/3 libraries.
You wanted sources, here's the chapter on tasks and synchronization in the Ada LRM: http://www.ada-auth.org/standards/22rm/html/RM-9.html
For Erlang and Elixir, concurrent programming is pretty much their thing so grab any book or tutorial on them and you'll be introduced to how they handle it.
Go is a super productive powerhouse for me.
Anyway, assuming you're talking about TypeScript, I'm surprised to hear that you prefer Go's type system to TypeScript's. There are definitely cases where you can get carried away with TypeScript types, but due to that expressiveness I find it much more productive than Go's type system (and I'd make the same argument for Rust vs. Go).
Sure it's good compared to like... C++. Is go actually competing with C++? From where I'm standing, no.
But compared to what you might actually use Go for... The tooling is bad. PHP has better tooling, dotnet has better tooling, Java has better tooling.
If you use 3) to create a &str/String from invalid bytes, you can't safely use that string as the standard library is unfortunately designed around the assumption that only valid UTF-8 is stored.
https://doc.rust-lang.org/std/primitive.str.html#invariant
> Constructing a non-UTF-8 string slice is not immediate undefined behavior, but any function called on a string slice may assume that it is valid UTF-8, which means that a non-UTF-8 string slice can lead to undefined behavior down the road.
In Go's category, there's Java, Haskell, OCaml, Julia, Nim, Crystal, Pony...
Dynamic languages are more likely to have green threads but aren't Go replacements.
You can debate whether it is sloppy but I think an error is much better than silently corrupting data.
> The best approach is to treat data as opaque bytes unless there is a good reason not to
This doesn't seem like a good approach when dealing with strings which are not just blobs of bytes. They have an encoding and generally you want ways to, for instance, convert a string to upper/lowercase.
In C#, for example, there are multiple ways, but you should generally be using the modern approach of async/Task, which is trivial to learn and used exclusively in examples for years.
I agree with this. I feel like Go was a very smart choice to create a new language to be easy and practical and have great tooling, and not to be experimental or super ambitious in any particular direction, only trusting established programming patterns. It's just weird that they missed some things that had been pretty well hashed out by 2009.
Map/filter/etc. are a perfect example. I remember around 2000 the average programmer thought map and filter were pointlessly weird and exotic. Why not use a for loop like a normal human? Ten years later the average programmer was like, for loops are hard to read and are perfect hiding places for bugs, I can't believe we used to use them even for simple things like map, filter, and foreach.
By 2010, even Java had decided that it needed to add its "stream API" and lambda functions, because no matter how awful they looked when bolted onto Java, it was still an improvement in clarity and simplicity.
Somehow Go missed this step forward the industry had taken and decided to double down on "for." Go's different flavors of for are a significant improvement over the C/C++/Java for loop, but I think it would have been more in line with the conservative, pragmatic philosophy of Go to adopt the proven solution that the industry was converging on.
So: The way Go presents it is confusing, but this behavior makes sense, is correct, will never be changed, and is undoubtedly depended on by correct programs.
The confusing thing for people used to C++ or C# or Java or Python or most other languages is that in Go, nil is a perfectly valid pointer receiver for a method. The method resolution lookup happens statically at compile time, and as long as the method doesn't try to deref the pointer, all is good.
It still works if you assign to an interface.
package main

import "fmt"

type Dog struct{}
type Cat struct{}

type Animal interface {
	MakeNoise()
}

func (*Dog) MakeNoise() { fmt.Println("bark") }
func (*Cat) MakeNoise() { fmt.Println("meow") }

func main() {
	var d *Dog = nil
	var c *Cat = nil
	var i Animal = d
	var j Animal = c
	d.MakeNoise()
	c.MakeNoise()
	i.MakeNoise()
	j.MakeNoise()
}
This will print:
bark
meow
bark
meow
But the interface method lookup can't happen at compile time. So the interface value is actually a pair: the pointer to the type, and the instance value. The type is not nil, hence the interface value is something like (&Cat, nil) and (&Dog, nil) in each case, which is not the interface zero value, which is (nil, nil).

But it's super confusing, because Go coerces a nil struct pointer to a non-nil (&type, nil) interface value. There's probably some naming or syntax way to make this clearer. The behavior itself, though, is completely reasonable.
What do you mean by this for Java? The library is the runtime that ships with Java, and while they're OS threads under the hood, the abstraction isn't all that leaky, and it doesn't feel like they're actually outside the JVM.
Working with them can be a bit clunky, though.
the nil issue. An interface, when assigned a struct, is no longer nil even if that struct is nil - probably a mistake. Valid point.
append in a func. Definitely one of the biggest issues is that slices are by ref. They did this to save memory and speed but the append issue becomes a monster unless abstracted. Valid point.
err in scope for the whole func. You defined it, of course it is. Better to reuse a generic var than constantly instantiate another. The lack of try catch forces you to think. Not a valid point.
defer. What is the difference between a scope block and a function block? I'll wait.
my favorite example of this was the go authors refusing to add monotonic time into the standard library because they confidently misunderstood its necessity
(presumably because clocks at google don't ever step)
then after some huge outages (due to leap seconds) they finally added it
now the libraries are a complete a mess because the original clock/time abstractions weren't built with the concept of multiple clocks
and every go program written is littered with terrible bugs due to use of the wrong clock
https://github.com/golang/go/issues/12914 (https://github.com/golang/go/issues/12914#issuecomment-15075... might qualify for the worst comment ever)
There are also runtimes like Hermes (used primarily by React Native) that support separating operations between the graphics thread and other threads.
All that being said, I won't dispute OP's point about "handling concurrency [...] within the language"- multithreading and concurrency are baked into the Golang language in a more fundamental way than Javascript. But it's certainly worth pointing out that at least several of the major runtimes are capable of multithreading, out of the box.
I assume, anyway. Maybe the Go debugger is kind of shitty, I don't know. But in PHP with xdebug you just use all the fancy array_* methods and then step through your closures or callables with the debugger.
Elixir handling 2 million websocket connections on a single machine back in 2015 would like to have a word.[1] This is largely thanks to the Erlang runtime it sits atop.
Having written some tricky Go (I implemented Raft for a class) and a lot of Elixir (professional development), it is my experience that Go's concurrency model works for a few cases but largely sucks in others and is way easier to write footguns in Go than it ought to be.
[1]: https://phoenixframework.org/blog/the-road-to-2-million-webs...
Source: spent the last few weeks at work replacing a Go program with an Elixir one instead.
I'd use Go again (without question) but it is not a panacea. It should be the default choice for CLI utilities and many servers, but the notion that it is the only usable language with something approximating CSP is idiotic.
I feel people who complain about rustc compile times must be new to using compiled languages…
f.Close() // without defer
The reason I might want function-scope defer is that there might be a lot of different exit points from the function.

With lexical scope, there are only three ways to safely jump the scope:
1. reaching the end of the procedure, in which case you don't need a defer
2. a 'return', in which case you're also exiting the function scope
3. a 'break' or 'continue', which admittedly could see the benefit of a lexically scoped defer, but those are also generally trivial to break into their own functions; and arguably should be, if your code is complex enough that you've got enough branches to want a defer.
If Go had other control flows like try/catch, and so on and so forth, then there would be a stronger case for lexical defer. But it’s not really a problem for anyone aside those who are also looking for other features that Go also doesn’t support.
For example, you're not allowed to write the following:
type Option[T any] struct { t *T }
func (o *Option[T]) Map[U any](f func(T) U) *Option[U] { ... }
That fails because methods can't have their own type parameters; only types and functions can. It hurts the ergonomics of generics quite a bit.

And, as you rightly point out, the stdlib is largely pre-generics, so now there's a bunch of duplicate functions, like "sort.Strings" and "slices.Sort", "atomic.Value" and "atomic.Pointer", quite possibly a sync/v2 soon https://github.com/golang/go/issues/71076, etc.
The old non-generic versions also typically aren't deprecated, so they're just there to trap people who don't know "never use atomic.Value, always use atomic.Pointer".
Regarding Typescript, I actually am a big fan of it, and I almost never write vanilla JS anymore. I feel my team uses it well and work out the kinks with code review. My primary complaint, though, is that I cannot trust any other team to do the same, and TS supports escape hatches to bypass or lie about typing.
I work on a project with a codebase shared by several other teams. Just this week I have been frustrated numerous times by explicit type assertions of variables to something they are not (`foo as Bar`). In those cases it’s worse than vanilla JS because it misleads.
Basically OP was saying that JavaScript can run multiple tasks concurrently, but with no parallelism since all tasks map to 1 OS thread.
However no &str is not "an alias for &&String" and I can't quite imagine how you'd think that. String doesn't exist in Rust's core, it's from alloc and thus wouldn't be available if you don't have an allocator.
import "gopkg.in/yaml.v3" // does *what* now?
curl https://gopkg.in/yaml.v3?go-get=1 | grep github
<meta name="go-source" content="gopkg.in/yaml.v3 _ https://github.com/go-yaml/yaml/tree/v3.0.1{/dir} https://github.com/go-yaml/yaml/blob/v3.0.1{/dir}/{file}#L{line}">
oh, ok :-/

I would presume only a go.mod entry would specify whether it really is v3.0.0 or v3.0.1
Also, for future generations, don't use that package https://github.com/go-yaml/yaml#this-project-is-unmaintained
So it's really Go vs. Java, or you can take a performance hit and use Erlang (valid choice for some tasks but not all), or take a chance on a novel paradigm/unsupported language.
The downside of a small stdlib is the proliferation of options, and you suddenly discover(ed?, it's been a minute) that your async package written for Tokio won't work on async-std and so forth.
This has often been the case in Go too - until `log/slog` existed, lots of people chose a structured logger and made it part of their API, forcing it on everyone else.
I thought it was a seldom mentioned fact in Go that CSP systems are impossible to reason about outside of toy projects so everyone uses mutexes and such for systemic coordination.
I'm not sure I've even seen channels in a production application used for anything more than stopping a goroutine, collecting workgroup results, or something equally localized.
> Within a worker thread, worker.getEnvironmentData() returns a clone of data passed to the spawning thread's worker.setEnvironmentData(). Every new Worker receives its own copy of the environment data automatically.
M:1 threaded means that the user space threads are mapped onto a single kernel thread. Go is M:N threaded: goroutines can be arbitrarily scheduled across various underlying OS threads. Its primitives (goroutines and channels) make both concurrency and parallelism notably simpler than most languages.
> But it's certainly worth pointing out that at least several of the major runtimes are capable of multithreading, out of the box.
I’d personally disagree in this context. Almost every language has pthread-style cro-magnon concurrency primitives. The context for this thread is precisely how go differs from regular threading interfaces. Quoting gp:
> The go language and its runtime is the only system I know that is able to handle concurrency with multicore cpus seamlessly within the language, using the CSP-like (goroutine/channel) formalism which is easy to reason with.
Yes other languages have threading, but in go both concurrency and parallelism are easier than most.
(But not erlang :) )
I like Go and Rust, but sometimes I feel like they lack tools that other languages have just because they WANT to be different, without any real benefit.
Whenever I read Go code, I see a lot more error handling code than usual because the language doesn't have exceptions...
And sometimes Go/Rust code is more complex because it also lacks some OOP tools, and there are no tools to replace them.
So, Go/Rust has a lot more boilerplate code than I would expect from modern languages.
For example, in Delphi, an interface can be implemented by a property:
type
TMyClass = class(TInterfacedObject, IMyInterface)
private
FMyInterfaceImpl: TMyInterfaceImplementation; // A field containing the actual implementation
public
constructor Create;
destructor Destroy; override;
property MyInterface: IMyInterface read FMyInterfaceImpl implements IMyInterface;
end;
This isn't possible in Go/Rust. And the Go documentation I read strongly recommended using composition, without good tools for it.

This "the new way is the best way, period; ignore the good things of the past" attitude is common.
When MySQL didn't have transactions, the documentation said "perform operations atomically" without saying exactly how.
MongoDB didn't have transactions until version 4.0. They said it wasn't important.
When Go didn't have generics, there were a bunch of "patterns" to replace generics... which in practice did not replace them.
The lack of inheritance in Go/Rust leaves me with the same impression. The new patterns do not replace the inheritance or other tools.
"We don't have this tool in the language because people used it wrong in the old languages." Don't worry, people will use the new tools wrong too!
Sure, there are some awfully dated companies that still email changed files to each other with no version control, and I'm sure some of those are stuck with an IDE config, but to be honest, where I have seen this most commonly was in some Visual Studio projects, not Java. You could find any of these for any other language; you just need to scale your user base up. A language that hasn't even hit 1.0 will have a higher percentage of technically capable users; that's hardly a surprise.
Some learnings. Don't pass sections of your slices to things that mutate them. Anonymous functions need recovers. Know how all goroutines return.
> “The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.”
No, I stick by my position. I may not be able to do any better, but I can tell when something’s not good.
(I have no opinion on Go. I’ve barely used it. This is only on the general principle of being able to judge something you couldn’t do yourself. I mean, the Olympics have gymnastic judges who are not gold medalists.)
As for hot swap, I haven't heard it being used for production, that's mostly for faster development cycles - though I could be wrong. Generally it is safer to bring up the new version, direct requests over, and shut down the old version. It's problematic to just hot swap classes, e.g. if you were to add a new field to one of your classes, how would old instances that lack it behave?
Most of the time if there's a result, there's no error. If there's an error, there's no result. But don't forget to check every time! And make sure you don't make a mistake when you're checking and accidentally use the value anyway, because even though it's technically meaningless it's still nominally a meaningful value since zero values are supposed to be meaningful.
Oh and make sure to double-check the docs, because the language can't let you know about the cases where both returns are meaningful.
The real world is messy. And golang doesn't give you advance warning on where the messes are, makes no effort to prevent you from stumbling into them, and stands next to you constantly criticizing you while you clean them up by yourself. "You aren't using that variable any more, clean that up too." "There's no new variables now, so use `err =` instead of `err :=`."
I'm surprised people in these comments aren't focusing more on the append example.
Though, thinking back, someone should have brought up TypeScript's at least three different ways to represent nil (undefined, null, NaN, a few others). It's at least a little better in TS, because unlike in Go the type-checker doesn't actively lie to you about how many different states of undefined you might be dealing with.
I recently realized that there is no easy way to "bubble up a goroutine error", and I wrote some code to make sure that was possible, and that's when I realized, as usual, that I was rewriting part of the OTP library.
The whole supervisor mechanism is so valuable for concurrency.
The tasks run concurrently, but not in parallel.
I've heard the term "beaten path" used for these languages, or languages that an organization chooses to use and forbids the use of others.
Debugging is a nightmare because it refuses to even compile if you have unused X (which you always will have when you're debugging and testing "What happens if I comment out this bit?").
The bureaucracy is annoying. The magic filenames are annoying. The magic field names are annoying. The secret hidden panics in the standard library are annoying. The secret behind-your-back heap copies are annoying (and SLOW). All the magic in go eventually becomes annoying, because usually it's a naively repurposed thing (where they depend on something that was designed for a different purpose under different assumptions, but naively decided to depend on its side effects for their own ever-so-slightly-incompatible machinery - like special file names, and capitalization even though not all characters have such a thing .. was it REALLY such a chore to type "pub" for things you wanted exposed?).
Now that AI has gotten good, I'm rather enjoying Rust because I can just quickly ask the AI why my types don't match or a gnarly mutable borrow is happening - rather than spending hours poring over documentation and SO questions.
For JSON, you can't encode Optional[T] as nothing at all. It has to encode to something, which usually means null. But when you decode, the absence of the field means UnmarshalJSON doesn't get called at all. This typically results in the default value, which of course you would then re-encode as null. So if you round-trip your JSON, you get a materially different output than input (this matters for some other languages/libraries). Maybe the new encoding/json/v2 library fixes this, I haven't looked yet.
Also, I would usually want Optional[T]{value:nil,exists:true} to be impossible regardless of T. But Go's type system is too limited to express this restriction, or even to express a way for a function to enforce this restriction, without resorting to reflection, and reflection has a type erasure problem making it hard to get right even then! So you'd have to write a bunch of different constructors: one for all primitive types and strings; one each for pointers, maps, and slices; three for channels (chan T, <-chan T, chan<- T); and finally one for interfaces, which has to use reflection.
We can try to shove it into objects that work on other text but this won't work in edge cases.
Like if I take text on Linux and try to write a Windows file with that text, it's broken. And vice versa.
Go lets you do the broken thing. In Rust, or even using libraries in most languages, you can't. You have to specifically convert between them.
That's what I mean when I say "storing random binary data as text". Sure, Windows' almost-UTF-16 abomination is kind of text, but not really. It's its own thing. That requires a different type of string OR converting it to a normal string.
You can get a large portion of what graal native offers by using AppCDS and compressed object headers.
Here's the latest JEP for all that.
Similarly, if a field implements a trait in Rust, you can expose it via `AsRef` and `AsMut`: just return a reference to it.
These are not ideal tools, and I find the Go solution rather unintuitive, but they solve the problems that I would've solved with inheritance in other languages. I rarely use them.
I don't get how you can assign an interface to be a pointer to a structure. How does that work? That seems like a compile error. I don't know much about Go interfaces.
You don't need to have a cmd directory. I see it a lot in Go projects but I'm not sure why.
That works well for go and Google but I'm not sure how easily that'd be to replicate with rust or others
Should and could golang have been so much better than it is? Would golang have been better if Pike and co. had considered use-cases outside of Google, or looked outward for inspiration even just a little? Unambiguously yes, and none of the changes would have needed it to sacrifice its priorities of language simplicity, compilation speed, etc.
It is absolutely okay to feel that go is a better language than some of its predecessors while at the same time being utterly frustrated at the the very low-hanging, comparatively obvious, missed opportunities for it to have been drastically better.
https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals...
It may be legacy cruft downstream of poorly-thought-out design decisions at the system/OS level, but we're stuck with it. And a language that doesn't provide the tooling necessary to muddle through this mess safely isn't a serious platform to build on, IMHO.
There is room for languages that explicitly make the tradeoff of being easy to use (e.g. a unified string type) at the cost of not handling many real world edge cases correctly. But these should not be used for serious things like backup systems where edge cases result in lost data. Go is making the tradeoff for language simplicity, while being marketed and positioned as a serious language for writing serious programs, which it is not.
Make use of binary libraries, export templates, incremental compilation and linking with multiple cores, and if using VC++ or clang vLatest, modules.
It still isn't Delphi fast, but becomes more manageable.
That's 6 languages, a non-exhaustive list of them, that are either properly mainstream and more popular than Go or at least well-known and easy to obtain and get started with. All of which have concurrency baked in and well-supported (unlike, say, C).
EDIT: And one more thing, all but Elixir are older than Go, though Clojure only slightly. So prior art was there to learn from.
But yeah, the CSP model is mostly dead. I think the language authors' insistence that goroutines should not be addressable or even preemptible from user code makes this inevitable.
Practical Go concurrency owes more to its green threads and colorless functions than its channels.
Again, this is the same "simplistic" versus "just the right abstraction" problem: this just smudges the complexity over a much larger surface area.
If you have a byte array that is not utf-8 encoded, then just... use a byte array.
I quite like Go and use it when I can. However, I wish there were something like Go, without these issues. It's worth talking about that. For instance, I think most of these critiques are fair but I would quibble with a few:
1. Error scope: yes, this causes code review to be more complex than it needs to be. It's a place for subtle, unnecessary bugs.
2. Two types of nil: yes, this is super confusing.
3. It's not portable: Go isn't as portable as C89, but it's pretty damn portable. It's plenty portable to write a general-purpose pre-built CLI tool in, for instance, which is about my bar for "pragmatic portability."
4. Append ownership & other slice weirdness: yes.
5. Unenforced `defer`: yes, similar to `err`, this introduces subtle bugs that can only be overcome via documentation, careful review, and boilerplate handling.
6. Exceptions on top of err returns: yes.
7. utf-8: Hasn't bitten me, but I don't know how valid this critique is or isn't.
8. Memory use: imo GC is a selling-point of the language, not a detriment.
When you write JavaScript (or TypeScript that gets transpiled), it's not as easy to assume the target is Node (V8). It could be Bun (JavaScriptCore), Deno, a browser, etc.
I do think there probably would've been some more elegant syntax or naming convention to make it less confusing.
With all of that, Go becomes the perfect language for the age of LLMs writing all the code. Let the LLMs deal with all the boilerplate and misery of Go, while at the same time its total lack of elegance is also well suited to LLMs which similarly have the most dim notions of code elegance.
And when the front end is C# so is the back end.
I shouldn't fault the creators. They did what they did, and that is all and good. I am more shocked by the way it has exploded in adoption.
Would love to see a coffeescript for golang.
For NodeJS development, you would typically write it in Typescript - which has a very good type system.
Personally I have also written serverside C# code, which is a very nice experience these days. C# is a big language these days though.
Also, quite a few libraries have metadata now denoting these extra reflection targets.
But nonetheless you are right in general, but depends on your use case.
But those are not rules. If you're doing stuff for fun, check out QBE <https://c9x.me/compile/> or Plan 9 C <https://plan9.io/sys/doc/comp.html> (which Go was derived from!)
> I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
At the larger codebase go company I worked at, the general conclusion was: Go is a worse Java. The company should've just used Java in the end.
Python sucks balls but it has more fanboys than a K-pop idol. Enforced whitespace? No strong typing? A global interpreter lock? Garbage error messages? A package repository you can't search (only partially because it's full of trash packages and bad forks) with random names and no naming convention? A complete lack of standardization in installing or setting up applications/environments? 100 lines to run a command and read the stdout and stderr in real time?
The real reason everyone uses Python and Go is because Google used them. Otherwise Python looks like BASIC with objects and Go is an overhyped niche language.
Off the top of my head, in order of likely difficulty to calculate: byte length, number of code points, number of graphemes/characters, height/width to display.
Maybe it would be best for Str not to have len at all. It could have bytes, code_points, graphemes. And every use would be precise.
No, none outside of stdlib anyway in the way you're probably thinking of.
There are specialized constructs which live in third-party crates, such as rope implementations and stack-to-heap growable Strings, but those would have to exist as external modules in Go as well.
FWIW the docs indicate that working with grapheme clusters will never end up in the standard library.
Yes this is why all competent libraries don't actually use string for path. They have their own path data type because it's actually a different data type.
Again, you can do the Go thing and just use the broken string, but that's dumb and you shouldn't. They should look at C++ std::filesystem, it's actually quite good in this regard.
> And a language that doesn't provide the tooling necessary to muddle through this mess safely isn't a serious platform to build on, IMHO.
I agree, even PHP does a better job at this than Go, which is really saying something.
> Go is making the tradeoff for language simplicity, while being marketed and positioned as a serious language for writing serious programs, which it is not.
I would agree.
Its main virtues are low cognitive load and encouraging simple straightforward ways of doing things, with the latter feeding into the former.
Languages with sophisticated powerful type systems and other features are superior in a lot of ways, but in the hands of most developers they are excuses to massively over-complicate everything. Sophomore developers (not junior but not yet senior) love complexity and will use any chance to add as much of it as they can, either to show off how smart they are, to explore, or to try to implement things they think they need but actually don't. Go somewhat discourages this, though devs will still find a way of course.
Experienced developers know that complexity is evil and simplicity is actually the sign of intelligence and skill. A language with advanced features is there to make it easier and simpler to express difficult concepts, not to make it more difficult and complex to express simple concepts. Every language feature should not always be used.
It's hard to find a language that will satisfy everyone's needs. Go I find better for smaller, focused applications/utilities... can definitely see how it would cause problems at an "enterprise" level codebase.
Once you know about it, though, it's easy to avoid. I do think, especially given that the CSP features of Go are downplayed nowadays, this should be addressed more prominently in the docs, with the more realistic solutions presented (atomics, mutexes).
It could also potentially be addressed using 128-bit atomics, at least for strings and interfaces (whereas slices are too big, taking up 3 words). The idea of adding general 128-bit atomic support is on their radar [2] and there already exists a package for it [3], but I don't think strings or interfaces meet the alignment requirements.
[1]: https://research.swtch.com/gorace
That’s the selling point for me. If I’m coming to a legacy codebase that no one still working there wrote, I pray it is Go, because then it just keeps working through upgrades of the compiler and, generally, the libraries used.
Python, for a number of years at this point, has had structural (!) pattern matching with unpacking, type-checking baked in, with exhaustiveness checking (depending on the type checker you use). And all that works at "type-check time".
It can also facilitate type-state programming through class methods.
Libraries like Pydantic are fantastic in their combination of ergonomics and type safety.
The prime missing piece is sum types, which need language-level support to work well.
Go is simplistic in comparison.
I mean, really neither should be the default. You should have to pick chars or bytes on use, but I don't think that's palatable; most languages have chosen one or the other as the preferred form. Or some have the joy of being forward thinking in the 90s and built around UCS-2 and later extended to UTF-16, so you've got 16-bit 'characters' with some code points that are two characters. Of course, dealing with operating systems means dealing with whatever they have as well as what the language prefers (or, as discussed elsewhere in this thread, pretending it doesn't exist to make easy things easier and hard things harder)
* The Dremel is approachable: I don't have to worry about cutting off my hand with the jigsaw or set up a jig with the circular saw. I don't have to haul my workpiece out to the garage.
* The Dremel is simple: One slider for speed. Apply spinny bit to workpiece.
* The Dremel is fun: It fits comfortably in my hand. It's not super loud. I don't worry about hurting myself with it. It very satisfyingly shaves bits of stuff off things.
In so many respects, the Dremel is a great tool. But 90% of the time when I use it, it ends up taking me five times as long (but an enjoyable 5x!) and the end result is a wobbly scratchy mess. I curse myself for not spending the upfront willpower to use the right tool for the job.
I find myself doing this with all sorts of real and software tools: Over-optimizing for fun and ease-of-entry and forgetting the value of the end result and using the proper tool for the job.
I think of this as the "Dremel effect" and I try to be mindful of it when selecting tools.
But that’s all this is, is a list of annoyances the author experiences when using Go. Great, write an article about the problems with Go, but don’t say “therefore it’s a bad language”.
While I agree with many of the points brought up, none of them seems like such a huge issue that it’s even worth discussing, honestly. So you have different taste than the original designers. Who cares? What language do you say is better? I can find just as many problems with that language.
Also, defer is great.
Use the language where it makes sense. You do know what if you have an issue that the language fails at, you can solve that particular problem in another language and.. call that code?
We used to have a node ts service. We had some computationally heavy stuff, we moved that one part to Go because it was good for that ONE thing. I think later someone ported that one thing to Rust and it became a standalone project.
Idk. It’s just code. Nobody really cares, we use these tools to solve problems.
You end up creating these elegant abstractions that are very seductive from a programmer-as-artist perspective, but usually a distraction from just getting the work done in a good enough way.
You can tell that the creators of Go are very familiar with engineer psychology and what gets them off track. Go takes away all shiny toys.
try (SomeResource foo = SomeResource.open()) {
    method(foo);
}

or

public void method() {
    try (...) {
        // all business logic wrapped in try-with-resources
    }
}
To me it seems like lexical scoping can accomplish nearly everything functional scoping can, just without any surprising behavior.
This also hurts discoverability. `slices`, `maps`, `iter`, `sort` are all top-level packages you simply need to know about to work efficiently with iteration. You cannot just `items.sort().map(foo)`, guided and discoverable by auto-completion.
Most of my coding these days is definitely in the 'for fun' bucket given my current role. So I'd rather take 5x and have fun.
That said, I don't think Go is only fun, I think it's also a viable option for many backend projects where you'd traditionally have reached for Java / C#. And IMO, it sure beats the recent tendency of having JS/Python powering backend microservices.
The answer here isn't to throw up your hands, pick one, and other cases be damned. It's to expose them all and let the engineer choose. To not beat the dead horse of Rust, I'll point that Ruby gets this right too.
* String#length # count characters (code points)
* String#bytes#length # count bytes
* String#grapheme_clusters#length # count grapheme clusters
Similarly, each of those "views" lets you slice, index, etc. across those concepts naturally. Golang's string is the worst of them all: it's nominally UTF-8, but nothing actually enforces that, so really it's just a bucket of bytes, unless you send it to APIs that silently require UTF-8 and drop your data on the floor or misbehave when it's not.

Height/width to display is font-dependent, so it can't just be on a "string"; it needs an object with additional context.
What is different about it? I don't see any constraints here relevant to having a different type. Note that this thread has already confused the issue, because they said filename and you said path. A path can contain /, it just happens to mean something.
If you want a better abstraction to locations of files on disk, then you shouldn't use paths at all, since they break if the file gets moved.
I have absolutely no idea how go would solve this problem, and in fact I don't think it does at all.
> The Go std-lib is fantastic.
I have seen worse, but I would still not call it decent considering this is a fairly new language that could have done a lot more.
I am going to ignore the incredible amount of asinine and downright wrong stuff in many of the most popular libraries (even the basic ones maintained by google) since you are talking only about the stdlib.
Off the top of my head: inconsistent tag management for structs (json defaults, omitzero vs omitempty), not even errors on tag typos; the reader/writer pattern that forces you to write custom connectors between the two; bzip2 has a reader and no writer; the context linked list for K/V. Just look at the consistency of the interfaces in the "encoding" pkg and cry; the package `hash` should actually be `checksum`. Why do `strconv.Atoi`/`Itoa` still exist? Time.Add() vs Time.Sub()...
It's chock full of inconsistencies. It forces me to look at the documentation every single time I don't use something for more than a couple of days. No, the autocomplete with the 2-line documentation does not include the potential pitfalls that are explained only at the top of the package.
And please don't get me started on the wrappers I had to write around stuff in the net library to make it a bit more consistent or just less plain wrong. net/url.Parse!!! I said don't make me start on this package! nil vs NoBody! ARGH!
None of this is stuff at the language level (of which there is plenty to say).
None of it is a dealbreaker per se, but it adds attrition and becomes death by a billion cuts.
I don't even trust any parser written in go anymore, I always try to come up with corner cases to check how it reacts, and I am often surprised by most of them.
Sure, there are worse languages and libraries. Still not something I would pick up in 2025 for a new project.
edit: hold on wait, java doesn't have Value types yet... /jk
This is one of the minor errors in the post.
Btw, AI sucks at Go. One would have guessed that such a simple language would suit ChatGPT. Turns out ChatGPT is much better at Java, C#, Python and many other languages than Go.
package main

import "fmt"

type I interface{}
type S struct{}

func main() {
    var i I
    var s *S
    fmt.Println(s, i)                       // nil nil
    fmt.Println(s is nil, i is nil, s == i) // t,t,f: Not confusing anymore?
    i = s
    fmt.Println(s, i)                       // nil nil
    fmt.Println(s is nil, i is nil, s == i) // t,f,t: Still not confusing?
}
Of course, this means you have to precisely define the semantics of `is` and `==`:
- `is` for interfaces checks both the value and the interface type.
- `==` for interfaces uses only the value and not the interface type.
- For structs/values, `is` and `==` are obvious since there's only a value to check.
There's no need to ever convert that to a "string".
Until you run into a crate that wants the filename in String form.

One of the great advances of Unix was that you don't need separate handling for binary data and text data; they are stored in the same kind of file and can be contained in the same kinds of strings (except, sadly, in C). Occasionally you need to do some kind of text-specific processing where you care, but the rest of the time you can keep all your code 8-bit clean so that it can handle any data safely.
Languages that have adopted the approach you advocate, such as Python, frequently have bugs like exception tracebacks they can't print (because stdout is set to ASCII) or filenames they can't open when they're passed in on the command line (because they aren't valid UTF-8).
The entire point of UTF-8 (designed, by the way, by the group that designed Go) is to encode Unicode in such a way that these byte string operations perform the corresponding Unicode operations, precisely so that you don't have to care whether your string is Unicode or just plain ASCII, so you don't need any error handling, except for the rare case where you want to do something related to the text that the string semantically represents. The only operation that doesn't really map is measuring the length.
Since they didn't want to have a 'proper' RAII unwinding mechanism, this is the crappy compromise they came up with.
- As someone who’s worked with C/C++ and Fortran, I think all these languages have their own challenges—Go’s simplicity trades off against Rust’s safety guarantees, for example.
- Could someone share a real-world example where Go’s design caused a production issue that Rust or another language would’ve avoided?
- I’m curious how these trade-offs play out in practice.
Sorry, I don't do Go/Rust coding, still on C/C++/Fortran.
But it is being put. Read newsletters like "The Go Blog", "Go Weekly". It's been improving constantly. Language-changes require lots of time to be done right, but the language is evolving.
But that’s a moot point because I appreciate it’s just an example. And, more importantly, Go doesn’t support the kind of control flows you’re describing anyway (as I said in my previous post).
A lot of the comments here about ‘defer’ make sense in other languages that have different idioms and features to Go. But they don’t apply directly to Go because you’d have to make other changes to the language first (eg implementing try blocks).
Every single thing you listed here is supported by &[u8] type. That's the point: if you want to operate on data without assuming it's valid UTF-8, you just use &[u8] (or allocating Vec<u8>), and the standard library offers what you'd typically want, except of the functions that assume that the string is valid UTF-8 (like e.g. iterating over code points). If you want that, you need to convert your &[u8] to &str, and the process of conversion forces you to check for conversion errors.
Even AST-based macro systems have tricky problems like nontermination and variable capture. It can be tough to debug why your compiler is stuck in an infinite macro expansion loop. Macro systems that solve these problems, like the R⁵RS syntax-rules system, have other drawbacks like very complex implementations and limited expressive power.
And often there's no easy way to look at the code after it's been through the macro processor, which makes bugs in the generated code introduced by buggy macros hard to track down.
By contrast, if your code generator hangs in an infinite loop, you can debug it the same way you normally debug your programs; it doesn't suffer from tricky bugs due to variable capture; and it's easy to look at its output.
I don't completely get this. If your memory requirements are strict, this makes little to no sense to me. I was programming J2ME games 20 years ago for Nokia devices. We were trying to fit games into 50-128kb RAM and all of this with Java of all the languages. No sane Java developer would have looked at that code without fainting - no dynamic allocations, everything was static, byte and char were the most common data types used. Images in the games were shaved, no headers, nothing. You really have to think it through if you got memory constraints on your target device.
(something that's not a minor error, that someone else pointed out, is that Python isn't strictly refcounted. Yeah, that's why emphasized "almost" and "pretty much". I can't do anything about that kind of critique)
If I hit a point where I need to do that in pretty much any other language, I'll cast about for some way to avoid doing it for a while (to include finding a different dependency to replace that one) because it's almost always gonna be a time-suck and may end up yielding nothing useful at all without totally unreasonable amounts of time spent on it, so I may burn time trying and then just have to abandon the effort.
In go, I unhesitatingly hop right in and find what I need fast, just about every time.
It's the polar opposite of something like Javascript (or Typescript—it doesn't avoid this problem) where you can have three libraries and all three read like a totally different language from the one you're writing, and also like totally different languages from one another. Ugh. This one was initially written during the "everything should be a HOF" trend and ties itself in knots to avoid ever treating the objects it's implicitly instantiating all over the place as objects... this one uses "class" liberally... this one extensively leans on the particular features of prototypal inheritance, OMG, kill me now... this one imports Lodash, sigh, here we go... et cetera.
Are Java AOT compilation times just as fast as Go?
Go (and lots of other languages...) wreck it on dependency management and deployment, though. :-/ As the saying goes, "it was easier to invent Docker than fix Python's tooling".
There aren't many possibilities for nil errors in Go once you eliminate the self-harm of abusing pointers to represent optionality.
After Go added generics in version 1.18, you can just import someone else's generic implementations of whatever of these functions you want and use them all throughout your code and never think about it. It's no longer a problem.
And if you're engaging in CS then Go is probably the last language you should be using. If however, what you're interested in doing is programming, the fundamental data structures there are arrays and hashmaps, which Go has built-in. Everything else is niche.
> Also, as mentioned by another comment, an HTTP or crypto library can become obsolete _fast_. What about HTTP3? What about post-quantum crypto? What about security fixes? The stdlib is tied to the language version, thus to a language release. Having such code independant allows is to evolve much faster, be leaner, and be more composable. So yes, the library is well maintained, but it's tied to the Go version.
The entire point is to have a well supported crypto library. Which Go does and it's always kept up to date. Including security fixes.
As for post-quantum: https://words.filippo.io/mlkem768/
> - otoh, in Go, it was found out that net.Ip has an absolutely atrocious design (it's just an alias for []byte). Tailscale wrote a replacement that's now in a subpackage in net, but the old net.Ip is set in stone. (https://tailscale.com/blog/netaddr-new-ip-type-for-go)
Yes, and? This seems to me to be the perfect way to handle things - at all times there is a blessed high-quality library to use. As warts of its design get found out over time, a new version is worked on and released once every ~10 years.
A total mess of barely-supported libraries that the userbase is split over is just that - a mess.
The kind of general reason you need a mutex is that you are mutating some data structure from one valid state to another, but in between, it's in an inconsistent state. If other code sees that inconsistent state, it might crash or otherwise misbehave. So you acquire the mutex beforehand and release it once it's in the new valid state.
But what happens if you panic during the modification? The data structure might still be in an inconsistent state! But now it's unlocked! So other threads that use the inconsistent data will misbehave, and now you have a very tricky bug to fix.
This doesn't always apply. Maybe the mutex is guarding a more systemic consistency condition, like "the number in this variable is the number of messages we have received", and nothing will ever crash or otherwise malfunction if some counts are lost. Maybe it's really just providing a memory fence guarding against a torn read. Maybe the mutex is just guarding a compare-and-swap operation written out as a compare followed by a swap. But in cases like these I question whether you can really panic with the mutex held!
This is why Java deprecated Thread.stop. (But Java does implicitly unlock mutexes when unwinding during exception handling, and that does cause bugs.)
This is only vaguely relevant to your topic of whether Golang is good or not. Explicit error handling arguably improves your chances of noticing the possibility of an error arising with the mutex held, and therefore handling it correctly, but you correctly pointed out that because Go does have exceptions, you still have to worry about it. And cases like these tend to be fiendishly hard to test—maybe it's literally impossible for a test to make the code you're calling panic.
It is. None of this was new to me. In C++ defining a non-virtual destructor on a class hierarchy is also not new to me, but a fair critique can be made there too why it's "allowed". I do feel like C++ can defend that one from first principles though, in a way that Go cannot.
I'm not sure what you mean by the foo99 thing. I'm guessing this is about defer inside a loop?
> Is fixing those perceived footguns really a missed opportunity?
In my opinion very yes.
I also review a lot of Go code. I see these problems all the time from other people.
Generics can only be used on functions, not methods, because of its type system. So don't hold your breath: changing this now would be a breaking change.
Yes, that was my assumption when bash et al also had problems with it.
Typically the way you do this is you have the constructor for path do the validation or you use a static path::fromString() function.
Also, paths breaking when a file is moved is correct behavior sometimes. For example, something like openFile() or moveFile() requires paths. Also, a path can be a relative location.
A simple one: if you create two separate libraries in Go and try to link them with an application, you will have a terrible time.
I've run into this same issue: https://github.com/golang/go/issues/65050
colors := items.Filter(_.age > 20).Map(_.color)
Instead of colors := items.Filter(func(x Item){ return x.age > 20 }).Map(func(x Item){ return x.color })
which as best as I can tell is how you'd express the same thing in Go if you had a container type with Map and Filter defined.

Joda-Time is an excellent library and indeed it was basically the basis for Java's time API, and for pretty much any modern language's time API. Given the history, Java has basically always had the best time library available at the time.
Great, pretty much every language ever can do the equivalent. Not what anyone is talking about.
> Java is the epitome of backwards and forward-compatible changes,
Is the number of companies stuck on Java 8 lower than 50% yet? [1]
a := Start()
if thingEnabled {
thing := connectToThing()
defer thing.Close()
a.SetThing(thing)
}
a.Run(ctx)
You made a poor choice of language for the problem. It'd be a good fit for C/C++/Rust/Zig.
Why not? Machine code is not all that special - C++ and Rust compile slowly because of their optimization passes, not because machine code is the target. Go "barely does anything": it just spits out machine code almost as is.
Java AOT via GraalVM's native image is quite slow, but it has a different way of working (doing all the Java class loading and initialization and "baking" that into the native image).
It's not viable to use, but: https://github.com/borgo-lang/borgo
As someone who's been doing Go since 2015, working on dozens of large codebases counting probably a million lines total, across multiple teams, your criticisms do not ring true.
Go is no worse than C when it comes to extensibility, or C# or Java for that matter. Go programs are only extensible to the extent (ha) developers design their codebases right. Certainly, Go trades expressivity for explicitness more than some languages. You're encouraged to have fewer layers of abstraction and be more concrete and explicit. But in no way does that impede being able to extend code. The ability to write modular, extensible programs is a skill that must be learned, not something a programming language gives you for free.
It sounds like you worked on a poorly constructed codebase and assumed it was Go's fault.
A struct{a, b int32} takes 8 bytes of memory. It doesn't use any extra bytes to “know” its type, to point to a vtable of “methods,” to store a lock, or any other object “header.”
Dynamic dispatch in Go uses interfaces, which are fat pointers that store both the type and a pointer to an object.
With this design it's only natural that you can have nil pointers, nil interfaces (no type and no pointer), and typed interfaces to a nil pointer.
This may be a bad design decision, it may be confusing. It's the reason why data races can corrupt memory.
But saying, as the author, “The reason for the difference boils down to again, not thinking, just typing” is just lazy.
Just as lazy as it is arguing Go is bad for portability.
I've written Go code that uses syscalls extensively and runs in two dozen different platforms, and found it far more sensible than the C approach.
I didn't really use it much until the last few years. It was limited and annoyingly verbose. Now it's great: you don't even have to explicitly annotate covariant/contravariant types, and a lot of what used to be clumsy annotation with imports from typing is now just specified with normal Python.
And best of all, more and more libraries are viewing type support as a huge priority, so there's usually no more having to download type stubs and annotation packages and worry about keeping them in sync. There are some libraries that do annoying things like adding " | None" after all their return types to allow themselves to be sloppy, but at least they are sort of calling out to you that they could be sloppy instead of letting it surprise you.
It's now good and easy enough that using type annotations saves me time even for small scripts, just from quickly catching typos or a botched return type.
Like you said, Pydantic is often the magic that makes it really useful. It is just easy enough and adds enough value that it's worth not lugging around data in dicts or tuples.
My main gripe with Go's typing has always been that I think the structural typing of its interfaces is convenient, but really it's convenient in the same way that duck typing is. Just as a hunter with a duck call is the same as a duck under duck typing, an F16 and a VCR are both things that have an eject feature.
However, the non-intuitive punning of nil is unfortunate.
I'm not sure what the ideal design would be.
Perhaps just making an interface not comparable to nil, but instead something like `unset`.
type Cat struct {}
type Animal interface{}
func main() {
var c *Cat = nil
var a Animal = c
if a == nil { // compile error, can't compare interface to nil
;
}
if a == unset { // false, hopefully intuitively
}
}
Still, it's a sharp edge you hit once and then understand. I am surprised people get so bothered by it... it's not like something that impairs your use of the language once you're proficient. (E.g. complaints about nil existing at all, or error handling, are much more relatable!)
If your API takes &str, and tries to do byte-based indexing, it should almost certainly be taking &[u8] instead.
- Zero values, lack of support for constructors
- Poor handling of null
- Mutability by default
- A static type system not designed with generics in mind
- `int` is not arbitrary precision [1]
- The built-in array type (slices) has poorly considered ownership semantics [2]
Notable mentions:
- No sum types
- No string interpolation
Yes, and that's a good thing. It allows every code that gets &str/String to assume that the input is valid UTF-8. The alternative would be that every single time you write a function that takes a string as an argument, you have to analyze your code, consider what would happen if the argument was not valid UTF-8, and handle that appropriately. You'd also have to redo the whole analysis every time you modify the function. That's a horrible waste of time: it's much better to:
1) Convert things to String early, and assume validity later, and
2) Make functions that explicitly don't care about validity take &[u8] instead.
This is, of course, exactly what Rust does: I am not aware of a single thing that &str allows you to do that you cannot do with &[u8], except things that do require you to assume it's valid UTF-8.
bar, err := foo()
if err != nil {
return err
}
if err = foo2(); err != nil {
return err
}
Sounds more like nitpicking.

If you really care about scope while being able to use `bar` later down, the code should be written as:
bar, err := foo()
if err != nil {
return err
}
err = foo2() // Just reuse `err` plainly
if err != nil {
return err
}
which actually overwrites `err`, as opposed to "shadowing" it.

The confusion here is that the difference between `if err != nil` and `if err = call(); err != nil` is not just style; the latter also introduces a scope that captures whatever variables get created before the `;`.
If you really REALLY want to use the same `if` style, try:
if bar, err := foo(); err != nil {
return err
} else if bar2, err := foo2(); err != nil {
return err
} else {
[Use `bar` and `bar2` here]
return ...
}
Can it? If you want to open a file with invalid UTF8 in the name, then the path has to contain that.
And a path can contain the path separator - it's the filename that can't contain it.
> For example something like openFile() or moveFile() requires paths.
macOS has something called bookmark URLs that can contain things like inode numbers or addresses of network mounts. Apps use it to remember how to find recently opened files even if you've reorganized your disk or the mount has dropped off.
IIRC it does resolve to a path so it can use open() eventually, but you could imagine an alternative. Well, security issues aside.
You were right, it's a niche and therefore pretty much irrelevant issue. They may as well have rejected Python due to its "significant whitespace".
Sure? Depends on use case.
> too much verbosity
Doesn't meaningfully affect anything.
> Too much fucking "if err != nil".
A surface level concern.
> The language sits in an awkward space between rust and python where one of them would almost always be a better choice.
Rust doesn't have a GC so it's stuck to its systems programming niche. If you want the ergonomics of a GC, Rust is out.
Python? Good, but slow, packaging is a joke, dynamic typing (didn't you mention type safety?), async instead of green threads, etc., etc.
Doesn't this demonstrate my point? If you can do everything with &[u8], what's the point in validating UTF-8? It's just a less universal string type, and your program wastes CPU cycles doing unnecessary validation.
So it’s not like we’re running on a machine with only kilobytes of RAM, but we do want to minimize our usage.
An oxymoron if I've ever heard one.
Note that &[u8] would allow things like null bytes, and maybe other edge cases.
So you naturally write another one of these functions that takes a `&str` so that it can pass to another function that only accepts `&str`.
Fundamentally no one actually requires validation (ie, walking over the string an extra time up front), we're just making it part of the contract because something else has made it part of the contract.
Which part are you referring to, here?
> Even if you'd assume all of this should be on stack you still would crash or bleed memory through implicit allocations that exit the stack.
What do you mean by this? I don't mean to be rude, but this sounds confusing if you understand how memory works. What do you mean an allocation that exits the stack would bleed memory?
In Linux, they’re 8-bit almost-arbitrary strings like you noted, and usually UTF-8. So they always have a convenient 8-bit encoding (I.e. leave them alone). If you hated yourself and wanted to convert them to UTF-16, however, you’d have the same problem Windows does but in reverse.
refinement: the process of removing impurities or unwanted elements from a substance.
refinement: the improvement or clarification of something by the making of small changes.
public static void in a class with factory of AbstractFactoryBuilderInstances...? right..? Yes, say that again?
We are talking about removing unnecessary syntactic constructs, not adding as some would do with annotations in order to have what? Refinement types perhaps? :)
The upshot is that since the values aren’t always UTF-16, there’s no canonical way to convert them to single byte strings such that valid UTF-16 gets turned into valid UTF-8 but the rest can still be roundtripped. That’s what bastardized encodings like WTF-8 solve. The Rust Path API is the best take on this I’ve seen that doesn’t choke on bad Unicode.
use std::env;
fn main() {
let args: Vec<String> = env::args().collect();
...
}
When I run this code, a literal example from the official manual, with this filename I have here, it panics: $ ./main $'\200'
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "\x80"', library/std/src/env.rs:805:51
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
($'\200' is bash's notation for a single byte with the value 128. We'll see it below in the strace output.)

So, literally any program anyone writes in Rust will crash if you attempt to pass it that filename, if it uses the manual's recommended way to accept command-line arguments. It might work fine for a long time, in all kinds of tests, and then blow up in production when a wild file appears whose name fails to be valid Unicode.
This C program I just wrote handles it fine:
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
char buf[4096];
void
err(char *s)
{
perror(s);
exit(-1);
}
int
main(int argc, char **argv)
{
int input, output;
if ((input = open(argv[1], O_RDONLY)) < 0) err(argv[1]);
if ((output = open(argv[2], O_WRONLY | O_CREAT, 0666)) < 0) err(argv[2]);
for (;;) {
ssize_t size = read(input, buf, sizeof buf);
if (size < 0) err("read");
if (size == 0) return 0;
ssize_t size2 = write(output, buf, (size_t)size);
if (size2 != size) err("write");
}
}
(I probably should have used O_TRUNC.)

Here you can see that it does successfully copy that file:
$ cat baz
cat: baz: No such file or directory
$ strace -s4096 ./cp $'\200' baz
execve("./cp", ["./cp", "\200", "baz"], 0x7ffd7ab60058 /* 50 vars */) = 0
brk(NULL) = 0xd3ec000
brk(0xd3ecd00) = 0xd3ecd00
arch_prctl(ARCH_SET_FS, 0xd3ec380) = 0
set_tid_address(0xd3ec650) = 4153012
set_robust_list(0xd3ec660, 24) = 0
rseq(0xd3ecca0, 0x20, 0, 0x53053053) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=9788*1024, rlim_max=RLIM64_INFINITY}) = 0
readlink("/proc/self/exe", ".../cp", 4096) = 22
getrandom("\xcf\x1f\xb7\xd3\xdb\x4c\xc7\x2c", 8, GRND_NONBLOCK) = 8
brk(NULL) = 0xd3ecd00
brk(0xd40dd00) = 0xd40dd00
brk(0xd40e000) = 0xd40e000
mprotect(0x4a2000, 16384, PROT_READ) = 0
openat(AT_FDCWD, "\200", O_RDONLY) = 3
openat(AT_FDCWD, "baz", O_WRONLY|O_CREAT, 0666) = 4
read(3, "foo\n", 4096) = 4
write(4, "foo\n", 4) = 4
read(3, "", 4096) = 0
exit_group(0) = ?
+++ exited with 0 +++
$ cat baz
foo
The Rust manual page linked above explains why they think introducing this bug by default into all your programs is a good idea, and how to avoid it:

> Note that std::env::args will panic if any argument contains invalid Unicode. If your program needs to accept arguments containing invalid Unicode, use std::env::args_os instead. That function returns an iterator that produces OsString values instead of String values. We’ve chosen to use std::env::args here for simplicity because OsString values differ per platform and are more complex to work with than String values.
I don't know what's "complex" about OsString, but for the time being I'll take the manual's word for it.
So, Rust's approach evidently makes it extremely hard not to introduce problems like that, even in the simplest programs.
Go's approach doesn't have that problem; this program works just as well as the C program, without the Rust footgun:
package main
import (
"io"
"log"
"os"
)
func main() {
src, err := os.Open(os.Args[1])
if err != nil {
log.Fatalf("open source: %v", err)
}
dst, err := os.OpenFile(os.Args[2], os.O_CREATE|os.O_WRONLY, 0666)
if err != nil {
log.Fatalf("create dest: %v", err)
}
if _, err := io.Copy(dst, src); err != nil {
log.Fatalf("copy: %v", err)
}
}
(O_CREATE makes me laugh. I guess Ken did get to spell "creat" with an "e" after all!)

This program generates a much less clean strace, so I am not going to include it.
You might wonder how such a filename could arise other than as a deliberate attack. The most common scenario is when the filenames are encoded in a non-Unicode encoding like Shift-JIS or Latin-1, followed by disk corruption, but the deliberate attack scenario is nothing to sneeze at either. You don't want attackers to be able to create filenames your tools can't see, or turn to stone if they examine, like Medusa.
Note that the log message on error also includes the ill-formed Unicode filename:
$ ./cp $'\201' baz
2025/08/22 21:53:49 open source: open ζ: no such file or directory
But it didn't say ζ. It actually emitted a byte with value 129, making the error message ill-formed UTF-8. This is obviously potentially dangerous, depending on where that logfile goes, because it can include arbitrary terminal escape sequences. But note that Rust's UTF-8 validation won't protect you from that, or from things like this:

$ ./cp $'\n2025/08/22 21:59:59 oh no' baz
2025/08/22 21:59:09 open source: open
2025/08/22 21:59:59 oh no: no such file or directory
I'm not bagging on Rust. There are a lot of good things about Rust. But its string handling is not one of them.

If you stuff random binary data into a string, Go just steams along, as described in this post.
Over the decades I have lost data to tools skipping non-UTF-8 filenames. I should not be blamed for having files that were named before UTF-8 existed.
If your API takes &str, and tries to do byte-based indexing, it should
almost certainly be taking &[u8] instead.
Str is indexed by bytes. That's the issue.

You're meant to use `unsafe` as a way of limiting the scope of reasoning about safety.
Once you construct a `&str` using `from_utf8_unchecked`, you can't safely pass it to any other function without looking at its code and reasoning about whether it's still safe.
Also see the actual documentation: https://doc.rust-lang.org/std/primitive.str.html#method.from...
> Safety: The bytes passed in must be valid UTF-8.
It seems like there's some confusion in the GGGGGP post, since Go works correctly even if the filename is not valid UTF-8 .. maybe that's why they haven't noticed any issues.
Agree on node/TS error handling. It’s super whack
I'm also reminded about the time that Tomcat stopped being an application you deploy to and just being an embedded library in the runtime! It was like the collective light went on that Web containers were just a sham. That didn't prevent employers from forcing me to keep using Websphere/WAS because "they paid for that and by god they're going to use it!" Meanwhile it was totally obsolete as docker containers just swept them all by the wayside.
I wonder what "Websphere admins" are doing these days? That was once a lucrative role to be able to manage those Jython configs, lol.
That's not syntax. Factory builders have nothing to do with syntax and everything to do with code style.
The oxymoron is implying syntax refinements would be inspired by Go of all things, a language with famously basic syntax. I'm not saying it's bad to have basic syntax. But obviously modern Java has a much more refined syntax and it's not because it looks closer to Go.
Well if maximalist performance tuning is your stated goal, to the point that single bytes count, I would imagine Go is a pretty terrible choice? There are definitely languages with a more tunable GC and more cache line friendly tools than Go.
But honestly, your comment reads more like gatekeeping, saying someone is inexperienced because they aren't working with software at the same scale as you. You sound equally inexperienced (and uninterested) with their problem domain.
Granted, many people don't ever need to handle that kind of throughput. It depends on the app and the load put on to it. So many people don't realize. Which is fine! If it works it works. But if you do fall into the need of concurrency, yea, you probably don't want to be using node - even the newer versions. You certainly could do worse than golang. It's good we have some choices out there.
The other thing I always say is the choice in languages and technology is not for one person. It's for the software and team at hand. I often choose languages, frameworks, and tools specifically because of the team that's charged with building and maintaining. If you can make them successful because a language gives them type safety or memory safety that rust offers or a good tool chain, whatever it is that the team needs - that's really good. In fact, it could well be the difference between a successful business and an unsuccessful one. No one really cares how magical the software is if the company goes under and no one uses the software.
https://github.com/golang/go/issues/32334
oops, looks like some files are just inaccessible to you, and you cannot copy them.
Fortunately, when you try to delete the source directory, Go's standard library enters an infinite loop, which saves your data.
Then make it valid UTF-8. If you try to solve the long tail of issues in a commonly used function of the library, it's going to cause a lot of pain. This approach is better. If someone has a weird problem, like file names with invalid characters, they can solve it themselves, even publish a package. Why complicate 100% of uses to solve 0.01% of issues?
I guess the issue isn't so much about whether strings are well-formed, but about whether the conversion (eg, from UTF-16 to UTF-8 at the filesystem boundary) raises an error or silently modifies the data to use replacement characters.
I do think that is the main fundamental mistake in Go's Unicode handling; it tends to use replacement characters automatically instead of signalling errors. Using replacement characters is at least conformant to Unicode but imo unless you know the text is not going to be used as an identifier (like a filename), conversion should instead just fail.
The other option is using some mechanism to preserve the errors instead of failing quietly (replacement) or failing loudly (raise/throw/panic/return err), and I believe that's what they're now doing for filenames on Windows, using WTF-8. I agree with this new approach, though would still have preferred they not use replacement characters automatically in various places (another one is the "json" module, which quietly corrupts your non-UTF-8 and non-UTF-16 data using replacement characters).
Probably worth noting that the WTF-8 approach works because strings are not validated; WTF-8 involves converting invalid UTF-16 data into invalid UTF-8 data such that the conversion is reversible. It would not be possible to encode invalid UTF-16 data into valid UTF-8 data without changing the meaning of valid Unicode strings.
Another example I found in my code is a conditional lock. The code runs through a list of objects it might have to update (note: it is only called in one thread). As an optimization, it doesn't acquire a lock on the list until it finds an object that has to be changed. That allows other threads to use/lock that list in the meantime instead of waiting until the list scan has finished.
I now realize I could have used an RWLock...
What's important is how good primitives you have access to. Java has platform and virtual threads now (the latter simplifying a lot of cases where reactive stuff was prevalent before) with proper concurrent data structures.
The JVM is a runtime, just like what Go has. It allows for the best observability of any platform (you can literally connect to a prod instance and check e.g. the object allocations) and has stellar performance and stability.
I think this is sensible, because the fact that Windows still uses UTF-16 (or more precisely "Unicode 16-bit strings") in some places shouldn't need to complicate the API on other platforms that didn't make the UCS-2/UTF-16 mistake.
It's possible that the WTF-8 strings might not concatenate the way they do in UTF-16 or properly enforced WTF-8 (which has special behaviour on concatenation), but they'll still round-trip to the intended 16-bit string, even after concatenation.
Using `defer...recover` is computationally expensive within hot paths. And since Go encourages errors to be surfaced via the `error` type, when writing idiomatic Go you don't actually need to raise exceptions all that often.
So panics are reserved for instances where your code reaches a point that it cannot reasonably continue.
This means you want to catch panics at boundary points in your code.
Given that global state is an anti-pattern in any language, you'd want to wrap your mutex, file, whatever operations in a `struct` or its own package and instantiate it there. So you can have a destructor on that which is still caught by panic and not overly reliant on `defer` to achieve it.
This actually leads to my biggest pet peeve in Go. It's not `x, err := someFunction()` and nor is it `defer/panic`, these are all just ugly syntax that doesn't actually slow you down. My biggest complaint is the lack of creator and destructor methods for structs.
The `NewClass`-style way of initialising types is an ugly workaround, and it constantly requires checking whether libraries require manual initialisation before use. Not a massive time sink, but it's not something the IDE can easily hint at, so you do get pulled from your flow to either Google that library or check what `New...` functions are defined in the IDE's autocompletion. Either way, it's a distraction.
The lack of a destructor, though, does really show up all the other weaknesses in Go. It then makes `defer` so much more important than it otherwise should be. It means the language then needs runtime hacks for you to add your own dereference hooks[0][1] (this is a problem I run into often with CGO where I do need to deallocate, for example, texture data from the GPU). And it means you need to check each struct to see if it includes a `Close()` method.
I've heard the argument against destructors is that they don't catch errors. But the counterargument to that is the `defer x.Close()` idiom, where errors are ignored anyway.
I think that change, and tuples too so `err` doesn't always have to be its own variable, would transform Go significantly into something that feels just as ergonomic as it is simple to learn, and without harming the readability of Go code.
[0] https://medium.com/@ksandeeptech07/improved-finalizers-in-go...
[1] eg https://github.com/lmorg/ttyphoon/blob/main/window/backend/r...
(1) "Generics are too complicated and academical and in the real world we only need them for a small number of well-known tasks anyway, so let's just leave them out!"
(2) The amount of code that does need generics but now has to work around the lack of them piles up, leading to an explosion of different libraries, design patterns, etc, that all try to partially recreate them in their own way.
(3) The language designers finally cave and introduce some kind of generics support in a later version of the language. However, at this point, they have to deal with all the "legacy" code that is not generics-aware and with runtime environments that aren't either. It also somehow has to play nice with all the ad-hoc solutions that are still present. So the new implementation has to deal with a myriad of special cases and tradeoffs that wouldn't be there in the first if it had been included in the language from the beginning.
(4) All the tradeoffs give the feature a reputation of needless complexity and frustrating limitations and/or footguns, prompting the next language designer to wonder if they should include them at all. Go to (1) ...
Also, Java has ZGC that basically solved the pause time issue, though it does come at the expense of some throughput (compared to their default GC).
I think Java and C# offer clearly more straightforward ways to extend and modify existing code. Maybe the primary ways extension in Java and C# works are not quite the right ones for every situation.
The primary skill necessary to write modular code is first knowing what the modular interfaces is and second being able to implement it in a clean fashion. Go does offer a form of interfaces. But precisely because it encourages you to be highly explicit and avoid abstraction, it can make it difficult for you to implement the right abstraction and therefore complicate the modular interfaces.
Programming is hard. I don’t think adopting a kind of ascetic language like Go makes programming easier overall. Maybe it’s harder to be an architecture astronaut in Go, but only by eliminating entire classes of abstraction that are sometimes just necessary. Sometimes, inheritance is the right abstraction. Sometimes, you really need highly generic and polymorphic code (see some of the other comments for issues with Go’s implementation of generics).
That's the point.
Good performance with traditional tracing GC's involves a lot of memory overhead. Golang improves on this quite a bit with its concurrent GC, and maybe Java will achieve similarly in the future with ZGC, but reference counting has very little memory overhead in most cases.
> Reference counting when an object is shared between multiple cores require atomic increments/decrements and that is very expensive
Reference counting with a language like Rust only requires atomic inc/dec when independently "owning" references (i.e. references that can keep the object around and extend its lifecycle) are added or removed, which should be a rare operation. It's not really performing an atomic op on every access.
You should see what package management was like for golang in the beginning: "just pin a link to GitHub". That was probably one of the most embarrassing technical faux pas I've ever seen.
>dynamic typing
Type hinting works very well in python and the option to not use it when prototyping is useful.
>Rust doesn't have a GC so it's stuck to its systems programming niche.
The lack of GC makes it faster than golang. It has a better type system also.
If speed is really a concern, using golang doesn't make much sense.
Go is amazing in that it lets you tell the machine what you want, simply and you can easily verify that that is indeed what the machine should be doing.
Regarding defer, idk about other, but I never assumed it was a gotcha, you read the go docs once and all is just clear and you don’t make most mistakes that others claim are footguns.
A tracing GC can do the job concurrently, without slowing down the actual work-bearing threads, so throughput will be much better.
> Golang improves on this quite a bit with its concurrent GC
Not sure what that has to do with memory overhead. Java's GCs are at least a generation ahead on every count; Go can just get away with a slower GC due to value types.
Eventually there should be an underlying operation which can only work on valid UTF-8, but that doesn't exist. UTF-8 was designed such that invalid data can be detected and handled, without affecting the meaning of valid subsequences in the same string.
That “reign” continued forever if you count when java.time got introduced and no, Calendar was not much better in the mean time. Python already had datetime in 2002 or 2003 and VB6 was miles ahead back when Java had just util.Date.
Windows doing something similar wouldn't surprise me at all. I believe NTFS internally stores filenames as UTF-16, so enforcing UTF-8 at the API boundary sounds likely.
In practice, I've never struggled to write modular, extensible software. Like in any language you have to think about the boundaries between modules and layers, and how to design the interfaces between them to avoid spaghetti soup. It's not more difficult in Go, just different. Java's OO approach kind of inextricably pulls you towards AbstractSingletonProxyFactoryBean situations that just feel wrong to replicate in Go. A lot of the "performative" software structure kind of falls away and you end up with very concrete, readable code.
Go is often derided for being "too simple", but I think this kind of simplicity is underrated in our industry. I'm a fan of the Niklaus Wirth approach to software development. Go is basically a reskinned Modula-2 with GC and concurrency, and I think it's the Wirth influence that drives its core design.
To be clear, Go is chock full of warts and faults, some of them quite egregious. But they're mostly about the practical aspects of the language (invasive error handling, lack of sum types and pattern matching, the badness of channels) that don't really relate to what you're talking about.
People like Rob Pike and Ken Thompson certainly knew that you can't put in a GC and cover all systems programming use cases, but they knew that Go could cover their use cases.
Or are you suggesting that they were frustrated with C++ so they decided to write a language they couldn't use instead of C++ for their use case?
> Google's networking services keep being writen in Java/Kotlin, C++, and nowadays Rust.
And? Google is a massive company that uses many languages across many teams. That doesn't mean that some people at Google, including Go's original creators, wouldn't use Go nowadays to write what they would previously use C++ for.
$ golangci-lint linters | wc -l
107
That is a long list of linters (only a few enabled by default, however). I much prefer less fragmented approaches such as clippy or ruff. It makes for a more coherent experience and surely much higher performance (1x AST parsing instead of dozens of times).

You kind of undermine your argument here, because someone experienced in building languages wouldn't identify this as a shortcoming of Python compared to other languages, but rather a design choice that was made to support other features; designing a language is all about making these tradeoffs to achieve specific design goals. It's one thing to keep boxed values around and be slow because of it -- it's another to do so because you want to support a dynamic type system.
This might be what the other poster was getting at when they said they don't want to listen to non-experts here, because the criticisms are usually shallow and based on personal programming preferences rather than an actual critique of language design.
How do you propose Go interfaces should be implemented then, to avoid the multiple issues the current implementation causes (more than one kind of nil is only one such issue, memory corruption under data races is another)?
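For readers unfamiliar with the "more than one kind of nil" issue: a Go interface value holds a (type, value) pair, so an interface wrapping a nil pointer is itself non-nil. A minimal sketch (the error type and function names here are made up for illustration):

```go
package main

import "fmt"

type myErr struct{}

func (*myErr) Error() string { return "boom" }

// mayFail returns a typed nil *myErr. At the call site the
// interface value holds (type=*myErr, value=nil), which does
// NOT compare equal to a plain nil error.
func mayFail() error {
	var e *myErr // nil pointer
	return e
}

func main() {
	err := mayFail()
	fmt.Println(err == nil) // false: the interface carries a type, so it is not nil
}
```

The usual workaround is to return the literal `nil` (not a typed nil variable) on the success path.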
fn main() {
let s = "12345";
println!("{}", &s[0..1]);
}
compiles and prints out "1".

This:
fn main() {
let s = "\u{1234}2345";
println!("{}", &s[0..1]);
}
compiles and panics with the following error: byte index 1 is not a char boundary; it is inside 'ሴ' (bytes 0..3) of `ሴ2345`
To get the nth char (scalar codepoint):

fn main() {
let s = "\u{1234}2345";
println!("{}", s.chars().nth(1).unwrap());
}
To get a substring:

fn main() {
let s = "\u{1234}2345";
println!("{}", s.chars().skip(0).take(1).collect::<String>());
}
To actually get the bytes you'd have to call #as_bytes which works with scalar and range indices, e.g.:

fn main() {
let s = "\u{1234}2345";
println!("{:02X?}", &s.as_bytes()[0..1]);
println!("{:02X}", &s.as_bytes()[0]);
}
IMO it's less intuitive than it should be, but still less bad than e.g. Go's two types of nil, because it will fail in a visible manner.

Also, I said I didn't write a language. I never said that I didn't maintain my own local fork so I could play with the VM and add opcodes to it and see what happens if I try experiments like "assume that all(type(_) is type(list[0]) for _ in list[i:])", which I've done just for fun to see how it'd explode when that assumption fails.
But no, I haven’t written a language of my own, so apparently I’m not qualified to have an opinion.
I don't identify that as a shortcoming, because it enables the dynamism. A "worthwhile shortcoming" is just another way of saying "tradeoff".
You might as well say C++'s performance is a shortcoming because it's a result of static typing, which is hard to write compared to dynamic languages. Or Rust's safety is a shortcoming because it has worse compile times compared to Go. Or Java's JVM is a shortcoming because it doesn't compile to machine code like C.
Just because a language doesn't have the same features as another doesn't mean it suffers a shortcoming. A shortcoming only exists in the context of your own use case, which is different from everyone else's. What you label a "shortcoming" is the feature others are looking for in their language.
> But no, I haven’t written a language of my own, I apparently I’m not qualified to have an opinion.
Your opinion is fine, the OP was just saying that they would like to hear a qualified opinion because they are left wondering how a language designer might change Go for the better.
But you should try designing and implementing your own language, it's a lot of fun and a worthy exercise for any dev.
let start = s.find('a')?;
let end = s.find('z')?;
let sub = &s[start..end];
and it will never panic, because find will never return something that's not on a char boundary. Where would you even get them from?
In my case it was in parsing text where a numeric value had a two character prefix but a string value did not. So I was matching on 0..2 (actually 0..2.min(string.len()), which doubly highlights the indexing issue), which blew up occasionally depending on the string values. There are perhaps smarter ways to do this (e.g. splitn on a space, regex, giant if-else statement, etc, etc) but this seemed at first glance to be the most efficient way because it all fit neatly into a match statement.

The inverse was also a problem: laying out text with a monospace font knowing that every character took up the same number of pixels along the x-axis (e.g. no odd emoji or whatever else). Gotta make sure to call #len on #chars instead of the string itself, as some of the text (Windows-1250 encoded) got converted into multi-byte Unicode codepoints.
I think this is a fine "fail-closed" way of language design. For example, Python has gone the other way and language complexity has gotten pretty bad since the small-language days. Trust what you are, don't try to please everyone, lest you become something like C++.
Clojure is good in this respect.
Ah, what????
I made a CLI tool[1] to mitigate this problem. It can be integrated into an IDE quite easily; currently it has (neo)vim integration described in the README and a VS Code plugin companion[2], both of which can serve as an example and inspiration for creating an integration for your IDE of choice.
[1] https://github.com/vipkek/gouse
[2] https://marketplace.visualstudio.com/items?itemName=looshch....
edit: formatting
If you use vanilla Rust with the global allocator everywhere, then you will get fragmentation.
You have to use nightly for custom allocators, and implementing and using them is still super annoying.
I think you can take this official advice[1][2][3][4] and extrapolate to other cases.
[1] https://go.dev/wiki/CodeReviewComments#receiver-type
[2] https://google.github.io/styleguide/go/decisions#receiver-ty...
AST macros are complicated, yeah, and I agree that any half-decent macro system needs a convenient way to dump generated code and so forth. I'm not saying any macro system is better than codegen. But a decent macro system will give you hygenic tools to modify an AST, whereas codegen really forces you to hack something together. I'll grant that there's some worse-is-better charm to codegen, but I don't think that saves it from being ultimately worse.
At that point, all languages suck because there are newbie mistakes and I don't understand them.
Maybe it is time to show credentials rather than being vague. I don't believe for a moment that you have decade-plus experience in Go.
`append` always returns a new slice value
`func append(slice []Type, elems ...Type) []Type`
The only correct way to use append is something like `sl = append(sl, 1, 2, 3)`
`sl` is now a new slice value, as `append` always returns a new slice value. You must now pass the new slice value back to the user, and the user must use the new slice value. The user must not use the old slice value.
It's trivial to fix the "bug" in the article, once you actually understand what a slice value is: https://go.dev/play/p/JRMV_IuXcZ6
A slice is not the underlying array, and the underlying array is not the slice. A slice is just a box with 3 numbers.
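A small sketch of why that mental model matters: whether append reuses the backing array depends on remaining capacity, which is exactly what makes holding on to the old slice value dangerous (the lengths and values here are arbitrary):

```go
package main

import "fmt"

// aliasDemo shows append reusing vs replacing the backing array.
// It returns a[0] (after mutating through b) and b[0] (after
// mutating through c).
func aliasDemo() (int, int) {
	a := make([]int, 2, 4) // len 2, cap 4
	b := append(a, 9)      // fits in capacity: b shares a's array
	b[0] = 7               // visible through a as well

	c := append(b, 1, 2, 3) // exceeds capacity: new array allocated
	c[0] = 42               // no longer visible through b
	return a[0], b[0]
}

func main() {
	x, y := aliasDemo()
	fmt.Println(x, y) // 7 7: b aliased a, but c did not alias b
}
```

This is why the only safe pattern is `sl = append(sl, ...)` and never touching the pre-append value again.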
Compared to incumbents like dotnet and PHP? Uhh, no. The tooling is very far behind and cumbersome in comparison.
Writing a compiler is not "worse is better" and does not force you to hack something [fragile] together. Therefore, your argument is wrong.
In the meantime he also wrote about Rust: "I’ve not found anything frustrating or stupid in Rust yet."
So he can just use Rust and stop using Go completely. Problem solved. No need to write those hate posts. Go is clearly not good for him, and that's ok.
A couple quotes from the Go Blog by Rob Pike:
> It’s important to state right up front that a string holds arbitrary bytes. It is not required to hold Unicode text, UTF-8 text, or any other predefined format. As far as the content of a string is concerned, it is exactly equivalent to a slice of bytes.
> Besides the axiomatic detail that Go source code is UTF-8, there’s really only one way that Go treats UTF-8 specially, and that is when using a for range loop on a string.
Both from https://go.dev/blog/strings
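A small illustration of that one special case: range decodes UTF-8 runes, while len and indexing see raw bytes (the sample string is arbitrary):

```go
package main

import "fmt"

// runeCount walks the string with range, which decodes UTF-8
// runes one at a time; len(s) and s[i] see raw bytes instead.
func runeCount(s string) int {
	n := 0
	for range s {
		n++
	}
	return n
}

func main() {
	s := "héllo"                      // 'é' is encoded as 2 bytes
	fmt.Println(len(s), runeCount(s)) // 6 5
}
```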
If you want UTF-8 in a guaranteed way, use the functions available in unicode/utf8 for that. Using `string` is not sufficient unless you make sure you only put UTF-8 into those strings.
If you put valid UTF-8 into a string, you can be sure that the string holds valid UTF-8, but if someone else puts data into a string, and you assume that it is valid UTF-8, you may have a problem because of that assumption.
"I am literally losing sleep over this issue since together with a move to more server based applications it seems like it could make it easy for people to do competitive operating systems." [1]
[1] https://www.joyk.com/dig/detail/1672957813119759#gsc.tab=0
It does though? Strings are internable, comparable, can be keys, etc.
Golang is too far down the road and cemented in its ways, to expect such significant changes in direction. At this stage, people need to accept it for what it is or look elsewhere.
Frankly, an official Go codegen library would solve pretty much all my complaints, but the only difference between that and a macro system is compiler integration.
package main

import "fmt"

type HorsePlay interface {
	Neigh()
}

type SomeClass[T HorsePlay] struct {
	players []T
}

func (sc SomeClass[T]) NeighAll() {
	for _, p := range sc.players {
		p.Neigh()
	}
}

type Horse struct{}

func (h Horse) Neigh() {
	fmt.Println("neigh!")
}

func main() {
	sc := SomeClass[Horse]{players: []Horse{{}, {}, {}}}
	sc.NeighAll()
}