
392 points by mfiguiere | 1 comment
RcouF1uZ4gsC No.35470953
> Buck2 is an extensible and performant build system written in Rust

I really appreciate tooling written in Rust or Go that produces a single binary with minimal runtime dependencies.

Getting tooling written in, for example, Python to run reliably can be an exercise in frustration due to runtime environment dependencies.

replies(3): >>35471099 #>>35471103 #>>35471569 #
rektide No.35471103
Personally it seems like a huge waste of memory to me. It's the Electron of the backend. It's absolutely done for convenience & simplicity, with good cause after the pain we have endured. But every single binary bringing the whole universe of libraries with it offends.

Why have an OS at all if every program is just going to package everything it needs?

It feels like we cheaped out. Rather than get good & figure out how to manage things well, rather than drive harder, we're punting on the problem. It sucks & it's lo-fi & a huge waste of resources.

replies(7): >>35471298 #>>35471361 #>>35471433 #>>35471640 #>>35471685 #>>35472010 #>>35472197 #
bogwog No.35471640
I don't think that matters so much. For building a system, you definitely need dynamic linking, but end-user apps being as self-contained as possible is good for developers, users, and system maintainers (who don't have to worry about breaking apps). As long as it doesn't get out of hand, even a few dozen MB is a small price to pay IMO for the compatibility benefits.

As a long time Linux desktop user, I appreciate any efforts to improve compatibility between distros. Since Linux isn't actually an operating system, successfully running software built for Ubuntu on a Fedora box, for example, is entirely based on luck.

replies(1): >>35471780 #
rektide No.35471780
There's also the issue that if a library has a vulnerability, you are now reliant on every statically linked binary shipping a new release with the fix.

Whereas in the conventional dynamic-library world, one would just update openssl or whatever & keep going. Or if someone wanted to shim in an alternate but compatible library, one could. I personally never saw the binary compatibility issue as very big, and generally felt there was a period when folks were getting good at packaging apps for each OS and maintaining extra repos, something we've since lost. So to me it seems predominantly like downsides that we sell ourselves on, based on outsized & overrepresented fear & negativity.

replies(1): >>35472269 #
preseinger No.35472269
the optimization you describe here is not valuable enough to offset the value provided by statically linked applications

the computational model of a fleet of long-lived servers, which receive host/OS updates at one cadence, and serve applications that are deployed at a different cadence, is at this point a niche use case, basically anachronistic, and going away

applications are the things that matter, they provide the value, the OS and even shared libraries are really optimizations, details, that don't really make sense any more

the unit of maintenance is not a host, or a specific library, it's an application

vulnerabilities affect applications. if there is a vulnerability in some library used by a bunch of my applications, then it's expected that i will need to re-deploy updated versions of those applications. this is not difficult; i am re-deploying updated versions of my applications all the time, because that is my deployment model

replies(2): >>35472452 #>>35475496 #
lokar No.35475496
Indeed. I view Linux servers/vms as ELF execution appliances with a network stack. And more and more the network stack lives in the NIC and the app, not Linux.
replies(1): >>35477009 #
preseinger No.35477009
100% yes