392 points mfiguiere | 62 comments
1. RcouF1uZ4gsC ◴[] No.35470953[source]
> Buck2 is an extensible and performant build system written in Rust

I really appreciate tooling written in Rust or Go that produces single binaries with minimal runtime dependencies.

Getting tooling written in, for example, Python to run reliably can be an exercise in frustration due to runtime environment dependencies.

replies(3): >>35471099 #>>35471103 #>>35471569 #
2. TikolaNesla ◴[] No.35471099[source]
Yes, just what I thought when I installed the Shopify CLI (https://github.com/Shopify/cli) a few days ago because they force you to install Ruby and Node
3. rektide ◴[] No.35471103[source]
Personally it seems like a huge waste of memory to me. It's the Electron of the backend. It's absolutely done for convenience & simplicity, with good cause after the pain we have endured. But every single binary bringing the whole universe of libraries with it offends.

Why have an OS at all if every program is just going to package everything it needs?

It feels like we cheaped out. Rather than get good & figure out how to manage things well, rather than drive harder, we're punting the problem. It sucks & it's lo-fi & a huge waste of resources.

replies(7): >>35471298 #>>35471361 #>>35471433 #>>35471640 #>>35471685 #>>35472010 #>>35472197 #
4. thangngoc89 ◴[] No.35471298[source]
> with minimal runtime dependencies

You’re probably thinking of static binary. I believe that OP is comparing a single binary vs installing the whole toolchain of Python/Ruby/Node and fetching the dependencies over the wire.

replies(1): >>35471380 #
5. crabbone ◴[] No.35471361[source]
Absolutely. As soon as it started to seem like even a couple hundred JARs wouldn't put a significant strain on the filesystem housing them, the typical deployment switched to Docker images and, on top of the hundreds of JARs, started to bundle in the whole OS userspace. Which also, conveniently, makes memory use explode because shared libraries are no longer shared.

This would definitely sound like a conspiracy theory, but I'm quite sure that hardware vendors see this technological development as, at least, a fortunate turn of events...

6. crabbone ◴[] No.35471380{3}[source]
If it's not a statically linked binary, then the problem is just as bad as it is with Python dependencies: now you need to track down the shared libraries it was linked against instead.
7. maccard ◴[] No.35471433[source]
We've had decades to figure this out, and none of the "solutions" work. Meanwhile, the CRT for Visual Studio is 15MB. If every app I installed grew by 15MB, I don't think I would notice.
replies(1): >>35477554 #
8. crabbone ◴[] No.35471569[source]
Your problem is that Python sucks, especially its dependency management. It sucks not because it ought to suck, but because of the incompetence of PyPA (the people responsible for packaging).

There are multiple problems with Python packaging which ought not to exist, but are there and make the lives of Python users worse:

* Python doesn't have a package manager. pip can install packages, but installing packages iteratively will break dependencies of packages installed in previous iterations. So, if you call pip install twice or more, you are likely to end up with a broken system.

* Python cannot deal with different programs wanting different versions of the same dependency.

* Python itself iterates very fast, and it's even worse for most Python packages. To stand still you need to update all the time, because everything goes stale very quickly. In addition, this creates too many package versions for dependency solvers to process, leading to insanely long installation times, which, in turn, prompts package maintainers to specify very precise version requirements (to reduce the time the solver needs to figure out what to install); but this, in turn, creates a situation where there are lots of allegedly incompatible packages.

* Python package maintainers have too many elements in their support matrix. This leads to quick abandonment of old versions and fragmented support across platforms and versions.

* Python packages are low quality. Many Python programmers don't understand what needs to go into a package; they either put in too little, too much, or just the wrong stuff altogether.

All of the above could've been solved by better moderation of community-generated packages, stricter rules on the package submission process, longer version release cycles, formalizing package requirements across different platforms, creating tools such as a proper package manager to aid in this process... PyPA simply doesn't care. That's why it sucks.

replies(2): >>35471800 #>>35471896 #
9. bogwog ◴[] No.35471640[source]
I don't think that matters so much. For building a system, you definitely need dynamic linking, but end-user apps being as self-contained as possible is good for developers, users, and system maintainers (who don't have to worry about breaking apps). As long as it doesn't get out of hand, even a few dozen MBs is a small price to pay, IMO, for the compatibility benefits.

As a long time Linux desktop user, I appreciate any efforts to improve compatibility between distros. Since Linux isn't actually an operating system, successfully running software built for Ubuntu on a Fedora box, for example, is entirely based on luck.

replies(1): >>35471780 #
10. stu2b50 ◴[] No.35471685[source]
Sometimes things just don’t have good solutions in one space. We solved it in another space, as SSD and RAM manufacturers made storage and memory exponentially cheaper and more available over the last few decades.

So we make the trade off of software complexity for hardware complexity. Such is how life goes sometimes.

11. rektide ◴[] No.35471780{3}[source]
There's also the issue that if a library has a vulnerability, you are now reliant on every static binary being rebuilt with the fix & released as a new version.

Whereas in the conventional dynamic-library world one would just update openssl or whatever & keep going. Or if someone wanted to shim in an alternate but compatible library, one could. I personally never saw the binary-compatibility issue as very big, and generally felt like there was a while where folks were getting good at packaging apps for each OS and making extra repos, which we've lost. So it seems predominantly to me like downsides that we sell ourselves on, based on outsized/overrepresented fear & negativity.

replies(1): >>35472269 #
12. zdw ◴[] No.35471800[source]
s/Python/NodeJS/ and everything in this statement is multiplied by 10x
replies(2): >>35475079 #>>35476120 #
13. androidbishop ◴[] No.35471896[source]
Most of this is false. You are ignoring the best practice of using Python virtual environments for managing a project's Python binary and package versions.
replies(1): >>35476100 #
14. preseinger ◴[] No.35472010[source]
when someone writes a program and offers it for other people to execute, it should generally be expected to work

the size of a program binary is a distant secondary concern to this main goal

static compilation more or less solves this primary requirement, at the cost of an increase to binary size that is statistically zero in the context of any modern computer, outside of maybe embedded (read: niche) use cases

there is no meaningful difference between a 1MB binary or a 10MB binary or a 100MB binary, disks are big and memory is cheap

the optimization of dynamic linking was based on costs of computation, and a security model of system administration, which are no longer valid

there's no reason to be offended by this, just update your models of reality and move on

replies(2): >>35472497 #>>35477524 #
15. howinteresting ◴[] No.35472197[source]
Dynamic linking is an artifact of C, not some sort of universal programming truth.
replies(1): >>35476370 #
16. preseinger ◴[] No.35472269{4}[source]
the optimization you describe here is not valuable enough to offset the value provided by statically linked applications

the computational model of a fleet of long-lived servers, which receive host/OS updates at one cadence, and serve applications that are deployed at a different cadence, is at this point a niche use case, basically anachronistic, and going away

applications are the things that matter, they provide the value, the OS and even shared libraries are really optimizations, details, that don't really make sense any more

the unit of maintenance is not a host, or a specific library, it's an application

vulnerabilities affect applications, if there is a vulnerability in some library that's used by a bunch of my applications then it's expected that i will need to re-deploy updated versions of those applications, this is not difficult, i am re-deploying updated versions of my applications all the time, because that is my deployment model

replies(2): >>35472452 #>>35475496 #
17. rektide ◴[] No.35472452{5}[source]
Free software has a use beyond industrial software containers. I don't think most folks developing on Linux laptops agree with your narrow conception of software.

Beyond app delivery there are dozens of different utils folks rely on in their day-to-day. The new statically compiled world requiring each of these to be well maintained & promptly updated feels like an obvious regression.

replies(2): >>35472926 #>>35472959 #
18. rektide ◴[] No.35472497{3}[source]
I never had a problem before. The people saying we need this for convenience felt detached & wrong from the start.

It's popular to be cynical & conservative, to disbelieve. That has won the day. It doesn't do anything to convince me it was a good choice or actually helpful, that we were right to just give up.

replies(1): >>35473094 #
19. howinteresting ◴[] No.35472926{6}[source]
Again, there is no alternative. Dynamic linking is an artifact of an antiquated 70s-era programming language. It simply does not and cannot work with modern language features like monomorphization.

Linux distros are thankfully moving towards embracing static linking, rather than putting their heads in the sand and pretending that dynamic linking isn't on its last legs.

replies(1): >>35473824 #
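A minimal Rust sketch of the monomorphization point above (the function and values here are illustrative, not from the thread): the compiler emits a separate machine-code copy of a generic for every concrete type it is used with, so a shared library could only export the instantiations it happened to be built with, never the open-ended generic itself.

    // Generic function: Rust monomorphizes this, i.e. compiles one
    // specialized copy per concrete T that the program actually uses.
    fn largest<T: PartialOrd>(items: &[T]) -> &T {
        let mut best = &items[0];
        for item in items {
            if item > best {
                best = item;
            }
        }
        best
    }

    fn main() {
        // Two instantiations -> two compiled copies: largest::<i32> and
        // largest::<f64>. A .so built from this crate could ship those two
        // copies, but not "largest for any future T", which is the tension
        // with dynamic linking that the comment above is pointing at.
        println!("{}", largest(&[3, 1, 4]));
        println!("{}", largest(&[2.7, 1.6]));
    }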
20. preseinger ◴[] No.35472959{6}[source]
> Free software has a use beyond industrial software containers. I don't think most folks developing on Linux laptops agree with your narrow conception of software.

the overwhelming majority of software that would ever be built by a system like buck2 is written and deployed in an industrial context

the share of software consumers that would use this class of software on personal linux laptops is statistically zero

really, the overwhelming majority of installations of distros like fedora or debian or whatever are also in industrial contexts, the model of software lifecycles that their maintainers seem to assume is wildly outdated

21. preseinger ◴[] No.35473094{4}[source]
"wrong" or "a good choice" or "actually helpful" are not objective measures, they are judged by a specific observer, what's wrong for you can be right for someone else

i won't try to refute your personal experience, but i'll observe it's relevant in this discussion only to the extent that your individual context is representative of consumers of this kind of software in general

that static linking provides a more reliable end-user experience vs. dynamic linking is hopefully not controversial, the point about security updates is true and important but very infrequent compared to new installations

22. PaulDavisThe1st ◴[] No.35473824{7}[source]
Whoa, strong opinions.

Dynamic linking on *nix has nothing to do with 70s era programming languages.

Did you consider the possibility that the incompatibility between monomorphization (possibly the dumbest term in all of programming) and dynamic linking should perhaps say something about monomorphization, instead?

replies(1): >>35474200 #
23. howinteresting ◴[] No.35474200{8}[source]
> Dynamic linking on *nix has nothing to do with 70s era programming languages.

Given that dynamic linking as a concept came out of the C world, it has everything to do with them.

> Did you consider the possibility that the incompatibility between monomorphization (possibly the dumbest term in all of programming) and dynamic linking should perhaps saying something about monomorphization, instead?

Yes, I considered that possibility.

replies(1): >>35474351 #
24. PaulDavisThe1st ◴[] No.35474351{9}[source]
The design of dynamic linking on most *nix-ish systems today comes from SunOS in 1988, and doesn't have much to do with C at all other than requiring both the compiler and assembler to know about position-independent code.

What elements of dynamic linking do you see as being connected to "70s era programming languages"?

> Yes, I considered that possibility.

Then I would urge you to reconsider.

replies(2): >>35475191 #>>35479652 #
25. IshKebab ◴[] No.35475079{3}[source]
Some of it is also true for Node (e.g. poor package quality), but I think it would be hard to argue that the actual package management of Node is anywhere near as bad as Python's.

Node basically works fine. You get a huge node_modules folder, sure. But it works.

Python is a complete mess.

replies(2): >>35475204 #>>35475438 #
26. preseinger ◴[] No.35475191{10}[source]
dynamic linking is an optimization that is no longer necessary

there is no practical downside to a program including all of its dependencies, when evaluated against the alternative of those dependencies being determined at runtime and based on arbitrary state of the host system

monomorphization is good, not bad

the contents of /usr/lib/whatever should not impact the success or failure of executing a given program

replies(1): >>35475419 #
27. zdw ◴[] No.35475204{4}[source]
I've tended to have the exact opposite experience - Node projects have 10x (or more) the dependencies of Python ones, and the tooling is far worse and harder to isolate across projects.

A well engineered virtualenv solves most Python problems.

28. PaulDavisThe1st ◴[] No.35475419{11}[source]
Dynamic linking wasn't an optimization (or at least, it certainly wasn't just an optimization). It allows for things like smaller executable sizes, more shared code in memory, and synchronized security updates. You can, if you want, try the approach of "if you have 384GB of RAM, you don't need to care about these things", and in that sense you're on quicksand with the "just an optimization". Yes, the benefits of sharing library code in memory are reduced by increasing system RAM, but we're hearing from a growing chorus of both developers and users that the "oh, forget all that stupid stuff, we've got bigger, faster computers now" approach isn't going so well.

There's also the problem that dynamic loading relies on almost all the same mechanisms as dynamic linking, so you can't get rid of those mechanisms just because your main build process used static linking.

replies(1): >>35476228 #
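A minimal Rust sketch of the dynamic-loading point above, assuming the third-party libloading crate (e.g. libloading = "0.8" in Cargo.toml) and a Linux host where libm.so.6 is present; both are assumptions for illustration, not details from the thread. Opening a shared object at runtime goes through the same loader machinery (ld.so, symbol lookup, relocation) that ordinary dynamic linking uses at program startup, which is why a statically linked main binary doesn't by itself remove that machinery from the system.

    use libloading::{Library, Symbol};

    fn main() {
        unsafe {
            // Open the system math library at runtime via the dynamic loader.
            let libm = Library::new("libm.so.6").expect("could not open libm");

            // Look up the C symbol `cos` and treat it as a function pointer.
            let cos: Symbol<unsafe extern "C" fn(f64) -> f64> =
                libm.get(b"cos").expect("symbol `cos` not found");

            println!("cos(0.0) = {}", cos(0.0)); // prints 1
        }
    }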
29. nextaccountic ◴[] No.35475438{4}[source]
> You get a huge node_modules folder, sure. But it works.

pnpm and other tools deduplicate that

30. lokar ◴[] No.35475496{5}[source]
Indeed. I view Linux servers/VMs as ELF execution appliances with a network stack. And more and more, the network stack lives in the NIC and the app, not Linux.
replies(1): >>35477009 #
31. crabbone ◴[] No.35476100{3}[source]
You are seriously going to preach about virtual environments to someone who maintains a couple dozen Python packages and works, and has worked, in the infra departments of the largest software companies on Earth? :)

Come back in ten years. We'll talk.

replies(1): >>35478709 #
32. crabbone ◴[] No.35476120{3}[source]
I don't have enough experience with npm, but one thing I know for sure is that it can support multiple different versions of the same package. Not in the way I'd like it to do that (i.e. it also allows this in the same application), but at least in this sense it's not like Python.
33. preseinger ◴[] No.35476228{12}[source]
it allows for all of the things you list, yes, but those things just aren't really valuable compared to the reliable execution of a specific binary, regardless of any specific shared library that may be installed on a host

smaller executable sizes, shared code in memory, synchronized security updates, are all basically value-zero, in any modern infrastructure

there is no "growing chorus" of developers or users saying otherwise, it is in fact precisely the opposite, statically linked binaries are going extremely well, they are very clearly the future

replies(2): >>35476415 #>>35478228 #
34. wahern ◴[] No.35476370{3}[source]
Dynamic linking originated with Multics (https://en.wikipedia.org/wiki/Multics) and MTS (https://en.wikipedia.org/wiki/MTS_system_architecture), years before C even existed. Unix didn't get dynamic linking until the 1980s (https://www.cs.cornell.edu/courses/cs414/2001FA/sharedlib.pd...).

The impetus for dynamic linking on Multics and MTS was the ability to upgrade libraries without having to recompile software, and to reuse code not originally designed or intended for the primary program (e.g. code from different compilers or languages), let alone compiled with it. Both of these reasons still pertain, notwithstanding that some alternatives are more viable (e.g. open source code means less reliance on binary distribution).

replies(1): >>35477190 #
35. PaulDavisThe1st ◴[] No.35476415{13}[source]
the chorus is about the assumptions commonly found among younger devs that these old "efficiency" and "optimization" techniques don't matter any more. cf. apps (desktop, mobile) that take forever to do things that should not take forever.

"modern infrastructure" seems like a bit of a giveaway of your mindset. yes, i know that there's a lot of stuff that now happens by having your web browser reach out to "infrastructure" and then the result is displayed in front of you.

But lots of people still use their computers to run applications outside the browser, where "modern infrastructure" means either nothing at all, or it means "their computer (or mobile platform)". the techniques mentioned in this subthread are all still very relevant in this context.

replies(1): >>35476982 #
36. preseinger ◴[] No.35476982{14}[source]
there is basically no situation in which it is important to optimize for binary size, embedded sure, but nowhere else

the infrastructural model i'm describing doesn't require applications to run in browsers, or imply that applications are slower, actually quite to the contrary, statically linked binaries tend to be faster

the model where an OS is one to many with applications works fine for personal machines, it's no longer relevant for most servers (shrug)

replies(3): >>35478013 #>>35478165 #>>35550312 #
37. preseinger ◴[] No.35477009{6}[source]
100% yes
38. preseinger ◴[] No.35477190{4}[source]
neither of those reasons still pertain, really

"the primary program" is the atomic unit of change, it is expected that each program behaves in a way that is independent of whatever other files may exist on a host system

39. cozzyd ◴[] No.35477524{3}[source]
Wait until you use a single-board computer with a 4GB eMMC OS disk. And don't forget about bandwidth...
replies(1): >>35478191 #
40. cozzyd ◴[] No.35477554{3}[source]
Imagine if every Qt program included all of the Qt shared libraries.
replies(1): >>35480090 #
41. PaulDavisThe1st ◴[] No.35478013{15}[source]
it was never relevant for servers. and there are probably still fewer servers than end-user systems out there, certainly true if you include mobile (there are arguments for and against that).
replies(1): >>35478267 #
42. cwalv ◴[] No.35478165{15}[source]
> there is basically no situation in which it is important to optimize for binary size, embedded sure, but nowhere else

Not disagreeing that there are many upsides to static linking, but there are (other) situations where binary size matters. Rolling updates (or scaling horizontally) where the time is dominated by the time it takes to copy the new binaries, e.g.

> the model where an OS is one to many with applications works fine for personal machines, it's no longer relevant for most servers

Stacking services with different usage characteristics to increase utilization of the underlying hardware is still relevant. I wouldn't be surprised if enabling very commonly included libraries to be loaded dynamically could save significant memory across a fleet... and while the standard way this is done is fragile, it's not hard to imagine something that could be as reliable as static linking, especially in cases where you're using something like Buck to build the world on every release anyway.

43. preseinger ◴[] No.35478191{4}[source]
i have a few devices like that around, but the thing is that the software i put on them is basically unrelated to the software that's being discussed here

definitely i am not using buck or bazel or whatever to build binaries that go on those little sticks

replies(1): >>35478358 #
44. bscphil ◴[] No.35478228{13}[source]
> it allows for all of the things you list, yes, but those things just aren't really valuable compared to the reliable execution of a specific binary

> smaller executable sizes, shared code in memory, synchronized security updates, are all basically value-zero, in any modern infrastructure

This highlights the fact that you're extremely focused on one particular model of development, one where a single person or group deploys software that they are responsible for running and maintaining - often software that they've written themselves.

This is, obviously, an extremely appropriate paradigm for the enterprise. Static linking makes a lot of sense here. Python's virtual environments are basically the approved workaround for the fact that Python was built for systems that are not statically linked, and I cherish it for exactly that reason. Use Go on your servers - I do myself! But that doesn't mean it's appropriate everywhere.

Sometimes developers in this mindset forget there's a whole other world out there, a world of personal computers, that each have hundreds or thousands of applications installed. Applications on these systems are not deployed, they are installed. The mechanism by which this happens (on Linux) is via distributions and maintainers, and dynamic linking needs to be understood as designed for that ecosystem. Linux operating systems are built around making things simple, reliable, and secure for collections of software that are built and distributed by maintainers.

I'm firmly on the side of the fence that says that dynamic linking is the correct way to do that. All the benefits you mention are just a free bonus, of course, but I care about them as well. Smaller executable sizes? Huge win on my 256 GB SSD. Synchronized security updates? Of course I care about those as an end user!

replies(3): >>35478290 #>>35478304 #>>35488025 #
45. preseinger ◴[] No.35478267{16}[source]
servers vastly, almost totally, outnumber end-user systems, in terms of deployed software

end-user systems account for at best single-digit percentages of all systems relevant to this discussion

(mobile is not relevant to this discussion)

46. preseinger ◴[] No.35478290{14}[source]
> Sometimes developers in this mindset forget there's a whole other world out there, a world of personal computers, that each have hundreds or thousands of applications installed.

it's not that i forget about these use cases, it's that i don't really consider them relevant

tooling that supports industrial use cases like mine is not really able to support end-user use cases like yours at the same time

linux operating systems may have at one point been built around making things as you describe by distribution maintainers, but that model is anachronistic and no longer useful to the overwhelming majority of its user base, the huge majority of software is neither built nor distributed by maintainers, it is built and distributed by private enterprises

replies(1): >>35486856 #
47. charcircuit ◴[] No.35478304{14}[source]
>Applications on these systems are not deployed

In a way they are. You deploy it to the store and then people's computers download the update automatically.

A counterexample to your claims about Linux is Android. Libraries are not shared between apps (beyond the Android framework and libc). This is despite the fact that phones have limited storage.

48. cozzyd ◴[] No.35478358{5}[source]
sure, but people are suggesting statically linking everything, and many modern languages don't really support dynamic linking.
49. dikei ◴[] No.35478709{4}[source]
"Appeal to authority" doesn't prove your point buddy, especially if that "authority" is yourself.
replies(1): >>35504670 #
50. howinteresting ◴[] No.35479652{10}[source]
Thanks for the information about SunOS. My point still stands: the C ecosystem makes it possible in a way that other language models simply don't.

> Then I would urge you to reconsider.

Done. No change to my beliefs.

51. maccard ◴[] No.35480090{4}[source]
On Windows they do.
52. bscphil ◴[] No.35486856{15}[source]
> > Sometimes developers in this mindset forget there's a whole other world out there, a world of personal computers, that each have hundreds or thousands of applications installed.

> it's not that i forget about these use cases, it's that i don't really consider them relevant

Yes, exactly! It's an extremely myopic vision. You've spent this long thread arguing against dynamic linking on the basis of what is only a small fraction of total human/computer interactions! By "not relevant" you mean not relevant to the enterprise. I grant that, of course - but these use cases are (by definition) relevant to hundreds of millions of PC users.

> the huge majority of software is neither built nor distributed by maintainers, it is built and distributed by private enterprises

The overwhelming majority of the software I run is built and distributed by maintainers. Literally, there are only a few exceptions, like static-built games that rarely or never change and are (unfortunately) closed source. I daresay that's true for the majority of Linux users - the vast majority of the software we install and use is not "built and distributed by private enterprises".

This reality is what Linux-on-the-desktop is built for. There are millions of people who are going to want to continue using computers this way, and people like me will continue contributing to and developing distributions for this use case, even if shipping static or closed-source binaries to Linux users becomes common.

replies(1): >>35494249 #
53. rektide ◴[] No.35488025{14}[source]
I hugely agree that the parent is definitely favoring one and only one kind of software model.

You raise the world of personal computers. And I think dynamic linking is absolutely a choice that has huge advantages for these folks.

There are other realms too. Embedded software needs smaller systems, so the dynamic-library savings can be huge there. Hyper-scaler systems, where thousands of workloads can be running concurrently, can potentially scale to much higher utilization with dynamic linking.

It's a little far afield, but with systems like WebAssembly we're really looking less at a couple of orgs within a company each shipping a monolith or two, and potentially looking way more at lots of very small functions interacting with a couple of helper libraries. This isn't exactly a classic dynamic library, but especially with the very safe sandboxing built in, the ideal model is far closer to something like dynamic linking, where each library can be shared, than it is to static linking.

54. preseinger ◴[] No.35494249{16}[source]
linux-on-the-desktop is also like statistically zero of linux installations (modulo mobile) but if that's counter to a belief of yours then we're definitely not going to make progress here so (shrug)

like i'm not sure you understand the scale of enterprise linux. a single organization of not-that-very-many people can easily create and destroy hundreds of millions of deployed systems every day, each with a novel configuration of installed software. i've seen it countless times.

replies(1): >>35498227 #
55. bscphil ◴[] No.35498227{17}[source]
I think we're arguing on multiple fronts here and that is confusing things.

1. My point about Linux on the desktop is that there are in practice users like me who are already getting the (many) advantages of dynamic linking, and don't want to give up those advantages. To the point that some of us are going to support and work on distributions that continue the traditional Linux way in this area. In your view, the ecosystem has moved to software being built and distributed by private corporations. I don't think this has happened - on Windows software was always built and distributed this way; on (desktop) Linux it never was and largely still isn't!

2. My point about the desktop in general is that this use case matters to the vast majority of computer-using human beings much more than enterprise. The number of deployed containers that get created and destroyed every day doesn't change that fact, nor does the fact that Linux users are merely a tiny fraction of this desktop use case. This is what creates the myopia I was talking about - you're thinking about metrics like "number of systems deployed" whereas I'm thinking of number of human-computer interactions that are impacted. I don't think you can just discard what matters on the desktop or paint it as irrelevant. Desktop computing shouldn't be subordinate to the technical requirements of servers!

So to summarize the argument: (a) desktop use cases still matter because they comprise the majority of human-computer interactions, (b) dynamic linking and the maintainer model are the superior approach for desktop computing, and in fact complement each other in important ways, and (c) even if most desktop users can't take advantage of this model because of the dominance of closed source software and the corporate development model, desktop Linux can and does, and will hopefully continue to do so into the future.

replies(1): >>35499385 #
56. preseinger ◴[] No.35499385{18}[source]
> Desktop computing shouldn't be subordinate to the technical requirements of servers!

i guess this is the crux of the discussion

linux desktop computing for sure _is_ subordinate to linux server computing, by any reasonable usage metric

i'm not trying to deny your experience in any way, nor suggest that dynamic linking goes away, or anything like that -- your use case is real, linux on the desktop is real, that use case isn't going away

but it is pretty clear at this point that linux on the server is wildly successful, linux on mobile is successful (for android), and that linux on the desktop is at best a niche use case

the majority of human interactions with linux occur via applications, services, tools, etc. that are served by linux servers, and not by software running on local machines like desktops or laptops

linux is a server operating system first and foremost

replies(1): >>35550153 #
57. crabbone ◴[] No.35504670{5}[source]
This is not an "appeal to authority". It means to say that I was using virtual environments before you started programming, and am acutely aware of their existence: the solution you offer is so laughable it doesn't deserve a serious discussion; too many things about your "solution" are naive at best, but mostly your "solution" is just irrelevant / a misunderstanding of the problem.
replies(2): >>35508801 #>>35575985 #
58. dikei ◴[] No.35508801{6}[source]
I'm not the original poster you replied to, just a passer by.
59. rektide ◴[] No.35550153{19}[source]
whether we want unobservable, ungovernable, far-off machines running the future forever, or whether we want a future where actual people can compute & see what happens, seems to matter. the numbers may perhaps stack up to subordinate PC needs to industrial computing needs now, but is that the future anyone should actually want? should the invisible hand of capital be the primary thing humanity should try to align to?

and where is the growth potential? is the industrial need going to become greatly newly empowered & helpful to this planet, to us? will it deliver & share the value potential out there? PC may be a smaller factor today, but i for one am incredibly fantastically excited to imagine a potential future 10 years from now where people start to PC again, albeit in a different way.

individual PCs have no chance. it's why the cloud has won. on-demand access wherever you are and a consistent experience across devices is incredibly convenient. but networks of PCs that work well together are exciting, and we've only very recently started to develop the capability to have nice, easy-to-manage, automated multi-machine personal computing. we've only recently matured to where a better, competitive personal computing is really conceivable.

it's been the alpha linux geeks learning how to compute and industrial players learning how to compute, and the invisible hand has been fat, happy & plump from it, but imo there's such a huge potential here to re-open computing to persons, to create compelling, interesting, differently-capable sovereign/owned computing systems that are free from so many of the small tatters & deprivations & enshittifications that the cloud - that doing everything on other people's computers as L-Users - unerringly drops on us. we should & could be a more powerful, more technically-cultural culture, and i think we've severely underrated how much subtle progress there's been to make that a much less awful, specialized, painful, time-consuming, low-availability, disconnected effort than it used to be.

60. rektide ◴[] No.35550312{15}[source]
binary size is also memory size. memory size matters. applications sharing the same libraries can be a huge win for how much stuff you can fit on a server, and that can be a colossal time/money/energy saver.

yes: if you're a company that tends to only run 1-20 applications, no, the memory savings probably won't matter to you. that matches quite a large number of use cases. but a lot of companies run way more workloads than anyone would guess. quite a few just have no cost-control and/or just don't know, but there's probably some pretty sizable potential wins. it's even more important for hyper-scalers, where they're running many many customer processes at a time. even companies like facebook though, i forget the statistic, but sometime in the last quarter there was a quote saying like >30% of their energy usage was just powering RAM. willing to bet, they definitely optimize for binary size. they definitely look at it.

there's significant work being put towards drastically reducing scale of disk/memory usage across multiple containers, for example. composefs is one brilliant very exciting example that could help us radically scale up how much compute we can host. https://news.ycombinator.com/item?id=34524651

i also haven't seen the very important, very critical other type of memory mentioned: cache. maybe we can just keep paying to add DRAM forever and ever (especially with CXL coming over the horizon), but the SRAM in your core-complex will almost always tend to be limited (although word is Zen4 might get within striking distance of 1GB, which is EPIC). static builds are never going to share cache effectively. the instruction cache will always be unique per process. the most valuable, expensive, fancy memory on the computer is totally trashed & wasted by static binaries.

there's really nothing to recommend about static binaries, other than them being extremely stupid. them requiring not a single iota of thought to use is the primary win. (things like monomorphic optimization can be done in dynamic libraries with various metaprogramming & optimizing runtimes, hopefully ones that don't need to keep respawning duplicate copies ad nauseam.)

i do think you're correct about the dominant market segment of computing, & you're speaking truthfully to a huge % of small & mid-sized businesses, where the computing needs are just incredibly simple & the ratio of processes to computers is quite low. their potential savings are not that high, since there's just not that much duplicate code to keep dynamically linking. but i also think that almost all interesting upcoming models of computing emphasize creating a lot more smaller, lighter processes, that there are huge security & manageability benefits, and that there's not a snowball's chance in hell that static-binary style computing has any role to play in the better possible futures we're opening up.

replies(1): >>35552509 #
61. preseinger ◴[] No.35552509{16}[source]
you're very sensitive to the costs of static linking but i don't think you see the benefit

the benefit is that a statically linked binary will behave the same on all systems and doesn't need any specific runtime support above or beyond the bare minimum

this is important if you want a coherent deployment model at scale -- it cannot be the case that the same artifact X works fine on one subset of hosts, but not on another subset of hosts, because their openssl libraries are different or whatever

static linking is not stupid, it doesn't mean that hosts can only have like 10 processes on them, it doesn't imply that the computing needs it serves are simple, quite the opposite

future models of computing are shrinking stuff like the OS to zero, the thing that matters is the application, security (in the DLL sense you mean here) is not a property of a host, it's a property of an application, it seems pretty clear to me that static linking is where we're headed, see e.g. containers

62. androidbishop ◴[] No.35575985{6}[source]
I'm still waiting on an actual argument here other than condescending name-calling, appeals to authority (yes, that is what you are doing), and casually hand-waving away any serious discussion because it doesn't "deserve" it.

Let's go through the points which I was referring to:

"Python doesn't have a package manager. pip can install packages, but installing packages iteratively will break dependencies of packages installed in previous iterations. So, if you call pip install twice or more, you are likely to end up with a broken system."

"Likely" seems like a stretch here since it's pretty damned rare that I've come across this when using virtual environments. With a virtual environment, you have an isolated system. Why are you installing packages iteratively in the first place? Use a requirements.txt with the packages you need, then freeze it. If you end up with a conflict, delete the virtual environment and recreate a fresh one, problem solved.

"Python cannot deal with different programs wanting different versions of the same dependency"

It does when you're running your applications using virtual environments. Again, you say that it's irrelevant, but this is literally what this shit solves. I come from a world where multiple applications are run in separate Docker containers so this doesn't really apply anyway, but if you had to run multiple applications on the same server you can point the PYTHONPATH env variable and the venv's Python binary at each application's virtual environment when running it.

"Python version iterates very fast. It's even worse for most of the Python packages. To stand still you need to update all the time, because everything goes stale very fast. In addition, this creates too many versions of packages for dependency solvers to process leading to insanely long installation times, which, in turn, prompts the package maintainers to specify very precise version requirements (to reduce the time one has to wait for the solver to figure out what to install), but this, in turn, creates a situation where there are lots of allegedly incompatible packages."

Maybe I'm misunderstanding what you are saying here, but this seems like a retread of your first point with some casual opinions thrown in. If you delete the venv and re-install all the packages at once, shouldn't that resolve dependency issues? "Insanely long installation times"? It seems a lot quicker than Maven or Gradle in my experience, and much easier to use. I get a lot of dependency issues with those managers as well, so this doesn't seem to be a unique problem for Python, if it really is a problem when using virtual environments.

"Python package maintainers have too many elements in support matrix. This leads to quick abandonment of old versions, fragmented support across platforms and versions."

I admit I don't know anything about this. Maybe it's true, but I imagine this is true for community packages of just about any language.

"Python packages are low quality. Many Python programmers don't understand what needs to go into a package, they either put too little or too much or just the wrong stuff altogether."

This is not only a purely subjective opinion, it's not even one that seems to be common. Maybe it's true for less popular packages (and again, I'm not convinced it wouldn't be the same for less popular packages in other languages), but the ones most people use for common tasks I often see heralded as fantastic examples of programming that I should be reviewing to level up my own code.

"All of the above could've been solved by better moderation of community-generated packages, stricter rules on package submission process, longer version release cycles, formalizing package requirements across different platforms, creating tools s.a. package manager to aid in this process..."

I'm not familiar enough with the politics, culture, and process of maintaining Python's packages or package management system to speak to any of this. It seems like this would generally be good advice regardless of the state it's currently in. But these are broad, systemic solutions that require a revamp of the culture and bureaucracy of the entire package management system, a completely different set of tools than the ones that already exist (that would likely create backwards incompatibility issues), and no meaningful way to measure the success of these initiatives because most of your complaints are subjective opinions. Furthermore, at least half of your complaints seem to already be mitigated using virtual environments and industry best-practices, so I'm struggling to see where any of this is helpful.