binary size is also memory size. memory size matters. applications sharing the same libraries can be a huge win for how much stuff you can fit on a server, and that can be a colossal time/money/energy saver.
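
(to make that concrete, here's a rough python sketch, assuming linux /proc and picking libc purely as the example library: it walks every process it can read and sums per-process Rss against sharing-aware Pss for that library's mappings. the gap between those two numbers is, roughly, the memory that dynamic linking is saving you.)

    # rough sketch: sum per-process Rss vs sharing-aware Pss for one shared
    # library (libc here, purely as an example) across every process we can
    # read. needs privileges to read other processes' /proc/<pid>/smaps.
    import os, re

    LIB = "libc"                  # substring matched against the mapping's pathname
    total_rss = total_pss = 0

    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/smaps") as f:
                in_lib = False
                for line in f:
                    if re.match(r"^[0-9a-f]+-[0-9a-f]+ ", line):  # new mapping header
                        in_lib = LIB in line
                    elif in_lib and line.startswith("Rss:"):
                        total_rss += int(line.split()[1])         # kB
                    elif in_lib and line.startswith("Pss:"):
                        total_pss += int(line.split()[1])         # kB
        except (PermissionError, FileNotFoundError, ProcessLookupError):
            continue

    print(f"Rss summed per process:  {total_rss} kB")
    print(f"Pss (sharing-aware sum): {total_pss} kB")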
yes: if you're a company that only runs 1-20 applications, the memory savings probably won't matter to you, and that describes quite a large number of use cases. but a lot of companies run way more workloads than anyone would guess. quite a few just have no cost-control and/or simply don't know, but there are probably some pretty sizable potential wins sitting there. it matters even more for hyper-scalers, who are running many, many customer processes at a time. even at companies like facebook, i forget the exact statistic, but there was a quote sometime in the last quarter saying something like >30% of their energy usage goes to just powering ram. i'm willing to bet they optimize for binary size. they definitely look at it.
there's also significant work going into drastically reducing disk/memory usage across multiple containers. composefs is one brilliant, very exciting example that could help us radically scale up how much compute we can host. https://news.ycombinator.com/item?id=34524651
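
(for a sense of the core trick, not composefs's actual implementation: the idea is content-addressed storage, where identical files across many images become one object on disk and one set of pages in the page cache. toy python sketch below, with made-up names, just to illustrate the dedup concept.)

    # toy illustration of the content-addressing idea (NOT the real composefs
    # tooling): identical files across many container images hash to the same
    # object, so they get stored -- and later page-cached -- exactly once.
    import hashlib, os, shutil

    def store(path, object_dir="objects"):
        """copy a file into a content-addressed store, deduplicating by sha256."""
        os.makedirs(object_dir, exist_ok=True)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        obj = os.path.join(object_dir, digest)
        if not os.path.exists(obj):    # only the first copy costs disk (and cache)
            shutil.copyfile(path, obj)
        return obj                     # an image is then just a manifest of digests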
i also haven't seen the other very important, very critical type of memory mentioned: cache. maybe we can just keep paying to add DRAM forever and ever (especially with CXL coming over the horizon), but the SRAM in your core complex will almost always be limited (although word is Zen4 might get within striking distance of 1GB, which is EPIC). static builds are never going to share cache effectively: every process carries its own copy of the same code at its own addresses, so those instruction-cache lines can't be shared the way a dynamically linked library's can. the most valuable, most expensive, fanciest memory on the computer gets trashed & wasted by static binaries.
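
(you can at least eyeball the duplication at the virtual-memory level: here's a small python sketch, assuming linux /proc, that prints a pid's executable mappings. a dynamically linked process shows its library text backed by shared files like libc.so.6, the same file, and the same page-cache pages, that every other process on the box is executing from; a static binary shows one big text region all its own, i.e. its own private copy of that code.)

    # print the executable mappings of a process, assuming linux /proc;
    # pass a pid as the first argument, defaults to "self".
    import sys

    pid = sys.argv[1] if len(sys.argv) > 1 else "self"
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 6 and "x" in fields[1]:   # executable, file-backed
                print(fields[1], fields[5])             # perms + backing file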
there's really nothing to recommend about static binaries other than that they're extremely stupid: requiring not a single iota of thought to use is the primary win. (things like monomorphic optimization can still be done with dynamic libraries via various metaprogramming & optimizing runtimes, hopefully ones that don't need to keep respawning duplicate copies ad nauseam.)
i do think you're correct about the dominant market segment of computing, & you're speaking truthfully for a huge % of small & mid-sized businesses, where the computing needs are incredibly simple & the ratio of processes to computers is quite low. their potential savings aren't that high, since there's just not that much duplicate code to dynamically link in the first place. but i also think that almost all interesting upcoming models of computing emphasize creating a lot more smaller, lighter processes, that there are huge security & manageability benefits there, and that there isn't a snowball's chance in hell that static-binary-style computing has any role to play in the better possible futures we're opening up.