A comprehensive library might offer a neater DX, but you'd have to ship library code you don't use. (Yes, tree-shaking exists, but it's still tricky and not widely used.)
If it's the browser's job to implement the standard library, how do you ensure that all browsers do this in a compliant and timely fashion? And if not, how do you optimise code-on-demand delivery over the internet?
I don't deny there are, or could be, solutions to this. But historically JS devs have wrestled with these issues as best they could, and that has shaped what we see today.
What is this unknown runtime environment? Even during the browser wars, there was only a handful of browsers, and IE was the only major outlier. Checking for the existence of features and polyfilling is not that complicated.
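For context, the kind of feature detection and polyfilling being described usually looks something like this minimal sketch (illustrative JavaScript only; the naive `indexOf`-based fallback and the `/polyfills/fetch.js` URL are assumptions, not a real setup):

```js
// Detect a missing method and patch it in.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (searchElement, fromIndex) {
    // Naive fallback via indexOf; unlike the real spec, it won't find NaN.
    return this.indexOf(searchElement, fromIndex) !== -1;
  };
}

// Detect a missing global API before using it.
if (typeof window !== 'undefined' && !('fetch' in window)) {
  // Load a polyfill on demand instead of shipping it to every browser.
  // (The URL is a placeholder for wherever you host the polyfill.)
  const script = document.createElement('script');
  script.src = '/polyfills/fetch.js';
  document.head.appendChild(script);
}
```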
And most of the time, the browser is already downloading plenty of images and other resources. Arguing about bundle size is very hypocritical of developers that won't blink at adding 17 analytics modules.
Judging by what we see in the world, most developers don't agree with you, and neither do I. A handful of browsers, multiplied by the many versions of each browser in the wild (before evergreen browsers like Chrome became widespread, but even today with e.g. Safari or enterprise users), multiplied by a sprawling API surface (dare I say it, a standard library), is not trivial. And that's not even considering browser bugs and regressions.
> very hypocritical of developers that won't blink
Not a great argument: developers don't necessarily get to choose whether analytics are added, and plenty of them push back against doing so.
Also, byte for byte, the cost of parsing and JIT-compiling JS code is very different from the cost of decoding an image.