
50 points by senfiaj | 1 comment
1. eviks No.45809630
> A significant part of this bloat is actually a tradeoff

Or actually it's not, and the list doesn't get beyond "users have more resources, so it's just easier to waste more resources"

> Layers & frameworks

There are a million of these, with performance differences of orders of magnitude between them. So an empty reference to "frameworks" explains nothing about the bloat

But also

> localization, input, vector icons, theming, high-DPI

It's not bloat if it allows users to read text in an app! Or read text that isn't blurry! Or use a theme that doesn't "burn their eyes"

> Robustness & error handling / reporting.

Same thing: are you talking about a washing machine sending gigabytes of data per day for no improvement whatsoever "in robustness"? Or are you talking about some virtualized development environment with perfect time travel/reproduction, where whatever hardware "bloat" it needs wouldn't even affect the user? What is the actual difference from error handling in the past, besides the easy sending of your crash dumps?

> Engineering trade-offs. We accept a larger baseline to ship faster, safer code across many devices.

But we do not do that! The code too often ships slower precisely because people have a ready list of empty statements like this

> Hardware grew ~three orders of magnitude. Developer time is often more valuable than RAM or CPU cycles

What about the value of your users' time and resources? Why ignore the reality outside of this simplistic dichotomy? Or will the devs not even see the suffering, because the "robust error handling and reporting" is nothing of the sort and mostly /dev/nulls a lot of the user experience?