caseyy:
There is an argument to be made that the market buys bug-filled, inefficient software about as well as it buys pristine software. And one of them is the cheapest software you could make.

It's similar to the "Market for Lemons" story. In short, sellers act as if all goods were high-quality but quietly reduce quality to cut marginal costs. The buyer cannot differentiate between high- and low-quality goods before buying, so demand for the two ends up artificially even. The cause is asymmetric information.

This is already true and will become increasingly true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI. The AI label itself commands a price premium. The user overpays significantly for a washing machine[0].

It's fundamentally the same thing when a buyer overpays for crap software, thinking it's designed and written by technologists and experts. But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies is the sole measure to improve quality beyond "meets acceptance criteria". Occasionally, a flock of interns will perform an "LGTM" incantation in hopes of improving the software, but even that is rarely done.

[0] https://www.lg.com/uk/lg-experience/inspiration/lg-ai-wash-e...

dahart:
The dumbest and most obvious of realizations finally dawned on me after trying to build a software startup that was based on quality differentiation. We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.

What I realized is that lower costs, and therefore lower quality, are a competitive advantage in a competitive market. Duh. I’m sure I knew and said that in college and for years before my own startup attempt, but this time I really felt it in my bones. It suddenly made me realize exactly why everything in the market is mediocre, and why high-quality things always get worse when they get more popular. Pressure to reduce costs grows with the scale of a product. Duh. People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality). Duh. What companies do is pay the minimum they need in order to stay alive & profitable. I don’t mean spending on quality never happens: sometimes people get excited and spend for short bursts, and young companies often try to make high-quality stuff, but eventually there is an inevitable slide toward minimal spending.

There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.

naasking:
> What I realized is that lower costs, and therefore lower quality,

This implication is the big question mark. It's often true but it's not at all clear that it's necessarily true. Choosing better languages, frameworks, tools and so on can all help with lowering costs without necessarily lowering quality. I don't think we're anywhere near the bottom of the cost barrel either.

I think the problem is focusing on improving the quality of the end products directly when the quality of the end product for a given cost is downstream of the quality of our tools. We need much better tools.

For instance, why are our languages still obsessed with manipulating pointers and references as a primary mode of operation, just so we can program yet another linked list? Why can't you declare something as a "Set with O(1) insert" and have the language or its runtime choose an implementation? Why isn't direct relational programming more common? I'm not talking about programming in verbose SQL, but something more modern with type inference and proper composition, more like LINQ, e.g. why can't I do:

    let usEmployees = from x in Employees where x.Country == "US";

    func byFemale(Query<Employees> q) =>
      from x in q where x.Sex == "Female";

    let femaleUsEmployees = byFemale(usEmployees);
These abstract over implementation details that we're constantly fiddling with in our end programs, often for little real benefit. Studies have repeatedly shown that humans can write less than 20 lines of correct code per day, so each of those lines should be as expressive and powerful as possible to drive down costs without sacrificing quality.
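
To make the composition part of this concrete (though not the implementation-selection part), here is a rough sketch of the same pipeline using today's Kotlin sequences; the Employee fields and sample data are invented for illustration, and everything is purely in-memory:

    // Rough in-memory sketch of the composition above, using Kotlin sequences.
    // The Employee type and sample data are invented for illustration.
    data class Employee(val name: String, val country: String, val sex: String)

    fun usEmployees(employees: Sequence<Employee>): Sequence<Employee> =
        employees.filter { it.country == "US" }

    fun byFemale(q: Sequence<Employee>): Sequence<Employee> =
        q.filter { it.sex == "Female" }

    fun main() {
        val employees = sequenceOf(
            Employee("Ada", "US", "Female"),
            Employee("Bob", "US", "Male"),
        )
        val femaleUsEmployees = byFemale(usEmployees(employees))
        femaleUsEmployees.forEach { println(it.name) } // prints: Ada
    }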

mike_hearn:
Hm, you could do that quite easily but there isn't much juice to be squeezed from runtime selected data structures. Set with O(1) insert:

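    // A standard HashSet already gives expected O(1) insert (add) out of the box: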
    var set = new HashSet<Employee>();
Done. Don't need any fancy support for that. Or if you want to load from a database, using the repository pattern and Kotlin this time instead of Java:

    @JdbcRepository(dialect = ANSI)
    interface EmployeeQueries : CrudRepository<Employee, String> {
        fun findByCountryAndGender(country: String, gender: String): List<Employee>
    }

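    // `employees` below is assumed to be an injected or otherwise obtained EmployeeQueries instance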
    val femaleUSEmployees = employees.findByCountryAndGender("US", "Female")
That would turn into an efficient SQL query that does a WHERE ... AND ... clause. But you can also compose queries in a type safe way client side using something like jOOQ or Criteria API.
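
As a rough illustration of that kind of client-side composition, here is a small jOOQ sketch in Kotlin; the table and column names are assumptions (no generated schema classes), so it only builds and prints the SQL rather than executing anything:

    import org.jooq.SQLDialect
    import org.jooq.impl.DSL

    fun main() {
        // Sketch only: table and column names are invented, nothing is executed.
        val employees = DSL.table("employees")
        val country = DSL.field("country", String::class.java)
        val sex = DSL.field("sex", String::class.java)

        // Conditions are ordinary values, so they compose client side.
        val inUs = country.eq("US")
        val female = sex.eq("Female")

        val query = DSL.using(SQLDialect.DEFAULT)
            .select()
            .from(employees)
            .where(inUs.and(female))

        // Prints the rendered SQL, roughly:
        //   select * from employees where (country = 'US' and sex = 'Female')
        println(query)
    }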

naasking:
> Hm, you could do that quite easily but there isn't much juice to be squeezed from runtime selected data structures. Set with O(1) insert:

But now you've hard-coded this selection. Why can't the performance characteristics also be easily parameterized and combined, e.g. insert is O(1) and delete is O(log n), the way indexes in SQL can be defined and changed at any time at runtime? Or maybe the performance characteristics could be inferred from the types of queries run on a collection elsewhere in the code.
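
As a toy illustration of the kind of parameterization I mean (far cruder and compile-time only; the Requirement enum and setFor factory below are hypothetical, not an existing API):

    // Hypothetical illustration only: neither Requirement nor setFor exists in any library.
    enum class Requirement { FAST_INSERT, SORTED_ITERATION }

    // Pick an implementation from declared requirements instead of naming one directly.
    fun <T : Comparable<T>> setFor(vararg reqs: Requirement): MutableSet<T> =
        if (Requirement.SORTED_ITERATION in reqs)
            java.util.TreeSet()  // O(log n) insert, iterates in sorted order
        else
            HashSet()            // expected O(1) insert, arbitrary iteration order

    fun main() {
        val fast = setFor<Int>(Requirement.FAST_INSERT)
        val sorted = setFor<Int>(Requirement.SORTED_ITERATION)
        listOf(3, 1, 2).forEach { fast.add(it); sorted.add(it) }
        println(sorted) // [1, 2, 3]
    }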

> That would turn into an efficient SQL query that does a WHERE ... AND ... clause.

For a database you have to manually construct, with a schema you have to manually (and poorly) match to an object model, using a library or framework you have to painstakingly select from how many options?

You're still stuck in this mentality that you have to assemble a set of distinct tools to get a viable development environment for most general purpose programming, which is not what I'm talking about. Imagine the relational model built-in to the language, where you could parametrically specify whether collections need certain efficient operations, whether collections need to be durable, or atomically updatable, etc.

There's a whole space of possible languages that have relational or other data models built-in that would eliminate a lot of problems we have with standard programming.

mike_hearn:
There are research papers that examine this question of whether selecting or optimizing data structures at runtime is a win, and it's mostly not, outside of some special cases like strings. Most collections are quite small. Really big collections tend to be either caches (which are often specialized anyway), or inside databases where you do have more flexibility.

A language fully integrated with the relational model already exists: PL/SQL. It has features like classes and packages along with 'natural' SQL integration. You can do all the things you ask for: specify which operations on a collection need to be efficient (indexes), whether collections are durable or not (regular vs temporary tables), whether updates are atomic (LOCK TABLE IN EXCLUSIVE MODE), and so on. It even has a visual GUI builder (APEX). And people do build whole apps in it.

Obviously, this approach is not universal. There are downsides. One can imagine a next-gen attempt at such a language that combined the strengths of something like Java/.NET with the strengths of PL/SQL.

naasking:
> There are research papers that examine this question of whether runtime optimizing data structures is a win

If you mean JIT and similar tech, that's not really what I'm describing either. I'm talking about lifting the time and space complexity of data structures to parameters so you don't have to think about specific details.

Again, think about how tables in a relational database work, where you can write queries against sets without regard for the underlying implementation, and you have external/higher level tools to tune a running program's data structures for better time or space behavior.

> A language fully integrated with the relational model exists, that's PL/SQL

Not a general-purpose language suitable for most programming, and missing all of the expressive language features I described, like type/shape inference, higher-order queries, query composition and so on. See my previous comments. The tool you mentioned leaves a lot to be desired.

mike_hearn:
I guess the closest to that I've seen would be something like Permazen with some nice syntax sugar on top. It's not the relational model, but it does simplify away a lot of the complexity of the object relational mismatch (for Java) whilst preserving the expressiveness of a 'full' mainstream language.

naasking:
Yes, that's getting closer, but as you implied it still leaves something to be desired. Ironically what I'm describing is sort of an evolution of Access database programming from 20+ years ago. Everything old is new again.