
837 points turrini | 3 comments
caseyy No.43972418
There is an argument to be made that the market buys bug-filled, inefficient software about as well as it buys pristine software. And one of them is the cheapest software you could make.

It's similar to the "Market for Lemons" story. In short, sellers market their goods as if they were all high-quality while quietly cutting quality to reduce marginal costs. The buyer cannot differentiate between high- and low-quality goods before buying, so the demand for the two is artificially even. The cause is asymmetric information.
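The mechanism can be sketched with a few lines of arithmetic, in the spirit of Akerlof's argument (the numbers here are illustrative, not from the original paper):

```scala
// Illustrative numbers: buyers value a good car at 100 and a lemon at 50,
// and half the cars offered on the market are lemons.
val goodValue  = 100.0
val lemonValue = 50.0
val lemonShare = 0.5

// A buyer who cannot tell the two apart will only pay the expected value.
val offerPrice = (1 - lemonShare) * goodValue + lemonShare * lemonValue

// At that price, sellers of good cars are underpaid and exit the market,
// which raises the lemon share and pushes the price down further.
val goodSellersStay = offerPrice >= goodValue
```

The offer settles at 75, below what a good car is worth, so quality drains out of the market.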

This is already true and will become increasingly true for AI. The user cannot differentiate between a sophisticated machine-learning application and a washing machine spin cycle calling itself AI. The AI label itself commands a price premium, so the user significantly overpays for a washing machine[0].

It's fundamentally the same thing when a buyer overpays for crap software, thinking it's designed and written by technologists and experts. But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies is the sole measure to improve quality beyond "meets acceptance criteria". Occasionally, a flock of interns will perform an "LGTM" incantation in hopes of improving the software, but even that is rarely done.

[0] https://www.lg.com/uk/lg-experience/inspiration/lg-ai-wash-e...

dahart No.43973432
The dumbest and most obvious of realizations finally dawned on me after trying to build a software startup based on quality differentiation. We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years, before reaching break-even.

What I realized is that lower costs, and therefore lower quality, are a competitive advantage in a competitive market. Duh. I’m sure I knew and said that in college and for years before my own startup attempt, but this time I really felt it in my bones. It suddenly made me realize exactly why everything in the market is mediocre, and why high-quality things always get worse when they get more popular. Pressure to reduce costs grows with the scale of a product. Duh. People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality). Duh. What companies do is pay the minimum they need in order to stay alive and profitable. I don’t mean it never happens; sometimes people get excited and spend for short bursts, and young companies often try to make high-quality stuff, but eventually there will be an inevitable slide toward minimal spending.

There’s probably another name for this; it’s not quite the Market for Lemons idea. I don’t think it leads to market collapse, just to stable mediocrity everywhere, and that’s what we have.

naasking No.43974427
> What I realized is that lower costs, and therefore lower quality,

This implication is the big question mark. It's often true, but it's not at all clear that it's necessarily true. Choosing better languages, frameworks, tools, and so on can all help lower costs without necessarily lowering quality. I don't think we're anywhere near the bottom of the cost barrel either.

I think the problem is focusing on improving the quality of the end products directly, when the quality of an end product at a given cost is downstream of the quality of our tools. We need much better tools.

For instance, why are our languages still obsessed with manipulating pointers and references as a primary mode of operation, just so we can program yet another linked list? Why can't you declare something as a "Set with O(1) insert" and the language or its runtime chooses an implementation? Why isn't direct relational programming more common? I'm not talking programming in verbose SQL, but something more modern with type inference and proper composition, more like LINQ, eg. why can't I do:

    let usEmployees = from x in Employees where x.Country == "US";

    func byFemale(Query<Employees> q) =>
      from x in q where x.Sex == "Female";

    let femaleUsEmployees = byFemale(usEmployees);

These abstract over implementation details that we're constantly fiddling with in our end programs, often for little real benefit. Studies have repeatedly shown that humans write fewer than 20 lines of correct code per day, so each of those lines should be as expressive and powerful as possible to drive down costs without sacrificing quality.
ndriscoll No.43976283
You can do this in Scala[0], and you'll get type inference, compile-time type checking, informational messages (e.g. the compiler prints an INFO message showing the SQL query it generates), and optional schema checking against a database for the queries your app will run. For example:

    case class Person(name: String, age: Int)
    inline def onlyJoes(p: Person) = p.name == "Joe"

    // run a SQL query
    run( query[Person].filter(p => onlyJoes(p)) )
    
    // Use the same function with a Scala list
    val people: List[Person] = ...
    val joes = people.filter(p => onlyJoes(p))

    // Or, after defining some typeclasses/extension methods
    val joesFromDb = query[Person].onlyJoes.run
    val joesFromList = people.onlyJoes

This integrates with a high-performance functional programming framework/library that has a bunch of other stuff like concurrent data structures, streams, an async runtime, and a webserver[1][2]. The tools already exist. People just need to use them.

[0] https://github.com/zio/zio-protoquill?tab=readme-ov-file#sha...

[1] https://github.com/zio

[2] https://github.com/zio/zio-http
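For readers who haven't seen Scala 3 extension methods, the list half of the snippet above can be sketched with no Quill dependency (the body of `onlyJoes` is an assumption about what the extension would do):

```scala
case class Person(name: String, age: Int)

// Hypothetical body for the `onlyJoes` extension mentioned above:
// it makes `people.onlyJoes` read like a native collection method.
extension (people: List[Person])
  def onlyJoes: List[Person] = people.filter(_.name == "Joe")

val people = List(Person("Joe", 30), Person("Ann", 25), Person("Joe", 41))
val joes = people.onlyJoes
```

The database half (`query[Person].onlyJoes.run`) needs the Quill macros to translate the same predicate to SQL.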

naasking No.43978923
Notice how you're still specifying List types? That's not what I'm describing.

You're also just describing a SQL mapping tool, which also isn't really it, though maybe something like it would be part of the runtime, invisible to the user. Define a temporary table whose shape is inferred from another query, make it durable and garbage-collected when it's no longer in use, make it look like you're writing code against any other collection type, and let me declaratively specify the time complexity of insert, delete, and lookup operations; then you're close to what I'm after.
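The declarative-complexity part of this idea might look roughly like the following sketch (all names and semantics invented for illustration; plain Scala stands in for the imagined language):

```scala
import scala.collection.mutable

// Hypothetical: complexity bounds the caller may declare.
enum Complexity:
  case O1, OLogN, ON

// A collection declared by the guarantees you want, not the structure you pick.
trait DeclSet[T]:
  def insert(x: T): Unit
  def contains(x: T): Boolean

object DeclSet:
  // A real runtime would choose among hash sets, trees, etc. to satisfy the
  // requested bounds; this stub always uses a HashSet, which meets O(1)
  // expected insert and lookup. The choice is invisible to the caller.
  def apply[T](insertBound: Complexity, lookupBound: Complexity): DeclSet[T] =
    val backing = mutable.HashSet.empty[T]
    new DeclSet[T]:
      def insert(x: T): Unit = { backing += x; () }
      def contains(x: T): Boolean = backing.contains(x)
```

Usage would be `DeclSet[Int](Complexity.O1, Complexity.O1)`, with the concrete data structure never appearing in user code.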

ndriscoll No.43979193
The explicit annotation on `people` is there for illustration. In real code it can be inferred from whatever the expression is (as it is on the other lines).

I don't think it's reasonable to specify the time complexity of insert/delete/lookup. For one, joins quickly make you care about multi-column indices, the precise order things are stored in, and the exact queries you want to perform. e.g. if you join A with B, are your results sorted such that you can do a streaming join with C in the same order? That could differ across code paths. Simply adding indices also adds maintenance overhead to each write, which doesn't affect (what people usually mean by) the time complexity (it scales with the number of indices, not the dataset size) but is nonetheless important for real-world performance. Adding and dropping indices on the fly can also be quite expensive if your dataset is large enough to care about performance.
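The streaming-join point can be made concrete: a merge join only works if both inputs arrive already sorted on the join key, which is exactly the kind of ordering concern a complexity annotation doesn't capture (a minimal sketch):

```scala
// Minimal sort-merge join sketch: both inputs must already be sorted by
// the join key. Handles at most one row per key on each side (a real
// implementation would buffer duplicates).
def mergeJoin[A, B, K](left: List[(K, A)], right: List[(K, B)])
                      (using ord: Ordering[K]): List[(K, A, B)] =
  @annotation.tailrec
  def go(l: List[(K, A)], r: List[(K, B)], acc: List[(K, A, B)]): List[(K, A, B)] =
    (l, r) match
      case ((lk, la) :: lt, (rk, rb) :: rt) =>
        val c = ord.compare(lk, rk)
        if c == 0 then go(lt, rt, (lk, la, rb) :: acc)  // keys match: emit
        else if c < 0 then go(lt, r, acc)               // advance left
        else go(l, rt, acc)                             // advance right
      case _ => acc.reverse
  go(left, right, Nil)
```

If either side arrives in a different order, the streaming strategy is simply unavailable and the planner must sort or hash instead.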

That all said, you could probably get at what you mean by just specifying indices instead of complexity and treating an embedded sqlite table as a native mutable collection type with methods to create/drop indices and join with other tables. You could create the table in the constructor (maybe using Object.hash() for the name or otherwise anonymously naming it?) and drop it in the finalizer. Seems pretty doable in a clean way in Scala. In some sense, the query builders are almost doing this, but they tend to make you call `run` to go from statement to result instead of implicitly always using sqlite.
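The sqlite-as-native-collection suggestion might look roughly like this stub (a `mutable.Map` stands in for the embedded table, and all names are hypothetical; a real version would issue CREATE TABLE / CREATE INDEX and drop the table from a finalizer or `Cleaner` rather than an explicit `close()`):

```scala
import scala.collection.mutable

// Stub sketch: an in-memory map stands in for an embedded sqlite table.
final class TableSet[K, V](tableName: String):
  private val rows    = mutable.Map.empty[K, V]
  private val indices = mutable.Set.empty[String]  // simulated index bookkeeping

  def put(k: K, v: V): Unit = rows(k) = v
  def get(k: K): Option[V]  = rows.get(k)

  // In the real design these would run CREATE INDEX / DROP INDEX.
  def createIndex(column: String): Unit = indices += column
  def dropIndex(column: String): Unit   = indices -= column
  def hasIndex(column: String): Boolean = indices.contains(column)

  // Stands in for dropping the table when the collection is collected.
  def close(): Unit = rows.clear()
```

The interesting part is that index management becomes ordinary method calls on a value, rather than out-of-band DDL.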

naasking No.44000929
> In real code it can be inferred from whatever the expression is (as the other lines are).

What I meant is that there would be no explicit List<T> types, or array types, or hash tables, or trees, etc. Contiguity of the data is an implementation detail that doesn't matter for the vast majority of programming, much like how fields are packed in an object is almost completely irrelevant. Existing languages drive people to attend to small details, like collection choice, that largely don't matter outside extreme circumstances (like game programming).

What it would have is something more like a Set<T ordered by T.X>, and maybe not even ordering should be specifiable, as that's typically a concern of presentation/consumers of data. Restrictions are freeing: the point is to eliminate many ill-advised premature optimizations and unnecessary internal details. Maybe the runtime will use one of the classic collections internally, chosen from the constraints you specify on the set, but that choice would not typically be visible.
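A rough rendering of a "Set<T ordered by T.X>" declaration (syntax and names hypothetical; plain Scala stands in for the imagined language, and a sorted, deduplicated vector happens to satisfy the declared property):

```scala
case class Emp(name: String, x: Int)

// Imagined declaration: "Set of Emp ordered by x". The caller states the
// property; the runtime picks a structure that satisfies it. Here that
// structure is a sorted, deduplicated Vector, but in the imagined language
// that choice would not be visible.
def setOrderedBy[T, K: Ordering](key: T => K)(items: Iterable[T]): Vector[T] =
  items.toVector.distinct.sortBy(key)

val emps = setOrderedBy[Emp, Int](_.x)(List(Emp("b", 2), Emp("a", 1), Emp("b", 2)))
```

Set semantics (deduplication) and the declared ordering are the whole observable contract.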

> That all said, you could probably get at what you mean by just specifying indices instead of complexity and treating an embedded sqlite table as a native mutable collection type with methods to create/drop indices and join with other tables.

Yes, something like sqlite would likely be part of the runtime of such a language, and seems like the most straightforward way to prototype it. Anyway, I don't have a concrete semantics worked out so much as rough ideas of certain properties, and this is only one of them.