
763 points | alihm | 1 comment
meander_water | No.44469163
> the "taste-skill discrepancy." Your taste (your ability to recognize quality) develops faster than your skill (your ability to produce it). This creates what Ira Glass famously called "the gap," but I think of it as the thing that separates creators from consumers.

This resonated quite strongly with me. It puts into words something that I've been feeling when working with AI. If you're new to something and using AI for it, it automatically boosts the floor of your taste, but not your skill. And you end up never slowing down to make mistakes and learn, because you can just do it without friction.

replies(8): >>44469175 #>>44469439 #>>44469556 #>>44469609 #>>44470520 #>>44470531 #>>44470633 #>>44474386 #
Loughla | No.44469175
This is the disconnect between proponents and detractors of AI.

Detractors say it's the process and learning that builds depth.

Proponents say it doesn't matter because the tool exists and will always exist.

It's interesting watching people argue about AI, because they're plainly not speaking about the same issue and are simply talking past each other.

replies(4): >>44469235 #>>44469655 #>>44469774 #>>44471477 #
jibal | No.44471477
This is a radical misrepresentation of the dispute.
replies(1): >>44472222 #
Loughla | No.44472222
I disagree with you. Please expand.
replies(1): >>44472898 #
skydhash | No.44472898
Not GP, but I agree with them, and I will expand.

The issue isn't that we don't know how to use AI. We've used it, and the results can sometimes be very good (mostly because we know what's good and what isn't). What pushes us away from it is its unreliability. Our job is to automate workflows (the business's and some of our own) so that people can focus on the important matters and have the relevant information to make decisions.

The defect of LLMs is that you have to monitor their entire output. It's like driving a car where the steering wheel is loosely connected to the front wheels and the position for straight ahead varies all the time. Or, in the case of agents, it's like falling asleep on a plane and waking up in Russia instead of Chile. If you care about quality, the cognitive load is heavy. If you only care about moving forward (even if the path traced is a circle or the direction is wrong), then I guess it's OK.

So we go for standard solutions, where fixed problems stay fixed and the number of issues is a downward slope (in a well-managed codebase), not an oscillating wave centered around some positive value.

replies(1): >>44474762 #
Loughla | No.44474762
I understand that, but I'm not sure how it's a response to my original statement.