
763 points alihm | 1 comment | source
meander_water ◴[] No.44469163[source]
> the "taste-skill discrepancy." Your taste (your ability to recognize quality) develops faster than your skill (your ability to produce it). This creates what Ira Glass famously called "the gap," but I think of it as the thing that separates creators from consumers.

This resonated quite strongly with me. It puts into words something that I've been feeling when working with AI. If you're new to something and using AI for it, it automatically boosts the floor of your taste, but not your skill. And you end up never slowing down to make mistakes and learn, because you can just do it without friction.

replies(8): >>44469175 #>>44469439 #>>44469556 #>>44469609 #>>44470520 #>>44470531 #>>44470633 #>>44474386 #
Loughla ◴[] No.44469175[source]
This is the disconnect between proponents and detractors of AI.

Detractors say it's the process and learning that builds depth.

Proponents say it doesn't matter because the tool exists and will always exist.

It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.

replies(4): >>44469235 #>>44469655 #>>44469774 #>>44471477 #
ants_everywhere ◴[] No.44469655[source]
I usually see the opposite.

Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it.

Proponents say it's the process and learning that builds depth and you have to learn how to use it well before you can have a sensible opinion about it.

The same disconnect was in place for every major piece of technology, from mechanical weaving, to mechanical computing, to motorized carriages, to synthesized music. You can go back and read the articles written about these technologies and they're nearly identical to what the AI detractors have been saying.

One side always says you're giving away important skills and the new technology produces inferior work. They try to frame it in moral terms. But at heart the objections are about the fear of one's skills becoming economically obsolete.

replies(4): >>44470204 #>>44470707 #>>44471805 #>>44472099 #
bluefirebrand ◴[] No.44470204[source]
> But at heart the objections are about the fear of one's skills becoming economically obsolete.

I won't deny that there is some of this in my AI hesitancy

But honestly the bigger barrier for me is that I fear signing my name on subpar work that I would otherwise be embarrassed to claim as my own

If I don't type it into the editor myself, I'm not putting my name on it. It is not my code, and I'm claiming neither credit nor responsibility for it.

replies(3): >>44470237 #>>44470346 #>>44470597 #
benreesman ◴[] No.44470597[source]
I think you're very wise to preserve your commit handle as something other than a shift-operator's annotation; not everyone is.

I think I'm using it more than it sounds like you are, but I make very clear notations, to myself and others, about what's a big generated test suite that I froze in amber after it cleared a huge replay event, and what I've personally gone over with a fine-tooth comb. I type about the same amount of prose and code every day as ever, but a lot of the code I type now goes into the prompt as "like this, not like that" comments.
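
Schematically, and these aren't my real headers, just the shape of the notation:

    # PROVENANCE: generated (opus); frozen in amber after it cleared the replay suite
    # PROVENANCE: hand-written; personally combed, line by line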

The percentage of hand-authored lines varies wildly from probably 20% of unit tests to still close to 100% on io_uring submission queue polling or whatever.

If it one-shots a build file, eh, I put opus as the meta.authors and move on.
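
Something like this, if your project metadata happens to live in a pyproject.toml (purely illustrative, the names are made up; the byline is the only part that matters):

    [project]
    name = "some-one-shot-tool"
    version = "0.1.0"
    # one-shotted by the model, so the model gets the byline
    authors = [{ name = "Claude Opus" }]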

replies(1): >>44473354 #
mwcampbell ◴[] No.44473354[source]
I wonder if it's actually accurate to attribute authorship to the model. As I understand it, the code is actually derived from all of the text that went into the training set. So, strictly speaking, I guess proper attribution is impossible. More generally, I wonder what you think about the whole plagiarism/stealing issue. Is it something you're at all uneasy about as you use LLMs? Not trying to accuse or argue; I'm curious about different perspectives on this, as it's currently the hang-up preventing me from jumping into LLM-assisted coding.
replies(1): >>44473892 #
benreesman ◴[] No.44473892[source]
I'm very much on the record that I want Altman tried in The Hague for crimes against humanity, and he's not the only one. So I'm no sympathizer of the TESCREAL/EA sociopaths who run frontier AI labs in 2025 (Amodei is no better).

And in a lot of areas it's clearly just copyright laundering, the way the Valley always says that breaking the law is progress if it's done with a computer (AI means computer now in policy circles).

But on code? Coding is sort of a special case, in the sense that our tradition of sharing/copying/pasting/gisting-to-our-buddies-fuck-the-boss is so strong that it's kind of a different thing. Coding is also a special case in that LLMs are at all useful there, over and above, like, non-spammed Google; it's completely absurd that they generalize outside of that hyper-specific niche. And it's completely absurd that `gpt-4-1106-preview` was better than pre-AI/pre-SEO Google: the LLM is both arsonist and fireman, like Ethan Hunt in that Mission: Impossible flick with Alec Baldwin.

So if you're asking if I think the frontier vendors have the moral high ground on anything? No, they're very very bad people and I don't associate with people who even work there.

But if you're asking if I care about my code going into a model?

https://i.ibb.co/1YPxjVvq/2025-07-05-12-40-28.png