757 points alihm | 31 comments
meander_water ◴[] No.44469163[source]
> the "taste-skill discrepancy." Your taste (your ability to recognize quality) develops faster than your skill (your ability to produce it). This creates what Ira Glass famously called "the gap," but I think of it as the thing that separates creators from consumers.

This resonated quite strongly with me. It puts into words something that I've been feeling when working with AI. If you're new to something and using AI for it, it automatically boosts the floor of your taste, but not your skill. And you end up never slowing down to make mistakes and learn, because you can just do it without friction.

replies(8): >>44469175 #>>44469439 #>>44469556 #>>44469609 #>>44470520 #>>44470531 #>>44470633 #>>44474386 #
1. Loughla ◴[] No.44469175[source]
This is the disconnect between proponents and detractors of AI.

Detractors say it's the process and learning that builds depth.

Proponents say it doesn't matter because the tool exists and will always exist.

It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.

replies(4): >>44469235 #>>44469655 #>>44469774 #>>44471477 #
2. jchw ◴[] No.44469235[source]
> It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.

It's important to realize this is actually a general truth of human argument. Sometimes people genuinely disagree about the facts on the ground, about what is actually true versus what is bullshit. But a lot of the time people completely agree on the facts, and even on most of their implications, yet completely disagree on how to frame them. It doesn't even have to be Internet arguments; a lot of hot-button political topics have always been like this, too.

It's easy to dismiss people's arguments as irrelevant, but if you were to interrogate their worldview in detail, you might find they have coherent reasoning for why the issue is relevant from their perspective, even if you disagree.

Though it hasn't really improved my ability to argue, or (perhaps more importantly) to refrain from arguing, I've definitely noticed this in myself when introspecting, and it makes me think more about why I feel driven to argue, what good it does, and how to do it better.

3. ants_everywhere ◴[] No.44469655[source]
I usually see the opposite.

Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it.

Proponents say it's the process and learning that builds depth and you have to learn how to use it well before you can have a sensible opinion about it.

The same disconnect was in place for every major piece of technology, from mechanical weaving, to mechanical computing, to motorized carriages, to synthesized music. You can go back and read the articles written about these technologies and they're nearly identical to what the AI detractors have been saying.

One side always says you're giving away important skills and the new technology produces inferior work. They try to frame it in moral terms. But at heart the objections are about the fear of one's skills becoming economically obsolete.

replies(4): >>44470204 #>>44470707 #>>44471805 #>>44472099 #
4. ninetyninenine ◴[] No.44469774[source]
>It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.

There are actually some ground-truth facts about AI that many people are not knowledgeable about.

Many people believe we understand in totality how LLMs work. The absolute truth is that, overall, we do NOT understand how LLMs work AT all.

The mistaken belief that we understand LLMs is the driver behind most of the arguments. People think we understand LLMs and that their output is just stochastic parroting, when the truth is We Do Not understand Why or How an LLM produced a specific response for a specific prompt.

Whether the process of an LLM producing a response resembles anything close to sentience or consciousness, we actually do not know, because we aren't even sure about the definitions of those words, nor do we understand how an LLM works.

This erroneous belief is so pervasive amongst people that I'm positive I'll get extremely confident responses declaring me wrong.

These debates are not the result of people talking past each other. It's because a large segment of people on HN are literally misinformed about LLMs.

replies(2): >>44470427 #>>44471349 #
5. bluefirebrand ◴[] No.44470204[source]
> But at heart the objections are about the fear of one's skills becoming economically obsolete.

I won't deny that there is some of this in my AI hesitancy.

But honestly, the bigger barrier for me is that I fear signing my name to subpar work that I would be embarrassed to claim as my own.

If I don't type it into the editor myself, I'm not putting my name on it. It is not my code, and I'm claiming neither credit nor responsibility for it.

replies(3): >>44470237 #>>44470346 #>>44470597 #
6. add-sub-mul-div ◴[] No.44470237{3}[source]
Unfortunately the majority don't think like this and will take whatever shortcut allows them to go home at 5.
7. armada651 ◴[] No.44470346{3}[source]
> If I don't type it into the editor myself, I'm not putting my name on it. It is not my code, and I'm claiming neither credit nor responsibility for it.

This of course isn't just a moral concern; it's a legal one. I want ownership of my code. I don't want to find out later that the AI just copied another project and that I've now violated a license by not giving attribution.

Very few open-source projects are in the public domain, and even the most permissive licenses require attribution.

replies(2): >>44473003 #>>44473110 #
8. whatevertrevor ◴[] No.44470427[source]
I couldn't agree more, and not just on HN but in the world at large.

For the general populace, including many tech people who are not ML researchers, understanding how convolutional neural nets work is already tricky enough. For non-tech people, I'd hazard a guess that LLMs/generative AI are complexity-indistinguishable from "The YouTube/TikTok Algorithm".

And this lack of understanding, and in many cases the lack of conscious acknowledgement of that lack, has made many "debates" sound almost like theological arguments. Very little interest in grounding positions in facts, yet strongly held opinions.

Some are convinced we're going to get AGI in a couple of years; others think it's just a glorified text generator that cannot produce new content. And worse, there's seemingly little that changes their minds.

And there are self-contradictory positions held too. Just as an example: I've heard people say that AI-produced stuff doesn't qualify as art (philosophically and in terms of output quality), but at the same time express deep concern about how tech companies will replace artists...

replies(1): >>44473679 #
9. benreesman ◴[] No.44470597{3}[source]
I think you're very wise to preserve your commit handle as something other than a shift operator annotation; not everyone is.

I think I'm using it more than it sounds like you are, but I make very clear notations to myself and others about what's a big generated test suite that I froze in amber after it cleared a huge replay event, and what I've gone over with a fine-tooth comb personally. I type about the same amount of prose and code every day as ever, but now I type a lot of code into the prompt, "like this, not like that", in a comment.

The percentage of hand-authored lines varies wildly from probably 20% of unit tests to still close to 100% on io_uring submission queue polling or whatever.

If it one-shots a build file, eh, I put opus as the meta.authors and move on.

replies(1): >>44473354 #
10. Shorel ◴[] No.44470707[source]
> But at heart the objections are about the fear of one's skills becoming economically obsolete.

Unless I can become a millionaire just with those skills, they are in a limbo between economically adequate and economically obsolete.

11. exceptione ◴[] No.44471349[source]

  > we do NOT understand how LLMs work AT all.
  > We Do Not understand Why or How an LLM produced a specific response for a
  > specific prompt.

You mean the system is not deterministic? How the system works should be quite clear. I think the uncertainty is more about the premise that billions of tokens and their weights relative to each other are enough to reach intelligence. These debates are older than LLMs. In 'old' AI we were looking at (limited) autonomous agents that had the capability to participate in an environment and exchange knowledge about the world with each other. The next step for LLMs would be to update their own weights, but for now that would be too costly in terms of money and time. What we do know is that for something to be seen as intelligent, it cannot live in a jar. I consider the current crop to be shared 8-bit computers, while each of us needs one with terabytes of RAM.
replies(1): >>44473243 #
12. jibal ◴[] No.44471477[source]
This is a radical misrepresentation of the dispute.
replies(1): >>44472222 #
13. ludicrousdispla ◴[] No.44471805[source]
>> Proponents say it's the process and learning that builds depth and you have to learn how to use it well before you can have a sensible opinion about it.

That's like telling a chef they'll improve their cooking skills by adding a can of soup to everything.

14. SirHumphrey ◴[] No.44472099[source]
> Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it.

But here is the problem: to effectively learn the tool, you must learn to use it. Not learning how to use AI effectively and then complaining that the results are bad is building a straw man and then burning it.

But what I am giving away when using an LLM is not skills; it's the ability to learn those skills. If the LLM, instead of me, solves all the easy and intermediate problems, I cannot learn how to solve the hard ones. The process of digging through documentation for an answer gives me a better understanding of how a technology works.

Those kinds of trade-offs existed before: programming languages robbed people of the necessity to learn assembly, high-level languages of the necessity to learn low-level languages, low-code solutions of the necessity to learn how to code. Some of these (like low-level and high-level programming languages) are robust enough that the trade-off makes sense; some (like low-code) are not.

I think it's too early to call whether AI agents will go one way or the other. Putting eggs in both baskets means learning how to use AI tools while still maintaining the ability to work without them.

replies(2): >>44472449 #>>44472937 #
15. Loughla ◴[] No.44472222[source]
I disagree with you. Please expand.
replies(1): >>44472898 #
16. paulryanrogers ◴[] No.44472449{3}[source]
I stopped using autocomplete for a while because I found that having to search the docs and source forced me to learn the APIs more thoroughly. Or so it seemed.
17. skydhash ◴[] No.44472898{3}[source]
Not GP, but I agree with him and I will expand.

It isn't that we don't know how to use AI. We've used it, and the result can sometimes be very good (mostly because we know what's good and what's not). What's pushing us away from it is its unreliability. Our job is to automate workflows (the business's and some of our own) so that people can focus on the important matters and have the relevant information to make decisions.

The defect of an LLM is that you have to monitor its whole output. It's like driving a car where the steering wheel is loosely connected to the front wheels and the position for straight ahead varies all the time. Or, in the case of agents, it's like sleeping on a plane and finding yourself in Russia instead of Chile. If you care about quality, the cognitive load is considerable. If you only care about moving forward (even if the path is a circle or the direction is wrong), then I guess it's OK.

So we go for standard solutions, where fixed problems stay fixed and the number of issues is a downward slope (in a well-managed codebase), not an oscillating wave centered around some positive value.

replies(1): >>44474762 #
18. fsmv ◴[] No.44472937{3}[source]
If you assume all AI detractors haven't tried it enough, then you're the one building a straw man.
replies(1): >>44473790 #
19. ◴[] No.44473003{4}[source]
20. thfuran ◴[] No.44473110{4}[source]
And, though I don't think it's nearly settled, in other areas courts seem to be leaning toward the output of generative AI not being copyrightable.
21. ninetyninenine ◴[] No.44473243{3}[source]
https://www.youtube.com/watch?v=qrvK_KuIeJk&t=284s

For context, Geoffrey Hinton is basically the Father of AI. He's responsible for the current resurgence of machine learning and for utilizing GPUs for ML.

The video puts it plainly. You can get pedantic and try to build scaffolding around your old opinion in an attempt to fit it into a different paradigm, but that's just self-justification and an attempt to avoid realizing or admitting that you held a strong belief that was utterly incorrect. The overall point is:

   We have never understood how LLMs work. 
That's really all that needs to be said here.
22. mwcampbell ◴[] No.44473354{4}[source]
I wonder if it's actually accurate to attribute authorship to the model. As I understand it, the code is actually derived from all of the text that went into the training set. So, strictly speaking, I guess proper attribution is impossible. More generally, I wonder what you think about the whole plagiarism/stealing issue. Is it something you're at all uneasy about as you use LLMs? Not trying to accuse or argue; I'm curious about different perspectives on this, as it's currently the hang-up preventing me from jumping into LLM-assisted coding.
replies(1): >>44473892 #
23. the_af ◴[] No.44473679{3}[source]
> Just as an example: I've heard people say that AI-produced stuff doesn't qualify as art (philosophically and in terms of output quality), but at the same time express deep concern about how tech companies will replace artists...

I don't think this is self-contradictory at all.

One may have beliefs about the meaning of human produced art and how it cannot -- and shouldn't -- be replaced by AI, and at the same time believe that companies will cut costs and replace artists with AI, regardless of any philosophical debates. As an example, studio execs and producers are already leveraging AI as a tool to put movie industry professionals (writers, and possibly actors in the future) "in their place"; it's a power move for them, for example against strikes.

replies(1): >>44477221 #
24. ants_everywhere ◴[] No.44473790{4}[source]
I said often, not always.
replies(1): >>44474014 #
25. benreesman ◴[] No.44473892{5}[source]
I'm very much on the record that I want Altman tried in the Hague for crimes against humanity, and he's not the only one. So I'm no sympathizer of the TESCREAL/EA sociopaths who run frontier AI labs in 2025 (Amodei is no better).

And in a lot of areas it's clearly just copyright laundering, the way the Valley always says that breaking the law is progress if it's done with a computer (AI means computer now in policy circles).

But on code? Coding is sort of a special case in the sense that our tradition of sharing/copying/pasting/gisting-to-our-buddies-fuck-the-boss is so strong that it's kind of a different thing. Coding is also a special case in that LLMs are at all useful there, over and above, like, non-spammed Google; it's completely absurd that they generalize outside of that hyper-specific niche. And it's completely absurd that `gpt-4-1106-preview` was better than pre-AI/pre-SEO Google: the LLM is both arsonist and fireman, like Ethan Hunt in that Mission Impossible flick with Alec Baldwin.

So if you're asking if I think the frontier vendors have the moral high ground on anything? No, they're very very bad people and I don't associate with people who even work there.

But if you're asking if I care about my code going into a model?

https://i.ibb.co/1YPxjVvq/2025-07-05-12-40-28.png

26. seadan83 ◴[] No.44474014{5}[source]
All the same. There's a mixture of no-true-scotsman in the argument that (paraphrasing) "often they did not learn to use the tool well", and then this part is a straw-man argument:

"They try to frame it in moral terms. But at heart the objections are about the fear of one's skills becoming economically obsolete."

replies(1): >>44480428 #
27. Loughla ◴[] No.44474762{4}[source]
I understand that but I'm not sure how it's a response to my original statement.
28. whatevertrevor ◴[] No.44477221{4}[source]
Yeah, I know that's the theory, but if AI-generated art is slop, then it follows that it can't actually replace quality art.

I don't think people will suddenly accept worse standards for art, and anyone producing high-quality work will have a significant advantage.

And if your argument is that the average consumer can't tell the difference, then, well, for mass production, does the difference actually matter?

replies(1): >>44477515 #
29. the_af ◴[] No.44477515{5}[source]
Well, my main argument is that it's replacing humans, not that the quality is necessarily worse for mass-produced slop.

Let's be cynical for a moment. A lot of Hollywood (and adjacent) movies are effectively slop. I mean, take almost all blockbusters, almost 99% of action/sci-fi/superhero movies... they are slop. I'm not saying you cannot like them, but there's no denying they are slop. If you take offense at this proposition, just pretend it's not about any particular movie you adore, it's about the rest -- I'm not here to argue the merits of individual movies.

(As an aside, the same can be said about a lot of fantasy literature, Young Adult fiction, etc. It's by-the-numbers slop, maybe done with good intentions, but slop nonetheless.)

Superhero movie scripts could right now be written by AI, maybe with some curation by a human reviewer/script doctor.

But... as long as we accept that these movies still exist, do we want to cut most humans out of the loop? These movies employ tons of people (I mean, just look at the credits), people with perhaps high aspirations, for whom this is a job, an opportunity to hone their craft, earn their paychecks, and maybe eventually do something better. And these movies take a lot of hard, passionate work to make.

You bet your ass studios are going to either get rid of all these people or use AI to push their paychecks lower, or replace them if they protest unhealthy working conditions or whatever. Studio execs are on record admitting to this.

And does it matter? After all, the umpteenth Star Wars or Spiderman movie is just more slop.

Well, it matters to me, and I hope it's clear my argument is not exactly "AI cannot make another Avengers movie".

I also hope to have shown that this position is not self-contradictory at all.

30. ants_everywhere ◴[] No.44480428{6}[source]
I remember when I first learned the names of logical fallacies too, but you aren't using either of them correctly.
replies(1): >>44482952 #
31. seadan83 ◴[] No.44482952{7}[source]
Then please educate me on how the logical fallacies are misapplied.

In short, what it comes down to is that you do not know this to be true: "Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it." If you do know that to be true, please provide citations. Sociology is a bitch: we like to make stereotypes, but it turns out that you really don't know anything about the individual you are talking to. You don't know their experiences, their learning, their age.

Further, humans tend to have very small sample sizes based on their experiences. If you met one detractor every second for the rest of the year, your experiences would still not be statistically significant.

You can say "in your experience, in your conversations", but as a general truism you need to provide some data. Further, even in your conversations, do you always really know how much the other person knows? For example, you assumed (or at least heavily implied) that I just learned the names of logical fallacies. I'm actually quite old; it's been a long while since I learned them. Regardless, it does not matter, so long as the fallacies are correctly applied. Which I think they were, and I'll defend that in depth, in contrast to your shallow dismissal.

Quoting from earlier:

> Detractors from AI often refuse to learn how to use it... you have to learn how to use it well before you can have a sensible opinion about it.

Clearly, if you don't like AI, you just have not learned enough about it. This argument assumes that detractors are not coming from a place of experience. This is a no-true-scotsman: they wouldn't be detractors if they had more experience; you just need to do it better! The assumption about the experience level of detractors gives away the fallacy. Clearly detractors just have not learned enough.

From a definition of no-true-scotsman [1]: "The no true Scotsman fallacy is the attempt to defend a generalization by denying the validity of any counterexamples given." In this case, the counterexamples provided by detractors are discounted because they (supposedly) simply have not learned how to use AI. A detractor could say "this technology does not work", and of course they are "wrong" because they don't know how to use it well enough. Thus, the generalization is that AI is useful and the detractors are wrong due to a lack of knowledge (implying that if they knew more, they would not be detractors).

-----

I'll define here that a straw man is misrepresenting a counterargument in a weaker form, and then showing that weaker form to be false in order to discredit the entirety of the argument.

There are multiple straw men:

> The same disconnect was in place for every major piece of technology, from mechanical weaving, to mechanical computing, to motorized carriages, to synthesized music. You can go back and read the articles written about these technologies and they're nearly identical to what the AI detractors have been saying... They try to frame it in moral terms.

Perhaps the disconnect is actually different. I'd say it is. Because there is no fear of job loss from AI (from this detractor at least), these examples are not relevant. That makes them a straw man.

> But at heart the objections are about the fear of one's skills becoming economically obsolete.

So:

  (1) The argument of detractors is morality based

  (2) The argument of detractors is rooted in the fear of "becoming economically obsolete".

I'd say the strongest argument of detractors is that the technology simply doesn't work well. Period. If that is the case, then there is NO fear of "becoming economically obsolete."

Let's look at the original statement:

> Detractors say it's the process and learning that builds depth.

Which means detractors are saying that AI tools are bad because they prohibit learning. Yet now words are put in their mouths: that the detractors actually fear becoming 'economically obsolete', and that it's similar to other examples where the fear did not prove out. That is exactly a weaker form of the counterargument, which is then discredited through the examples of synthesized music, etc.

So, it's not the case that AI hinders learning; it's that the detractors are afraid AI will take their jobs, and they are wrong because there are similar examples where that fear was unfounded. That's a straw man.

[1] https://www.scribbr.com/fallacies/no-true-scotsman-fallacy/