763 points alihm | 83 comments
1. meander_water ◴[] No.44469163[source]
> the "taste-skill discrepancy." Your taste (your ability to recognize quality) develops faster than your skill (your ability to produce it). This creates what Ira Glass famously called "the gap," but I think of it as the thing that separates creators from consumers.

This resonated quite strongly with me. It puts into words something that I've been feeling when working with AI. If you're new to something and using AI for it, it automatically boosts the floor of your taste, but not your skill. And you end up never slowing down to make mistakes and learn, because you can just do it without friction.

replies(8): >>44469175 #>>44469439 #>>44469556 #>>44469609 #>>44470520 #>>44470531 #>>44470633 #>>44474386 #
2. Loughla ◴[] No.44469175[source]
This is the disconnect between proponents and detractors of AI.

Detractors say it's the process and learning that builds depth.

Proponents say it doesn't matter because the tool exists and will always exist.

It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.

replies(4): >>44469235 #>>44469655 #>>44469774 #>>44471477 #
3. jchw ◴[] No.44469235[source]
> It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.

It's important to realize this is actually a general truth of humans arguing. Sometimes people do disagree about the facts on the ground and what is actually true versus what is bullshit, but a lot of the time what really happens is people completely agree on the facts and even most of the implications of the facts but completely disagree on how to frame them. Doesn't even have to be Internet arguments. A lot of hot-button political topics have always been like this, too.

It's easy to dismiss people's arguments as being irrelevant, but I think there's room to say that if you were to interrogate their worldview in detail you might find that they have coherent reasoning behind why it is relevant from their perspective, even if you disagree.

Though it hasn't really improved my ability to argue or even not argue (perhaps more important), I've definitely noticed this in myself when introspecting, and it definitely makes me think more about why I feel driven to argue, what good it is, and how to do it better.

4. milkey_mouse ◴[] No.44469439[source]
If anything it's the opposite, except maybe at the very low end: AI boosts implementation skill (at least by increasing speed), but not {research, coding, writing} taste. Hence slop of all sorts.
replies(1): >>44469497 #
5. ◴[] No.44469497[source]
6. chatmasta ◴[] No.44469556[source]
This is exactly why I’m wary of ever attempting a developer-focused startup ever again.

What’s not mentioned is the utter frustration when you can see your own output is not up to your own expectations, but you can’t execute on any plan to resolve that discrepancy.

“I know what developers want, so I can build it for them” is a death knell proportionate to your own standards…

The most profitable business I built was something I hacked together in two weeks during college holiday break, when I barely knew how to code. There was no source control (I was googling “what is GitHub” at the time), it was my first time writing Python, I stored passwords in plaintext… but within a year it was generating $20k a month in revenue. It did eventually collapse under its own weight from technical debt, bugs and support cost… and I wasn’t equipped to solve those problems.

But meanwhile, as the years went on and I actually learned about quality, I lost the ability to ship because I gained the ability to recognize when it wasn’t ready… it’s not quite “perfectionism,” but it’s borne of the same pathology, of letting perfect be the enemy of good.

replies(4): >>44470028 #>>44470894 #>>44472540 #>>44473800 #
7. furyofantares ◴[] No.44469609[source]
I'm confused. I often say of every genAI I've seen, of all types, that it is totally lacking in taste and only has skill. And it drastically raises your skill floor immediately, perhaps all the way up to your taste, closing the gap.

Maybe that actually is what you were saying? But I'm confused because you used the opposite words.

replies(4): >>44470373 #>>44472200 #>>44472295 #>>44473313 #
8. ants_everywhere ◴[] No.44469655[source]
I usually see the opposite.

Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it.

Proponents say it's the process and learning that builds depth and you have to learn how to use it well before you can have a sensible opinion about it.

The same disconnect was in place for every major piece of technology, from mechanical weaving, to mechanical computing, to motorized carriages, to synthesized music. You can go back and read the articles written about these technologies and they're nearly identical to what the AI detractors have been saying.

One side always says you're giving away important skills and the new technology produces inferior work. They try to frame it in moral terms. But at heart the objections are about the fear of one's skills becoming economically obsolete.

replies(4): >>44470204 #>>44470707 #>>44471805 #>>44472099 #
9. ninetyninenine ◴[] No.44469774[source]
>It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.

There's actually some ground truth facts about AI many people are not knowledgeable about.

Many people believe we understand in totality how LLMs work. The absolute truth is that, overall, we do NOT understand how LLMs work AT all.

The mistaken belief that we understand LLMs is the driver behind most of the arguments. People think we understand LLMs and that the output of LLMs is just stochastic parroting, when the truth is we do NOT understand why or how an LLM produced a specific response for a specific prompt.

Whether the process of an LLM producing a response resembles anything close to sentience or consciousness, we actually do not know, because we aren't even sure about the definitions of those words, nor do we understand how an LLM works.

This erroneous belief is so pervasive amongst people that I'm positive I'll get extremely confident responses declaring me wrong.

These debates are not the result of people talking past each other. It's because a large segment of people on HN literally are Misinformed about LLMs.

replies(2): >>44470427 #>>44471349 #
10. gsf_emergency_2 ◴[] No.44470028[source]
>letting perfect be the enemy of good.

My attempt to improve the cliche:

  Let skill be the enemy of taste
Two issues here. Neither can be developed (perfected?) in isolation, but they certainly ramp up at different rates. They should probably feed back into each other somehow, whether adversarially or not.
replies(1): >>44470439 #
11. bluefirebrand ◴[] No.44470204{3}[source]
> But at heart the objections are about the fear of one's skills becoming economically obsolete.

I won't deny that there is some of this in my AI hesitancy

But honestly the bigger barrier for me is that I fear signing my name on subpar work that I would otherwise be embarrassed to claim as my own

If I don't type it into the editor myself, I'm not putting my name on it. It is not my code and I'm not claiming either credit nor responsibility for it

replies(3): >>44470237 #>>44470346 #>>44470597 #
12. add-sub-mul-div ◴[] No.44470237{4}[source]
Unfortunately the majority don't think like this and will take whatever shortcut allows them to go home at 5.
13. armada651 ◴[] No.44470346{4}[source]
> If I don't type it into the editor myself, I'm not putting my name on it. It is not my code and I'm not claiming either credit nor responsibility for it

This of course isn't just a moral concern, it's a legal one. I want ownership of my code, I don't want to find out later the AI just copied another project and now I've violated a license by not giving attribution.

Very few open-source projects are in the public domain and even the most permissive license requires attribution.

replies(2): >>44473003 #>>44473110 #
14. phi-go ◴[] No.44470373[source]
To me the argument also only makes sense as you understood it.
15. whatevertrevor ◴[] No.44470427{3}[source]
I couldn't agree more, and not just on HN but the world at large.

For the general populace, including many tech people who are not ML researchers, understanding how convolutional neural nets work is already tricky enough. For non-tech people, I'd hazard a guess that LLM/generative AI is complexity-indistinguishable from "The YouTube/Tiktok Algorithm".

And this lack of understanding, and in many cases lack of conscious acknowledgement of the lack of understanding, has made many "debates" sound almost like theocratic arguments. Very little interest in grounding positions against facts, yet strongly held opinions.

Some are convinced we're going to get AGI in a couple years, others think it's just a glorified text generator that cannot produce new content. And worse there's seemingly little that changes their mind on it.

And there are self-contradictory positions held too. Just as an example: I've heard people express that AI-produced stuff doesn't qualify as art (philosophically and in terms of output quality) but at the same time express deep concern about how tech companies will replace artists...

replies(1): >>44473679 #
16. whatevertrevor ◴[] No.44470439{3}[source]
The issue as the article points out is you can grow taste much much faster by only engaging in consumption, which leaves skill in the dirt.
replies(1): >>44470689 #
17. theshrike79 ◴[] No.44470520[source]
This is Rick Rubin pretty much. He has 100/100 in taste, but almost 0/100 in skill.

He can't really play an instrument, but he knows exactly what works and what doesn't and can articulate it.

replies(4): >>44471377 #>>44472049 #>>44473802 #>>44474353 #
18. benreesman ◴[] No.44470531[source]
I don't know much about Ira Glass and I'm not going to be a 5 minute wikipedia expert about it, so maybe I'm missing out on very relevant philosophy (I hope someone links the must read thing), but those would be very intentionally inverted meanings of the taste/skill dichotomy.

LLMs are good at things with a lot of quantity in the training set; you can signal-boost stuff, but it's not perfect (and it's non-obvious that you want rare/special/advanced stuff to be the sweet spot as a vendor, since that's a small part of your TAM by construction).

This has all kinds of interesting tells, for example Claude is better at Bazel than Gemini is, which is kind of extreme given Google has infinite perfect Bazel and Anthropic has open source (really bad) Bazel, so you know Gemini hasn't gotten the google4 pipeline decontamination thing dialed in.

All else equal you expect a homogenizing effect where over time everything is like NextJS, Golang, and Docker.

There are outlier events, like how Claude got trained on nixpkgs in a serious way recently, but idk, maybe they want to get into defense or something.

Skill is very rarely the problem for computers, if you're considering it as distinct from taste (sometimes you call them both together just skill).

19. benreesman ◴[] No.44470597{4}[source]
I think you're very wise to preserve your commit handle as something other than a shift operator annotation, not everyone is.

I think I'm using it more than it sounds like you are, but I make very clear notations to myself and others about what's a big generated test suite that I froze in amber after it cleared a huge replay event, and what I've been over a fine tooth comb with personally. I type about the same amount of prose and code every day as ever, but I type a lot of code into the prompt now "like this, not like that" in a comment.

The percentage of hand-authored lines varies wildly, from probably 20% of unit tests to still close to 100% on io_uring submission queue polling or whatever.

If it one shots a build file, eh, I put opus as the meta.authors and move on.

replies(1): >>44473354 #
20. simianwords ◴[] No.44470633[source]
This is not what Ira Glass meant by taste gap. What he rather means is that taste is important. It’s what gets you into the field and what makes you stick around. Happy to be corrected on this.
replies(1): >>44471287 #
21. gsf_emergency_2 ◴[] No.44470689{4}[source]
I've heard that one way to pace is to... only consume your own stuff (aka dogfooding :)

More grown-up way to do it is to consume your mates' stuff?

(Trying to go from where TFA left off)

replies(1): >>44473380 #
22. Shorel ◴[] No.44470707{3}[source]
> But at heart the objections are about the fear of one's skills becoming economically obsolete.

Unless I can become a millionaire just with those skills, they are in a limbo between economically adequate and economically obsolete.

23. ido ◴[] No.44470894[source]

    a developer-focused startup
I'm sorry to tell you it doesn't just apply to developer-focused startups!
replies(1): >>44471697 #
24. michaelbrave ◴[] No.44471287[source]
Yes, that was the gist of Ira Glass's quote, but he also added that it makes you feel frustrated when you have taste but are not creating things that live up to that taste, and that as a young artist you should push through that.

Here is a copy paste of the quote:

“Nobody tells this to people who are beginners, I wish someone told me. All of us who do creative work, we get into it because we have good taste. But there is this gap. For the first couple years you make stuff, it’s just not that good. It’s trying to be good, it has potential, but it’s not. But your taste, the thing that got you into the game, is still killer. And your taste is why your work disappoints you. A lot of people never get past this phase, they quit. Most people I know who do interesting, creative work went through years of this. We know our work doesn’t have this special thing that we want it to have. We all go through this. And if you are just starting out or you are still in this phase, you gotta know its normal and the most important thing you can do is do a lot of work. Put yourself on a deadline so that every week you will finish one story. It is only by going through a volume of work that you will close that gap, and your work will be as good as your ambitions. And I took longer to figure out how to do this than anyone I’ve ever met. It’s gonna take awhile. It’s normal to take awhile. You’ve just gotta fight your way through.” ― Ira Glass

25. exceptione ◴[] No.44471349{3}[source]

  > we do NOT understand how LLMs work AT all.
  > We Do Not understand Why or How an LLM produced a specific response for a
  > specific prompt.

You mean the system is not deterministic? How the system works should be quite clear. I think the uncertainty is more about the premise that billions of tokens and their weights relative to each other are enough to reach intelligence. These debates are older than LLMs. In 'old' AI we were looking at (limited) autonomous agents that had the capability to participate in an environment and exchange knowledge about the world with each other. The next step for LLMs would be to update their own weights. That would still be too costly in terms of money and time. What we do know is that for something to be seen as intelligent it cannot live in a jar. I consider the current crop as shared 8-bit computers, while each of us needs one with terabytes of RAM.
replies(1): >>44473243 #
26. missinglugnut ◴[] No.44471377[source]
Being able to articulate taste is a skill in and of itself.
replies(1): >>44473254 #
27. jibal ◴[] No.44471477[source]
This is a radical misrepresentation of the dispute.
replies(1): >>44472222 #
28. ludicrousdispla ◴[] No.44471697{3}[source]
Within every startup, there is a developer-focused startup that is trying to get out. I suppose that is because it is easier for people to think about problems that affect them directly.

Or maybe it's the only way in which companies these days give software developers agency.

29. ludicrousdispla ◴[] No.44471805{3}[source]
>> Proponents say it's the process and learning that builds depth and you have to learn how to use it well before you can have a sensible opinion about it.

That's like telling a chef they'll improve their cooking skills by adding a can of soup to everything.

30. alistairSH ◴[] No.44472049[source]
That’s an odd take for a massively successful person. In the realm of producing hip-hop, his taste and skill are at the top of the industry.

Sort of like saying Bill Belichick has a skill gap because he’s not a top NFL player. AFAIK he never played pro ball at all (and college wasn’t at a top D1 program). But he’s undeniably one of the most successful coaches in the business.

replies(6): >>44472253 #>>44472310 #>>44473153 #>>44473298 #>>44474293 #>>44475021 #
31. SirHumphrey ◴[] No.44472099{3}[source]
> Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it.

But here is the problem: to effectively learn the tool, you must learn to use it. Not learning how to use AI effectively and then complaining that the results are bad is building a straw man and then burning it.

But what I am giving away when using an LLM is not skills, it's the ability to learn those skills. Because if the LLM instead of me is solving all the easy and intermediate problems, I cannot learn how to solve hard problems. The process of digging for an answer through documentation gives me a better understanding of how some technology works.

Those kinds of problems existed before: programming languages robbed people of the necessity to learn assembly, high-level languages of the necessity to learn low-level languages, low-code solutions of the necessity to learn how to code. Some of these solutions (like low-level and high-level programming languages) are robust enough that this trade-off makes sense; some are not (like low code).

I think it's too early to call whether AI agents go one way or the other. Putting eggs in both baskets means learning how to use AI tools and at the same time still maintaining the ability to work without them.

replies(2): >>44472449 #>>44472937 #
32. conartist6 ◴[] No.44472200[source]
The gap will open itself back up again. If you can do anything in 10 seconds with a GenAI, it won't be long until 1,000,000 people have all done it and it's considered poor taste...
33. Loughla ◴[] No.44472222{3}[source]
I disagree with you. Please expand.
replies(1): >>44472898 #
34. cheschire ◴[] No.44472253{3}[source]
You’re saying the same thing as GP. Let me attempt to clarify.

What GP is saying is not that Rick Rubin has no skill anywhere, but that he recognized he has 100/100 taste and instead of trying to become a hip hop artist, instead became a producer for other artists.

In the same way, you’ve described how Bill Belichick recognized his taste in what makes a player good is not enough to make him also a good player, so he positioned himself to take advantage of his 100/100 taste rather than whatever skill value he may have.

replies(1): >>44473471 #
35. debugnik ◴[] No.44472295[source]
Closing the gap? I think we're inverting the gap: Many people now have access to a higher skill level than they've developed taste for (if they ever did), which makes them unable to judge their own slop.
replies(2): >>44472901 #>>44473548 #
36. dgfitz ◴[] No.44472310{3}[source]
As an aside, Belichick was a lacrosse player as a hobby/sport/passion, not an American football player. I’m very torn at the moment on whether he was an incredible coach or just rode the wave of Brady’s talent.

I pay a lot of attention to football as a hobby (and a gambling outlet), so these next two seasons at UNC for ol’ Bill will be really telling.

replies(1): >>44473031 #
37. paulryanrogers ◴[] No.44472449{4}[source]
I stopped using auto complete for a while because I found that having to search for docs and source forced me to learn the APIs more thoroughly. Or so it seemed.
38. webdevver ◴[] No.44472540[source]
what was the $20k/mo business?
replies(1): >>44474683 #
39. skydhash ◴[] No.44472898{4}[source]
Not GP, but I agree with him and I will expand.

The fact isn't that we don't know how to use AI. We've done so and the result can be very good sometimes (mostly because we know what's good and what's not). What's pushing us away from it is its unreliability. Our job is to automate some workflow (the business's and some of our own) so that people can focus on the important matters and have the relevant information to make decisions.

The defect of LLMs is that you have to monitor the whole output. It's like driving a car where the steering wheel is loosely connected to the front wheels and the position for straight ahead varies all the time. Or in the case of agents, it's like sleeping in a plane and finding yourself in Russia instead of Chile. If you care about quality, the cognitive load is a lot. If you only care about moving forward (even if the path made is a circle or the direction is wrong), then I guess it's OK.

So we go for standard solutions where fixed problems stay fixed and the amount of issues is a downward slope (in a well-managed codebase), not an oscillating wave centered around some positive value.

replies(1): >>44474762 #
40. ItsHarper ◴[] No.44472901{3}[source]
Yeah this type of gap is going to become a huge problem the way things are going
41. fsmv ◴[] No.44472937{4}[source]
If you assume all AI detractors haven't tried it enough then you're the one building a straw man
replies(1): >>44473790 #
42. ◴[] No.44473003{5}[source]
43. BoxFour ◴[] No.44473031{4}[source]
> I’m very torn at the moment if he was an incredible coach or just rode the wave or Brady talent.

Honestly, it’s hard to imagine they’d have been anywhere near that successful if the answer wasn't just "both."

You see plenty of examples of great coaches stuck with lousy rosters (Parcells with the Cowboys), and also great players on poorly run teams (Patricia-era Lions). Usually when a team only has one or the other, they continually flame out early in the playoffs.

> these next two seasons at UNC for ‘ol Bill will be really telling.

I wouldn’t read too much into that. He’s 73, the game’s evolved a lot, and coaching college is a whole different thing from the NFL. It’s incredibly rare for someone to excel at both — guys like Pete Carroll being the exception that proves the rule.

replies(1): >>44473315 #
44. thfuran ◴[] No.44473110{5}[source]
And, though I don't think it's nearly settled, in other areas courts seem to be leaning towards the output of generative AI not being copyrightable.
45. abenga ◴[] No.44473153{3}[source]
Rick Rubin said this in a popular interview himself, fwiw.
46. ninetyninenine ◴[] No.44473243{4}[source]
https://www.youtube.com/watch?v=qrvK_KuIeJk&t=284s

For context, Geoffrey Hinton is basically the Father of AI. He's responsible for the current resurgence of machine learning and for utilizing GPUs for ML.

The video puts it plainly. You can get pedantic and try to build scaffolding around your old opinion in an attempt to fit it into a different paradigm, but that's just self-justification and an attempt to avoid realizing or admitting that you held a strong belief that was utterly incorrect. The overall point is:

   We have never understood how LLMs work. 
That's really all that needs to be said here.
47. worldsayshi ◴[] No.44473254{3}[source]
Another important skill in this area, or maybe it's a personality trait: Being able to tell yourself that taste is actually really important. You have to kind of double down on following ideas to their extreme, or something like that. Or maybe taking very subtle emotions very seriously.

Most of the time when you chase taste you are working on splitting hairs. Or it will look like that to an outside observer.

replies(1): >>44473489 #
48. satyrun ◴[] No.44473298{3}[source]
Rubin was also in the right place at the right time.

Putting out Run-DMC – Raising Hell, Slayer – Reign in Blood, and Beastie Boys – Licensed to Ill in the same year is completely insane, but things are probably much different if he is 20 years older or 20 years younger.

He was in the perfect place as hip hop and metal were taking off.

49. furyofantares ◴[] No.44473313[source]
After sleeping on it and reading some replies I think I worked out what they were saying. Take drawing - your skill at producing an image is raised to a professional aesthetic (what I was saying) but your skill at drawing is unchanged (what they are saying).

But they're saying your taste, in the context of self-judgment at attempting to learn to draw, might also be raised to a professional aesthetic, because you can already produce images of that level by typing words.

I guess I will add that a difference here is we are talking about taste somewhat differently. To me, genai has been a demonstration that taste and skill are not two points on the same dimension.

50. satyrun ◴[] No.44473315{5}[source]
Exactly. It is such a stupid debate when Belichick coached and molded Brady into what he became.

Everyone has always said Belichick is basically an encyclopedia of football knowledge.

replies(1): >>44476563 #
51. mwcampbell ◴[] No.44473354{5}[source]
I wonder if it's actually accurate to attribute authorship to the model. As I understand it, the code is actually derived from all of the text that went into the training set. So, strictly speaking, I guess proper attribution is impossible. More generally, I wonder what you think about the whole plagiarism/stealing issue. Is it something you're at all uneasy about as you use LLMs? Not trying to accuse or argue; I'm curious about different perspectives on this, as it's currently the hang-up preventing me from jumping into LLM-assisted coding.
replies(1): >>44473892 #
52. the_af ◴[] No.44473380{5}[source]
I think you must consume (I hate that word, but let's go with it) elsewhere. Someone said (maybe Stephen King in "On Writing"?) that in order to be a writer you must be a voracious reader, and there's no escaping this. It rings true to me.

Of course the problem of taste growing much faster than skill remains, but I don't think the answer is to "consume" (yuck) less. I actually don't know if there's an answer.

replies(3): >>44474886 #>>44477997 #>>44481735 #
53. dasil003 ◴[] No.44473471{4}[source]
It’s weird to frame Belichick as a talent picker first. Yes, he had a lot of control, but he was a coach first, not a GM. The thing that made him extraordinary was not identifying talent, it was orchestrating a team system to take advantage of individual talents. Compared to other coaches who had one system and tried to fit players rigidly into it, Belichick was a master of adapting the system to the personnel. Of course he also had Brady and a lot of control over personnel, but it’s ridiculous to speak as if it was primarily his taste that made the Patriots great.
54. dpritchett ◴[] No.44473489{4}[source]
An uncomfortable thing about skill, taste, and experience is that it’s often easier to demonstrate the superiority of one path over another than it is to explain the differences in a way the audience is prepared to absorb.

I imagine this is a large part of why tooling and language wars are still compelling throughout decades of computing. No amount of lecturing on the joy of e.g. Rails vs. Node will really convince anyone to use an “outdated”, slow, dynamically typed language like Ruby in 2025 — even in places where it’d be a major win.

55. dmbche ◴[] No.44473548{3}[source]
Might be unrelated, but I feel like the "boost" that everyone is talking about is caused by translating one medium into text, which most people are more capable with than the medium they are trying to produce in.

While it lets you create something you previously couldn't, the qualities of the medium are replaced with those of language.

I.e. to produce visual images you don't need an understanding of contrast, composition, transparency, chroma and all that, you just need to be able to articulate what you want.

I think that's where the lack of taste appears: you have a text-based interaction with a non-language medium.

It's like how, when a movie tries to keep as close as possible to a book, it rarely will be a noteworthy movie, versus something built from the ground up in that medium.

56. the_af ◴[] No.44473679{4}[source]
> Just as an example: I've heard people express AI produced stuff to not qualify as art (philosophically and in terms of output quality) but at the same express deep concern how tech companies will replace artists...

I don't think this is self contradictory at all.

One may have beliefs about the meaning of human produced art and how it cannot -- and shouldn't -- be replaced by AI, and at the same time believe that companies will cut costs and replace artists with AI, regardless of any philosophical debates. As an example, studio execs and producers are already leveraging AI as a tool to put movie industry professionals (writers, and possibly actors in the future) "in their place"; it's a power move for them, for example against strikes.

replies(1): >>44477221 #
57. ants_everywhere ◴[] No.44473790{5}[source]
I said often not always
replies(1): >>44474014 #
58. Kheyas ◴[] No.44473800[source]
Do you need to ship?
59. ◴[] No.44473802[source]
60. benreesman ◴[] No.44473892{6}[source]
I'm very much on the record that I want Altman tried in the Hague for crimes against humanity, and he's not the only one. So I'm no sympathizer of the TESCREAL/EA sociopaths who run frontier AI labs in 2025 (Amodei is no better).

And in a lot of areas it's clearly just copyright laundering, the way the Valley always says that breaking the law is progress if it's done with a computer (AI means computer now in policy circles).

But on code? Coding is sort of a special case in the sense that our tradition of sharing/copying/pasting/gisting-to-our-buddies-fuck-the-boss is so strong that it's kind of a different thing. Coding is also a special case in LLMs being at all useful over and above, like, non-spammed Google; it's completely absurd that they generalize outside of that hyper-specific niche. And it's completely absurd that `gpt-4-1106-preview` was better than pre-AI/pre-SEO Google: the LLM is both arsonist and fireman, like Ethan Hunt in that Mission Impossible flick with Alec Baldwin.

So if you're asking if I think the frontier vendors have the moral high ground on anything? No, they're very very bad people and I don't associate with people who even work there.

But if you're asking if I care about my code going into a model?

https://i.ibb.co/1YPxjVvq/2025-07-05-12-40-28.png

61. seadan83 ◴[] No.44474014{6}[source]
All the same. There's a mixture of no-true-scotsman in the argument that (paraphrasing) "often they did not learn to use the tool well", and then this part is a strawman argument:

"They try to frame it in moral terms. But at heart the objections are about the fear of one's skills becoming economically obsolete."

replies(1): >>44480428 #
62. mnky9800n ◴[] No.44474293{3}[source]
I think by skill they mean that Rick Rubin plays no instruments and actively acknowledges this. In interviews he repeatedly claims his only skill is knowing what sounds good and will make money.
63. vikramkr ◴[] No.44474353[source]
P vs NP
64. nickelpro ◴[] No.44474386[source]
There's no meaningful taste-skill gap in programming because programming doesn't involve tacit skills. If you know what you're supposed to do, it is trivial to type that into a keyboard.

The taste-skill gap emerges when you intellectually recognize what a quality creation would be, but are physically unable to produce that creation, and judge the creations you are physically capable of producing as low quality.

The oft cited example is drawing a circle. Everyone knows what a perfectly round circle looks like, but drawing one takes practice.

It doesn't take practice to type code. If you know what code you're supposed to write, you write it. The problem is all in the taste step, to know what code to write in the first place.

replies(2): >>44474500 #>>44474502 #
65. purplesyringa ◴[] No.44474500[source]
That's absolutely not the case. I can look at code and realize that it's garbage because the architecture sucks, performance has gone out the window, and there's lots of special-casing and unhandled edge cases. That's the taste part. But I can also absolutely be underqualified and unable to figure out how to improve the architecture, fix performance issues, or simplify special/edge case handling.
replies(1): >>44480312 #
66. mjr00 ◴[] No.44474502[source]
> There's no meaningful taste-skill gap in programming because programming doesn't involve tacit skills. If you know what you're supposed to do, it is trivial to type that into a keyboard.

Strongly disagree here. The taste-skill gap still applies even when there's no mechanical skill involved. A lot of amateur music production is entirely "in the box" and the taste-skill gap very much exists, even though it's trivial to e.g. click a button to change a compressor's settings.

In programming, or more broadly application development, this manifests as crappy user interfaces or crappy APIs. Some developers may not notice or care, sure, but for many the feeling is, "this doesn't seem right, but I'm not exactly sure what's wrong or how to fix it." And that feeling is the taste-skill gap.

replies(2): >>44474958 #>>44480346 #
67. chatmasta ◴[] No.44474683{3}[source]
Selling proxies for scraping… this was circa 2011.
68. Loughla ◴[] No.44474762{5}[source]
I understand that but I'm not sure how it's a response to my original statement.
69. zambal ◴[] No.44474886{6}[source]
I don't think it's actually a problem. Taste can guide the direction skill needs to go.
70. mitjam ◴[] No.44474958{3}[source]
Yes, and for me vibe coding / agent-assisted coding is not just about pouring in canned skills but about developing the skills to handle this new machine in a way that produces the intended results.
71. mitjam ◴[] No.44475021{3}[source]
He also said he always started with anxiety, was pushing, working outside his comfort zone. For me this looks very much like „do, learn“. Another Rick Rubin quote: "Humanity breeds in the mistakes." https://www.youtube.com/watch?v=brPHcAJn7ZU
72. dgfitz ◴[] No.44476563{6}[source]
That’s my whole point. Brady went on to win a ring in Tampa. Bill did… what?

I don’t give Belichick the credit for teaching Brady; you can’t teach that. It’s not stupid at all if you’re a fan of the sport.

73. whatevertrevor ◴[] No.44477221{5}[source]
Yeah, I know that's the theory, but if AI generated art is slop then it follows that it can't actually replace quality art.

I don't think people will suddenly accept worse standards for art, and anyone producing high quality work will have a significant advantage.

And now if your argument is that the average consumer can't tell the difference, then well for mass production does the difference actually matter?

replies(1): >>44477515 #
74. the_af ◴[] No.44477515{6}[source]
Well, my main argument is that it's replacing humans, not that the quality is necessarily worse for mass produced slop.

Let's be cynical for a moment. A lot of Hollywood (and adjacent) movies are effectively slop. I mean, take almost all blockbusters, almost 99% of action/sci-fi/superhero movies... they are slop. I'm not saying you cannot like them, but there's no denying they are slop. If you take offense at this proposition, just pretend it's not about any particular movie you adore, it's about the rest -- I'm not here to argue the merits of individual movies.

(As an aside, the same can be said about a lot of fantasy literature, Young Adult fiction, etc. It's by the numbers slop, maybe done with good intentions but slop nonetheless).

Superhero movie scripts could right now be written by AI, maybe with some curation by a human reviewer/script doctor.

But... as long as we accept these movies still exist, do we want to cut most humans out of the loop? These movies employ tons of people (I mean, just look at the credits), people with maybe high aspirations to which this is a job, an opportunity to hone their craft, earn their paychecks, and maybe eventually do something better. And these movies take a lot of hard, passionate work to make.

You bet your ass studios are going to either get rid of all these people or use AI to push their paychecks lower, or replace them if they protest unhealthy working conditions or whatever. Studio execs are on record admitting to this.

And does it matter? After all, the umpteenth Star Wars or Spiderman movie is just more slop.

Well, it matters to me, and I hope it's clear my argument is not exactly "AI cannot make another Avengers movie".

I also hope to have shown this position is not self-contradicting at all.

75. gsf_emergency_2 ◴[] No.44477997{6}[source]
>yuck

Some alts to choose from: "use","utilize","imbibe","process","assimilate","experience"

76. nickelpro ◴[] No.44480312{3}[source]
Then your taste hasn't developed. You don't know what good code for the problem even looks like. It's not that your code doesn't resemble what you wanted to make, you don't know what you want to make at all.
77. nickelpro ◴[] No.44480346{3}[source]
If you know what sound you want to hear, but don't know the compressor settings to make that sound, that is a taste-skill gap.

If you don't know what sound you want to hear at all, that's undeveloped taste.

If you know what code you want to type, but don't know how to use a keyboard, that would be a taste-skill gap.

If you don't know what code you want to type at all, that's undeveloped taste.

replies(1): >>44487147 #
78. ants_everywhere ◴[] No.44480428{7}[source]
I remember when I first learned the names of logical fallacies too, but you aren't using either of them correctly
replies(1): >>44482952 #
79. jpc0 ◴[] No.44481735{6}[source]
Something important is “consuming” critically.

You can be a passive consumer and never improve your taste or skill. However, when you consume with the intent of asking how, and then attempting to answer that question (for skill) and why (for taste), you get a much different experience.

Read code, looking for patterns. Look at design, looking for patterns.

Then play: try to implement what you saw, implement the opposite and see how it feels, see what happens to the code.

This is a lot of work, but it helps you improve.

replies(1): >>44485230 #
80. seadan83 ◴[] No.44482952{8}[source]
Then please educate me on how the logical fallacies are misapplied.

In short, what it comes down to, is you do not know this to be true: "Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it." If you do know that to be true, please provide the citations. Sociology is a bitch, because we like to make stereotypes but it turns out that you really don't know anything about the individual you are talking to. You don't know their experiences, their learnings, their age.

Further, humans tend to have very small sample sizes based on their experiences. If you met one detractor every second for the rest of the year, your experiences would still not be statistically significant.

You can say, in your experience, in your conversations, but as a general true-ism - you need to provide some data. Further, even in your conversations, do you always really know how much the other person knows? For example, you assumed (or at least heavily implied) that I just learned the name of logical fallacies. I'm actually quite old, it's been a long while since I learned the name of logical fallacies. Regardless, it does not matter so long as the fallacies are correctly applied. Which I think they were, and I'll defend it in depth compared to your shallow dismissal.

Quoting from earlier:

> Detractors from AI often refuse to learn how to use it.. you have to learn how to use it well before you can have a sensible opinion about it.

Clearly, if you don't like AI, you just have not learned enough about it. This argument assumes that detractors are not coming from a place of experience. This is a no-true-scotsman. They wouldn't be detractors if they had more experience; you just need to do it better! The assumption about the experience level of detractors gives away the fallacy. Clearly detractors just have not learned enough.

From a definition of no-true-scotsman [1], "The no true Scotsman fallacy is the attempt to defend a generalization by denying the validity of any counterexamples given." In this case, the counterexamples provided by detractors are discounted because they (assumedly) simply have not learned how to use AI. A detractor could say "this technology does not work", and of course they are 'wrong' because they don't know how to use it well enough. Thus, the generalization is that AI is useful and the detractors are wrong due to a lack of knowledge (implying that if they knew more, they would not be detractors).

-----

I'll define here that a straw man is misrepresenting a counterargument in a weaker form, and then showing that weaker form to be false in order to discredit the entirety of the argument.

There are multiple straw men:

> The same disconnect was in place for every major piece of technology, from mechanical weaving, to mechanical computing, to motorized carriages, to synthesized music. You can go back and read the articles written about these technologies and they're nearly identical to what the AI detractors have been saying... They try to frame it in moral terms.

Perhaps the disconnect is actually different. I'd say it is. Because there is no fear of job loss from AI (from this detractor at least) these examples are not relevant. That makes them a strawman.

> But at heart the objections are about the fear of one's skills becoming economically obsolete.

So:

  (1) The argument of detractors is morality based

  (2) The argument of detractors is rooted in the fear of "becoming economically obsolete".

I'd say the strongest arguments of detractors is that the technology simply doesn't work well. Period. If that is the case, then there is NO fear of "becoming economically obsolete."

Let's look at the original statement:

> Detractors say it's the process and learning that builds depth.

Which means detractors are saying that AI tools are bad because they prohibit learning. Yet now we have words put in their mouths: that the detractors actually fear becoming 'economically obsolete', and that it's similar to other examples that did not prove to be the case. That is exactly a weaker form of the counterargument that is then discredited through the examples of synthesized music, etc.

So, it's not the case that AI hinders learning, it's that the detractors are afraid AI will take their jobs and they are wrong because there are similar examples where that was not the case. That's a strawman.

[1] https://www.scribbr.com/fallacies/no-true-scotsman-fallacy/

81. gsf_emergency_2 ◴[] No.44485230{7}[source]
You've just suggested to me the following optimization:

  Prioritize "consuming" your frenemies' stuff
Because one always has to pay full attention when doing that :)
82. mjr00 ◴[] No.44487147{4}[source]
> If you know what code you want to type, but don't know how to use a keyboard, that would be a taste-skill gap.

Ira Glass is a writer. Do you think he meant the taste-skill gap was when people couldn't physically write the words on the page they wanted?

replies(1): >>44490478 #
83. nickelpro ◴[] No.44490478{5}[source]
I'm not Ira Glass, I have no idea what he meant. I would argue that the taste-skill gap doesn't exist in writing either.

You either know what you want to write or you don't. If you hate the words you wrote, write something else. If you don't know what you want to write, that's undeveloped taste, not a gap preventing you from expressing your good taste.