
759 points alihm | 4 comments
meander_water ◴[] No.44469163[source]
> the "taste-skill discrepancy." Your taste (your ability to recognize quality) develops faster than your skill (your ability to produce it). This creates what Ira Glass famously called "the gap," but I think of it as the thing that separates creators from consumers.

This resonated quite strongly with me. It puts into words something that I've been feeling when working with AI. If you're new to something and using AI for it, it automatically boosts the floor of your taste, but not your skill. And you end up never slowing down to make mistakes and learn, because you can just do it without friction.

replies(8): >>44469175 #>>44469439 #>>44469556 #>>44469609 #>>44470520 #>>44470531 #>>44470633 #>>44474386 #
Loughla ◴[] No.44469175[source]
This is the disconnect between proponents and detractors of AI.

Detractors say it's the process and learning that builds depth.

Proponents say it doesn't matter because the tool exists and will always exist.

It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.

replies(4): >>44469235 #>>44469655 #>>44469774 #>>44471477 #
ants_everywhere ◴[] No.44469655[source]
I usually see the opposite.

Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it.

Proponents say it's the process and learning that builds depth and you have to learn how to use it well before you can have a sensible opinion about it.

The same disconnect was in place for every major piece of technology, from mechanical weaving, to mechanical computing, to motorized carriages, to synthesized music. You can go back and read the articles written about these technologies and they're nearly identical to what the AI detractors have been saying.

One side always says you're giving away important skills and the new technology produces inferior work. They try to frame it in moral terms. But at heart the objections are about the fear of one's skills becoming economically obsolete.

replies(4): >>44470204 #>>44470707 #>>44471805 #>>44472099 #
SirHumphrey ◴[] No.44472099[source]
> Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it.

But here is the problem: to effectively learn the tool, you must learn to use it. Not learning how to use AI effectively and then complaining that the results are bad is building a straw man and then burning it.

But what I am giving away when using an LLM is not skills, it's the ability to learn those skills. Because if the LLM, instead of me, is solving all the easy and intermediate problems, I cannot learn how to solve hard problems. The process of digging for an answer through documentation gives me a better understanding of how some technology works.

Those kinds of problems existed before: programming languages robbed people of the necessity to learn assembly, high-level languages of the necessity to learn low-level languages, low-code solutions of the necessity to learn how to code. Some of these solutions (like low-level and high-level programming languages) are robust enough that this trade-off makes sense; some (like low code) are not.

I think it's too early to call whether AI agents go one way or the other. Putting eggs in both baskets means learning how to use AI tools while still maintaining the ability to work without them.

replies(2): >>44472449 #>>44472937 #
fsmv ◴[] No.44472937[source]
If you assume all AI detractors haven't tried it enough, then you're the one building a straw man
replies(1): >>44473790 #
1. ants_everywhere ◴[] No.44473790[source]
I said often, not always
replies(1): >>44474014 #
2. seadan83 ◴[] No.44474014[source]
All the same, there's a touch of no-true-Scotsman in the argument that (paraphrasing) "often they did not learn to use the tool well", and this part is a straw man argument:

"They try to frame it in moral terms. But at heart the objections are about the fear of one's skills becoming economically obsolete."

replies(1): >>44480428 #
3. ants_everywhere ◴[] No.44480428[source]
I remember when I first learned the names of logical fallacies too, but you aren't using either of them correctly
replies(1): >>44482952 #
4. seadan83 ◴[] No.44482952{3}[source]
Then please educate me on how the logical fallacies are misapplied.

In short, what it comes down to is that you do not know this to be true: "Detractors from AI often refuse to learn how to use it or argue that it doesn't do everything perfectly so you shouldn't use it." If you do know that to be true, please provide the citations. Sociology is a bitch, because we like to make stereotypes, but it turns out that you really don't know anything about the individual you are talking to. You don't know their experiences, their learning, their age.

Further, humans tend to have very small sample sizes based on their experiences. If you met one detractor every second for the rest of the year, your experiences would still not be statistically significant.

You can say that this is your experience, from your conversations, but to state it as a general truism you need to provide some data. Further, even in your conversations, do you always really know how much the other person knows? For example, you assumed (or at least heavily implied) that I just learned the names of logical fallacies. I'm actually quite old; it's been a long while since I learned the names of logical fallacies. Regardless, it does not matter so long as the fallacies are correctly applied, which I think they were, and I'll defend that in depth against your shallow dismissal.

Quoting from earlier:

> Detractors from AI often refuse to learn how to use it... you have to learn how to use it well before you can have a sensible opinion about it.

Clearly, if you don't like AI, you just have not learned enough about it. This argument assumes that detractors are not coming from a place of experience. This is a no-true-Scotsman: they wouldn't be detractors if they had more experience; you just need to do it better! The assumption about detractors' level of experience gives the fallacy away. Clearly detractors just have not learned enough.

From a definition of no-true-Scotsman [1]: "The no true Scotsman fallacy is the attempt to defend a generalization by denying the validity of any counterexamples given." In this case, the counterexamples provided by detractors are discounted because they supposedly just have not learned how to use AI. A detractor could say "this technology does not work", and of course they are 'wrong' because they don't know how to use it well enough. Thus, the generalization is that AI is useful and the detractors are wrong due to a lack of knowledge (implying that if they knew more, they would not be detractors).

-----

I'll define a straw man here as misrepresenting a counterargument in a weaker form, and then showing that weaker form to be false in order to discredit the entire argument.

There are multiple straw men:

> The same disconnect was in place for every major piece of technology, from mechanical weaving, to mechanical computing, to motorized carriages, to synthesized music. You can go back and read the articles written about these technologies and they're nearly identical to what the AI detractors have been saying... They try to frame it in moral terms.

Perhaps the disconnect is actually different. I'd say it is. Because there is no fear of job loss from AI (from this detractor at least), these examples are not relevant. That makes them a straw man.

> But at heart the objections are about the fear of one's skills becoming economically obsolete.

So:

  (1) The argument of detractors is morality-based

  (2) The argument of detractors is rooted in the fear of "becoming economically obsolete".

I'd say the strongest argument of detractors is that the technology simply doesn't work well. Period. If that is the case, then there is NO fear of "becoming economically obsolete."

Let's look at the original statement:

> Detractors say it's the process and learning that builds depth.

Which means detractors are saying that AI tools are bad because they prohibit learning. Yet now we have words put in their mouths: the detractors actually fear becoming 'economically obsolete', and it's similar to other examples that did not prove to be the case. That is exactly a weaker form of the counterargument, which is then discredited through the examples of synthesized music, etc.

So the argument becomes: it's not that AI hinders learning, it's that the detractors are afraid AI will take their jobs, and they are wrong because there are similar examples where that fear did not pan out. That's a straw man.

[1] https://www.scribbr.com/fallacies/no-true-scotsman-fallacy/