758 points | alihm | 2 comments
meander_water ◴[] No.44469163[source]
> the "taste-skill discrepancy." Your taste (your ability to recognize quality) develops faster than your skill (your ability to produce it). This creates what Ira Glass famously called "the gap," but I think of it as the thing that separates creators from consumers.

This resonated quite strongly with me. It puts into words something that I've been feeling when working with AI. If you're new to something and using AI for it, it automatically boosts the floor of your taste, but not your skill. And you end up never slowing down to make mistakes and learn, because you can just do it without friction.

replies(8): >>44469175 #>>44469439 #>>44469556 #>>44469609 #>>44470520 #>>44470531 #>>44470633 #>>44474386 #
Loughla ◴[] No.44469175[source]
This is the disconnect between proponents and detractors of AI.

Detractors say it's the process and learning that builds depth.

Proponents say it doesn't matter because the tool exists and will always exist.

It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.

replies(4): >>44469235 #>>44469655 #>>44469774 #>>44471477 #
ninetyninenine ◴[] No.44469774[source]
>It's interesting seeing people argue about AI, because they're plainly not speaking about the same issue and simply talking past each other.

There are actually some ground-truth facts about AI that many people are not aware of.

Many people believe we understand in totality how LLMs work. The truth is that, overall, we do NOT understand how LLMs work at all.

The mistaken belief that we understand LLMs is the driver behind most of the arguments. People think we understand LLMs and that their output is just stochastic parroting, when the truth is we do not understand why or how an LLM produced a specific response for a specific prompt.

Whether the process by which an LLM produces a response resembles anything close to sentience or consciousness, we simply do not know: we aren't even sure of the definitions of those words, nor do we understand how an LLM works.

This erroneous belief is so pervasive amongst people that I'm positive I'll get extremely confident responses declaring me wrong.

These debates are not the result of people talking past each other; they happen because a large segment of people on HN are simply misinformed about LLMs.
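
To make that distinction concrete: below is a toy sketch (pure NumPy, invented sizes, nothing from a real model) of the mechanics of producing a next-token distribution. Every arithmetic step is fully inspectable; what does not fall out of the inspection is *why* the weights assign the probabilities they do, and that is the sense in which "understanding" is missing.

  import numpy as np

  rng = np.random.default_rng(0)

  # Toy stand-ins: a tiny vocabulary, a hidden state, an output projection.
  # In a real LLM these are billions of trained weights, but the arithmetic
  # below is the same kind of operation.
  vocab = ["the", "cat", "sat", "mat"]
  d = 8                                      # hidden size (tiny, for illustration)
  W_out = rng.normal(size=(len(vocab), d))   # output projection, fully inspectable
  h = rng.normal(size=d)                     # hidden state after "reading" a prompt

  def softmax(x):
      e = np.exp(x - x.max())
      return e / e.sum()

  logits = W_out @ h       # every multiply-add here can be logged
  probs = softmax(logits)  # the mechanism is completely transparent

  print(dict(zip(vocab, probs.round(3))))
  # We can trace every step above. What we cannot do for a real model is say
  # why its trained weights give, say, "cat" this probability: the mechanism
  # is transparent, the explanation is not.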

replies(2): >>44470427 #>>44471349 #
1. exceptione ◴[] No.44471349[source]

  > we do NOT understand how LLMs work at all.
  > we do not understand why or how an LLM produced a specific response for a
  > specific prompt.

You mean the system is not deterministic? How the system works mechanically should be quite clear. I think the uncertainty is more about the premise: whether billions of tokens, with their weights relative to each other, are enough to reach intelligence. These debates are older than LLMs.

In 'old' AI we were looking at (limited) autonomous agents that could participate in an environment and exchange knowledge about the world with each other. The next step for LLMs would be to update their own weights, but for now that is too costly in money and time.

What we do know is that for something to be seen as intelligent, it cannot live in a jar. I consider the current crop to be shared 8-bit computers, while each of us needs one with terabytes of RAM.
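
On the determinism question: the forward pass that produces next-token logits is ordinary, reproducible arithmetic; the only nondeterminism in decoding is the optional sampling step. A minimal sketch (toy logits invented for illustration, not from any real model):

  import numpy as np

  rng = np.random.default_rng(42)
  logits = np.array([2.0, 1.0, 0.5, -1.0])   # pretend next-token logits

  def softmax(x):
      e = np.exp(x - x.max())
      return e / e.sum()

  # Greedy decoding: deterministic. Same logits give the same token, every run.
  greedy_token = int(np.argmax(logits))

  # Sampled decoding: the draw below is the only stochastic step; everything
  # upstream of it (the forward pass producing the logits) is reproducible.
  temperature = 0.8
  sampled_token = int(rng.choice(len(logits), p=softmax(logits / temperature)))

  print(greedy_token, sampled_token)

So "not deterministic" describes a sampling choice made at decode time, not the model itself; the harder question is whether the weights amount to intelligence.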
replies(1): >>44473243 #
2. ninetyninenine ◴[] No.44473243[source]
https://www.youtube.com/watch?v=qrvK_KuIeJk&t=284s

For context, Geoffrey Hinton is often called the godfather of AI. His work is largely responsible for the current resurgence of machine learning and for the use of GPUs in ML.

The video puts it plainly. You can get pedantic and try to build scaffolding around your old opinion in an attempt to fit it into a different paradigm, but that's just self-justification, a way to avoid admitting that you held a strong belief that was utterly incorrect. The overall point is:

   We have never understood how LLMs work. 
That's really all that needs to be said here.