
399 points by nomdep | 5 comments
socalgal2 No.44296080
> Another common argument I've heard is that Generative AI is helpful when you need to write code in a language or technology you are not familiar with. To me this also makes little sense.

I'm not sure I get this one. When I'm learning new tech I almost always have questions. I used to google them. If I couldn't find an answer I might try posting on Stack Overflow. Sometimes, as I was typing the question, their search would finally kick in and surface the answer (similar questions). Other times I'd post the question and, if it didn't get closed, maybe get an answer a few hours or days later.

Now I just ask ChatGPT or Gemini, and more often than not it gives me the answer. That alone, with nothing else (no agent modes, no AI editing or generating files), is enough to increase my output. I get answers 10x faster than I used to. I'm not sure what that has to do with the point about learning. Getting answers to those questions is learning, regardless of where the answer comes from.

plasticeagle No.44296416
ChatGPT and Gemini literally only know the answer because they read Stack Overflow. Stack Overflow only exists because it has visitors.

What do you think will happen when everyone is using AI tools to answer their questions? We'll be back in the world of encyclopedias, in which central authorities spent large amounts of money manually collecting information and publishing it. And then they spent a good amount of time finding ways to sell that information to us, which was only fair because they spent all that time collating it. The internet pretty much destroyed that business model, and in some sense the AI "revolution" is trying to bring it back.

Also, he's specifically talking about having a coding tool write the code for you; he's not talking about using an AI tool to answer a question so that you can go ahead and write the code yourself. These are different things, and he treats them differently.

socalgal2 No.44296713
> ChatGPT and Gemini literally only know the answer because they read StackOverflow. Stack Overflow only exists because they have visitors.

I know this isn't true because I work on an API that has no answers on Stack Overflow (it's too new), nor does it have answers anywhere else. Yet the AI seems to be able to accurately answer many questions about it. To be honest, I've been somewhat shocked by this.

bbarnett No.44296793
It is absolutely true, and AI cannot think, reason, or comprehend anything it has not seen before. If you're getting answers, it has seen them elsewhere, or it is literally dumb, statistical luck.

That doesn't mean it knows the answer. That means it guessed or hallucinated correctly. Guessing isn't knowing.

edit: people seem to be missing my point, so let me rephrase. Of course AIs don't think, but that wasn't what I was getting at. There is a vast difference between knowing something, and guessing.

Guessing, even in humans, is just the human mind statistically and automatically weighing probabilities and suggesting what may be the answer.

This is akin to what a model might do, without any real information. Yet in both cases, there's zero validation that anything is even remotely correct. It's 100% conjecture.

It therefore doesn't know the answer; it guessed it.

When it's right about a language or API that there's zero info on, it's pure happenstance that it got it correct. It's important to know the difference, and not to say it "knows" the answer. It doesn't. It guessed.

One of the biggest issues with LLMs is that we don't get a probability back along with the response. You ask a human "Do you know how this works?", and an honest and helpful human might say "No" or "No, but you should try this. It might work".

That's helpful.

Conversely, a human who pretends to know and speaks with deep authority when they don't is a liar.

LLMs need more of this type of response, one that signals certainty or the lack of it. They're useless without this. But of course, an LLM indicating a lack of certainty means customers might use it less, or trust it less, so... profits first! Speak with certainty on all things!
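
To be concrete about the "probability response" I mean: the per-token probabilities exist inside the model; the chat products just don't surface them. A minimal sketch with a small open model (the model and prompt are purely illustrative):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The capital of Australia is", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=3, do_sample=False,
                         output_scores=True, return_dict_in_generate=True)

    # Turn each step's raw logits into a probability for the token the model
    # actually emitted; low values are the model's own "I'm guessing" signal.
    prompt_len = inputs["input_ids"].shape[1]
    for step, logits in enumerate(out.scores):
        probs = torch.softmax(logits[0], dim=-1)
        token_id = out.sequences[0, prompt_len + step]
        print(repr(tok.decode(token_id)), float(probs[token_id]))

Whether token-level probabilities add up to answer-level confidence is a harder question, but the signal is there; the interfaces just don't expose it.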

1. demosthanos No.44298266
This is wrong. I write toy languages and frameworks for fun. These are APIs that simply don't exist outside of my code base, and LLMs are consistently able to:

* Read the signatures of the functions.

* Use the code correctly.

* Answer questions about the behavior of the underlying API by consulting the code.

Of course they're just guessing if they go beyond what's in their context window, but don't underestimate the context window!
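
For what it's worth, "put the code in the context window" is as simple as it sounds. A rough sketch (my_toy_api, the model name, and the prompt are stand-ins, not anything real):

    import inspect

    from openai import OpenAI

    import my_toy_api  # stand-in for the in-house library being asked about

    client = OpenAI()
    source = inspect.getsource(my_toy_api)  # the whole module, signatures and all

    resp = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model; the name is illustrative
        messages=[
            {"role": "system",
             "content": "Answer using only the library source provided."},
            {"role": "user",
             "content": f"Library source:\n{source}\n\nShow me how to use this API "
                        "to parse a config file."},
        ],
    )
    print(resp.choices[0].message.content)

Coding tools mostly just automate this step: find the relevant files and put them in the prompt.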

2. bbarnett No.44298293
So you're saying you provided examples of the code, the APIs, and more in the context window, and it succeeds? That sounds very much unlike the post I responded to, which claimed "no knowledge". You're also seemingly missing this:

"If you're getting answers, it has seen it elsewhere"

The context window is 'elsewhere'.

3. demosthanos No.44298418
If that's the distinction you're drawing, then it's totally meaningless in the context of the question of where the information will come from if not Stack Overflow. We're never in a situation where we're using an open source library with zero information about it: the code is, by definition, available to be put in the context window.

As they say, it sounds like you're technically correct, which is the best kind of correct. You're correct within the extremely artificial parameters that you created for yourself, but not in any real world context that matters when it comes to real people using these tools.

4. fnordpiglet No.44300435
The argument is futile because the goal posts move constantly. One moment the assertion is that it's just mega copy-paste; the next, when evidence shows it can one-shot seemingly novel, correct answers from an API spec or grammar it has never seen before, the goal posts move to "it's unable to produce results on things it's never been trained on or that aren't in its context", as if making up a fake language, asking it to write code in it, and noting that it can't do so without a grammar indicates literally anything.

To anyone who has used these tools in anger, it's remarkable, given they're only trained on large corpora of language and feedback, that they're able to produce what they do. I don't claim they exist outside their weights; that's absurd. But the entire point of non-linear activation functions with many layers and parameters is to learn highly complex non-linear relationships. The fact that they can be trained as hard as they are, on as much data as they have, without overfitting or gradient explosions means the very nature of language carries immense information in its encoding and structure, and the network, by definition of how it works and is trained, does -not- just return what it was trained on. It's able to curve-fit complex functions that inter-relate semantic concepts. Those relations are clearly not understood the way we understand them, but in some ways they represent an "understanding" that's sometimes more complex and nuanced than our own.
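
As a toy illustration of the non-linearity point (nothing LLM-specific, just the classic contrast): a purely linear model can't represent XOR-like structure, while even a tiny MLP with ReLU activations fits it easily.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(2000, 2))
    y = (X[:, 0] > 0) ^ (X[:, 1] > 0)  # XOR of the two signs: not linearly separable

    linear = LogisticRegression().fit(X, y)
    mlp = MLPClassifier(hidden_layer_sizes=(16,), activation="relu",
                        max_iter=2000, random_state=0).fit(X, y)

    print("linear accuracy:", linear.score(X, y))  # ~0.5, i.e. chance
    print("mlp accuracy:   ", mlp.score(X, y))     # close to 1.0

Scale that idea up by many orders of magnitude and you get a function class that does far more than look up its training set.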

Anyway, the "stochastic parrot" epithet misses the point that parrots are incredibly intelligent animals - which is apt, since those who use the phrase are missing the point.

5. semiquaver No.44300719
This is moving the goalposts relative to the original claim upthread that LLMs are just regurgitating human-authored Stack Overflow answers and that without those answers they would be useless.

It’s silly to say that something LLMs can reliably do is impossible and every time it happens it’s “dumb luck”.