socalgal2:
> Another common argument I've heard is that Generative AI is helpful when you need to write code in a language or technology you are not familiar with. To me this also makes little sense.

I'm not sure I get this one. When I'm learning new tech, I almost always have questions. I used to google them. If I couldn't find an answer, I might try posting on Stack Overflow. Sometimes, as I was typing the question, their search would finally kick in and surface the answer (via similar questions). Other times I'd post the question and, if it didn't get closed, maybe get an answer a few hours or days later.

Now I just ask ChatGPT or Gemini, and more often than not it gives me the answer. That alone, with nothing else (no agent modes, no AI editing or generating files), is enough to increase my output. I get answers 10x faster than I used to. I'm not sure what that has to do with the point about learning. Getting answers to those questions is learning, regardless of where the answer comes from.

plasticeagle:
ChatGPT and Gemini literally only know the answer because they read Stack Overflow. Stack Overflow only exists because it has visitors.

What do you think will happen when everyone uses AI tools to answer their questions? We'll be back in the world of encyclopedias, in which central authorities spent large amounts of money manually collecting information and publishing it, and then spent a good amount of time finding ways to sell that information to us, which was only fair, given all the time they spent collating it. The internet pretty much destroyed that business model, and in some sense the AI "revolution" is trying to bring it back.

Also, he's specifically talking about having a coding tool write the code for you; he's not talking about using an AI tool to answer a question so that you can go ahead and write the code yourself. These are different things, and he treats them differently.

socalgal2:
> ChatGPT and Gemini literally only know the answer because they read Stack Overflow. Stack Overflow only exists because it has visitors.

I know this isn't true, because I work on an API that has no answers on Stack Overflow (it's too new), nor anywhere else. Yet the AI seems to be able to accurately answer many questions about it. To be honest, I've been somewhat shocked by this.

bbarnett:
It is absolutely true: AI cannot think, reason, or comprehend anything it has not seen before. If you're getting answers, either it has seen the material elsewhere, or it's literally dumb statistical luck.

That doesn't mean it knows the answer. That means it guessed or hallucinated correctly. Guessing isn't knowing.

edit: people seem to be missing my point, so let me rephrase. Of course AIs don't think, but that wasn't what I was getting at. There is a vast difference between knowing something and guessing.

Guessing, even in humans, is just the mind statistically and automatically weighing probabilities and suggesting what might be the answer.

This is akin to what a model might do without any real information. Yet in both cases there's zero validation that anything is even remotely correct. It's 100% conjecture.

It therefore doesn't know the answer; it guessed it.

When it comes to a language or API with zero public info, it's pure happenstance that it got things right. It's important to know the difference, and not to say it "knows" the answer. It doesn't. It guessed.
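
To make the "guessing" point concrete, here's a toy sketch (my own illustration in Python, not any real model's internals): generation is just weighted sampling from a learned distribution over continuations, and nothing in the loop ever checks the output against reality. The token strings and probabilities below are invented.

    import random

    # Invented probabilities for the next token after a prompt like
    # "call the new API with ..." -- purely illustrative numbers.
    next_token_probs = {"frobnicate()": 0.55, "frobnify()": 0.30, "frob()": 0.15}

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())

    # The "answer" is whatever the weighted dice land on. There is
    # no separate validation step against the real API.
    print(random.choices(tokens, weights=weights, k=1)[0])

Most of the time that prints the most likely token, which looks like knowledge; sometimes it prints one of the others, which looks like a hallucination. Same mechanism either way.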

One of the biggest issues with LLMs is that we don't get any probability or confidence back with the response. You ask a human "Do you know how this works?", and an honest, helpful human might say "No", or "No, but you should try this. It might work."

That's helpful.

Conversely, a human who pretends to know and speaks with deep authority when they don't is a liar.

LLMs need more of this type of response, something that indicates certainty or the lack of it. They're useless without it. But of course an LLM that admits a lack of certainty is one customers might use less, or trust less, so... profits first! Speak with certainty on all things!
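
For what it's worth, some APIs do expose raw per-token probabilities, which is the closest thing to a certainty signal we get today. A minimal sketch, assuming the OpenAI Python client (the model name and prompt are just examples):

    import math
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model
        messages=[{"role": "user", "content": "How does this API work?"}],
        logprobs=True,
        top_logprobs=3,
    )

    # Per-token probabilities: how likely each token was to be
    # generated, NOT whether the claim it makes is true.
    for tok in resp.choices[0].logprobs.content:
        print(f"{tok.token!r}: p={math.exp(tok.logprob):.2f}")

But that measures fluency, not knowledge: the model can be very "sure" of a plausible-sounding wrong answer, which is exactly the problem.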

lechatonnoir:
This is such a pointless, tired take.

You want to say this guy's experience isn't reproducible? That's one thing, but that's probably not the case unless you're assuming they're pretty stupid themselves.

You want to say that it is reproducible, but that "that doesn't mean AI can think"? Okay, but that's not what the thread was about.