
399 points nomdep | 1 comment
socalgal2 ◴[] No.44296080[source]
> Another common argument I've heard is that Generative AI is helpful when you need to write code in a language or technology you are not familiar with. To me this also makes little sense.

I'm not sure I get this one. When I'm learning new tech I almost always have questions. I used to google them. If I couldn't find an answer, I might try posting on Stack Overflow. Sometimes, as I was typing the question, their search would finally kick in and surface the answer (via similar questions). Other times I'd post the question and, if it didn't get closed, maybe get an answer a few hours or days later.

Now I just ask ChatGPT or Gemini, and more often than not it gives me the answer. That alone, and nothing else (agent modes, AI editing or generating files), is enough to increase my output. I get answers 10x faster than I used to. I'm not sure what that has to do with the point about learning. Getting answers to those questions is learning, regardless of where the answer comes from.

socalgal2 ◴[] No.44296120[source]
To add, another experience I had: I was using an API I'm not that familiar with, and my program was crashing. Looking at the stack trace, I didn't see why. Maybe with many months of experience with this API it would have been obvious, but it certainly wasn't to me. For fun I just copied and pasted the stack trace into Gemini, ~60 frames' worth of C++. It immediately pointed out the likely cause, given the API I was using. Once I had that clue from the AI, I fixed the bug with a two-line change. That seems pretty useful to me. I'm not sure how long it would have taken me to find it otherwise since, as I said, I'm not that familiar with that API.
nottorp ◴[] No.44296616[source]
You remember when Google used to do the same thing for you way before "AI"?

Okay, maybe sometimes the post about the stack trace was in Chinese, but a plain search used to be capable of giving the same answer as an LLM.

It's not that LLMs are better; it's that search got enshittified.

nsonha ◴[] No.44297244[source]
> when Google used to do the same thing for you way before "AI"?

Which is never? Do you often just lie to win arguments? An LLM gives you a synthesized answer; a search engine only returns what already exists. By definition, it cannot give you anything that isn't a super obvious match.

nottorp ◴[] No.44297384[source]
> Which is never?

In my experience it was "a lot", because my stack traces were mostly hardware-related problems on ARM Linux in that period.

But I suppose your stack traces were much different and superior and no one can have stack traces that are different from yours. The world is composed of just you and your project.

> Do you often just lie to win arguments?

I do not enjoy being accused of lying by someone stuck in their own bubble.

When you said "Which is never" did you lie consciously or subconsciously btw?

nsonha ◴[] No.44305949[source]
> your stack traces were much different and superior and no one can have stack traces that are different from yours

Very few devs bother to post stack traces (or programming questions generally) online. They only do that when they're badly stuck.

Most people work out their problem and then move on. If no one posts about it, your search never hits.