
490 points todsacerdoti | 8 comments
JonChesterfield ◴[] No.44382974[source]
Interesting. Harder line than the LLVM one found at https://llvm.org/docs/DeveloperPolicy.html#ai-generated-cont...

I'm very much an old man shouting at clouds about this stuff. I don't want to review code the author doesn't understand, and I don't want to merge code neither of us understands.

replies(8): >>44383040 #>>44383128 #>>44383155 #>>44383230 #>>44383315 #>>44383409 #>>44383434 #>>44384226 #
compton93 ◴[] No.44383040[source]
> I don't want to review code the author doesn't understand

This really bothers me. I've had people ask me to do a task, except instead of just saying "Hey, can you please do X", they get AI to generate instructions for how to do the task and send me those instructions. It's insulting.

replies(4): >>44383112 #>>44383861 #>>44386706 #>>44387097 #
andy99 ◴[] No.44383112[source]
Had someone higher up ask about something in my area of expertise. I said I didn't think it was possible; he followed up with a ChatGPT conversation he'd had where it "gave him some ideas that we could use as an approach", as if that were some useful insight.

These are the same people who think that "learning to code" is a translation issue they don't have time for, as opposed to experience they don't have.

replies(10): >>44383199 #>>44383252 #>>44383294 #>>44383446 #>>44383599 #>>44383887 #>>44383941 #>>44383965 #>>44386199 #>>44388138 #
1. alluro2 ◴[] No.44383294[source]
A friend experienced a similar thing at work: he gave a well-informed assessment of why something was difficult to implement and would take a couple of weeks, based on his knowledge of the system and experience with it, only for the manager to reply within 5 minutes with a screenshot of an (even surprisingly) idiotic ChatGPT answer and a message along the lines of "here's how you can do it, I guess by the end of the day".

I know several people like this, and it seems they feel like they have god powers now, and that they alone can communicate with "the AI" in a way that is simply unreachable by the rest of the peasants.

replies(4): >>44383594 #>>44383716 #>>44385869 #>>44387589 #
2. OptionOfT ◴[] No.44383594[source]
Same here. You throw a question in a channel, and someone responds within a minute with a code example that you either already had lying around or that would take more than 5 minutes to write.

The code example was AI generated. I couldn't find a single line of code anywhere in any codebase. 0 examples on GitHub.

And of course it didn't work.

But it sent me on a wild goose chase, because I trusted this person to give me valuable insight. It pisses me off so much.

replies(1): >>44386873 #
3. AdieuToLogic ◴[] No.44383716[source]
> I know several people like this, and it seems they feel like they have god powers now - and that they alone can communicate with "the AI" in this way that is simply unreachable by the rest of the peasants.

A far too common trap people fall into is the fallacy of "your job is easy as all you have to do is <insert trivialization here>, but my job is hard because ..."

Statistically generated text (token) responses constructed by LLMs to simplistic queries are an accelerant to the self-aggrandizing problem.

4. spit2wind ◴[] No.44385869[source]
Sounds like a teachable moment.

If it's that simple, sounds like you've got your solution! Go ahead and take care of it. If it fits V&V and other normal procedures, like passing tests and documentation, we'll merge it in. Shouldn't be a problem for you, since it will only take a moment.

replies(1): >>44389001 #
5. mailund ◴[] No.44386873[source]
I mentioned an issue I was stuck on during standup one day, and some guy on my team DMed me a screenshot of ChatGPT text about how to solve it. When I explained to him why the solution he'd sent didn't make sense and wouldn't fix the issue, he pasted my reply into the LLM and sent me back its response, at which point I stopped responding.

I'm just really confused about what people who send LLM content to other people think they're achieving. If I wanted an LLM response, I would just prompt the LLM myself, instead of doing it indirectly through another person who copy/pastes back and forth.

6. latexr ◴[] No.44387589[source]
> and a message along the lines of "here's how you can do it, I guess by the end of the day".

— How about you do it, motherfucker?! If it’s that simple, you do it! And when you can’t, I’ll come down there, push your face on the keyboard, and burn your office to the ground, how about that?

— Well, you don’t have to get mean about it.

— Yeah, I do have to get mean about it. Nothing worse than an ignorant, arrogant, know-it-all.

If Harlan Ellison were a programmer today.

https://www.youtube.com/watch?v=S-kiU0-f0cg&t=150s

replies(1): >>44388978 #
7. alluro2 ◴[] No.44388978[source]
Hah, that's a good clip :) Those "angry people" are really essential as an outlet for the rest of us.
8. alluro2 ◴[] No.44389001[source]
Absolutely agree :) If only he weren't completely non-technical while managing a team of ~30 devs of varying skill levels and experience, which I assume is the root cause of most of the issues.