
493 points todsacerdoti | 30 comments
JonChesterfield ◴[] No.44382974[source]
Interesting. Harder line than the LLVM one found at https://llvm.org/docs/DeveloperPolicy.html#ai-generated-cont...

I'm very old man shouting at clouds about this stuff. I don't want to review code the author doesn't understand and I don't want to merge code neither of us understand.

replies(8): >>44383040 #>>44383128 #>>44383155 #>>44383230 #>>44383315 #>>44383409 #>>44383434 #>>44384226 #
compton93 ◴[] No.44383040[source]
I don't want to review code the author doesn't understand

This really bothers me. I've had people ask me to do some task except they get AI to provide instructions on how to do the task and send me the instructions, rather than saying "Hey can you please do X". It's insulting.

replies(4): >>44383112 #>>44383861 #>>44386706 #>>44387097 #
1. andy99 ◴[] No.44383112[source]
Had someone higher up ask about something in my area of expertise. I said I didn't think it was possible; he followed up with a chatGPT conversation he had where it "gave him some ideas that we could use as an approach", as if that was some useful insight.

These are the same people who think that "learning to code" is a translation issue they don't have time for, as opposed to experience they don't have.

replies(11): >>44383199 #>>44383252 #>>44383294 #>>44383446 #>>44383599 #>>44383887 #>>44383941 #>>44383965 #>>44386199 #>>44388138 #>>44390838 #
2. candiddevmike ◴[] No.44383199[source]
Imagine a boring dystopia where everyone is given hallucinated tasks from LLMs that may in some crazy way be feasible but aren't, and you can't argue that they're impossible without being fired since leadership lacks critical thinking.
replies(3): >>44383246 #>>44383580 #>>44384800 #
3. tines ◴[] No.44383246[source]
Reminds me of the wonderful skit, The Expert: https://www.youtube.com/watch?v=BKorP55Aqvg
replies(2): >>44383318 #>>44383670 #
4. a4isms ◴[] No.44383252[source]
> This is the same people that think that "learning to code" is a translation issue they don't have time for as opposed to experience they don't have.

This is very, very germane and a very quotable line. And these people have been around from long before LLMs appeared. These are the people who dash off an incomplete idea on Friday afternoon and expect to see a finished product in production by next Tuesday, latest. They have no self-awareness of how much context and disambiguation is needed to go from "idea in my head" to working, deterministic software that drives something like a process change in a business.

replies(2): >>44383411 #>>44385328 #
5. alluro2 ◴[] No.44383294[source]
A friend experienced a similar thing at work - he gave a well-informed assessment of why something was difficult to implement and would take a couple of weeks, based on his knowledge of the system and experience with it - only for the manager to reply within 5 min with a screenshot of an (even surprisingly) idiotic ChatGPT reply, and a message along the lines of "here's how you can do it, I guess by the end of the day".

I know several people like this, and it seems they feel like they have god powers now - and that they alone can communicate with "the AI" in this way that is simply unreachable by the rest of the peasants.

replies(4): >>44383594 #>>44383716 #>>44385869 #>>44387589 #
6. stirfish ◴[] No.44383318{3}[source]
And the solution: https://www.youtube.com/watch?v=B7MIJP90biM
7. bobjordan ◴[] No.44383411[source]
You can change "software" to "hardware" and this is still an all too common viewpoint, even for engineers that should know better.
8. alganet ◴[] No.44383446[source]
In corporate, you are _forced_ to trust your coworkers somehow and swallow it. Especially higher-ups.

In free software, though, these kinds of nonsense suggestions always happened, way before AI. Just look at any project mailing list.

It is expected that any new suggestion will encounter some resistance, and new contributors themselves should be aware of that. For serious projects specifically, the levels of skepticism are usually way higher than in corporations, and that's healthy and desirable.

9. whoisthemachine ◴[] No.44383580[source]
Unfortunately this is the most likely outcome.
10. OptionOfT ◴[] No.44383594[source]
Same here. You throw a question in a channel. Someone responds in 1 minute with a code example that you either had lying around, or would take > 5 minutes to write.

The code example was AI generated. I couldn't find a single line of code anywhere in any codebase. 0 examples on GitHub.

And of course it didn't work.

But it sent me on a wild goose chase because I trusted this person to give me a valuable insight. It pisses me off so much.

replies(1): >>44386873 #
11. colechristensen ◴[] No.44383599[source]
People keep asking me if AI is going to take my job and recent experience shows that it very much is not. AI is great for being mostly correct and then giving someone without enough context a mostly correct way to shoot themselves in the foot.

AI further encourages the problem in DevOps/Systems Engineering/SRE where someone comes to you and says "hey can you do this for me", having come up with the solution themselves, instead of giving you the problem: "hey can you help me accomplish this". AI gives them solutions, which are more steps removed from what really needs to be done and harder to untangle.

AI has knowledge, but it doesn't have taste. Especially when it doesn't have all of the context a person with experience has, it has bad taste in solutions, or just the absence of taste, with the additional problem that it makes it much easier for people to do things.

Permissions on what people can read and what they can change are now going to have to be more restricted, because not only are we dealing with folks who have limited experience with permissions, we now have them empowered by AI to do more things that are less advisable.

replies(1): >>44387184 #
12. dotancohen ◴[] No.44383670{3}[source]
That is incredibly accurate - I used to be at meetings like that monthly. Please submit this as an HN discussion.
13. AdieuToLogic ◴[] No.44383716[source]
> I know several people like this, and it seems they feel like they have god powers now - and that they alone can communicate with "the AI" in this way that is simply unreachable by the rest of the peasants.

A far too common trap people fall into is the fallacy of "your job is easy as all you have to do is <insert trivialization here>, but my job is hard because ..."

Statistically generated text (token) responses constructed by LLM's to simplistic queries are an accelerant to the self-aggrandizing problem.

14. ◴[] No.44383887[source]
15. joshstrange ◴[] No.44383941[source]
I’ve started to experience/see this and it makes me want to scream.

You can’t dismiss it out of hand (especially with it coming from up the chain) but it takes no time at all to generate by someone who knows nothing about the problem space (or worse, just enough to be dangerous) and it could take hours or more to debunk/disprove the suggestion.

I don’t know what to call this? Cognitive DDOS? Amplified Plausibility Attack? There should be a name for it and it should be ridiculed.

replies(1): >>44385285 #
16. petesergeant ◴[] No.44383965[source]
> Had someone higher up ask about something in my area of expertise. I said I didn't think it was possible, he followed up with a chatGPT conversation he had where it "gave him some ideas that we could use as an approach", as if that was some useful insight.

I would find it very insulting if someone did this to me, for sure, as well as a huge waste of my time.

On the other hand I've also worked with some very intransigent developers who've actively fought against things they simply didn't want to do on flimsy technical grounds, knowing it couldn't be properly challenged by the requester.

On yet another hand, I've also been subordinate to people with a small amount of technical knowledge -- or a small amount of knowledge about a specific problem -- who'll do the exact same thing without ChatGPT: fire a bunch of mid-wit ideas downstream that you have already thought about, but you then need to spend a bunch of time explaining why their hot-takes aren't good. Or the CEO of a small digital agency I worked at circa 2004 asking us if we'd ever considered using CSS for our projects (which were of course CSS heavy).

17. turol ◴[] No.44384800[source]
That is a very good description of the Paranoia RPG.
18. whatevertrevor ◴[] No.44385285[source]
It's simply the Bullshit Asymmetry Principle/Brandolini's Law. It's just that bullshit generation speedrunners have recently discovered tool-assists.
19. 1dom ◴[] No.44385328[source]
The unfortunate truth is that approach does work, sometimes. It's really easy and common for capable engineers to think their way out of doing something because of all the different objections they can raise to it.

Sometimes, an unreasonable dumbass whose only authority comes from corporate hierarchy is needed to mandate that the engineers start chipping away at the tasks. If they weren't a dumbass, they'd know the thing they're mandating is unreasonable, and if they weren't unreasonable, they wouldn't mandate that someone does it.

I am an engineer. "Sometimes" could be swapped for "rarely" above, but the point still stands: as much frustration as I have towards those people, they do occasionally lead to the impossible being delivered. But then again, a stopped clock is right twice a day, etc.

replies(2): >>44385818 #>>44389884 #
20. taleinat ◴[] No.44385818{3}[source]
That approach sometimes does work, but usually very poorly and often not at all.

It can work very well when the higher-up is well informed and does have deep technical experience and understanding. Steve Jobs and Elon Musk are great, well-known examples of this. They've also provided great examples of the same approach mostly failing when applied outside of their areas of deep expertise and understanding.

21. spit2wind ◴[] No.44385869[source]
Sounds like a teachable moment.

If it's that simple, sounds like you've got your solution! Go ahead and take care of it. If it fits V&V and other normal procedures, like passing tests and documentation, then we'll merge it in. Shouldn't be a problem for you since it will only take a moment.

replies(1): >>44389001 #
22. sltr ◴[] No.44386199[source]
Reminds me of "Appeal to Aithority". (not a typo)

An LLM said it, so it must be true.

https://blog.ploeh.dk/2025/03/10/appeal-to-aithority/

23. mailund ◴[] No.44386873{3}[source]
I experienced mentioning an issue I was stuck on during standup one day, then some guy on my team DMs me a screenshot of chatGPT with text about how to solve the issue. When I explained to him why the solution he had sent me didn't make sense and wouldn't solve the issue, he pasted my reply into the LLM and sent me back its response, at which point I stopped responding.

I'm just really confused about what people who send LLM content to other people think they are achieving. If I wanted an LLM response, I would just prompt the LLM myself, instead of doing it indirectly through another person who copy/pastes back and forth.

24. MoreQARespect ◴[] No.44387184[source]
The question of whether it takes jobs away is more about whether one programmer with taste can multiply their productivity by ~3-15x and take the same salary while demand for coding remains constant. It's less about whether the tool can directly replace 100% of the functions of a good programmer.
25. latexr ◴[] No.44387589[source]
> and a message along the lines of "here's how you can do it, I guess by the end of the day".

— How about you do it, motherfucker?! If it’s that simple, you do it! And when you can’t, I’ll come down there, push your face on the keyboard, and burn your office to the ground, how about that?

— Well, you don’t have to get mean about it.

— Yeah, I do have to get mean about it. Nothing worse than an ignorant, arrogant, know-it-all.

If Harlan Ellison were a programmer today.

https://www.youtube.com/watch?v=S-kiU0-f0cg&t=150s

replies(1): >>44388978 #
26. masfuerte ◴[] No.44388138[source]
You should send him a chatGPT critique of his management style.

(Or not, unless you enjoy workplace drama.)

27. alluro2 ◴[] No.44388978{3}[source]
Hah, that's a good clip :) Those "angry people" are really essential as an outlet for the rest of us.
28. alluro2 ◴[] No.44389001{3}[source]
Absolutely agree :) If only he wasn't completely non-technical, managing a team of ~30 devs of varying skill levels and experience - which is the root cause of most of the issues, I assume.
29. lowbloodsugar ◴[] No.44389884{3}[source]
If they're only right twice a day, you can run out of money doing stupid things before you hit midnight. In practice, there's a difference between a PHB asking a "stupid" question that leads to engineers having a lightbulb moment, vs a PHB insisting on going down a route that will never work.
30. itslennysfault ◴[] No.44390838[source]
At a company I used to work at, I saw the CEO do this publicly (on Slack) to the CTO, who was an absolute expert on the topic at hand and had spent 1000s of hours optimizing a specific system. Then the CEO comes in and says "I think this will fix our problems" (link to ChatGPT convo). SOO insulting. That was the day I decided I should start looking for a new job.