
881 points by embedding-shape | 1 comment

As various LLMs become more and more popular, so do comments of the form "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (presumably, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

ManlyBread ◴[] No.46206958[source]
I think the whole point of a discussion forum is to talk to other people, so I'm in favor of banning AI replies. There's zero value in these posts: anyone can type chatgpt.com into their browser and ask whatever question they want at any time, while getting input from another human being is not always guaranteed.
replies(2): >>46208341 #>>46212480 #
Aachen ◴[] No.46208341[source]
You're like the 9th out of the 10 top-level replies I've read so far who says this, with the 10th saying it in a different way (without suggesting they could have asked it themselves). What I find interesting is that everyone agrees and nobody argues for prompt engineering; that is, nobody says it's helpful when a skilled querier shares responses from the system. Apparently the sentiment now is that literally anybody else could have done the same without thought.

Whether prompt engineering is a skill is perhaps a different topic. I just found this meta-statistic in this thread interesting to observe.

replies(2): >>46208812 #>>46212223 #
ManlyBread ◴[] No.46212223[source]
This is probably the first time I've seen the term "prompt engineer" mentioned this year. I thought that joke had run its course back in 2023 and was largely forgotten by now.
replies(1): >>46215218 #
alwa ◴[] No.46215218[source]
A silly name, but I've definitely watched peers coax sensible results out of braggadocious LLMs… and I've also watched friends say "make me an app that enters the TPS report data for me" (or "make a fully playable Grand Theft Auto, but on Mars") and be surprised that the result is trash.