
881 points | embedding-shape | 2 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text written by real humans (or so one assumes).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other existing guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

1. kreck ◴[] No.46209688[source]
Yes.

Saying “ChatGPT told me …” is a fast track to getting your input dismissed on our team. That phrasing shifts accountability from you to the AI. If we really wanted advice straight from the model, we wouldn’t need a human in the loop - we’d ask it ourselves.

replies(1): >>46210146 #
2. abustamam ◴[] No.46210146[source]
We use AI heavily in our product and development flow. Sometimes we'd encounter a problem none of us could figure out at the moment, so some of us would use ChatGPT to brainstorm solutions. We'd present the solutions, poke holes in them, and go forward from there. Sometimes we don't use the actual ideas from GPT, but ideas that were inspired by the generated ones.

The intent isn't to shift accountability; it's to brainstorm. A shitty idea gets shot down quickly, whereas a good idea gets implemented.

Edit: sentence