
881 points by embedding-shape | 9 comments

As various LLMs become more and more popular, so do comments of the form "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) in a different era, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (or so I assume).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to say that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?

gortok:
While we will never be able to get folks to stop using AI to “help” them shape their replies, it’s super annoying when folks think that by using AI they’re doing others a favor. If I wanted to know what an AI thinks, I’d ask it. I’m here because I want to know what other people think.

At this point, I make value judgments when folks use AI for their writing, and will continue to do so.

hotsauceror:
I agree with this sentiment.

When I hear "ChatGPT says..." on some topic at work, I interpret it as "Let me google that for you, only I neither care about nor respect you enough to bother confirming that the answer is correct."

gardenhedge:
I disagree. It's a potential avenue for further investigation; IMO, AI should always be consulted.
OptionOfT:
But I'm not interested in the AI's point of view; I've already asked it myself.

I want to hear your thoughts, based on your unique experience, not the AI's, which is an average of the experiences in the data it ingested. The things that are unique will not surface, because they aren't seen enough times.

Your value is not in copy-pasting. It's in your experience.

zby:
What if I agree with what the AI wrote? Should I try to hide that it was generated?
MarkusQ:
Did you agree with it before the AI wrote it, though (in which case, what was the point of involving the AI)?

If you agree with it after seeing it, but wouldn't have thought to write it yourself, what reason is there to believe you wouldn't have found some other, contradictory AI output just as agreeable? Since one of the big objections to AI output is that it uncritically agrees with nonsense from the user, sycophancy-squared is even more objectionable. It's worth taking the effort to avoid falling into this trap.

subscribed:
No, but this is different.

"I asked an $LLM and it said" is very different than "in my opinion".

Your opinion may be supported by any sources you want, as long as it's a genuine opinion (yours), presumably something you can defend since it's your opinion.

zby:
Well - the point of involving the AI is that very often it explains my intuitions way better than I can. It instantiates them and fills in all the details, sometimes showing new ways of seeing them.

I find the second paragraph contradictory: either you fear that I would agree with random stuff the AI writes, or you believe that the sycophantic AI is writing what I already believe. I like to think that I can recognise good arguments, but if I'm wrong about that, then why would you prefer my writing over an LLM-generated one?

zby:
I don't know. The linked examples were low quality, sure.
MarkusQ:
> why would you prefer my writing over an LLM-generated one?

Because I'm interested in hearing your voice, your thoughts, as you express them, for the same reason I prefer eating real fruit, grown on a tree, to sucking high-fructose fruit goo squeezed fresh from a tube.

swampangel:
> Well - the point of involving the AI is that very often it explains my intuitions way better than I can. It instantiates them and fills in all the details

> I like to think that I can recognise good arguments, but if I'm wrong about that, then why would you prefer my writing over an LLM-generated one?

Because the AI will happily argue either side of a debate; in both cases, the meaningful/useful/reliable information in the post is constrained by the limits of _your_ knowledge. The LLM-assisted post will merely be longer.

Can you think of a time when you asked AI to support your point, and upon reviewing its argument, decided it was unconvincing after all and changed your mind?

Kim_Bruning:
You could ask Kimi K2 to demolish your point instead, and you may have to hold it back from insulting your mom in the PS.

Generally, if your point still holds up after polishing under Kimi pressure, by all means post it on HN, I'd say.

Other LLMs do tend to be more gentle with you, but if you ask them to be critical or to steelman the opposing view, they can be powerful tools for actually understanding where someone else is coming from.

Try this: ask an LLM to read the view of the person you're replying to, and ask it to steelman their arguments. Then consider whether your point is still defensible, or what kinds of sources or data you'd need to bolster it.
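
If you want to make that a habit, here's a minimal sketch of the workflow as a Python script. To be clear about what's mine: it assumes the OpenAI Python SDK with an OPENAI_API_KEY in the environment, and the function name, prompt wording, and model choice are purely illustrative (point it at Kimi or whatever you actually use):

    # Sketch of a "steelman before you post" helper.
    # Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def steelman_check(their_comment: str, my_draft: str) -> str:
        """Ask the model for the strongest version of the other side,
        then a blunt critique of my draft reply."""
        prompt = (
            "Here is a comment I want to reply to:\n\n"
            f"{their_comment}\n\n"
            "First, steelman it: give the strongest, most charitable "
            "version of its argument.\n"
            "Then critique my draft reply below. Be blunt: flag weak "
            "claims and say what sources or data would bolster them.\n\n"
            f"My draft:\n{my_draft}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative; any capable chat model works
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        print(steelman_check(
            their_comment="LLM-generated comments add nothing to HN.",
            my_draft="I think LLM output can be useful when clearly labeled.",
        ))

If your draft survives that round-trip, post it, but in your own words rather than the model's.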