321 points distantprovince | 1 comments | | HN request time: 0.205s | source
skeledrew ◴[] No.44617731[source]
Not seeing a problem here, as long as the one showing the output has reviewed it themselves before showing it, and made the decision to show it based on that review. That's what we should be advocating for. So far what I'm seeing is people slamming or automatically ignoring others on even a vague suspicion that something has been generated.

Just the other day in a chat, I witnessed someone commenting that another person (who had previously sent an AI summary of something) had sent a "block of text" which they wouldn't read because it was too much, then went on to read it once they were told it was from Quora, not generated. It was a wild moment for me, and I said as much.

replies(2): >>44617787 #>>44621175 #
johnnyanmac ◴[] No.44621175[source]
> the one showing the output has reviewed it themselves before showing

Now let's really ask ourselves how this works out in reality. Cut corners. People using LLMs are not using them to enhance their conversation; they are using them to get it over with.

It also doesn't help that, yes, AI-generated text tends to be overly verbose, saying a lot of nothing. There are times when that formality is needed, but not in casual work conversations. Just get to the point.

replies(1): >>44622088 #
skeledrew ◴[] No.44622088[source]
Shortening a conversation is a kind of enhancement: it means a state of satisfaction or completion is reached that much sooner. Why debate back and forth for 20 minutes with incomplete arguments when 2 minutes will suffice by generating a well-prompted thread on the argument? Unless the lengthy arguing is the point.

Get a short answer by including "keep answer short" or similar in the prompt. It just works.