
321 points distantprovince | 1 comment
jmugan No.44617776
I love the post but disagree with the first example: "I asked ChatGPT and this is what it said: <...>". That seems totally fine to me. The sender put work into the prompt, and the recipient is free to read the AI output if they choose.
replies(1): >>44617948 #
guywithahat No.44617948
I think in any real conversation, you're treating the AI as an authority figure to end the discussion, despite the fact that it could easily be wrong. I would extract the logic and defend it on your own footing; that's less rude.
replies(2): >>44618102 #>>44619505 #
justaj No.44619505
And what if you let a human expert fact-check the LLM's output, provided you're transparent about that output (and the prompt(s) that produced it)?

Because I'd much rather ask an LLM about a topic I don't know well and have a human expert verify its answer than waste that expert's time explaining the concept to me from scratch.

Once it's verified, I add it to my own documentation library so I can refer to it later.