
175 points by koch | 1 comment
janaagaard:
A Danish audio newspaper host / podcaster reached the exact opposite conclusion when he used ChatGPT to write the manuscript for one of his episodes. He ended up spending as much time as he usually does, because he had to fact-check everything the LLM came up with. Spoiler: it made up a lot of stuff, despite the prompt being very clear that it should not do so. And the part he finds the most fun, writing the manuscript, was exactly the part the chatbot took over. His conclusion about artificial intelligence was this:

“We thought we were getting an accountant, but we got a poet.”

Frederik Kulager: I got ChatGPT to write this episode, and tested whether my editor-in-chief would notice. https://open.spotify.com/episode/22HBze1k55lFnnsLtRlEu1?si=h...

notachatbot123:
> It made up a lot of stuff, despite the prompt being very clear that it should not do so.

LLMs are not sentient. They are designed to make stuff up based on probability.
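
To make "based on probability" concrete, here is a toy sketch in plain Python — no real model, and the distribution is invented for illustration. Decoding samples the next token from a distribution over plausible continuations; truth is simply not an input to the sampler.

```python
# Toy sketch: an LLM decoder samples the next token from a probability
# distribution. Nothing in this mechanism checks the answer against reality.
import random

# Hypothetical next-token distribution after a factual-sounding prompt.
next_token_probs = {
    "Paris": 0.45,         # plausible and correct
    "Lyon": 0.30,          # plausible but wrong
    "Berlin": 0.20,        # plausible but wrong
    "Gazorpazorp": 0.05,   # rare nonsense
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick a token proportionally to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# In this made-up distribution, a fluent wrong answer comes out 55% of
# the time. The sampler has no notion of "made up" versus "true".
print(sample_next_token(next_token_probs))
```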

6510:
Making stuff up is not actually an issue. What matters is how you present it. If I were less sure about this, I would write: making stuff up might not be an issue; it could be that how you present it is more important. Even less sure: perhaps it would help if it didn't sound equally confident about everything?
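
To illustrate that last, least-sure suggestion: a toy sketch of presenting an answer with hedging that tracks the model's own confidence. The thresholds and phrasings are invented, and it assumes per-token log-probabilities are available (many model APIs expose these as "logprobs").

```python
# Toy sketch: choose a hedge from the model's mean per-token probability.
# Thresholds and wording are invented; "logprobs" availability is assumed.
import math

def hedged(answer: str, token_logprobs: list[float]) -> str:
    """Prefix an answer with a hedge derived from its mean token probability."""
    mean_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    if mean_prob > 0.9:
        return answer
    if mean_prob > 0.6:
        return f"This might not be right, but: {answer}"
    return f"I'm guessing here: {answer}"

# Same presentation logic, three confidence levels (logprobs made up).
print(hedged("Copenhagen is the capital of Denmark.", [-0.01, -0.02, -0.01]))
print(hedged("The episode aired in 2023.", [-0.4, -0.5, -0.6]))
print(hedged("The host's editor noticed.", [-1.2, -1.5, -1.1]))
```

The point is only the shape of the idea: the same answer reads very differently once its presentation carries the uncertainty, which is exactly what "equally confident about everything" loses.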