
Using LLMs at Oxide

(rfd.shared.oxide.computer)
694 points by steveklabnik | 2 comments
bryancoxwell No.46178509
I find it interesting that the section about LLM tells, when using them for writing, is absolutely littered with em-dashes.
replies(4): >>46178523 >>46178524 >>46178632 >>46178868
minimaxir No.46178524
You can stop LLMs from using em-dashes by just telling them to "never use em-dashes". The same type of prompt engineering works to mitigate almost every sign of AI-generated writing, which is one reason why AI-writing heuristics/detectors can never be fully reliable.
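
A minimal sketch of that kind of negative instruction, assuming the OpenAI Python client; the model name and prompt contents are illustrative:

    # A negative instruction against one stylistic tell, set in the
    # system prompt. Assumes the OpenAI Python client; the model name
    # and prompts are illustrative.
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Never use em-dashes. Prefer commas, colons, or separate sentences."},
            {"role": "user",
             "content": "Write a short paragraph explaining what an RFD is."},
        ],
    )
    print(resp.choices[0].message.content)
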
replies(2): >>46178654 >>46181995
1. jgalt212 No.46181995
I guess, but even if you set aside the obvious tells, pretty much all expository writing out of an LLM still reads like pablum: no real conviction, and tons of hedges around any opinion it does express.

"lack of conviction" would be a useful LLM metric.

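As a toy sketch of what such a metric might look like (the hedge-phrase list and scoring are purely illustrative, not a validated detector):

    import re

    # Illustrative hedge phrases; a real metric would need a far better
    # lexicon or a trained classifier.
    HEDGES = [
        "arguably", "perhaps", "it could be argued",
        "some might say", "it is important to note",
    ]

    def hedge_density(text: str) -> float:
        """Hedge phrases per word: a crude 'lack of conviction' score."""
        lowered = text.lower()
        words = re.findall(r"[a-z']+", lowered)
        if not words:
            return 0.0
        hits = sum(len(re.findall(re.escape(h), lowered)) for h in HEDGES)
        return hits / len(words)

    print(hedge_density("It could be argued that this may, perhaps, be useful."))
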
replies(1): >>46182934
2. minimaxir No.46182934
I ran a test for a potential blog post where I took every indicator of AI writing and told the LLM "don't do any of these", and the result was high-school-AP-English-quality writing. Which could be considered a "lack of conviction" level of writing.
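
A rough sketch of that kind of test, assuming the OpenAI Python client; the list of tells and the model name here are illustrative stand-ins, not the actual list used:

    # Collect common indicators of AI writing and instruct the model to
    # avoid all of them. The tell list and model name are illustrative
    # stand-ins; assumes the OpenAI Python client.
    from openai import OpenAI

    AI_TELLS = [
        "em-dashes",
        "the word 'delve'",
        "rule-of-three sentence structures",
        "'it's not X, it's Y' framing",
        "a concluding paragraph that restates the piece",
    ]

    system_prompt = (
        "You are an essayist. Do not use any of the following: "
        + "; ".join(AI_TELLS) + "."
    )

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user",
             "content": "Write a 300-word opinion piece on code review."},
        ],
    )
    print(resp.choices[0].message.content)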