
321 points | distantprovince | 4 comments
1. ninetyninenine (No.44617585)
The problem here is that I’ve been accused multiple times of using LLMs to write slop when it was genuinely written by myself.

So I apologized and began actually using LLMs, making sure the prompt included style guides and rules to avoid the telltale signs of AI. Then some of these geniuses thanked me for being more genuine in my responses.

A lot of this stuff is delusional. You only find it rude because you're aware it's written by AI; the awareness itself is what triggers it. In reality, you can't tell the difference.

This post, for example.

replies(3): >>44617641, >>44617986, >>44621878
2. [deleted] (No.44617641)
3. scarface_74 (No.44617986)
I did too. The AWS "house style" of writing (I'm a former ProServe employee) could come across as AI slop even before LLMs existed. Look at some of the blog posts on AWS from pre-2021.

I too use an LLM to help me get rid of generic filler, but I have my own style of technical writing and editing. You would never know I use an LLM.

4. card_zero (No.44621878)
Shades of Cyrano de Bergerac and pig-butchering scams, which led me to read about Milgram's "cyranoids": https://en.wikipedia.org/wiki/Cyranoid

And then "echoborgs": https://en.wikipedia.org/wiki/Echoborg

On the whole, it's considered bad to mislead people. If my love letter to you is in fact a pre-written form, "my darling [insert name here]," and you suspect as much, but your suspicion is just baseless paranoia plus a lucky guess, then I suppose you're being delusional and I'm not being rude. But I'm still doing something wrong. Even if you never suspect, and I call off the scam, I was still messing with you.

But the definition of "misleading" is tricky, because we all have personas and need them in order to communicate. Communication in any context is a kind of honest, sincere play-acting.