
685 points jclarkcom | 2 comments
divvvyy ◴[] No.45948256[source]
Wild tale, but very annoying that he wrote it with an AI. It's horribly jarring to read.
replies(7): >>45948370 #>>45948402 #>>45948417 #>>45948447 #>>45948460 #>>45948476 #>>45950991 #
Grimblewald ◴[] No.45948370[source]
How do you know?

I'm not trying to be recalcitrant; rather, I am genuinely curious. The reason I ask is that no one talks like an LLM, but LLMs do talk like someone. LLMs learned to mimic human speech patterns, and some unlucky soul(s) out there have had their voice stolen. Earlier versions of LLMs that more closely followed the pattern and structure of a Wikipedia entry were mimicking a style based on someone else's, and given that some wiki users had prolific levels of contribution, much of their naturally written text would register as highly likely to be "AI" via those bullshit AI detector tools.

So, given what we know of LLMs (transformers, at least) at this stage, it seems more likely to me that current speech patterns are again mimicry of someone's style rather than an organically grown/developed thing that is personal to the LLM.

replies(4): >>45948451 #>>45948470 #>>45948568 #>>45949584 #
gmzamz ◴[] No.45948451[source]
Looks like AI to me too. Em dashes (albeit nonstandard) and the ‘it’s not just x, it’s y’ ending phrases were everywhere. It's harder to put into words, but there's a sense of grandiosity in the article too.

Not saying the article is bad; it seems pretty good. Just that there are indications.

replies(1): >>45948699 #
lynndotpy ◴[] No.45948699[source]
It's also strange to suggest readers use ChatGPT or Claude to analyze email headers.

Might as well say "You can tell by the way it is".

replies(1): >>45949320 #
jclarkcom ◴[] No.45949320[source]
I don’t understand this comment. I’ve found AI a great tool for identifying red flags in scam emails and wanted to share that.
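
Concretely, here's the kind of thing I mean. A minimal sketch using the OpenAI Python SDK; the model name and prompt wording are placeholders, not a recommendation:

    # Minimal sketch: ask an LLM to flag red flags in raw email headers.
    # Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
    # The model name below is a placeholder; substitute whatever frontier
    # model you actually trust for this.
    from openai import OpenAI

    client = OpenAI()

    def analyze_headers(raw_headers: str) -> str:
        prompt = (
            "You are an email security analyst. Review these raw email "
            "headers and list any red flags (SPF/DKIM/DMARC failures, "
            "mismatched Return-Path vs. From, suspicious Received hops, "
            "lookalike domains). Be explicit about uncertainty.\n\n"
            + raw_headers
        )
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        import sys
        print(analyze_headers(sys.stdin.read()))

Pipe in the full raw headers (in Gmail, "Show original"), not just the rendered From line.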
replies(4): >>45949799 #>>45952459 #>>45967928 #>>45973410 #
1. lynndotpy ◴[] No.45967928[source]
The content ChatGPT returns is non-deterministic (you will get different responses on the same day for the same email), and these models change over time. Even if you're an expert in your field and you can assess that the chatbot returned correct information for one entry, that's not guaranteed to be repeated.
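
You can verify the non-determinism yourself. A minimal sketch, again assuming the OpenAI Python SDK, with a placeholder model name:

    # Quick empirical check: send an identical prompt N times and count
    # distinct responses. Assumes `pip install openai` and OPENAI_API_KEY.
    from openai import OpenAI

    client = OpenAI()
    PROMPT = "Is this email header suspicious? Received: from mail.example.com ..."

    responses = set()
    for _ in range(5):
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder; even temperature=0 does not
                             # guarantee identical outputs across calls
            messages=[{"role": "user", "content": PROMPT}],
        )
        responses.add(resp.choices[0].message.content)

    print(f"{len(responses)} distinct responses from 5 identical requests")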

You're staking your personal reputation on the output of something you can expect to be wrong. If someone gets a suspicious email, follows your advice, and ChatGPT incorrectly assures them that it's fine, then the scammed person would be correct in thinking you're a person with bad advice.

And if you don't believe my arguments, maybe just ask ChatGPT to generate a persuasive argument against using ChatGPT to identify scam emails.

replies(1): >>46015662 #
2. jclarkcom ◴[] No.46015662[source]
It's a good point, and I should make a distinction about which models are appropriate. I think of ChatGPT 4 like a college student and ChatGPT 5.1/5 Pro (the deep-thinking models) more like a seasoned professional. I wouldn't trust non-frontier, non-thinking models with a result for this kind of question. But the non-determinism of the result does not scare me; the output may vary, but not directionally. The same thing would happen if you asked the foremost security expert in the world: you'd get slightly different answers on different days. One time, as a test, I ran a very complex legal analysis through ChatGPT Pro 10 times to see how the results would vary, and it was pretty consistent, with ~10% variation in the numbers it suggested.
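
My test was done by hand in the ChatGPT app, but here's a rough sketch of how you could script the same kind of consistency check against the API. The model name, prompt, and the number extraction are all illustrative:

    # Rough sketch of a consistency check: run the same analysis N times,
    # pull out the numbers each run suggests, and look at the spread.
    # Assumes `pip install openai` and OPENAI_API_KEY.
    import re
    import statistics
    from openai import OpenAI

    client = OpenAI()
    PROMPT = "Analyze this contract and estimate damages in dollars: ..."

    estimates = []
    for _ in range(10):
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder; I used a Pro/thinking model
            messages=[{"role": "user", "content": PROMPT}],
        )
        text = resp.choices[0].message.content
        # Naive extraction: grab every number, keep the largest figure.
        numbers = [float(n.replace(",", ""))
                   for n in re.findall(r"\d[\d,]*\.?\d*", text)]
        if numbers:
            estimates.append(max(numbers))

    if estimates:
        mean = statistics.mean(estimates)
        spread = (max(estimates) - min(estimates)) / mean
        print(f"mean={mean:,.0f}, relative spread={spread:.1%}")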