
Glubux's Powerwall (2016)

(secondlifestorage.com)
386 points | bentobean | 6 comments
Kaytaro ◴[] No.43550650[source]
The 2nd quote is when I realized this article was written or assisted by AI. Not that it's a big deal, that's our world now. But it's interesting to notice the subtle 'accent' that gives it away.
replies(5): >>43550687 #>>43550692 #>>43550882 #>>43551747 #>>43552109 #
dartos ◴[] No.43550687[source]
What about it gives off the AI smell to you?
replies(3): >>43550700 #>>43551005 #>>43551290 #
endtime ◴[] No.43550700[source]
Nice try, ChatGPT.

More seriously, for me it's the "likely".

replies(1): >>43550717 #
ziddoap ◴[] No.43550717[source]
Using "likely" is indicative of AI now...?

Absurd.

The only thing as annoying as people using AI and passing it off as their own writing is people claiming that anything not written exactly the way they're used to is AI.

replies(3): >>43550793 #>>43551020 #>>43551062 #
cyral ◴[] No.43550793[source]
> This task, which likely required a great deal of manual labor and technical knowledge, was key to making the system work effectively and sustainably.

This is obviously AI. The writer should know that it either required manual labor or it did not, not maybe (AI loves to not "commit" to an answer and rather say maybe/likely). It also loves to loop in some vague claim about X being effective, sustainable, ethical, etc without providing any information as to WHY it is.

That and it being published on some blog spam website called techoreon.

Edit: For fun, I had o1-mini produce an article from the original source (Techspot it looks like), and it produced a similar line:

> This ingenious approach likely required significant manual effort and technical expertise, but the results speak for themselves, as evidenced by the system's eight-year flawless operation.

What these sites are doing is rewriting articles from legitimate sources, and then selling SEO backlinks to their "news" website full of generated content (and worthless backlinks). It's how all those scammy Fiverr link services work.

replies(5): >>43550936 #>>43550944 #>>43551036 #>>43551221 #>>43551358 #
ziddoap ◴[] No.43550936[source]
At least this is a better effort at explaining why you would believe it is AI than the other poster, who just said it's AI because the writer used the word "likely".

I still find it very annoying that in every thread about a blog post there's someone shouting "AI!" because there's an em dash, bullet points, or some common word/saying (e.g. "likely", "crucially", "in conclusion"). It's been more intrusive in my life than actual AI writing has been.

I've been accused of using AI for writing because I have used parentheses, ellipses, various common words, because I structured a post with bullet points and a conclusion section, etc. It's wildly frustrating.

replies(3): >>43551132 #>>43551317 #>>43551615 #
zahlman ◴[] No.43551317{5}[source]
As someone who "detects" AI frequently: it's often difficult or impossible to explain where the sense comes from. It can be very much a matter of intuition, but of course it's awkward to admit that publicly. I don't fault others for coming up with an overly simple explanation.
replies(1): >>43551541 #
buttercraft ◴[] No.43551541{6}[source]
How do you know how accurate you are? How do you know when you're wrong?
replies(2): >>43551672 #>>43551980 #
ifyoubuildit ◴[] No.43551672[source]
I think this is an excellent question and one people should be asking themselves frequently. I often get the impression that commenters have not considered this.

For example, whenever someone on the internet makes a claim about "most x", e.g. most people this, most developers that. What does anyone actually know about "most" anything? I think the answer "pretty much nothing".

replies(1): >>43551744 #
cyral ◴[] No.43551744[source]
Yes, this is an important point. Insert the survivorship bias plane picture that always gets posted when someone makes this mistake on other platforms (Twitter). We can be accurate at detecting poor AI writing attempts, but not know how much AI writing is good enough to go undetected.
replies(1): >>43551889 #
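The asymmetry above (we can count the AI text we catch, but not the AI text good enough to slip past) can be sketched with numbers. Every figure below is a made-up assumption for illustration, not data:

```python
def detection_stats(ai_texts, detectable_fraction, recall_on_detectable):
    """Count caught vs. missed AI texts under stated (hypothetical) assumptions.

    ai_texts: total number of AI-written texts in the pool
    detectable_fraction: share of them that read "obviously AI"
    recall_on_detectable: share of the obvious ones a reader actually flags
    """
    detectable = ai_texts * detectable_fraction
    caught = detectable * recall_on_detectable
    missed = ai_texts - caught  # includes everything good enough to pass
    return caught, missed

# Suppose 1000 AI-written posts, only 30% of which are obviously AI,
# and readers flag 90% of the obvious ones. (All numbers invented.)
caught, missed = detection_stats(1000, 0.30, 0.90)
print(f"caught={caught:.0f}, missed={missed:.0f}")  # caught=270, missed=730
```

The flagger looks highly accurate on the texts they flag, yet most AI text in this scenario goes unnoticed, and nothing in the flagger's experience tells them that.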
numpad0 ◴[] No.43551889{3}[source]
Someone should run a double-blind test app; there was an adversarially crafted one for images, and people still only averaged around 60% accuracy. We all think we can just glance at the data and detect AI generation, like those experts who can watch logs scroll by and call out the problem.
zahlman ◴[] No.43551980[source]
If I'm being entirely honest, in the general case I don't.

But I don't particularly care, either. After a couple tries I decided it's better not to point at object examples of suspected LLM text all the time (except e.g. to report it on Stack Overflow, where it's against the rules and where moderators will use actual detection software etc. to try to verify). But I still notice that style of writing instinctively, and it still automatically flips a switch in my brain to approach the content differently. (Of course, even when I'm confident that something was written by a human, I still e.g. try to verify terminal commands with the man pages before following instructions I don't understand.)

Of course, AI writes the way it does for a reason. More worryingly, it increasingly seems like (verifiably) human writers are mimicking the style: they see so much AI-generated text out there that sounds authoritative that they start using the same rhetorical techniques to gain that same air of authority.

replies(1): >>43553792 #
buttercraft ◴[] No.43553792[source]
> still notice that style of writing instinctively, and it still automatically flips a switch in my brain

See, this is what worries me. We have unknowable years of instinct, and none of it is tuned for what is happening now.