The 2nd quote is when I realized this article was written or assisted by AI. Not that it's a big deal, that's our world now. But it's interesting to notice the subtle 'accent' that gives it away.
What about it gives off the AI smell to you?
Nice try, ChatGPT.
More seriously, for me it's the "likely".
Using "likely" is indicative of AI now...?
Absurd.
The only thing as annoying as people using AI and passing it off as their own writing is people claiming that anything not written exactly the way they're used to must be AI.
> This task, which likely required a great deal of manual labor and technical knowledge, was key to making the system work effectively and sustainably.
This is obviously AI. The writer should know whether it required manual labor or not, not "likely" (AI loves to avoid committing to an answer and hedges with maybe/likely instead). It also loves to loop in some vague claim about X being effective, sustainable, ethical, etc., without providing any information as to WHY it is.
That and it being published on some blog spam website called techoreon.
Edit: For fun, I had o1-mini produce an article from the original source (Techspot it looks like), and it produced a similar line:
> This ingenious approach likely required significant manual effort and technical expertise, but the results speak for themselves, as evidenced by the system's eight-year flawless operation.
What these sites are doing is rewriting articles from legitimate sources, and then selling SEO backlinks to their "news" website full of generated content (and worthless backlinks). It's how all those scammy Fiverr link services work.
At least this is a better effort at explaining why you'd believe it's AI than the other poster's, who just says it's AI because they used the word "likely".
I still find it very annoying that in every thread about a blog post there's someone shouting "AI!" because there's an em dash, bullet points, or some common word/saying (e.g. "likely", "crucially", "in conclusion"). It's been more intrusive on my life than actual AI writing has been.
I've been accused of using AI for writing because I used parentheses, ellipses, or various common words, because I structured a post with bullet points and a conclusion section, etc. It's wildly frustrating.
As someone who "detects" AI frequently: it's often difficult or impossible to explain where the sense comes from. It can be very much a matter of intuition, but of course it's awkward to admit that publicly. I don't fault others for coming up with an overly simple explanation.
How do you know how accurate you are? How do you know when you're wrong?
I think this is an excellent question and one people should be asking themselves frequently. I often get the impression that commenters have not considered this.
For example, whenever someone on the internet makes a claim about "most X" (most people this, most developers that): what does anyone actually know about "most" anything? I think the answer is "pretty much nothing".
Yes, this is an important point. Insert the survivorship bias plane picture that always gets posted when someone makes this mistake on other platforms (Twitter). We can be accurate at detecting poor AI writing attempts, but not know how much AI writing is good enough to go undetected.
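To make the survivorship-bias point concrete, here's a rough back-of-the-envelope sketch; every number in it is a made-up assumption, purely for illustration:

```python
# Illustration of the survivorship-bias problem in "I can spot AI writing" claims.
# All numbers below are invented assumptions, not measurements.

total_ai_texts = 1000          # AI-written texts a reader actually encounters
undetectable_fraction = 0.6    # assumed share that reads like decent human prose
catch_rate_on_sloppy = 0.9     # assumed hit rate on the obviously sloppy ones

sloppy = total_ai_texts * (1 - undetectable_fraction)
caught = sloppy * catch_rate_on_sloppy

print(f"Caught {caught:.0f} of {total_ai_texts} AI texts "
      f"({caught / total_ai_texts:.0%} true recall)")
print(f"...but it feels like a {catch_rate_on_sloppy:.0%} hit rate, "
      "because the undetected texts never register as AI at all.")
```

The point is just that your felt accuracy only counts the texts that pinged your radar; the ones that didn't are invisible to you by definition.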
Someone should run a double-blind test app; there was an adversarially crafted one for images, and average accuracy still only came out around 60%. Yet we all assume we can just glance at the text and detect AI generation, like how some experts can just watch logs scroll by and call out the problem.
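Something like the sketch below would be a start. It's not a real app, just a rough outline with placeholder samples; for it to be properly blind you'd want someone else to assemble and label the corpus for you:

```python
# Minimal sketch of a blind self-test for AI-writing detection.
# The samples are placeholders; supply texts with known provenance,
# ideally collected by someone other than the person taking the test.
import random

samples = [
    ("Text known to be written by a human...", "human"),
    ("Text known to be generated by a model...", "ai"),
    # ... more labeled samples
]

random.shuffle(samples)
correct = 0
for text, truth in samples:
    guess = input(f"\n{text}\n\nhuman or ai? ").strip().lower()
    correct += (guess == truth)

print(f"\nAccuracy: {correct}/{len(samples)} = {correct / len(samples):.0%}")
```

Run against a decent-sized corpus, that would at least tell you whether your intuition beats a coin flip.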