
117 points | LordAtlas | 1 comment
striking No.46184861
I'm excited for the AI wildfire to come and engulf these AI-written thinkpieces. At this point I'd prefer a set of bullet points over having to sift through more "it's not X (emdash) it's Y" pestilence.
nick486 No.46185343
> "it's not X (emdash) it's Y" pestilence.

I wonder for how long this will keep working. Can't be too hard to prompt an AI to avoid "tells" like this one...

ben_w No.46185773
Anyone lazy enough not to check the output is also going to be lazy enough to be easy to spot.

People who put the effort into checking the output aren't necessarily checking more than style, but some of them will, so it will still help.

phantasmish No.46186092
The trouble is "AI" is waaaaay less of a boost to productivity if you have to actually check the output closely. My wife does a lot with AI-assisted writing and keeps running into companies that think it's going to let them fire a shitload of writers and have the editors do everything... but editing AI slop is way more work than editing the output of a half-decent human writer, let alone a good one.

If you're getting a lot of value out of LLM writing right now, either your quality was already garbage and you're just using it to increase volume, or you've let your quality crater.