
263 points josephcsible | 1 comment | source
tyushk ◴[] No.46178228[source]
> A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

The image is likely AI-generated in this case, but asking a chatbot does not seem like a reliable strategy for determining whether an image is AI-generated.
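
To illustrate the difference, here is a rough error-level-analysis (ELA) sketch using Pillow. ELA is only a weak, classic forensic heuristic (it is not an AI-generation detector, and the filename below is a placeholder), but it shows what a purpose-built check looks like compared with just prompting a chatbot:

    # Minimal ELA sketch: re-save the image at a known JPEG quality and diff
    # it against the original. Regions with a different compression history
    # (e.g. pasted or regenerated areas) tend to show a different error level.
    # Assumes Pillow is installed; "photo.jpg" is a placeholder filename.
    import io
    from PIL import Image, ImageChops

    def error_level_analysis(path, quality=90):
        original = Image.open(path).convert("RGB")
        buf = io.BytesIO()
        original.save(buf, "JPEG", quality=quality)
        buf.seek(0)
        resaved = Image.open(buf)
        diff = ImageChops.difference(original, resaved)
        # The differences are usually faint, so scale them up to be visible.
        extrema = diff.getextrema()
        max_diff = max(hi for _, hi in extrema) or 1
        return diff.point(lambda px: min(255, px * 255 // max_diff))

    # error_level_analysis("photo.jpg").save("photo_ela.png")

Real detectors go further (trained classifiers, metadata and sensor-noise analysis), but the point stands: a chatbot's free-form opinion is not forensic evidence.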

replies(11): >>46178306 #>>46178326 #>>46178446 #>>46178714 #>>46178833 #>>46178906 #>>46178907 #>>46179028 #>>46179295 #>>46179902 #>>46184661 #
skissane ◴[] No.46178833[source]
Someone I know is a high school English teacher (being vague because I don’t want to cause them trouble or embarrassment). They told me they were asking ChatGPT to tell them whether their students’ creative writing assignments were AI-generated or not. I pointed out that LLMs such as ChatGPT have poor reliability at this task; classifier models trained specifically for it perform somewhat better, yet they also have their limitations. In any event, if the student has access to whatever model the teacher is using to test for AI generation (or even a comparable model), they can always respond adversarially by tinkering with an AI-generated story until it is no longer classified as AI-generated.
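
Roughly, that adversarial loop looks like this (a sketch only; detector_score and lightly_edit are hypothetical stand-ins for whatever classifier and editing step the student actually has access to):

    # Keep editing an AI-generated draft until the detector stops flagging it.
    # The point is that access to the detector's verdicts turns evasion into
    # a simple search problem.
    import random

    def evade_detector(text, detector_score, threshold=0.5, max_rounds=50):
        """detector_score: callable returning an assumed P(AI-generated) in [0, 1]."""
        draft = text
        for _ in range(max_rounds):
            if detector_score(draft) < threshold:
                return draft              # no longer classified as AI-generated
            draft = lightly_edit(draft)   # reword a sentence, vary phrasing, etc.
        return draft

    def lightly_edit(text):
        # Placeholder perturbation: in practice the student rewrites a sentence
        # by hand or asks the model to "make it sound less like AI".
        words = text.split()
        if len(words) > 1:
            i = random.randrange(len(words) - 1)
            words[i], words[i + 1] = words[i + 1], words[i]
        return " ".join(words)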
replies(3): >>46178879 #>>46179111 #>>46179423 #
frenchtoast8 ◴[] No.46179111[source]
A New York lawyer used ChatGPT to write a filing that cited fake cases. After a human told him the cases were hallucinated, he asked ChatGPT whether that was true (it insisted the cases were real). He then screenshotted that answer and submitted it to the judge with the explanation "ChatGPT ... assured the reliability of its content." https://www.courtlistener.com/docket/63107798/54/mata-v-avia... (pages 19, 41-43)
replies(1): >>46179338 #
henry2023 ◴[] No.46179338[source]
I hope he was disbarred.
replies(2): >>46179377 #>>46179721 #
1. euroderf ◴[] No.46179721[source]
Or sent to court-ordered LLM Awareness classes.