
263 points josephcsible | 1 comments
mr_windfrog ◴[] No.46178827[source]
What this incident really shows is the growing gap between how easy it is to create a convincing warning and how costly it is to verify what's actually happening. Hoaxes aren't new, but generative tools make fabrication almost free and massively increase the volume.

The rail operator didn't do anything wrong. After an earthquake and a realistic-looking image, the only responsible action is to treat it as potentially real and inspect the track.

This wasn't catastrophic, but it's a preview of a world where a single person can cheaply trigger high-cost responses. The systems we build will have to adapt, not by ignoring social media reports, but by developing faster, more resilient ways to distinguish signal from noise.

replies(9): >>46178848 #>>46178936 #>>46179699 #>>46180135 #>>46180154 #>>46180642 #>>46180686 #>>46181129 #>>46185243 #
soerxpso ◴[] No.46181129[source]
Would calling and saying, "Hey, the bridge is destroyed!" without an image not have also triggered a delay? I question the safety standards of the railway if they would just ignore such a call after an earthquake. Generative AI doesn't change the situation at all. An image shouldn't be treated as carrying more weight than a statement, but the statement without the image would be the same in this situation. This has really been an issue since the popularization of the telephone, which made it sufficiently easy to communicate a lie from far away that someone might choose to do so for fun.
replies(2): >>46184786 #>>46191290 #
1. pixl97 ◴[] No.46184786[source]
This in itself is not a big deal... but there are scenarios where this could very much mean life or death.

Take a fast-moving wildfire with one of the paths of escape blocked. There may be other escape routes, but fake images showing one of those open roads blocked by fire could cause traffic jams and eventual danger on the remaining routes of escape.