The image is likely AI generated in this case, but this does not seem like the best strategy for finding out if an image is AI generated.
https://en.wikipedia.org/wiki/Russian_sabotage_operations_in...
See e.g. https://www.polskieradio.pl/395/7785/artykul/2508878,russian... (2020)
> Almost 700 schools throughout Poland were in May last year targeted by hoax bomb threats during key exams, private Polish radio broadcaster RMF FM reported.
> It cited Polish investigators it did not name as saying that a detailed analysis of internet connections and a thorough examination of the content of emails with false bomb threats turned up ties to servers in the Russian city of St. Petersburg.
From the article:
Trains were halted after a suspected AI-generated picture that seemed to show major damage to a bridge appeared on social media following an earthquake.
... Railway expert Tony Miles said due to the timing of the incident, very few passengers will have been impacted by the hoax as the services passing through at that time were primarily freight and sleeper trains.
"They generally go slow so as not to disturb the passengers trying to sleep - this means they have a bit of leeway to go faster and make up time if they encounter a delay," he said.
"It's more the fact that Network Rail will have had to mobilise a team to go and check the bridge which could impact their work for days."
Standard responsible rail maintenance is to investigate rail integrity following heavy rains, earthquakes, etc. A fake image of a stone bridge with fallen parapets prompts the same response as a phone call about a fallen stone from a bridge or (ideally!) just the earthquake itself - send out a hi-railer for a track inspection.
The larger story here (be it the UK, the US, or AU) is track inspections .. manned or unmanned?
Currently on HN: Railroads will be allowed to reduce inspections and rely more on technology (US) https://news.ycombinator.com/item?id=46177550
https://apnews.com/article/automated-railroad-track-inspecti...
on the decision to veer toward unmanned inspections that rely upon lidar, gauge measures, crack vibration sensing etc.
Personally I veer toward manned patrols with state of the art instrumentation - for the rail I'm familiar with there are things that can happen with ballast that are best picked up by a human, for now.
They may have first run the photo through an AI, but they also went out to verify. Or run it after verification to understand it better, maybe.
Best Korea of course. The Worst Korea could never do this kind of thing.
Having said that, if it was 2020 and you told me that making photorealistic pictures of broken bridges was harder than spoofing the signals I just described, I’d say you were crazy.
The idea that a kid could do this would have seemed even less plausible (that’s not to say a kid did it, just that they could have).
Anyway, since recently-intractable things are now trivial, runbooks for hoax responses need to be updated, apparently.
Yes. That doesn't do much to detect a stone from a parapet rolling onto the line though.
Hence the need for inspection.
> runbooks for hoax responses need to be updated, apparently.
I'd argue not - whether it's an image of a damaged bridge, a phone call from a concerned person about an obstruction on the line, or just heavy rains or an earthquake .. the line should be inspected.
If anything, urban rail is in a better position today, as camera networks should ideally be able to rapidly resolve whether a bridge is really damaged as a fake image claims, or not.
AI-Generated disinfo has been a known attack vector for the Russian regime (and their allied regimes) for years now [0][1].
[0] - https://cyberscoop.com/russia-ukraine-china-iran-information...
[1] - https://cloud.google.com/blog/topics/threat-intelligence/esp...
> "The disruption caused by the creation and sharing of hoax images and videos like this creates a completely unnecessary delay to passengers at a cost to the taxpayer," a spokesperson said.
I don't think this will work the way they think it will work. In fact, I think they just proved they're vulnerable to a type of attack that causes disruption and completely unnecessary delay to passengers at a cost to the taxpayer.
Ideally? Sure.
But when someone can generate plausible disaster photos of every inch of every line of a country's rail network in mere minutes? And as soon as your inspection finishes, they do it again?
You can also just call the railroad and report the bridge as damaged.
Hoaxes and pranks and fake threats have been around forever.
A fake photo of a collapsed bridge however won't cross that criminal threshold.
Maybe.
Imo, the advances in AI and the hype toward generated everything will actually be our digitally obsessed society's course-correction back to a greater emphasis on things like theater, live music, and conversing with people in person, or even strangers (the horror, I know), simply to connect and consume more meaningfully. It'll level out, integrating both instead of being so digitally lopsided, as humans adapt to enjoy both.*
To me, this shows a need for more of the local journalism that has been decimated by the digital world. By journalism, I mean it in the more traditional sense, not bloggers and podcasts (no shade, some follow principled journalistic integrity -- just as some national "traditional" outlets don't). Local journalism is usually held to account by the community, and even though the worldwide BBC site has this story, it was the local reporters they had who were able to verify it. If these AI stories/events accelerate a return to local reporting with a worldwide audience, then all the better.
* I try to be a realist, but when I err, it tends to be on the optimist side
So far we have almost no positive applications for the IP laundering machines.
The point of that technology needs to be to alert you when something is wrong, not to assure you that everything is fine when some other telemetry indicates otherwise.
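A minimal sketch of that fail-safe framing (the sensor names here are hypothetical, not anything Network Rail actually runs): the absence of a positive all-clear is treated as a reason to inspect, never as reassurance.

```python
# Hypothetical sketch: telemetry escalates on anything that is not a clear
# positive signal, rather than defaulting to "everything is fine".
from enum import Enum

class Reading(Enum):
    OK = "ok"            # sensor positively reports healthy
    FAULT = "fault"      # sensor reports a problem
    MISSING = "missing"  # no data received

def bridge_status(readings: dict[str, Reading]) -> str:
    """Return 'clear' only if every sensor positively reports OK;
    any fault, missing, or absent reading triggers an inspection."""
    if not readings:
        return "inspect"  # no telemetry is not the same as no damage
    if all(r is Reading.OK for r in readings.values()):
        return "clear"
    return "inspect"

# A camera outage alone is enough to send someone out.
print(bridge_status({"track_circuit": Reading.OK, "cctv": Reading.MISSING}))  # inspect
```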
And collecting unmanned data is still such a pain. At the moment, you stick calibration gear to a train and hope it gets as much noise free data as it can. All whilst going at least 40mph over the area you want - you’re fighting vibrations, vehicle grease, too much sunlight, not enough sunlight, rain, ballast covering things, equipment not calibrated before going out etc etc.
It’s highlighted a weakness. It’s easy to disrupt national infrastructure by generating realistic hoax photos/videos with very little effort from anywhere in the world.
When I stuck train wheels on my DeLorean and rode it down the tracks it lowered the barriers automatically which caused a bit of a traffic incident in Oxnard.
Implicit in this though is the assumption that the increase in awareness of these events has more to do with an ai being involved rather than the event actually being exceptional.
This correlated with an earthquake - this is the event that should have triggered an inspection regardless.
> But when someone can generate plausible disaster photos of every inch of every line of a country's rail network in mere minutes?
In the UK (and elsewhere) a large percentage of track is covered by cameras - inspection of over the top claims can be rapidly dismissed.
> And as soon as your inspection finishes, they do it again?
Sounds like a case for cyber crimes and public nuisance.
It's also no different to endless prank calls via phone, not a new thing.
Tracks have cameras to rapidly discount big claims, in this specific case there was an actual earthquake which should (and likely did, the story doesn't drill down very deep) have triggered a manual track inspection for blockages and ballast shifts in of itself.
Plenty of disasters don't. "No earthquake, no incident" obviously can't be the logic tree.
> In the UK (and elsewhere) a large percentage of track is covered by cameras - inspection of over the top claims can be rapidly dismissed.
"Yes. That doesn't do much to detect a stone from a parapet rolling onto the line though. Hence the need for inspection."
Sounds like you now agree it's less a need?
> Sounds like a case for cyber crimes and public nuisance.
"Sorry, not much we can do." As is the case when elderly folks get their accounts drained over the phone today.
I suspect that people will be killed because of outrage over fake stuff. Before the Ukraine invasion, some of the folks in Donbas staged a fake bombing, complete with corpses from a morgue (with autopsy scars) [0]. That didn’t require any AI at all.
We can expect videos of unpopular minorities doing horrible things, politicians saying stuff they never said, and evidence submitted at trial that was completely made from whole cloth.
It’s gonna suck.
[0] https://www.bellingcat.com/news/2022/02/28/exploiting-cadave...
The rail operator didn't do anything wrong. After an earthquake and a realistic-looking image, the only responsible action is to treat it as potentially real and inspect the track.
This wasn't catastrophic, but it's a preview of a world where a single person can cheaply trigger high-cost responses. The systems we build will have to adapt, not by ignoring social media reports, but by developing faster, more resilient ways to distinguish signal from noise.
If that's not happening then management is playing fast and loose with legal responsibility and the risks of mass and inertia.
People tend to think that AI is like a specific kind of human which knows other AI things better. But we should expect better from people that do writing as their job.
It's also pretty common in the UK for trains to be delayed just because some passenger accidentally left their bag on the platform. Not even any malicious intent. I was on a train that stopped in a tunnel for that reason once. They're just very vulnerable to any hint of danger.
I suspect that AI was prompted to create the image, not that this was an incidental "hallucination".
Cynical-me suspects this may have been a trial run by malicious actors experimenting with disrupting critical infrastructure.
But there are people who don't want their news to be "reliably accurate", but who watch/read news to have their own opinions and prejudices validated no matter how misinformed they are. Think Fox News.
But there are way, way more people who only consume "news" on algorithmically tweaked social media platforms, where driving "engagement" is the only metric that matters, and "truth" or "accuracy" are not just lower priorities but completely irrelevant to the platform owners and hence their algorithms. Fake ragebait drives engagement, which drives advertising profits.
If I were working for the train line, and bridges kept “blowing up” like this, I’d probably install a bunch of cameras and try to arrange the shots to be aesthetically pleasing, then open the network to the public.
The runbook would involve checking continuity sensors in the rail, and issuing random pan/tilt commands to the camera.
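As a toy version of that runbook (the sensor and camera interfaces are hypothetical placeholders, not a real Network Rail API):

```python
import random

def frame_matches_known_good(frame) -> bool:
    # Placeholder: in practice a human operator or an image-diff model
    # compares the fresh frame against a reference view of the bridge.
    return frame.get("structure_visible", False)

def handle_bridge_report(bridge_id: str, sensors, camera) -> str:
    """Hypothetical hoax-report runbook: cheap automated checks first,
    a physical inspection whenever anything looks off."""
    # 1. Rail continuity: a collapsed span would normally break the circuit.
    if not sensors.rail_continuity_ok(bridge_id):
        return "dispatch_inspection"
    # 2. Randomised pan/tilt so a hoaxer can't pre-render the expected view.
    pan, tilt = random.uniform(-90, 90), random.uniform(-20, 20)
    frame = camera.capture(bridge_id, pan=pan, tilt=tilt)
    # 3. Compare the fresh frame against the claimed damage.
    if not frame_matches_known_good(frame):
        return "dispatch_inspection"
    return "log_and_monitor"
```

The random angle is the point: a fixed public camera view can be faked ahead of time, but an angle chosen at verification time can't.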
You don't need anything for anything. You can do war with long sticks. Turns out guns, planes, and firebombs work better.
Of course it's different. If I do 5 prank calls, that takes, say, 15 minutes.
In 15 minutes how many hoaxes can I generate with AI? Hundreds, maybe thousands?
This is like saying nukes are basically swords because they both kill people. We've always been able to kill people, who cares about nuclear weapons?
The problem is the scale. The scale of impact is immense and we're not ready to handle it.
It’s really incredible how the supposedly unassailable judgement of mass consumer preference consistently leads our society to produce worse shit so we can have more of it, and rewards the chief enshittifiers with mega yachts.
Those I know who lived through this issue when digital editing first became cheap seem to be more sanguine about it, while the younger generation on the opposite side is some combination of "whatever" and frustrated, but accepts that yet another of countless weird things has invaded a reality that was never quite right to begin with.
The folks in between, I’d say about the 20 years from age 20 to 40, are the most annoyed though. The eye of the storm on the way to proving that cyberpunk lacked the required imagination to properly calibrate our sense of when things were going to really get insane.
Since you didn't ask, let me needlessly elaborate.
You can have YouTube or X or Facebook "design" a web page for you, but those are always extremely lame. Just have websites instead?? Their moderation looks more like a zombie shooter. Wikipedia has some kind of internet trial, but that is so unsophisticated that it might even be worse.
It could be a simple editorial board with a number of seats that can be emptied, when users request it, through a random selection of jurors.
The board makes suggestions and eventually removes your website.
The site can still be publicly available before and after, it just doesn't live in the index.
QR leads you to a page, you upload image to page, hashes are compared, image-from-sensor confirmed.
Surely at this point we need provable ‘photography’ for the mass market.
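A minimal sketch of the hash-comparison step in that flow (the in-camera signing and QR plumbing are assumed, and all names here are made up):

```python
import hashlib

def sha256_file(path: str) -> str:
    """Hash the uploaded file in chunks so large images need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_upload(uploaded_path: str, sensor_hash_from_qr: str) -> bool:
    """True only if the upload is bit-identical to what the sensor registered;
    any re-encode, crop, or AI edit changes the hash and fails verification."""
    return sha256_file(uploaded_path) == sensor_hash_from_qr
```

The catch is that the hash only proves the file came off a sensor unmodified, not that the scene it shows wasn't staged.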
What good has it brought us (not the billionaire owners of AI)? It made us 'more effective', and oh, instead of googling something, actually going to a link, and reading the result in detail, we can now not bother with any of that and just believe whatever the LLM outputs (hallucinations be damned).
So I guess that's an upside.
(before the AI god bros come: I am talking purely about LLMs and generative imagery and videos, not ML or AI used for research et al)
I admit I missed the joke. I read it as the usual "you hypocrite teacher, you don't want us using tools but you use them" argument I see. There's no need to be condescending towards me for that. I see now that the "joke" was about the unreliability of AI checkers and making the teacher really angry by suggesting that their impassioned email wasn't even their writing, bolstered by their insistence that checkers are reliable.
We might even have fewer than before - between Internet commentators and loss of confidence from AI, real journalism may not be as highly valued as it was before the Internet…
If I post AI generated images to twitter, and those get amplified by my followers (that might or might not be real people) enough to surface on some rail engineers feed, well, that's just me showcasing my art, no harm intended, right?
I can confirm, the trend now in enterprise CMS deployments is to push for AI based translations, and image assets generation, only pinging back into humans for final touches, thus reducing the respective team sizes.
Another area are marketing and SEO improvements, where the deal is to get AI based suggestions on those improvements, instead of getting a domain expert.
Any commercial CMS will have these AI capabilities front and centre on their website regarding why to choose them.
I presume there is established legal practice for handling these kinds of things, but for generative images the legal limits won't achieve wide awareness until some teenagers and assorted morons get hauled into court.
Expect? You can post a random image of an unpopular minority, add some caption saying they did horrible things, that is not reflected in the image at all, and tons of people will pile on. Don’t even need a fake video.
AI ain't the problem here, so-called social media are.
I was listening to James O’Brien on LBC, and [IIRC] he said he was serving jury duty with a woman who was convinced that Volodymyr Zelenskyy had spent hundreds of million of dollars on a super-yacht.
He asked if she had any evidence for that claim, and she produced a picture of a boat.
He said “That’s just a picture of a boat.”
It actually had very minimal impact. An hour or two wasn't bad for an organisation which stripped staff to a bare minimum, and for the area.
And it's very much the customer's job to work for the railway these days: it's our job to report police matters we are told incessantly with announcements. It's our job to buy the right ticket as there are very few ticket staff and staff with any knowledge these days. It's our job to use third party websites during disruption and to Tweet the railway company for assistance because again there is not enough staff.
So Network Rail is not going to come out and say "it's absolutely our job to be aware of all our infrastructure at all times and our defence to this new threat is to bolster staff and CCTV and reduce our reliance on third party reports"
Our PM definitely won't be adjusting his position. He's been told:
> Vance told world leaders that AI was "an opportunity that the Trump administration will not squander" and said "pro-growth AI policies" should be prioritised over safety.
It'd be useful if commenters viewed this from the pragmatic real-world track maintenance PoV.
Verifiable calls from the public about blocked lines made to official numbers with traceback etc. carry more weight than social media buzz.
In urban rail the bulk of AI generated images can be discounted via camera feeds and sensors (eg: there's no indication of a line break so that image is BS).
There are already procedures to sift prank calls from things that need checking, to catch serial offenders and numbnuts that push bricks from overpasses.
In the specific instance of you hypothetically "just me showcasing my art, no harm intended" .. in a UK jurisdiction that would fall to the estimation of the opinion held by a man on the Clapham omnibus as channeled by a world weary judge with an arse sore from decades of having such stories paraded before them by indolent smirking cocksures.
YMMV.
So on one end you have large-scale pollution of the information commons, and on the other end we are now creating predator pipelines to generate content with all the efficiency of our vaunted AI productivity. It's creating a dark forest for normal people to navigate, driving more government efforts to bring it under control. This in turn creates conflict with freedom of speech and expression while dovetailing nicely with authoritarian tendencies.
Yes, it's heartening to hear from all the people who find productivity gains from AI, but in totality it feels like we got our wishes granted by the Evil Genie.
Perhaps you could even find that specific woman leaving an outraged comment over photos of boats if you looked hard enough!
Most economic value arises from distinguishing signal from noise. All of science is distinguishing signal from noise.
It's valuable because it is hard. It is also slow - the only way to verify something is often to have reports from someone who IS there.
The conflict arises not from verifying the easy things - searching under the illumination of street lights. It's verifying whether you have a weird disease, or whether people are alive in a disaster, or what is actually going on in a distant zone.
Verification is laborious. In essence, the universe is not going to open up its secrets to us, unless the effort is put in.
Content generation on the other hand, is story telling. It serves other utility functions to consumers - fulfilling emotional needs for example.
As the ratio of content to information keeps growing, or the ratio of content to verification capacity grows - we will grow increasingly overwhelmed by the situation.
It's convenient to blame the amorphous thing "social media" instead of the actual people responsible. There are only a handful of them: Elon Musk, Mark Zuckerberg, etc.
And stopping it is simple. It's a choice.
Calling directly into the railroad bypasses an authority chain. It negates the virality of it. These viral images are viral because they get shared and spread on their own just like a virus.
Telephone calls into authorities were never viral, they could never be spread. Although they may well have caused the desired reaction without spreading first! Many hoaxes back in the day were somewhat viral and did get spread, but the hoax went to the newspapers or the community first and spread there. A well crafted press release, some additional letters to the traditional media etc. A believable image makes for more believability. The hoax got spread because it was hard to debunk it as it was distributed before the debunking. Bypassing the effort to spread the hoax removes chances of effects.
Edit: my initial thought was "no trains run after midnight anyhow", as except on a few main lines it's hard to find trains in the UK at night - so the cost of the bridge closure may have been very small. Add to that the amount and quality of the staff operating at that time of night. Taken together this means less cost of reaction, more chance of a knee-jerk reaction from staff, and less ability to consult nearby awake engineers and survey damage IRL. So while the hoaxers cannot plan an earthquake(!), it probably wouldn't have succeeded if the earthquake had happened at 11am.
Now he's a Putin/Trump apologist...
It is no surprise to me that Network rail are so understaffed that any special event disrupts their work schedules for days. That is what they call 'efficiency' these days.
Edit: Aside. During a set of fire service strikes it was a relatively common opinion to say something like, 'of course they have an easy job, they get paid to just sit/lie down at the station'. I used to ask, 'what would you like them to do while waiting in case you need rescuing?'. No answer. I spoke to a fireman and he told me that in response to this kind of nonsense a bunch of pointless busy work was invented for them. When rail was privatised in the UK they fired a lot of these 'inefficient' workers. After a string of rail crashes, the government had to renationalise Network Rail (the bit that maintains the infrastructure). Another case where 'efficiency' means harming people for profit.
My deeper point is that it's arguably very difficult to establish a global, socially acceptable lower threshold of trust. Parent's level is, apparently, the word of a famous Journalist in a radio broadcast. For some, the form of a message alone makes the message worthy of trust, and AI will mess with this so much.
The problem is the justice system, which is optimized to protect the criminal and to offload the costs onto a society that is happy to be distracted with identity and moral supremacy arguments.
Obviously I can only be a Putin-loving propaganda bot for saying such things.
That's not done in any European rail network I am aware of. The switches have, well, switches that confirm if the mechanical end positions have been reached, but there is no confirmation by current pulses on the actual rails themselves.
> Also, the pulses are conducted through the wheels and axles of any trains, so they can use resistance and/or timing to figure out where the trains are.
That technology is, at least in Germany, being phased out in favor of axle counters at the beginning and end of each section, partly because axle counters allow speed and direction feedback, and partly because track circuits can be unsafe - a single locomotive braking with sand may yield a false "section clear" indication when sand or leaves prevent the current from passing from one rail to the other.
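For anyone unfamiliar with axle counters, the occupancy logic is essentially bookkeeping; a simplified sketch (not any vendor's actual interlocking code):

```python
class AxleCounterSection:
    """Simplified axle-counter block section: counting heads at each end
    increment on entry and decrement on exit; the section is clear only
    when the running count returns to zero."""

    def __init__(self) -> None:
        self.count = 0

    def axle_in(self) -> None:
        self.count += 1

    def axle_out(self) -> None:
        self.count -= 1

    @property
    def occupied(self) -> bool:
        # Unlike a track circuit, this doesn't rely on wheel-to-rail electrical
        # contact, so sand or leaves can't produce a false "section clear".
        return self.count != 0

section = AxleCounterSection()
for _ in range(4):          # a four-axle locomotive enters
    section.axle_in()
print(section.occupied)     # True
for _ in range(4):          # ...and leaves at the far end
    section.axle_out()
print(section.occupied)     # False
```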
While manipulated photos exist, and misattributed real photos are very common, for the most part a lot of what gets depicted does happen as well. And some people are too quick to ignore or gloss over that.
Non-consequential: A photo of a cat with a funny caption. I am likely to trust the caption by default, because the energy of doubting it is not worth the stakes. If the caption is a lie, it does nothing to change my worldview or any actions I will ever take. Nobody's life will be worse off for not having spent an hour debunking an amusing story fabricated over a cat photo.
Trivially consequential: Somebody relates a story about an anonymous, random person peddling misinformation based on photos with false captions on the internet. Whether I believe that specific random person did so has no bearing on anything. The factor from the story that might influence my worldview is the knowledge that there are people in the world who are so easily swayed by false captions on photos, and that itself is a trivially verifiable fact, including by watching other people consume the exact photo and misinformation from the story.
More consequential: Somebody makes an accusation against a world leader. This has the potential to sway the opinions of many people, feeding into political decisions and international relations. The stakes are higher. It is therefore prudent not to trust without evidence of the specific accusation at hand. Provenance of evidence also matters; not everything can be concretely proven beyond a shadow of a doubt. We should not trust people blindly, but people who have a history of telling the truth are more credible than people who have a history of lying, which can influence what evidence is sufficient to reach a socially acceptable threshold of trust.
Please ignore "technology" such as leaded gasoline and CFCs. No one could have known those were harmful, anyway.
AI videos of unpopular minorities already comprise an entire genre and AI political misinformation is already mainstream. I'm pretty sure every video of Donald Trump released by the WH is AI generated, to make him look less senile and frail than he really is. We're already there.
I have no doubt however that Europe (and hopefully the wider world) is less worried about that corruption than they are about Russian military aggression. And there will be some level of media focus on that – rightly so, where the focus should be on grinding the Russian kleptostate into dust as quickly and thoroughly as possible.
You're not a propaganda bot; you're just making their lives easier.
Main point is that there aren't technical difficulties in verifying the state of main infrastructure in real time (contrary to the claim of the commenter I was initially replying to), and it's more a question of priority and will than doability or cost.
It will happen but the usual way is that "it's not possible", "it's too expensive", etc until something bad enough happens and then suddenly it is doable and done.
Where I live it is not uncommon for rail to have detection for people walking on the rail, and bridges to have extra protection against jumpers. I wouldn't be that surprised if the same system can be used to verify damage.
I know for a fact via family ties that major newsrooms in Germany received instructions to tune out the corruption angle once the war started. I'm sure it's all nothing though and that Putin will find himself in Poland next year. Of course!
You also want to be able to chain signing so that, for example, a news reporter could take a photo, then the news outlet could attest its authenticity by adding their signature on top.
Same principle could be applied to video and text.
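A rough sketch of what that chaining could look like with off-the-shelf signatures (Ed25519 via the Python 'cryptography' package; key distribution and the surrounding trust infrastructure are assumed, not shown):

```python
# Sketch of chained attestation: the reporter signs at capture time, the
# outlet verifies and counter-signs, and anyone downstream checks both links.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

photographer_key = Ed25519PrivateKey.generate()
outlet_key = Ed25519PrivateKey.generate()

photo_bytes = b"...raw image data straight from the camera sensor..."

# 1. The camera/reporter signs the image bytes at capture time.
reporter_sig = photographer_key.sign(photo_bytes)

# 2. The outlet verifies that signature, then counter-signs the
#    (image, reporter signature) pair, vouching for its provenance.
photographer_key.public_key().verify(reporter_sig, photo_bytes)  # raises if tampered
outlet_sig = outlet_key.sign(photo_bytes + reporter_sig)

# 3. Any reader can check both links of the chain.
outlet_key.public_key().verify(outlet_sig, photo_bytes + reporter_sig)
print("provenance chain verified")
```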
It comes from an old culture that Ukraine is trying to remove themselves from, hence the large amount of corruption charges we see.
The same culture is incidentally what makes Russia one of the most corrupt countries in the world.
We can expect more of the same. Random unverified photo and video should not be trusted, not in 2005, not in 2015, and not today.
I believe that this "everything was fine but it's going to get really bad" narrative is just yet another attempt at regulatory capture, to outlaw open-source AI. This entire fake bridge collapse might very well be a false flag to scare senile regulators.
It's all corruption in the end so who cares, right?
> The point about the stakes is a good one. But there is an individual factor to it.
Indeed. The so-called "trivially consequential" depends on whether you're the person being "mis-informationed" about or not. You could be a black man with a white grandchild, and someone could take a video your wife posted of you playing with your grandchild and redistribute it calling you a pedophile, causing impact to your life and employment. Those consequences don't seem trivial to the people impacted.
True story: https://www.theguardian.com/world/2025/aug/20/family-in-fear...
There are problems in uncovering it, but the attempt to get rid of corruption is a big factor in the whole situation and one of the things Russia fears.
For Russia, a corrupt system was a lot simpler to influence, and Ukraine (a partially Russian-speaking country, where people moved back and forth) showing that corruption can be fought was a threat to that system.
Modern tech annoys older tech, like birds poking at dinosaurs. Trains enabled economic progress, which gave rise to computers and AI.
Perhaps Network Rail should have a system of asserting rail integrity that is independent of social media (?!!?)
for real, pick up the phone and ask someone (??)
I mean, they did do that eventually. But if the image was convincing, then stopping the train immediately is the rational choice. Erring on the side of a small delay rather than a train disaster is the right thing to do in this situation.
2 - integrity checks can tell you that the bridge has definitely failed, but not that it definitely hasn't.
I am surprised headlines like this are only coming out now. I've been saying it for a long time, but people said I was crazy. The web as we know it will be unusable. And a new one will not solve all issues, as we have already made ourselves too dependent on the current web and tech. So the impact on the real world is gonna turn a lot of things upside down. It's gonna be a lot of fun. But sure, let's keep pretending AI can either be nothing but bullshit OR that we should only fear losing jobs to robots... I don't get why no one ever thinks about the societal impact... it's so obvious, still... I am baffled...
Ultimately, though, this kind of stuff is expensive (semi-bespoke safety-critical equipment every few miles across an enormous network) and doesn't reduce all risks. Landslides don't necessarily break rails (but can cause derailments), embankments and bridges can get washed out but the track remains hanging, and lots of other failure modes.
There are definitely also systems to confirm that the power lines aren't down, but unfortunately the wires can stay up and the track be damaged or vice versa, so proving one doesn't prove the other. CCTV is probably a better bet, but that's still a truly enormous number of cameras, plus running power supplies all along the railway and ensuring a data link, plus monitoring.
CCTV cameras are mostly in private ownership, those in public ownership are owned by a mass of radically different bodies who will not share access without a minimum of police involvement. Oh and of course - we rarely point the cameras at the bridges (we have so many bridges).
> Where I live it is not uncommon for rail to have detection for people walking on the rail, and bridges to have extra protection against jumpers. I wouldn't be that surprised if the same system can be used to verify damage.
This bridge just carries trains. There is no path for walking on it. Additionally jumping would be very unusual on this kind of bridge; the big suspension bridges attract that behaviour.
You mentioned twice that you are surprised by things which are quite common in the UK. I don't know where you're from, but it's worth noting that the UK has long been used as a bogeyman by American media, and this has intensified recently. You should come and visit, the pound is not so strong at the moment so you'll get a great deal to see our country.
To my mind, Network Rail is blameless for this.
I'm way more concerned about this statement than about whatever is reported in the title.
How fragile is a society that is unable to make a simple visual confirmation of a statement without having a multiday multi-££ impact?
We call this journalism and this is a respectable profession. /s
The reason you are not murdered today is not because murder is harshly punished or hard to do; it is because most people aren't murderers. If they were, we wouldn't be able to suppress it with force; we would simply live in hell.
Wasn't that term entirely invented by the Democratic party to dismiss videos of Biden's "senior moments"?
I'm curious if the term predates that or maybe you're not in the US?
Typically, postings that gain traction have many many reposts and though some may be deleted, there's a long tail of reverberation left behind. I can't find that at all here.
I wonder if the hoaxer just emailed it to Network Rail directly?
Here in Sweden, people walking on the rails without permission is a fairly common problem, causing almost 4k hours of accumulated delays per year. For people who often travel by train, the announcement of reduced speed because the system has detected people on the tracks is one of the more common ones, second only to the catch-all announcement of "signal error", which simply means the computer says stop for a reason the driver doesn't know or doesn't want to say.
When it comes to suicide prevention on bridges, it is not just the big bridges. Suicide by train is a frequently discussed method in the news as a work hazard for train drivers, and the protection here is for small bridges that go above the track. Similar issues exist with bridges over roads and highways. Those methods are, by my read of the statistics, more common than the movie version of a person jumping from a suspension bridge.
Delaying the inspection until working hours would have caused much greater disruption. Having a track inspection team on hand 24x7 to cover all potential routes would incur much higher staffing costs.
An on-call system backed by TOIL and accepting the risk of dealing with occasional re-rostering seems like a reasonable compromise to me.
Take a fast-moving wildfire with one of the paths of escape blocked. There may be other lines of escape, but fake images showing one of those open roads blocked by fire could lead to traffic jams and eventual danger on the remaining routes of escape.
It's like certain societies enjoy the rigidity they are in.
But I guess in a country where a "retweet" of the wrong opinion can get you in legal trouble, it's just easier to say that fabricating and propagating AI slop is also illegal.
[0] https://ichef.bbci.co.uk/news/1024/cpsprodpb/5e92/live/bc1e9...
Reminds me of the attacker vs defender dilemma in cybersecurity - attackers just need one attack to succeed while a defender must spend resources considering and defending against all the different possibilities.
Sure, "just follow the process" is a lot less exciting than coming up with an ad-hoc response - but when you're dealing with safety-critical infrastructure at scale, it makes a lot more sense than cowboying it and hoping for the best.
Trust seems to have been completely eroded on the internet. The majority of mainstream sites feed users "news" posts from unverified, unknown accounts. It's bad; we need a way to get back to a base level of trust.
We had a big government inquest into suicide in 2018 which included asking national rail to justify its position and actions. Of the 30k rail bridges in the UK only the hotspots have any modern measures of suicide prevention; and the hotspots are mostly but not exclusively suspension bridges.
However, from your comment, I see that you might be meaning pedestrian bridges across tracks, which almost always have metal rails higher than an adult man here. Our older stone road bridges (which are very common) have thick and tall walls on the edge which serve a similar function if not as effectively.
However, to hark back to the original image and post - the bridge depicted is a train bridge going over a road. More like a viaduct tbh. It's highly unlikely that there is any normalised pedestrian access, so it won't rank highly for being assigned suicide prevention and detection measures, and it's easily assessed from the busy public road, so I doubt it makes the priority list for automated collapse detection.
"Hi please don't - we've had three different trains go through there already. There is no loss of signaling in the area, electrical and infrastructural connections are responding appropriately. We will be sure to contact other drivers and let them know about this"
Despite using Claude Code almost daily and finding it a useful tool, on balance I think that AI is a net negative to society.
But it's basically just a nice idea in theory.
The British public doesn't have that kind of appetite for risk. Take a look at responses to existing high-profile incidents.
We are proud of our approach to safety and safety records (trains, road etc) and I don't see that changing any time soon. Personally, I think we are too risk averse in too many areas.
AIUI there is a mandate at Network Rail to take all reports made by the public very seriously, which came off the back of a previous incident.
Plus there is of course the huge logistical challenge - the GB rail network is not small.
It is, and it will increase as it becomes more accessible.
> This is the first case I've even heard of.
That's both because you're not paying much attention and because this is under-reported.
I would wager maybe a quarter of all content on the internet is bot generated. I'm not the first to propose this.
> Seems like you're already proven wrong, unless you're counting on some future change that isn't here yet?
I kind of am, notably AI both becoming better and more accessible. You're right, it might not.
Not sure why you're trying to pretend that the idea of fake videos is some anti-GOP conspiracy.
AI isn't destroying the internet. People are.