
388 points pseudolus | 1 comment
carabiner No.43485544
If you look at fields like mechanical engineering, no, it hasn't, because there isn't much training data for that type of work available on the internet. (CAD isn't engineering.) It's locked up in corporations like Ford, SpaceX, and Toyota. There isn't open-source mechanical analysis work available at a professional level: think an FEA output that references 5 messy spreadsheets and gets written up in a 20-page report with charts, which in turn references 20 other internal docs and specs. And every company does analysis differently, so I'm not sure you could adequately train a model that generalizes well.
replies(1): >>43486140 #
mywittyname No.43486140
I think this is a flawed outlook.

My wife works in healthcare, and they just announced that AI would generate 50% of their documents. According to my wife, in every review she's done of the work there are critical errors in logic or reasoning (hallucinations), but leadership doesn't care because they are "grading" the AI on things like grammar and spelling. So, yay, it spelled "fusospirochetosis" correctly, but unfortunately it attributed it to a 23-year-old woman who came in for a broken arm.

I think we need to be prepared for "vibe coding" taking over every industry. Leadership is going to blindly implement broken AI to replace workers.

Now, you might think the threat of lawsuits, regulations, and ethics standards will save your industry. And maybe that's true, but I personally don't think so. I think we are going to see country-scale enshittification of everything, as lazy people use AI to generate everything, which is then reviewed by lazy people who also use AI to review it, and no one notices or cares until it's too late (and maybe not even then).

I hope you're right, but I'm not convinced.

replies(6): >>43486506 #>>43486572 #>>43487094 #>>43488180 #>>43489345 #>>43494286 #
carabiner No.43487094
Profit and a competitive market are powerful incentives that keep people from dying in the human-transport industry, incentives that don't exist as strongly in hospitals. Also, if you think the "threat of lawsuits, regulations, and ethics standards" doesn't currently encourage safe cars or airplanes, what do you think does? In engineering today there are 1,000 ways to cheat (and sometimes people do, like Volkswagen's Dieselgate), but reports are actually checked rigorously, multiple times, by other engineers at the company and by federal regulators (who are also engineers). Things get rejected and changed based on these reviews. Why would AI engineering be any different?

If AI could cause a fiasco on the level of Toyota's unintended-acceleration debacle from 2008, which led to the largest auto recall in history, wouldn't that give companies pause about using AI? If AI leads to bad products that kill people, companies will lose money, and they will not want to use it.