Safety problems are almost never caused by one evil or dumb person, and they frequently involve confused lines of responsibility.
Which makes me very nervous about AI-generated code and people who don't claim human authorship. When a bug creeps in, scapegoating the AI isn't gonna cut it in a safety-critical situation.