The progress-care trade-off is a difficult one to navigate, and the stakes are higher with AI. I've seen people draw analogies to companies, which have often caused harm in pursuit of greater profits, both deliberately and as byproducts: oil spills, overmedication, pollution, ecological damage, bad labor conditions, hazardous materials, mass lead poisoning. Of course, the profit-seeking company is one of the best inventions humans have ever made, but that doesn't mean we shouldn't take "corp safety" seriously. We pass laws on how corps can operate and what they can and cannot do, to limit harms and _align_ them with the goals of society.
So it is with AI. Except corps are made of people, who work at human speeds, carry at least some human morals, and are tied to society in ways AI might not be. AI may also operate faster and with fewer errors. So extra care is required.