Not sure the analogy holds. A driverless car that does something wrong can cause death - the cost of failure is very high. If my code doesn't come out right, as other posts here have said, I can just "re-roll the slot machine" until it comes out acceptable - the cost is extremely low. The "reasoning" models mostly just use RL to increase the probability that a re-run matches most people's preferences, and the new agents know how to run tools etc. to gather more context, so they produce something viable within a reasonable timeframe and avoid endless loops most of the time. Software, after all, is just text on a page until it's running.
Sure, I still have to verify that what I'm committing and running is good, especially in critical domains. But the cheap cost of iteration before the actual commit is, IMO, the main reason LLMs are disruptive in software and other "generative" domains in the digital world. Conversely, anything with real-time requirements, software that has to be relied on (e.g. a life support system), things that post opinions in my name online, etc. will probably still need someone accountable verifying the output, even if an LLM wrote it.
Again, as per many other posts, "I want to be wrong", given I'm senior in my career and would find it hard to change course now at my age. I don't like how our career is concentrating in the big AI labs/companies rather than in our own intelligence/creativity. But rationally it's hard to see how software remains the same career going forward, and if I don't adapt I might die. Most likely I'll end up doing what I already do with my current team: just define and verify.