Whenever people see old systems still in production (say, systems over 30 years old), the assumption is that management refused to fund a replacement. But if you look at replacement projects, so many of them are such dismal failures that management's reluctance to engage in fixing stuff is understandable.
From the outside, decline always looks like a choice, because the exact form the decline takes was chosen. The issue is that all the choices are bad.
The problem with replacement projects is when and why they're usually started. They usually begin only once some underlying technology has a fixed date for ceasing to exist, which finally creates the urgency.
By then, the people who wrote the original software are long gone, the last few who could maintain it are nearing retirement age or gone as well, and the system uses ancient technologies for which it's hard to find documentation on the internet today.
Now you're tasked with writing a replacement, and everything that doesn't work on day one is deemed a failure. It might have worked if you had started earlier. Because if your original codebase is COBOL and assembly written for a mainframe, it's really hard now to find anyone who fully understands what it does, let alone someone who can rewrite it cleanly.
If you had migrated from COBOL and mainframe assembly to C, from C to 90s Java, and from 90s Java to modern Java/Go/Rust/Node, you'd have had plenty of institutional knowledge available at each step, and people who knew both the old and the new world. Jumping half a century in computing technology is much harder than making a small jump every 10-15 years.