But then all of this has been known for decades. There are plenty of well-known techniques for how to do all that. If they haven't actually done it by now, it's a management problem, and no AI tech is going to fix that.
COBOL's main value is in maintaining a pile of legacy codebases, mostly in fintech and insurance, that are so large and so old that rewriting them is an absolute no-go. These attempts at cross-compiling are a way to get off the old toolchain, but - in my opinion - they don't really solve the problem; instead they add another layer of indirection (code generation). But at least you'll be able to run your mangled output on the JVM, for whatever advantage that gives you.
With some luck you'll be running a hypervisor that manages a bunch of containers, each running multiple JVM instances, each of which runs Java that was generated from some COBOL spaghetti that nobody fully understands. If that stops working, I hope I'll be far, far away from the team that has to figure out what's causing the issue.
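To get a feel for that extra layer of indirection, here is a purely hypothetical sketch of the kind of Java a COBOL-to-JVM transpiler might emit. The program name, field names and rounding choice are invented for illustration and not taken from any particular tool:

    // Hypothetical illustration only: the shape of Java that a COBOL-to-JVM
    // transpiler might emit. The names here are invented for this sketch;
    // real tools differ, but the extra indirection is typical.
    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class Lncalc00 {
        // COBOL: 05 WS-BALANCE PIC S9(9)V99 COMP-3.
        private BigDecimal wsBalance  = BigDecimal.ZERO;
        private BigDecimal wsRate     = BigDecimal.ZERO;
        private BigDecimal wsInterest = BigDecimal.ZERO;

        // COBOL paragraphs become methods; PERFORM becomes a call.
        // 2000-COMPUTE-INTEREST.
        //     COMPUTE WS-INTEREST ROUNDED = WS-BALANCE * WS-RATE / 1200.
        void p2000ComputeInterest() {
            wsInterest = wsBalance.multiply(wsRate)
                    .divide(new BigDecimal("1200"), 2, RoundingMode.HALF_UP);
        }

        // The original control flow (GO TO, fall-through paragraphs) often ends
        // up as a dispatch loop or state machine - the layer nobody wants to debug.
        public static void main(String[] args) {
            Lncalc00 prog = new Lncalc00();
            prog.wsBalance = new BigDecimal("10000.00");
            prog.wsRate = new BigDecimal("5.25");
            prog.p2000ComputeInterest();
            System.out.println("WS-INTEREST = " + prog.wsInterest);
        }
    }

The business logic survives, but every debugging session now has to cross both the generated Java and the COBOL it came from.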
It is possible that someone somewhere is doing greenfield COBOL development but I would seriously question their motivations.
Rewriting and expecting 100% feature-parity (and bug-parity, since any bugs/inconsistencies are most likely relied upon by now) is realistically impossible.
However, new banking/insurance startups proved you can build this stuff from scratch using modern tooling, so the migration path would be to create your own "competitor" and then move your customers onto it.
The problem I see is that companies that still run these legacy systems also have a legacy culture fundamentally incompatible with what's needed to build and retain a competent engineering team. Hell, there's probably also a lot of deadweight whose job is to make up for the shortcomings of the legacy system and who'd have every incentive to sabotage the migration/rebuild project.
And of course, if you start a bank today you'd do the whole cycle all over again: shiny new tech that, in a decade or two, is legacy nobody dares to touch. Because stuff like this is usually industry-wide: risk aversion translates into tech debt in the long term.
I suspect that the only thing that will cure this is for technology to stop being such a moving target. Once we reach that level we can maybe finally call it engineering, accept some responsibility (and liability) and professionalize. Until then this is how it will be.
Individual organizations can consciously choose to slow down. Which works for a while in terms of boosting quality and productivity. But over the long run they inevitably fall behind and an upstart competitor with a new business model enabled by new software technology eventually eats their lunch.
It is worth noting that we now have much better processes and tooling than software developers had in the 60s. Some COBOL systems predate the invention of SQL or database normalization (3NF, BCNF, etc.). Never mind the prevalence of unit testing and integration testing, automating those tests in CI, the idea of code coverage and the tooling to measure it, and so on. Programming languages have also come a long way in terms of allowing enforceable separation of concerns within a codebase, testability, refactorability, etc.
Sure, you can still write yourself into a codebase nobody wants to touch, especially in languages that aren't as good at some of the things I listed (say PHP, Python or JS). But it's now much easier to write codebases that can evolve, or can have parts swapped out for something new that fits new requirements (even if that new part is in a new language).
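As a minimal sketch of that safety net (assuming JUnit 5; InterestCalculator and its expected value are hypothetical), a characterization test pins the current behavior - bugs and all - so the implementation behind the interface can be swapped out without changing what callers observe:

    // Minimal sketch, assuming JUnit 5. InterestCalculator is a hypothetical
    // module; the point is that its observable behavior is pinned down, so the
    // implementation behind it can be rewritten (even in another language behind
    // the same interface) without fear.
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.math.BigDecimal;
    import java.math.RoundingMode;
    import org.junit.jupiter.api.Test;

    class InterestCalculator {
        BigDecimal monthlyInterest(BigDecimal balance, BigDecimal annualRatePct) {
            return balance.multiply(annualRatePct)
                    .divide(new BigDecimal("1200"), 2, RoundingMode.HALF_UP);
        }
    }

    class InterestCalculatorTest {
        @Test
        void monthlyInterestMatchesLegacyBehavior() {
            // Expected value taken from the current system's output, quirks and
            // all, so a refactor that changes behavior fails CI instead of prod.
            InterestCalculator calc = new InterestCalculator();
            assertEquals(new BigDecimal("43.75"),
                    calc.monthlyInterest(new BigDecimal("10000.00"),
                                         new BigDecimal("5.25")));
        }
    }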
Companies should think of their software the way automakers or aircraft manufacturers think of their platforms. Once new feature requests are piling up that are more and more awkward to bolt onto the old system, another department should already be designing the whole new frame and platform for the next decade. Constantly rolling over platforms at a steady pace prevents panic. When this breaks down, you get things like the 737 MAX.
For an example of this happening in a field, look at the glacial pace of advancement in theoretical physics over the last few decades compared to the 1900s. Or at the pace of development in physics in general in the centuries before that.
(1) Start with experienced programmers who know how to write code
(2) Have them establish a good culture: unit testing, end-to-end testing, code coverage requirements, CI (including running it on PRs), a sane external package policy, a single main branch, code reviews, etc.
(3) Make sure that experienced programmers have the final word over what code goes in, and enough time to review a large share of incoming PRs.
Then you can start hiring other programmers, and they will eventually produce good code (or they'll get frustrated with "old-timers not letting me do stuff" and leave). You can have amazing code which can be fearlessly refactored or upgraded. You could even let interns work on prod systems and not worry about them breaking anything (although it will take some time for their PRs to get merged...)
The critical step, of course, is (3)... If there are no experienced folks guiding the process, or if they have no time, or if they are overridden by management so the project can ship faster, then someone disables the coverage check or merges crappy PRs whose tests don't actually verify anything. And then the formerly nice project slowly starts to rot...
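To make that last failure mode concrete, here is a small hypothetical JUnit 5 sketch of the difference between a test that merely bumps the coverage number and one that actually verifies behavior; FeeSchedule and its numbers are invented for illustration:

    // Sketch of "coverage without verification", assuming JUnit 5. Both tests
    // execute the same code and count toward line coverage; only the second
    // would actually catch a regression. FeeSchedule is hypothetical.
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.math.BigDecimal;
    import java.math.RoundingMode;
    import org.junit.jupiter.api.Test;

    class FeeSchedule {
        BigDecimal wireFee(BigDecimal amount) {
            // Flat fee plus 0.1% of the transferred amount.
            return new BigDecimal("25.00")
                    .add(amount.multiply(new BigDecimal("0.001")))
                    .setScale(2, RoundingMode.HALF_UP);
        }
    }

    class FeeScheduleTest {
        @Test
        void looksLikeATestButVerifiesNothing() {
            // Green, bumps coverage, catches no bugs.
            new FeeSchedule().wireFee(new BigDecimal("1000.00"));
        }

        @Test
        void actuallyPinsTheBehavior() {
            // 25.00 + 1000.00 * 0.001 = 26.00
            assertEquals(new BigDecimal("26.00"),
                    new FeeSchedule().wireFee(new BigDecimal("1000.00")));
        }
    }

A reviewer with the final word (step 3) is exactly the person who rejects the first kind of test before it becomes the house style.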