127 points Anon84 | 13 comments
ufmace ◴[] No.38509082[source]
The article title is clickbaity, but the actual point is the proposal of using LLMs to translate large amounts of legacy COBOL systems to more modern languages like Java. Doesn't seem terribly useful to me. I expect you could get a 90% solution faster, but the whole challenge with these projects is how to get that last bit of correctness, and how to be confident enough in the correctness of it to actually use it in Production.

But then all of this has been known for decades. There are plenty of well-known techniques for how to do all that. If they haven't actually done it by now, it's a management problem, and no AI tech is going to fix that.
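The "last bit of correctness" has a concrete shape. One classic example (an illustrative sketch, not from the article; the function names are made up): COBOL money fields are fixed-point decimal (e.g. PIC 9(5)V99), and a mechanical translation that maps them onto binary floating point silently changes arithmetic results:

```python
from decimal import Decimal

# COBOL PIC 9(5)V99 fields are fixed-point decimal; a naive port to
# binary floating point changes the arithmetic, which is exactly the
# kind of last-10% bug that blocks a production cut-over.
def total_float(prices):
    # naive translation: binary floats
    return sum(float(p) for p in prices)

def total_decimal(prices):
    # faithful translation: decimal arithmetic, like the original COBOL
    return sum(Decimal(p) for p in prices)

prices = ["0.10"] * 10
assert total_float(prices) != 1.0                 # drifts below 1.0
assert total_decimal(prices) == Decimal("1.00")   # exact, matches the legacy system
```

An automated translator that gets this mapping wrong still "works" on most inputs, which is why the remaining correctness gap is the expensive part.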

replies(11): >>38509198 #>>38509418 #>>38509802 #>>38509995 #>>38510231 #>>38510273 #>>38510431 #>>38511157 #>>38511186 #>>38512486 #>>38512716 #
matthewdgreen ◴[] No.38509198[source]
How hard is it to actually learn COBOL? It seems like a fairly simple language to pick up, but maybe the idiomatic COBOL used in these legacy systems is particularly nasty for some reason.
replies(5): >>38509221 #>>38509476 #>>38509483 #>>38510105 #>>38510187 #
jacquesm ◴[] No.38510187[source]
COBOL is pretty easy to learn. The problem is that it is so full of archaic nonsense (less so with the more recent versions) that you will be tearing your hair out and wishing for something more modern.

COBOL's main value is in maintaining a pile of legacy codebases, mostly in fintech and insurance, that are so large and so old that rewriting them is an absolute no-go. These attempts at cross compiling are a way to get off the old toolchain but they - in my opinion - don't really solve the problem, instead they add another layer of indirection (code generation). But at least you'll be able to run your mangled output on the JVM for whatever advantage that gives you.

With some luck you'll be running a hypervisor that manages a bunch of containers that run multiple JVM instances each that run Java that was generated from some COBOL spaghetti that nobody fully understands. If that stops working I hope I will be far, far away from the team that has to figure out what causes the issue.

It is possible that someone somewhere is doing greenfield COBOL development but I would seriously question their motivations.

replies(2): >>38510508 #>>38512334 #
Nextgrid ◴[] No.38510508[source]
> that rewriting them is an absolute no-go

Rewriting and expecting 100% feature-parity (and bug-parity, since any bugs/inconsistencies are most likely relied upon by now) is realistically impossible.

However, new banking/insurance startups proved you can build this stuff from scratch using modern tooling, so the migration path would be to create your own "competitor" and then move your customers onto it.

The problem I see is that companies that still run these legacy systems also have a legacy culture fundamentally incompatible with what's needed to build and retain a competent engineering team. Hell, there's probably also a lot of deadweight whose jobs are to make up for the shortcomings of the legacy system and who'd have every incentive to sabotage the migration/rebuild project.

replies(3): >>38510763 #>>38511195 #>>38512426 #
1. jacquesm ◴[] No.38510763[source]
That happens, but what also happens is that everybody is painfully aware of the situation and they do the best they can. Just like you or I would.

And of course, if you start a bank today you'd do the whole cycle all over again, shiny new tech, that in a decade or two is legacy that nobody dares to touch. Because stuff like this is usually industry wide: risk aversion translates into tech debt in the long term.

I suspect that the only thing that will cure this is for technology to stop being such a moving target. Once we reach that level we can maybe finally call it engineering, accept some responsibility (and liability) and professionalize. Until then this is how it will be.

replies(3): >>38511884 #>>38512547 #>>38512694 #
2. nradov ◴[] No.38511884[source]
Why would software technology ever stop moving? To a first approximation it is unconstrained by physical reality (unlike other engineering disciplines) so I expect it will keep moving at roughly the same rate. Maybe even accelerate in some areas.

Individual organizations can consciously choose to slow down. Which works for a while in terms of boosting quality and productivity. But over the long run they inevitably fall behind and an upstart competitor with a new business model enabled by new software technology eventually eats their lunch.

replies(1): >>38514608 #
3. wongarsu ◴[] No.38512547[source]
> if you start a bank today you'd do the whole cycle all over again

It is worth noting that we now have much better processes and tooling than software developers had in the 60s. Some Cobol systems predate the invention of SQL or database normalization (3NF, BCNF, etc). Never mind the prevalence of unit testing and integration testing, automating those tests in CI, the idea of code coverage and the tooling to measure it, etc. Programming languages have also come a long way, in terms of allowing enforceable separation of concerns within a codebase, testability, refactorability, etc.

Sure, you can still write yourself into a codebase nobody wants to touch. Especially in languages that aren't as good at some of the things I listed (say PHP, Python or JS). But it's now much easier to write codebases that can evolve, or can have parts swapped out for something new that fits new requirements (even if that new part is now in a new language).
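To make the tooling point concrete, here is the kind of automated regression test that simply didn't exist as standard practice when those 60s-era systems were written (a toy sketch; the interest routine and its names are hypothetical):

```python
import unittest

# A hypothetical interest routine plus the kind of automated test that
# now runs in CI on every change; coverage tools then report which
# branches the tests actually exercised.
def monthly_interest(balance_cents: int, annual_rate_bp: int) -> int:
    """Integer cents and basis points avoid float drift."""
    return balance_cents * annual_rate_bp // (10_000 * 12)

class InterestTest(unittest.TestCase):
    def test_round_numbers(self):
        # $1,000.00 at 1.2% (120 bp) per year -> $1.00 per month
        self.assertEqual(monthly_interest(100_000, 120), 100)

    def test_zero_rate(self):
        self.assertEqual(monthly_interest(100_000, 0), 0)

if __name__ == "__main__":
    unittest.main()
```

The point isn't the routine itself but that the safety net around it is cheap, standardized, and enforceable in a way it wasn't decades ago.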

replies(1): >>38512607 #
4. jacquesm ◴[] No.38512607[source]
We have better processes but we don't necessarily have better programmers, and better tooling is no substitute for that.
replies(4): >>38512766 #>>38513424 #>>38515870 #>>38522829 #
5. noduerme ◴[] No.38512694[source]
Tech debt can come from risk aversion or from taking risks on new shiny things. I think you're right that as long as technology is a moving target, it's always going to be there. To me, the trick is not cornering yourself in a situation where your whole ecosystem is essentially abandoned, and not rewriting for the sake of chasing the latest craze. That means parallel re-development, from scratch, of all the existing features, on something like a 10- or 15-year cycle. You want to pick a technology you're certain won't sunset in the next 15 years (with upgrades and further development along the way, of course), then spend a couple years rewriting everything in parallel while still running your old system, test it in every way possible, then blue/green it. I've done this three times in my life for one company on the same piece of large business software.

Companies should think of their software the way automakers or aircraft manufacturers think of their platforms. Once new feature requests are piling up that are just more and more awkward to bolt onto the old system, you have another department that's already been designing a whole new frame and platform for the next decade. Constantly rolling at a steady pace prevents panic. Where this breaks down is where you get things like the 737 MAX.

replies(1): >>38513276 #
6. noduerme ◴[] No.38512766{3}[source]
One thing the parent's post suggests, though, is that we do have better standardization and interoperability. Data normalization is a problem that has largely been solved. So is reading and writing data at scale. Once you've gone to SQL, you presumably won't ever need to go to some other database language or drastically restructure your data in order to rewrite your backend or frontend next time. Similarly, there aren't fifty different schemes for serializing data anymore. JSON is "good enough", every language can now parse it and probably will for the next hundred years. So parts have become more interchangeable. The burden on experienced programmers is less, and it's easier for novice programmers with a shallower base of knowledge to work on pieces of a project even if they don't understand the whole thing. In this sense, tech has become less of a moving target.
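The "JSON is good enough" claim is easy to see in miniature (a minimal sketch; the record contents are made up):

```python
import json

# Any structure built from maps, lists, strings, and numbers survives a
# round trip through JSON, which is why it works as a lingua franca
# between backends, frontends, and languages.
record = {"account": "12345", "balance": "1034.50", "flags": ["active", "legacy"]}
wire = json.dumps(record)          # serialize to a plain string
assert json.loads(wire) == record  # any JSON parser, in any language, reads it back
```

Because every mainstream language ships a parser for this one format, swapping out a frontend or backend no longer means renegotiating the serialization layer.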
7. jacquesm ◴[] No.38513276[source]
That makes perfect sense. Extra points if you designed the system to be replaced in time.
replies(1): >>38514233 #
8. bradleyjg ◴[] No.38513424{3}[source]
We have significantly worse programmers, that’s what better languages and better processes enable. People that would have washed out when the only option was bit twiddling and pointer chasing can now be productive and add value. That’s what progress looks like.
9. noduerme ◴[] No.38514233{3}[source]
Hahah. The last one was a close call, since the entire front end of the system from 2009-2020 was a responsive single page app written in Actionscript 3, to replace the old PHP page system... but we saw the deadline looming about a year in advance and accelerated it.
10. tsimionescu ◴[] No.38514608[source]
Software technology moves when we figure out new ways of doing software that bring some kind of advantage. If no one is finding new ways to do software that have any purpose, technology will stop moving. Physical reality doesn't really have anything to do with it - we're limited by human ingenuity, and possibly by the mathematical space of algorithms (though that's likely to be much larger).

For an example of this happening in a field, look at the glacial pace of advancement in theoretical physics for the last few decades, compared to the 1900s. Or at the pace of development in physics in general in the centuries before.

replies(1): >>38514840 #
11. pharmakom ◴[] No.38514840{3}[source]
Software trends seem to repeat themselves as we forget the lessons learned a decade ago. It’s more like fashion, in that sense.
12. GoblinSlayer ◴[] No.38515870{3}[source]
You don't need better programmers to maintain a js project, you only need hireable programmers.
13. theamk ◴[] No.38522829{3}[source]
Tooling and process definitely help, I have seen it with my own eyes!

(1) Start with experienced programmers who know how to write code

(2) Have them establish good culture: unit testing, end-to-end testing, code coverage requirements, CI (including one on PRs), sane external package policy, single main branch, code reviews, etc...

(3) Make sure that experienced programmers have the final word over what code goes in, and enough time to review a large part of incoming PRs.

Then you can start hiring other programmers and they will eventually be producing good code (or they'll get frustrated with "old-timers not letting me do stuff" and leave). You can have amazing code which can be refactored fearlessly or upgraded. You could even let interns work on prod systems and not worry about breaking it (although they will take some time to merge their PRs...)

The critical step of course is (3)... If there are no experienced folks guiding the process, or if they have no time, or if they are overridden by management so the project can ship faster, then someone disables the coverage check or merges crappy PRs whose tests don't actually verify anything. And then the formerly-nice project slowly starts to rot...
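The enforcement half of step (3) can be partly mechanical. A toy sketch of the kind of coverage gate being described (the threshold and names are illustrative, not from any particular CI system):

```python
import sys

# Toy CI gate: fail the build when measured coverage drops below the
# agreed floor. Real setups wire this into the CI config, so lowering
# the bar requires a visible, conscious change rather than quiet neglect.
COVERAGE_FLOOR = 85.0

def gate(measured_coverage: float) -> int:
    """Return a process exit code: 0 passes the build, 1 fails it."""
    if measured_coverage < COVERAGE_FLOOR:
        print(f"coverage {measured_coverage:.1f}% is below floor {COVERAGE_FLOOR}%")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(gate(float(sys.argv[1])))
```

A gate like this only holds, as the comment says, while the experienced folks have the standing to keep it turned on.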