
127 points Anon84 | 79 comments
1. ufmace ◴[] No.38509082[source]
The article title is clickbaity, but the actual point is the proposal of using LLMs to translate large amounts of legacy COBOL systems to more modern languages like Java. Doesn't seem terribly useful to me. I expect you could get a 90% solution faster, but the whole challenge with these projects is how to get that last bit of correctness, and how to be confident enough in the correctness of it to actually use it in Production.

But then all of this has been known for decades. There are plenty of well-known techniques for how to do all that. If they haven't actually done it by now, it's a management problem, and no AI tech is going to fix that.

replies(11): >>38509198 #>>38509418 #>>38509802 #>>38509995 #>>38510231 #>>38510273 #>>38510431 #>>38511157 #>>38511186 #>>38512486 #>>38512716 #
2. matthewdgreen ◴[] No.38509198[source]
How hard is it to actually learn COBOL? It seems like a fairly simple language to pick up, but maybe the idiomatic COBOL used in these legacy systems is particularly nasty for some reason.
replies(5): >>38509221 #>>38509476 #>>38509483 #>>38510105 #>>38510187 #
3. the_only_law ◴[] No.38509221[source]
Learning COBOL is the easy part. My understanding is the hard part is becoming familiar with the insanely expensive, proprietary mainframe platform that you’ll find in most COBOL work. I know IBM has some sort of self-training material, but I’m not sure it’s enough to go from zero to qualified. Most work I see in the area seems to want established domain experts, not hackers who learn just enough to be dangerous.
replies(3): >>38509507 #>>38511265 #>>38516948 #
4. lozenge ◴[] No.38509418[source]
Like in the video... they are running the COBOL and the Java code against enough test cases and comparing their behaviour.
5. vbezhenar ◴[] No.38509476[source]
The language is easy; spaghetti code written without any discipline 60 years ago and modified in haste ever since is hard.
replies(3): >>38510023 #>>38510254 #>>38510732 #
6. SoftTalker ◴[] No.38509483[source]
Not hard. It's a bit old-fashioned and sort of verbose but it's nothing difficult especially if you already know any other imperative languages. My first job out of school in the early 1990s was with one of the "big" consulting firms. We learned COBOL in a four-week boot camp and were then dispatched to a client site to write code.
replies(1): >>38511476 #
7. briHass ◴[] No.38509507{3}[source]
Not really much different from today: sure, you can 'learn' a new language in a few days, but you won't know the build tooling, deployment strategies, environment specifics, convention over config, and typical patterns/practices that will allow others to understand what you wrote.
8. drewcoo ◴[] No.38509802[source]
> I expect you could get a 90% solution faster, but the whole challenge with these projects is how to get that last bit of correctness, and how to be confident enough in the correctness of it to actually use it in Production.

Is the goal to get working systems or to generate support activity? Or to tank the systems and replace them?

> it's a management problem, and no AI tech is going to fix that

What if we replace middle managers with LLMs?

replies(1): >>38511056 #
9. IshKebab ◴[] No.38509995[source]
I'm pretty sure there's already a system to transpile COBOL to Java without resorting to LLMs.
replies(3): >>38510301 #>>38511197 #>>38516136 #
10. timbit42 ◴[] No.38510023{3}[source]
COBOL code I saw in the 80's and 90's wasn't spaghetti code. COBOL is pretty structured. If it's not, that would be a prior step in porting it to another language.
11. setr ◴[] No.38510105[source]
I don’t think cobol is that difficult; the problem is that it runs on the mainframe, and that’s a whole different beast. Everything in the mainframe world is both expensive and vendor-supplied, so it’s difficult to learn outside the company, and sufficiently proprietary that you’re probably not going to transfer it well elsewhere.

The language itself also encourages troublesome patterns: all variables are essentially globally defined and untyped, and the physical ordering of procedures matters because of things like PERFORM A THROUGH B (which executes all logic found between paragraph A and paragraph B).

replies(1): >>38510213 #
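A minimal Java sketch of the pattern described above (hypothetical names, not the output of any real tool): in a mechanical translation, every paragraph between A and B becomes a call at the PERFORM site, and all WORKING-STORAGE variables become shared mutable fields.

```java
// Hypothetical shape of mechanically transpiled COBOL: paragraphs become
// methods, all WORKING-STORAGE variables become shared static fields, and
// PERFORM A THROUGH B runs every paragraph between A and B in source order.
public class PerformThru {
    // WORKING-STORAGE SECTION: effectively global, shared by every paragraph
    static int wsCounter = 0;
    static int wsTotal = 0;

    // PERFORM 100-INIT THROUGH 300-REPORT executes all three paragraphs,
    // including 200-ACCUMULATE, which is never named at the call site.
    static void perform100Thru300() {
        p100Init();
        p200Accumulate();   // runs only because it sits between 100 and 300
        p300Report();
    }

    static void p100Init()       { wsCounter = 0; wsTotal = 0; }
    static void p200Accumulate() { for (wsCounter = 1; wsCounter <= 5; wsCounter++) wsTotal += wsCounter; }
    static void p300Report()     { System.out.println("TOTAL: " + wsTotal); }

    public static void main(String[] args) {
        perform100Thru300();
    }
}
```

The call site never names p200Accumulate: it runs only because of where it sits in the file, so reordering or inserting paragraphs silently changes what a PERFORM THROUGH executes.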
12. jacquesm ◴[] No.38510187[source]
COBOL is pretty easy to learn. The problem is that it is so full of archaic nonsense (less so with the more recent versions) that you will be tearing your hair out and wishing for something more modern.

COBOL's main value is in maintaining a pile of legacy codebases, mostly in fintech and insurance that are so large and so old that rewriting them is an absolute no-go. These attempts at cross compiling are a way to get off the old toolchain but they - in my opinion - don't really solve the problem, instead they add another layer of indirection (code generation). But at least you'll be able to run your mangled output on the JVM for whatever advantage that gives you.

With some luck you'll be running a hypervisor that manages a bunch of containers that run multiple JVM instances each that run Java that was generated from some COBOL spaghetti that nobody fully understands. If that stops working I hope I will be far, far away from the team that has to figure out what causes the issue.

It is possible that someone somewhere is doing greenfield COBOL development but I would seriously question their motivations.

replies(2): >>38510508 #>>38512334 #
13. jacquesm ◴[] No.38510213{3}[source]
PERFORM ... UNTIL ...

The reason for that is that originally the 'WITH TEST BEFORE' bit wasn't there, so what looked like a test performed after the body was actually evaluated up front, and would just exit the loop immediately without executing the loop body at all.

So the syntax totally wrong-footed you into believing that your loop body would always be executed at least once, but it never was...
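The trap maps directly onto the difference between Java's while and do/while: COBOL's PERFORM ... UNTIL checks its condition before each iteration by default (WITH TEST BEFORE), even though the trailing UNTIL clause reads like a post-test. A sketch of the two behaviors:

```java
// COBOL's PERFORM ... UNTIL defaults to WITH TEST BEFORE: the condition is
// evaluated before each pass, so a condition that is already true means the
// body never runs, despite the UNTIL clause reading like a post-test.
public class PerformUntil {
    // WITH TEST BEFORE (the default): behaves like Java's while loop.
    static int testBefore(int counter) {
        int bodyRuns = 0;
        while (!(counter >= 3)) {   // PERFORM ... UNTIL COUNTER >= 3
            counter++;
            bodyRuns++;
        }
        return bodyRuns;
    }

    // WITH TEST AFTER: behaves like do/while; the body always runs once.
    static int testAfter(int counter) {
        int bodyRuns = 0;
        do {
            counter++;
            bodyRuns++;
        } while (!(counter >= 3));
        return bodyRuns;
    }

    public static void main(String[] args) {
        System.out.println(testBefore(5)); // condition already true: body skipped
        System.out.println(testAfter(5));  // body executed once regardless
    }
}
```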

14. rubyfan ◴[] No.38510231[source]
> If they haven't actually done it by now, it's a management problem, and no AI tech is going to fix that.

This. Further, it’s the result of years of disincentivizing the roles that would support this business-critical logic or port it to something else. I worked at a large insurer where they slowly laid off mainframe talent over the last decade. Those mainframe salaries ran counter to the narrative they were promoting around cloud being the future. Unfortunately, in their haste to create optics they failed to migrate any of the actual code or systems from mainframe to cloud.

replies(2): >>38510542 #>>38513728 #
15. rubyfan ◴[] No.38510254{3}[source]
I don’t buy that the vintage of code is an indicator of its quality.
16. itsoktocry ◴[] No.38510273[source]
>If they haven't actually done it by now, it's a management problem

How could you possibly know that? Do you think businesses have so few problems to deal with that moving off of Cobol should always be the priority, even if it functions and they are managing it?

It's not possible to resolve everything you have to do all at once.

17. rubyfan ◴[] No.38510301[source]
Judging by the branding, this is just an attempt to capitalize on the mindshare around LLMs and GPT. Recall that about 5-8 years ago they tried to sell the notion of huge cost savings from replacing humans with their Jeopardy champion, and tech executives ate it up for a while.
replies(1): >>38512515 #
18. victor106 ◴[] No.38510431[source]
> Skyla Loomis, IBM’s Vice President of IBM Z Software, adds, “But you have to remember that this is a developer assistant tool. It's AI assisted, but it still requires the developer. So yes, the developer is involved with the tooling and helping the customers select the services.” Once the partnership between man and machine is established, the AI steps in and says, ‘Okay, I want to transform this portion of code.’ The developer may still need to perform some minor editing of the code that the AI provides, Loomis explains. “It might be 80 or 90 percent of what they need, but it still requires a couple of changes. It’s a productivity enhancement—not a developer replacement type of activity.”
replies(2): >>38510757 #>>38510835 #
19. Nextgrid ◴[] No.38510508{3}[source]
> that rewriting them is an absolute no-go

Rewriting and expecting 100% feature-parity (and bug-parity, since any bugs/inconsistencies are most likely relied upon by now) is realistically impossible.

However, new banking/insurance startups proved you can build this stuff from scratch using modern tooling, so the migration path would be to create your own "competitor" and then move your customers onto it.

The problem I see is that companies that still run these legacy systems also have a legacy culture fundamentally incompatible with what's needed to build and retain a competent engineering team. Hell, there's probably also a lot of deadweight whose jobs are to make up for the shortcomings of the legacy system and who'd have every incentive to sabotage the migration/rebuild project.

replies(3): >>38510763 #>>38511195 #>>38512426 #
20. credit_guy ◴[] No.38510542[source]
> it's a management problem, and no AI tech is going to fix that.

There are no absolute "management problems". Something that is a management problem when the effort required is 1000 man-years, may stop being so when it's only 100 man-days.

replies(4): >>38511912 #>>38512414 #>>38514071 #>>38516240 #
21. JackFr ◴[] No.38510732{3}[source]
That does not match any COBOL I’ve seen: straightforward, well documented, and comprehensively specified and tested.

When we needed changes (this was back office clearing stuff for a bank) they wouldn’t even talk to us until we specced out the changes we wanted in writing and often the specs we submitted would come back with requests for clarification. This was like the opposite of agile, but I don’t recall any bugs or defects making it into production.

replies(1): >>38511913 #
22. koenigdavidmj ◴[] No.38510757[source]
That seems to be the spot humans are weakest at—reviewing something where we think the computer did a good job 90% of the time, but quickly noticing when something goes wrong. Similar to level 3 self-driving—requiring full attention, able to instantly snap into full unassisted driving.
replies(1): >>38510941 #
23. jacquesm ◴[] No.38510763{4}[source]
That happens, but what also happens is that everybody is painfully aware of the situation and they do the best they can. Just like you or I would.

And of course, if you start a bank today you'd do the whole cycle all over again: shiny new tech that in a decade or two is legacy that nobody dares to touch. Because stuff like this is usually industry wide: risk aversion translates into tech debt in the long term.

I suspect that the only thing that will cure this is for technology to stop being such a moving target. Once we reach that level we can maybe finally call it engineering, accept some responsibility (and liability) and professionalize. Until then this is how it will be.

replies(3): >>38511884 #>>38512547 #>>38512694 #
24. ◴[] No.38510835[source]
25. ◴[] No.38510941{3}[source]
26. dun44 ◴[] No.38511056[source]
90% of middle management could be replaced with simple shell scripts; LLM would be a vast overkill.
27. keep_reading ◴[] No.38511157[source]
I'll never get tired of these overly confident armchair expert comments on HN
28. Cthulhu_ ◴[] No.38511186[source]
AI is just one tech; there have been "language X to Y" converters for a long time, including Cobol to Java. To the point where the output will compile and at least seem to do the same thing, but... that's the thing with these codebases: verifying that it does the same is the challenge.

I have 0 experience in this field, but I'm willing to take a guess that the majority of a Cobol to X developer's work is not (re)writing code, but figuring out what the original code does, what it's supposed to do, and verify that the new code does the same thing. More testing than programming.
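That verification step can be sketched as differential testing: drive the legacy behavior and the ported behavior through the same interface over many inputs and count divergences. The sketch below is illustrative only; the two lambdas stand in for the old system and its translation.

```java
import java.util.function.IntUnaryOperator;

// Differential testing sketch: feed the legacy and the ported implementation
// identical inputs and flag any divergence. The lambdas are stand-ins for a
// COBOL program driven via its runtime and its Java translation.
public class DifferentialCheck {
    static long countMismatches(IntUnaryOperator legacy, IntUnaryOperator ported,
                                int from, int to) {
        long mismatches = 0;
        for (int input = from; input <= to; input++) {
            if (legacy.applyAsInt(input) != ported.applyAsInt(input)) {
                mismatches++;  // a real harness would log the input and both outputs
            }
        }
        return mismatches;
    }

    public static void main(String[] args) {
        IntUnaryOperator legacy = n -> n * n;          // stand-in "old" behavior
        IntUnaryOperator faithful = n -> n * n;        // correct port
        IntUnaryOperator buggy = n -> n * Math.abs(n); // diverges on negative inputs

        System.out.println(countMismatches(legacy, faithful, -100, 100)); // 0
        System.out.println(countMismatches(legacy, buggy, -100, 100));    // 100
    }
}
```

The hard part the comment points at is exactly what this sketch glosses over: for real systems the "inputs" are decades of file layouts, batch schedules, and edge cases nobody wrote down.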

29. Cthulhu_ ◴[] No.38511195{4}[source]
This is the way, however, integrating with legacy systems then becomes a challenge; a bank's software is never isolated, it has to interface with others, cough up reports for the authorities, etc etc etc.

The green field isn't everything.

30. grammie ◴[] No.38511197[source]
Heirloom Computing, where I am CTO, does this using transpilers, with 100% automated transpilation. Using LLMs for an entirely deterministic domain borders on the insane. This is just marketing BS, but we get asked about it, and what our plan is to counter it, all the time. Explaining that using gen-AI and LLMs for what is a well-understood compiler/transpiler problem that is already solved just seems to be too difficult for some people to understand.
replies(2): >>38511253 #>>38512820 #
31. IshKebab ◴[] No.38511253{3}[source]
In fairness I imagine an LLM could maybe transpile to more idiomatic code. For example when you transpile FORTRAN to C you get a load of +1s and -1s everywhere to deal with FORTRAN's 1-based indexing. An LLM could avoid that.

But I agree, it doesn't make sense to risk bugs just for that.

replies(1): >>38511603 #
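A hedged illustration of that point in Java terms (illustrative code, not real transpiler output): a mechanical translation of 1-based array code carries its "- 1" adjustments into the 0-based target, where an idiomatic rewrite needs none.

```java
// Illustrative only: the off-by-one noise a mechanical translation of 1-based
// (FORTRAN-style) array code drags into a 0-based language, next to the
// idiomatic version a human (or, plausibly, an LLM) would write instead.
public class IndexNoise {
    // Mechanical: loop bounds preserved from the 1-based source, with "- 1"
    // pasted onto every subscript.
    static int sumMechanical(int[] a) {
        int total = 0;
        for (int i = 1; i <= a.length; i++) {
            total = total + a[i - 1];
        }
        return total;
    }

    // Idiomatic: same behavior, no index gymnastics.
    static int sumIdiomatic(int[] a) {
        int total = 0;
        for (int value : a) total += value;
        return total;
    }

    public static void main(String[] args) {
        int[] data = {2, 4, 6};
        System.out.println(sumMechanical(data)); // 12
        System.out.println(sumIdiomatic(data));  // 12
    }
}
```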
32. kochbeck ◴[] No.38511265{3}[source]
It’s not that the mainframe is hard to learn. In fact, the environment is pretty easy to understand once you get past the archaic naming (but let’s not kid ourselves: on the POSIX side we’re still running t[ape]ar[chive] and other archaic tools too).

In a way, the ease IS the problem: the runtime environment for COBOL (and other stuff on the mainframe) assumes that the underlying platform and OS deal with the really hard stuff like HA and concurrent data access and resource cost management. Which, on the mainframe, they do.

Now, contrast that with doing the same thing in, say, a Linux container on AWS. From the stock OS, can you request a write that guarantees lockstep execution across multiple cores and cross-checks the result? No. Can you request multisite replication of the action and verified synchronous on-processor execution (not just disk replication) at both sites such that your active-active multisite instance is always in sync? No. Can you assume that anything written will also stream to tape / cold storage for an indelible audit record? No. Can you request additional resources from the hypervisor that cost more money from the application layer and signal the operator for expense approval? No. (Did I intentionally choose features that DHT technology could replace one day? Yes, I did, and thanks for noticing.)

On the mainframe, these aren’t just OS built-ins. They’re hardware built-ins. Competent operators know how to both set them up and maintain them such that application developers and users never even have to ask for them (ideally). Good shops even have all the runtime instrumentation out there too—no need for things like New Relic or ServiceNow. Does it cost omg so much money? Absolutely. Omg you could hire an army for what it costs. But it’s there and has already been working for decades.

God knows it’s not a panacea—if I never open another session of the 3270 emulator, it’ll be too soon. And a little piece of me died inside every time I got dropped to the CICS command line. And don’t even get me started on the EBCDIC codepage.

Folks are like, “But wait, I can do all of that in a POSIX environment with these modern tools. And UTF-8 too dude. Stop crying.” Yup, you sure can. I’ve done it too. But when we’re talking about AI lifting and shifting code from the mainframe to a POSIX environment, the 10% it can’t do for you is… all of that. It can’t make fundamental architectural decisions for you. Because AI doesn’t (yet) have a way to say, “This is good and that is bad.” It has no qualitative reasoning, nor anticipatory scenario analysis, nor decision making framework based on an existing environment. It’s still a ways away from even being able to say, “If I choose this architecture, it’ll blow the project budget.” And that’s a relatively easy, computable guardrail.

If you want to see a great example of someone who built a whole-body architectural replacement for a big piece of the mainframe, check out Fiserv’s Finxact platform. In this case, they replaced the functionality (but not the language) of the MUMPS runtime environment rather than COBOL, but the theory is the same. It took them 3 companies to get it right. More than $100mm in investment. But now it has all the fire-and-forget features that banks expect on the mainframe. Throw it a transaction entry, and It Just Works(tm).

And Finxact screams on AWS which is the real miracle because, if you’ve only ever worked on general-purpose commodity hardware like x86-based Linux machines, you have no clue how much faster purpose-built transaction processors can be.

You know that GPGPU thing you kids have been doing lately? Imagine you’d been working on that since the 1960s, and the competing technology had access to all the advances you had but had zero obligation to service workloads other than the ones it was meant for. That’s the mainframe. You’re trying to compete with multiple generations of very carefully tuned muscle memory PLUS every other tech advancement that wasn’t mainframe-specific PLUS it can present modern OSes as a slice of itself to make the whole thing more approachable (like zLinux) PLUS, just in case you get close to beating it, it has the financial resources of half the banks, brokerages, transportation companies, militaries, and governments in the world to finance it. Oh, and there’s a nearly two-century-old company with a moral compass about 1% more wholesome than the Devil’s, whose entire existence rests on keeping a mortal lock on this segment of systems, and which has received either the first- or second-most patents of any company in the world every year for decades.

It’s possible to beat but harder than people make it out to be. It makes so many of the really hard architectural problems “easy” (for certain definitions of the word easy that do not disallow for “and after I spin up a new instance of my app, I want to drink poison on the front lawn of IBM HQ while blasting ‘This Will End in Tears’ because the operator console is telling me to buy more MIPs but my CIO is asking when we can migrate this 40-year old pile of COBOL and HLASM to the cloud”).

Mainframes aren’t that hard. Nearly everyone who reads HN would be more than smart enough to master the environment, including the ancient languages and all the whackado OS norms like simulating punchcard outputs. But they’re also smart enough to not want to. THAT is the problem that makes elimination of the mainframe intractable. The world needs this level of built-in capability, but you have to be a bit nuts to want to touch the problem.

I have been to this hill. I can tell you I am not signing up to die on it, no matter how valuable it would be if we took the hill.

replies(3): >>38511861 #>>38514038 #>>38516855 #
33. foobarian ◴[] No.38511476{3}[source]
Haha that explains a lot :-)
34. dun44 ◴[] No.38511603{4}[source]
You could do it with a trivial C macro instead.
replies(2): >>38512172 #>>38514564 #
35. hnlmorg ◴[] No.38511861{4}[source]
Well said! This echoes my, admittedly somewhat limited, experience as well.

I remember one mainframe I supported: there was an explosion on the same block which took out most of the buildings in the area. It was bad enough that the building which housed the mainframe was left derelict. But that mainframe chugged along like nothing happened. I can't remember what hardware or TSS it was running, but I would guarantee that none of the platforms I've supported since would have fared nearly as well (though I did have some SPARC boxes in one company that survived more than 10 years of continual use with zero downtime -- they were pretty special machines too).

36. nradov ◴[] No.38511884{5}[source]
Why would software technology ever stop moving? To a first approximation it is unconstrained by physical reality (unlike other engineering disciplines) so I expect it will keep moving at roughly the same rate. Maybe even accelerate in some areas.

Individual organizations can consciously choose to slow down. Which works for a while in terms of boosting quality and productivity. But over the long run they inevitably fall behind and an upstart competitor with a new business model enabled by new software technology eventually eats their lunch.

replies(1): >>38514608 #
37. hobs ◴[] No.38511912{3}[source]
Given that the decision would shrink a kingdom or put someone powerful out of a job, it cannot be solved by AI making it easier.
replies(1): >>38512099 #
38. hnlmorg ◴[] No.38511913{4}[source]
This was my experience too.

Modern software engineers would hate that kind of red tape because we've been conditioned to want shorter feedback loops. Heck, I hated it back then too and I wasn't even accustomed to seeing my results instantly like I am now. It takes a special kind of person to enjoy the laborious administrative overhead of writing detailed specs before you write even a single line of code.

replies(1): >>38512991 #
39. janalsncm ◴[] No.38512099{4}[source]
That kingdom is shrinking already as people retire. Converting COBOL into Java means your kingdom can expand.
40. wokkel ◴[] No.38512172{5}[source]
I have the pleasure of supporting an RPG (System i) codebase transpiled to Java. Shit that's come up: almost everything is a global. State machines are used for CRUD logic, which is implicit in the RPG runtime and explicit in the Java codebase. Magic constants all over the place. Magic blobs of screen-configuration mapping. Transpiling, and then directly supporting the transpiled codebase, is basically masochism.
41. phasetransition ◴[] No.38512334{3}[source]
Son of a COBOL dev... All this virtualization mess, minus the extra Java layer, started back in the 90s, courtesy of Unisys. I remember my dad pulling his hair out when I was in High School, though I did not understand why back then.
replies(1): >>38512622 #
42. ahejfjarjnt ◴[] No.38512414{3}[source]
It is not that difficult a problem code wise. It's a very difficult problem culturally though, in that the people that know mainframes and also C/C++ or Java or whatnot and could help do the migration slowly and safely and mostly in-place, are not the people that want to or are able to touch the COBOL. And outside of the systems programming crowd (who mostly don't do COBOL), the applications people at a lot of mainframe shops tend to wind up being heavily siloed, often attached directly to a particular business unit, with one foot in the Computer Science/IT fields and one foot in Finance. As in, they might literally only have ever written COBOL, and might not understand the high level architecture well enough to actually reconstruct anything. It would be like working on a single app that gets packaged in a docker container your whole career, and then somebody comes and starts asking questions about how you would reimplement Kubernetes and container runtimes. That's really how a system like CICS (where a lot of application developer code ends up, and is mostly transparent to them) actually functions. It was containers way before they were called containers, and usually involved some pretty strict restrictions imposed on the runtime environment.

So you've got a situation at a lot of places where you'd need to replace 50%+ of your staff if you wanted to convert to modern tooling, while at the same time it is harder and harder to replace the staff that leave and know the old set of tools. That cannot continue indefinitely. And until you cross some invisible threshold in the number of people that are comfortable in a modern software development environment, things can continue like they are for a long time.

Ultimately this is driven by the strange (for the tech industry) demographics of mainframe teams, in that they are often still dominated by people that entered the industry before microcomputers were ubiquitous. They may only know "the mainframe way" to do things, because they entered the industry the old-school way by stacking tapes in a data center or moving up from accounting, back before CS degrees and LeetCode and all that. It's only like that in mainframe shops because everywhere else has had massive growth (as in from 0 -> however many programmers they have now) and skews much younger as a result of computing not even having been adopted in any significant way until (for example) the late 80s/early 90s.

It's this cultural aspect of Mainframes (due to the history of being the oldest computing platform still running in production) that remains mostly unexamined and is not well understood by those on the outside. These are unique challenges that mostly don't have to do with the well known and arguably solved problem of "how can I convert some COBOL code into logically equivalent <any other language> code".

One final point is that COBOL is ruthlessly efficient (equivalent in speed to C, sometimes with more efficient use of memory for particular kinds of programs it is specialized for) in a way Java is not. Java is only popular on mainframes due to the licensing advantages IBM was forced to give out due to the much higher resource usage, and because their JVM is actually a very good implementation and well-liked by people on non-mainframe teams. If everything starts moving to Java though, I bet those licensing advantages probably go away. So I think more needs to be invested in C/C++ and possibly Go, rather than Java, at least initially. It is possible to write both of those portably and not suffer the huge performance penalty implicit in moving to a Java stack. I suppose this in particular may just be due to me being several years removed from web development, and some of the mainframe attitudes towards code have started to rub off on me.

43. taneq ◴[] No.38512426{4}[source]
If you need 100% parity, that’s a transpiler job.
44. kolinko ◴[] No.38512486[source]
As far as I understand, the problem with Cobol specifically is that the code is riddled with gotos and global variables - and untangling all that mess is the real issue, not converting it into Java itself.

Using traditional algorithms you run into exponential complexity very fast. You also need a human's ability to figure out which new abstractions to create; otherwise you will end up with code that is just as difficult to maintain as the original.

replies(1): >>38513933 #
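One concrete way to see the untangling problem (a hypothetical sketch, not any particular tool's output): a target language without goto forces a mechanical translator to flatten jump-heavy control flow into a switch-driven state machine, which preserves behavior but destroys structure.

```java
// Hypothetical sketch of goto-elimination by state machine: each labeled
// paragraph becomes a case, and every GO TO becomes an assignment to `next`.
// Behavior is preserved; readability is not.
public class GotoTangle {
    // Computes the sum 1..limit the way jump-riddled legacy code might.
    static int sumViaStateMachine(int limit) {
        int i = 0, total = 0;
        int next = 100;                 // "GO TO 100-INIT"
        while (next != 0) {
            switch (next) {
                case 100:               // 100-INIT
                    i = 1; total = 0;
                    next = (limit >= 1) ? 200 : 300; break;
                case 200:               // 200-LOOP
                    total += i; i++;
                    next = (i <= limit) ? 200 : 300; break;
                case 300:               // 300-DONE
                    next = 0; break;    // fall out: STOP RUN
                default:
                    throw new IllegalStateException("unknown label " + next);
            }
        }
        return total;
    }

    // The structured version a human would untangle it into.
    static int sumStructured(int limit) {
        int total = 0;
        for (int i = 1; i <= limit; i++) total += i;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumViaStateMachine(10)); // 55
        System.out.println(sumStructured(10));      // 55
    }
}
```

Recovering sumStructured from sumViaStateMachine is exactly the abstraction-finding work the comment says needs a human: the state machine is behaviorally correct and just as unmaintainable as the original.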
45. kolinko ◴[] No.38512515{3}[source]
Well, they essentially had the same idea as OpenAI with GPT; they just failed to really build it because transformers hadn't been invented yet.

But their business cases were similar to what we see now with LLMs.

46. wongarsu ◴[] No.38512547{5}[source]
> if you start a bank today you'd do the whole cycle all over again

It is worth noting that we now have much better processes and tooling than software developers had in the 60s. Some Cobol systems predate the invention of SQL or database normalization (3NF, BCNF, etc). Never mind the prevalence of unit testing and integration testing, automating those tests in CI, the idea of code coverage and the tooling to measure it, etc. Programming languages have also come a long way, in terms of allowing enforceable separation of concerns within a codebase, testability, refactorability, etc.

Sure, you can still write yourself into a codebase nobody wants to touch. Especially in languages that aren't as good in some of the things I listed (say PHP, Python or JS). But it's now much easier to write codebases that can evolve, or can have parts swapped out for something new that fits new requirements (even if that new part is now in a new language)

replies(1): >>38512607 #
47. jacquesm ◴[] No.38512607{6}[source]
We have better processes but we don't necessarily have better programmers, and better tooling is no substitute for that.
replies(4): >>38512766 #>>38513424 #>>38515870 #>>38522829 #
48. jacquesm ◴[] No.38512622{4}[source]
I think you get it now though. I've seen this whole industry up close for the last 40 years or so, and it's absolutely incredible how we went from a machine with 32 MB of RAM and 300 MB of storage being sufficient to serve 1,400 branch offices of a bank to a phone with a very large multiple of that which can barely serve a single user.
replies(2): >>38514201 #>>38514543 #
49. noduerme ◴[] No.38512694{5}[source]
Tech debt can come from risk aversion or from taking risks on new shiny things. I think you're right that as long as technology is a moving target, it's always going to be there. To me, the trick is not cornering yourself in a situation where your whole ecosystem is essentially abandoned, and not rewriting for the sake of chasing the latest craze. That means parallel re-development, from scratch, of all the existing features, on something like a 10- or 15-year cycle. You want to pick a technology you're certain won't sunset in the next 15 years (with upgrades and further development along the way, of course), then spend a couple of years rewriting everything in parallel while still running your old system, test it in every way possible, then blue/green it. I've done this three times in my life for one company on the same piece of large business software.

Companies should think of their software the way automakers or aircraft manufacturers think of their platforms. Once new feature requests are piling up that are just more and more awkward to bolt onto the old system, you have another department that's already been designing a whole new frame and platform for the next decade. Constantly rolling at a steady pace prevents panic. Where this breaks down is where you get things like the 737 MAX.

replies(1): >>38513276 #
50. rodgerd ◴[] No.38512716[source]
I once did some work looking into whether I could help move some subsystems off a zOS system with the help of an HP pitch that was a combo of auto-translation plus contractors checking. It was an attractive idea. I've also looked at CICS/COBOL environments that run on *ix systems, with a similar view.

The problem is that once you start working through your system, it's a lot harder than people making this sort of pitch would like you to believe. People writing CICS/COBOL now will likely be writing well-structured code with sensible names, taking advantage of threading and with proper interfaces between different programs. They're shipping data on and off the mainframe with maybe REST maybe SOAP maybe MQ. They're storing their data in DB2.

But people writing the parts of the codebase 20 years ago were still regularly dropping down to 390 assembler to get it to run fast enough and, guess what, that code is still there and still being extended. Maybe they were using DB2, maybe they were using VSAM files. They were using maybe MQ, maybe SNA, maybe TCP sockets with bespoke protocols to interact with the outside world. Programs might have just been talking to each other by banging on memory regions.

If they were working in the 80s and 90s, they had all that, but they were probably experimenting with 4GLs that promised to let anyone build programs. Some of those 4GLs are still around, probably not running CICS. And there was a lot of assembler. Maybe some non-CICS workloads. Passing things around via memory regions would be as often as not.

Oh, and the people who understood any of the business processes that got turned into code are long, long gone. They got laid off to justify the computerisation. The only people who know why things happen a particular way are the people who wrote the code, if they're still alive and working for you.

And so on and so forth. The reason mainframe code bases are hard to pick apart is that they're 50 years of different languages and runtimes and databases and performance tradeoffs and opinions about what good practice is. This is hard. An "AI" COBOL to Java translator isn't 80 or 90% of the job, and honestly IBM should be embarrassed to be the ones suggesting that it is.

51. noduerme ◴[] No.38512766{7}[source]
One thing the parent's post suggests, though, is that we do have better standardization and interoperability. Data normalization is a problem that has largely been solved. So is reading and writing data at scale. Once you've gone to SQL, you presumably won't ever need to go to some other database language or drastically restructure your data in order to rewrite your backend or frontend next time. Similarly, there aren't fifty different schemes for serializing data anymore. JSON is "good enough", every language can now parse it and probably will for the next hundred years. So parts have become more interchangeable. The burden on experienced programmers is less, and it's easier for novice programmers with a shallower base of knowledge to work on pieces of a project even if they don't understand the whole thing. In this sense, tech has become less of a moving target.
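A trivial sketch of the point about JSON as a lingua franca (the record fields here are made up for illustration): structured data round-trips through plain text that any modern language can parse, with no bespoke serialization scheme involved.

```python
import json

# A record that, decades ago, might have needed a custom wire format.
record = {"account": "12345", "balance": 1034.50, "currency": "USD"}

# Serialize to JSON text that any modern language can consume...
payload = json.dumps(record)

# ...and parse it back with no loss of structure.
assert json.loads(payload) == record
```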
52. tootie ◴[] No.38512820{3}[source]
It's almost sad. Watson defeated Ken Jennings at Jeopardy 12 years ago and today IBM are nowhere in the AI race. They bet the farm on the exact right domain ahead of the competition and still failed.
replies(1): >>38514245 #
53. nunez ◴[] No.38512991{5}[source]
Kind of reminds me of taking pure CS101 (i.e. no programming language; just theory and sacrifice)
54. jacquesm ◴[] No.38513276{6}[source]
That makes perfect sense. Extra points if you designed the system to be replaced in time.
replies(1): >>38514233 #
55. bradleyjg ◴[] No.38513424{7}[source]
We have significantly worse programmers, that’s what better languages and better processes enable. People that would have washed out when the only option was bit twiddling and pointer chasing can now be productive and add value. That’s what progress looks like.
56. GenerWork ◴[] No.38513728[source]
Did the remaining mainframers demand a higher salary to stay on, or did the insurance company have to hire outside contractors at exorbitant rates to migrate everything over?
replies(2): >>38514695 #>>38515997 #
57. dzhiurgis ◴[] No.38513933[source]
Aren’t global variables at some point a requirement for performance? Or are there patterns that can do both?
replies(2): >>38513966 #>>38513968 #
58. ◴[] No.38513966{3}[source]
59. vardump ◴[] No.38513968{3}[source]
Global variables usually hinder performance on modern hardware. They make multithreading harder and (sometimes) raise the probability of false sharing.
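A minimal sketch of the pattern being described (all names here are illustrative): rather than having every thread hammer one global accumulator, each worker keeps a thread-local sum and writes once to its own slot, with a single merge at the end. In CPython the GIL masks the hardware cost, so this only illustrates the structure, not a benchmark.

```python
import threading

# Each worker accumulates locally; no shared mutable global is updated
# inside the hot loop, so there is no contention on one memory location.
def worker(chunk, results, idx):
    local_sum = 0              # thread-local accumulator
    for x in chunk:
        local_sum += x
    results[idx] = local_sum   # one write per thread, to a distinct slot

def parallel_sum(values, n_threads=4):
    chunks = [values[i::n_threads] for i in range(n_threads)]
    results = [0] * n_threads
    threads = [threading.Thread(target=worker, args=(c, results, i))
               for i, c in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)        # merge the per-thread partial sums

assert parallel_sum(list(range(1000))) == sum(range(1000))
```

In a language with real parallelism you would additionally pad or separate the result slots to avoid false sharing between adjacent entries.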
60. StillBored ◴[] No.38514038{4}[source]
100% agree, but AFAIK the HW isn't doing lockstep on modern zseries. The nonstop was also HW lockstep many years ago and they converted to a checkpoint restart model, which AFAIK is the same on zos/etc with help from the "HW" which is just "software" running in the LPAR/etc.

Regardless, there have been various clustering/SSI/etc software layers designed to make windows/linux/etc behave as though it were lock stepped as well via software/hypervisor checkpoint restart.

So it is not impossible, but you don't get it out of the box because most of these applications have moved the fault tolerance into an application + database transactional model where the higher level operations aren't completed until there is a positive acknowledgment that a transaction is committed (and the DB configured for replication, logging, whatever if needed, blocks the commit until its done).

So, yes, that model requires more cognitive overhead for the developer than Cobol batch jobs, which tend to be individually straightforward (in the ones I've seen, the complexity is in how they interact). But the results can be the same without all the fancy HW/OS layers if the DB is clustered/etc.
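The application + database transactional model described above can be sketched in a few lines (using sqlite3 purely as a stand-in; the table and amounts are made up): the higher-level operation only "happened" once the commit succeeds, and any failure rolls the whole unit of work back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 100), ("B", 0)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
        conn.commit()      # the operation is complete only past this point
        return True
    except sqlite3.Error:
        conn.rollback()    # partial work is discarded; state stays consistent
        return False

transfer(conn, "A", "B", 30)
```

Fault tolerance then lives in the transaction boundary (plus replication on a real clustered DB), rather than in lockstepped hardware underneath.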

61. sublinear ◴[] No.38514071{3}[source]
Management problems have a tendency to snowball and land on the next manager.

Higher authority doesn't understand it and the subjects don't care.

62. snotrockets ◴[] No.38514201{5}[source]
That phone does a lot more than the bank mainframe used to do.
replies(2): >>38514345 #>>38514457 #
63. noduerme ◴[] No.38514233{7}[source]
Hahah. The last one was a close call, since the entire front end of the system from 2009-2020 was a responsive single page app written in Actionscript 3, to replace the old PHP page system... but we saw the deadline looming about a year in advance and accelerated it.
64. snotrockets ◴[] No.38514245{4}[source]
IBM are very good at doing that.
65. g805 ◴[] No.38514345{6}[source]
"Busy" and "useful" are entirely different things. Maybe all that extra capability would be better spent elsewhere.
66. jazzyjackson ◴[] No.38514457{6}[source]
> iphone hangs while trying to match "blu" up with "bluetooth settings"

So many background tasks, much computation

67. mrweasel ◴[] No.38514543{5}[source]
That is something that I can't stop thinking about. I get that banking software does more, like online banking, larger transaction volume, more account and loan types, but why is it that we cannot run a small to medium bank on a single modern CPU and 1TB of memory?
replies(1): >>38522587 #
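A back-of-envelope check of the question (every figure below is an assumption for illustration, not data about any real bank): on average throughput alone, a single modern core has enormous headroom.

```python
# Rough, assumed figures for a mid-size bank.
customers = 1_000_000
txns_per_customer_per_day = 20                        # assumed mix of payments/queries
txns_per_day = customers * txns_per_customer_per_day  # 20 million
txns_per_second = txns_per_day / 86_400               # average rate over a day

# Assume one core can do ~10,000 simple durable DB transactions/second.
assumed_core_tps = 10_000
headroom = assumed_core_tps / txns_per_second

print(round(txns_per_second), round(headroom))        # prints: 231 43
```

Which suggests the real constraints are elsewhere: peak loads, durability, availability, and regulatory requirements, not average CPU throughput.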
68. IshKebab ◴[] No.38514564{5}[source]
The classic HN "trivial" :-D
69. tsimionescu ◴[] No.38514608{6}[source]
Software technology moves when we figure out new ways of doing software that bring some kind of advantage. If no one is finding new ways to do software that have any purpose, technology will stop moving. Physical reality doesn't really have anything to do with it - we're limited by human ingenuity, and possibly by the mathematical space of algorithms (though that's likely to be much larger).

For an example of this happening in a field, look at the glacial pace of advancement in theoretical physics for the last few decades, compared to 1900s. Or at the pace of development in physics in general in the centuries before.

replies(1): >>38514840 #
70. coldtea ◴[] No.38514695{3}[source]
>or did the insurance company have to hire outside contractors at exorbitant rates to migrate everything over

It would be appropriately ironic if some or all of those contractors were the same people previously fired.

71. pharmakom ◴[] No.38514840{7}[source]
Software trends seem to repeat themselves as we forget the lessons learned a decade ago. It’s more like fashion, in that sense.
72. GoblinSlayer ◴[] No.38515870{7}[source]
You don't need better programmers to maintain a js project, you only need hireable programmers.
73. rubyfan ◴[] No.38515997{3}[source]
Kind of the latter, I guess. Not much migrated, really. When I left they were grappling with which major migration to do next, each to the tune of tens of millions, with no business benefit. In that industry there is momentum toward specialty tech platforms to save the day. Out of the frying pan, into the fryer.
74. danmaz74 ◴[] No.38516136[source]
I was thinking the same. It looks to me like a much safer process could be:

* use some kind of deterministic transpilation that creates ugly code which for sure reproduces the same behaviors

* add tests to cover all those behaviors

* refactor the ugly code

From my experience with copilot I guess that a LLM could help a lot with steps 2 and 3, but I wouldn't trust it at all for step 1
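Step 2 above is essentially differential ("golden") testing: treat the legacy routine as the oracle and check the transpiled version against it on many generated inputs. A minimal sketch, where both functions are stand-ins for a real legacy/translated pair:

```python
import random

def legacy_interest(balance_cents, rate_bp):      # oracle (stand-in)
    return balance_cents * rate_bp // 10_000

def translated_interest(balance_cents, rate_bp):  # candidate (stand-in)
    return (balance_cents * rate_bp) // 10_000

def differential_test(oracle, candidate, n_cases=1_000, seed=42):
    rng = random.Random(seed)                     # reproducible test cases
    for _ in range(n_cases):
        balance = rng.randrange(0, 10**9)
        rate = rng.randrange(0, 2_000)
        assert candidate(balance, rate) == oracle(balance, rate), (balance, rate)
    return n_cases

differential_test(legacy_interest, translated_interest)
```

Failing inputs (the asserted `(balance, rate)` pairs) then become the regression suite that guards step 3's refactoring.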

75. rubyfan ◴[] No.38516240{3}[source]
I’m not convinced the proposed solution is that. This pitch has been around for a long time: before LLMs it was some unnamed proprietary technology, and after LLMs it'll be something new. The reality is that this stuff is hard even when 80% can be automatically converted.
76. kjellsbells ◴[] No.38516855{4}[source]
If there was an HN equivalent of Reddit's /r/bestof, this comment would deserve to be there. Encapsulates everything about why this problem is so hard.
77. specialist ◴[] No.38516948{3}[source]
I briefly helped with some mainframe/legacy modernization work.

Their hard part was determining which feeds and datasets were still needed by someone, somewhere. Over the decades, 100s were created as needed (ad hoc). No inventory. No tooling (monitoring, logging) to help. It's likely only a handful were still needed, but no telling which handful.

The bosses were extremely risk averse, e.g. "We can't turn that off, someone might be using it."

I suggested slowly throttling all the unclaimed (unknown) stuff over time. Wouldn't break anything outright. But eventually someone would squeal once they noticed their stuff started lagging. Then incrementally turn things off. See if anyone notices. Then outright remove it.

Nope. No can do. Too risky.

78. jacquesm ◴[] No.38522587{6}[source]
Eye candy. Endless abstraction layers.
79. theamk ◴[] No.38522829{7}[source]
Tooling and processing definitely helps, I have seen it with my own eyes!

(1) Start with experienced programmers who know how to write code

(2) Have them establish good culture: unit testing, end-to-end testing, code coverage requirements, CI (including one on PRs), sane external package policy, single main branch, code reviews, etc...

(3) Make sure experienced programmers have the final word over what code goes in, and enough time to review a large part of incoming PRs.

Then you can start hiring other programmers and they will eventually be producing good code (or they'll get frustrated with "old-timers not letting me do stuff" and leave). You can have amazing code which can be refactored fearlessly or upgraded. You could even let interns work on prod systems and not worry about breaking it (although they will take some time to merge their PRs...)

The critical step of course is (3)... If there are no experienced folks guiding the process, or if they have no time, or if they are overridden by management so the project can ship faster, then someone disables the coverage check or adds crappy PRs which don't actually verify anything. And then a formerly-nice project slowly starts to rot...