Many things in IT start making a whole lot more sense once you reexamine your beliefs about their purpose.
Also, at this point, if you are running on a dead platform and language and you know it, and haven't addressed it, then it's on you. I've been seeing these COBOL articles since the 2000s, I believe.
There is no "shortage of COBOL programmers." Businesses are simply choosing what they will pay, and the market is responding appropriately. And this is also not just a case of naively shouting "duh, just pay them more" at a market that can't bear the increase in costs (i.e., some industries can only survive if low-cost workers are available; most of us aren't willing to pay $40 for a fast food burger, for example. Whether those industries should be around if they can only survive on low-cost labor is another story...). It's not exactly like the financial world is scrounging for money. Businesses can either choose to pay their CEOs a hundred million or so, or they can choose to spread some of that money to COBOL programmers. The choice is theirs, but there is no "shortage".
This is just one of many examples of organizations squeezing an impressive but foolhardy lifespan out of some systems. Some CEO or other should have come to the conclusion (sooner) that they run a business, not a museum.
"If it ain't broke - don't fix it." does not mean "don't maintain the things you own - just let them run until they break - then fix them."
Not moving forward with the software industry is a weird kind of conceit that separates these companies from the mainstream by far more than money.
Since the '80s for me. And certainly the '90s, since these sorts of zero-content articles were very popular in the run-up to Y2K: "the entire world is going to end because there are no COBOL programmers anywhere in the world! Everyone panic!". It's still bullshit.
Just because some low-knowledge 'journalist' at a low-content rag decorates their ad delivery with a low-effort 'you should be outraged about something' article doesn't mean you should buy into it.
But then all of this has been known for decades. There are plenty of well-known techniques for how to do all that. If they haven't actually done it by now, it's a management problem, and no AI tech is going to fix that.
Those COBOL applications will keep working until the sun burns up or inflation makes the numbers so long they no longer fit in bounds.
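(For non-COBOL readers: those bounds are literal. Every numeric field is declared with a fixed digit count, so the ceiling lives in the PIC clause - a minimal sketch, field names invented:

    *> signed, 9 integer digits + 2 decimal digits, packed decimal;
    *> anything past 999,999,999.99 takes the SIZE ERROR branch
    01  WS-BALANCE   PIC S9(9)V99 COMP-3.

    ADD WS-INTEREST TO WS-BALANCE
        ON SIZE ERROR
            DISPLAY 'WS-BALANCE OVERFLOWED ITS PIC CLAUSE'
    END-ADD

And that's assuming someone coded the ON SIZE ERROR clause; without it, the overflow truncates silently.)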
The Java applications they write should fill you with terror.
The issue with COBOL is that not just any old idiot can write it. If someone is touching the COBOL, they are going in and making minor changes/expansions to an application that is older than the average HN user, and those changes and expectations are well defined.
With Java and/or the language of the day, your cheapest run-of-the-mill contractor, given exceptionally poor instructions, will crank out X LOC per day, where X is the required output for them to keep their job. Maintaining that code will be some other bastard's problem.
If it's so valuable to the industry to have good people who know it, that should surely be reflected in the salary/comp for those roles... And if that were high, people would learn it. But it's not high; it's arguably worse than what you'd get by just learning a little JavaScript.
And we're supposed to believe that no one in the world can figure out a batch-processed COBOL accounting system? Please. The talent is there, it's just that no one responsible for these systems wants to hire at that level. But in extremis they will, and the world will survive.
You will have to relearn eventually regardless of what you pick. And I can't believe having some COBOL on the CV will be an issue for anybody.
But it will take much more than that to learn the gnarly codebase in use in those shops...
And that is a skill that is certainly not transferable.
The prime issue (imho) is that rewriting these systems from scratch would be less expensive and better for everyone in the long run, but that cost is still so prohibitive that it might not make sense to keep operating the company.
The second issue is hardware availability, a bizarre omission on the author's part. Your org's lifespan might entirely, predictably depend on how many spares a forward-thinking dev bought in the '70s.
The third issue is fintech. Someone else has already raised the cash needed to rewrite from scratch, and what you can do about it is sell them your client list.
From a personal perspective COBOL is such a pain in the ass to work with that I'd need a ruinous salary to be enthusiastic about the job.
Is the goal to get working systems or to generate support activity? Or to tank the systems and replace them?
> it's a management problem, and no AI tech is going to fix that
What if we replace middle managers with LLMs?
And I'm sure some consultants can earn vast sums fixing old COBOL code, but that's just because they're consultants. Consultants always earn vast sums.
The language itself also encourages troublesome patterns, like all variables essentially being globally defined and untyped, and the physical ordering of paragraphs mattering because of things like PERFORM A THROUGH B (which will execute all logic found between paragraph A and paragraph B).
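A minimal runnable sketch of that second hazard (program and paragraph names invented):

    IDENTIFICATION DIVISION.
    PROGRAM-ID. THRU-DEMO.
    PROCEDURE DIVISION.
    MAIN-PARA.
        *> executes every paragraph from PARA-A through PARA-C,
        *> in the order they appear in the source file
        PERFORM PARA-A THROUGH PARA-C
        STOP RUN.
    PARA-A.
        DISPLAY 'A'.
    PARA-B.
        DISPLAY 'B'.
    PARA-C.
        DISPLAY 'C'.

This prints A, B, C: PARA-B runs purely because it sits between PARA-A and PARA-C in the file. Insert a new paragraph into that range, or move one out, and the PERFORM silently changes behavior.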
COBOL's main value is in maintaining a pile of legacy codebases, mostly in fintech and insurance, that are so large and so old that rewriting them is an absolute no-go. These attempts at cross-compiling are a way to get off the old toolchain, but they - in my opinion - don't really solve the problem; instead they add another layer of indirection (code generation). But at least you'll be able to run your mangled output on the JVM, for whatever advantage that gives you.
With some luck you'll be running a hypervisor that manages a bunch of containers that run multiple JVM instances each that run Java that was generated from some COBOL spaghetti that nobody fully understands. If that stops working I hope I will be far, far away from the team that has to figure out what causes the issue.
It is possible that someone somewhere is doing greenfield COBOL development but I would seriously question their motivations.
The reason for that is that originally the 'WITH TEST BEFORE' bit wasn't spelled out, so what looked like a test performed after the loop body was actually evaluated up front, and would just exit the loop immediately without executing the loop body at all.
So the syntax totally wrong-footed you into believing that your loop body would always be executed at least once, when it never was...
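Concretely (paragraph and flag names invented), this:

    PERFORM PROCESS-RECORD UNTIL EOF-FLAG = 'Y'

reads like a do-while, but PERFORM ... UNTIL defaults to WITH TEST BEFORE, so if EOF-FLAG is already 'Y' the body never runs at all. To get the at-least-once behavior the syntax seems to promise, you have to spell it out:

    PERFORM PROCESS-RECORD WITH TEST AFTER UNTIL EOF-FLAG = 'Y'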
This. Further, it's a failure to keep incentivizing the roles that would support this business-critical logic or port it to something else. I worked at a large insurer where they slowly laid off mainframe talent over the last decade. Those mainframe salaries ran counter to the narrative they were promoting around cloud being the future. Unfortunately, in their haste to create optics, they failed to migrate any of the actual code or systems from mainframe to cloud.
How could you possibly know that? Do you think businesses have so few problems to deal with that moving off of Cobol should always be the priority, even if it functions and they are managing it?
It's not possible to resolve everything you have to do all at once.
The developer experience is terrible: it's a completely parallel world that diverged about 25 years ago from what we're used to, and it's not particularly good. Furthermore, it's so obscenely priced that it will never be used for anything new (and forget personal projects), so career-wise it's a dead end that confines you to only ever working in existing legacy deployments.
The "corporate experience" for the lack of a better word is terrible too. Companies that run this have zero engineering culture (if they did they would've finished their migration off COBOL long ago) and developer positions have about the same amount of respect and political influence as the janitor. There are much better options pretty much anywhere else, so the only remaining people are too mediocre to get those better opportunities, so it's not a good environment to learn from either.
Migrating off COBOL (or any legacy system) is possible - after all, startups have built similar systems from scratch. The problem is that this requires a competent engineering team and you won't get that without a good engineering culture and embracing engineering as one of the key pillars of your business and giving it the resources and respect it deserves.
If nobody can understand the application, it cannot be modified.
If an application cannot be modified, its value will continuously decrease with time.
That decrease in value, as well as the projected cost of building a replacement, is tantamount to accruing interest on a debt.
They could have reduced that debt by moving to functionally equivalent but more commonly taught languages back when their team could still understand the system.
It wasn’t necessarily wrong for them to accrue this debt - after all, banks and these monolithic institutions are masters at effectively utilizing debt. However, the practice may have become habitual and it looks like they might be in pretty deep at this point.
Rewriting and expecting 100% feature-parity (and bug-parity, since any bugs/inconsistencies are most likely relied upon by now) is realistically impossible.
However, new banking/insurance startups proved you can build this stuff from scratch using modern tooling, so the migration path would be to create your own "competitor" and then move your customers onto it.
The problem I see is that companies that still run these legacy systems also have a legacy culture fundamentally incompatible with what's needed to build and retain a competent engineering team. Hell, there's probably also a lot of deadweight whose jobs are to make up for the shortcomings of the legacy system and who'd have every incentive to sabotage the migration/rebuild project.
There are no absolute "management problems". Something that is a management problem when the effort required is 1000 man-years, may stop being so when it's only 100 man-days.
Not having ever seen this kind of software, I would assume that the existing COBOL applications are good simply because all the bad COBOL code was probably scrapped decades ago, an example of survivorship bias.
What you’re describing is a management and leadership failure.
In US tech we have an overabundance of Type A people in it for the money, but there are plenty of smart Type Bs who just want stability and a life, for a fraction of the going SV engineering pay.
You have to wonder why the latter is impossible for American companies to achieve.
When we needed changes (this was back office clearing stuff for a bank) they wouldn’t even talk to us until we specced out the changes we wanted in writing and often the specs we submitted would come back with requests for clarification. This was like the opposite of agile, but I don’t recall any bugs or defects making it into production.
1. The mainframe costs and support. These can be mitigated with migration to a platform like Microfocus to emulate it, but be careful you don't replace your ultra-reliable mainframe with some flaky Windows servers.
2. The embedded business logic. Within the 50-60 years of code there's a ton of specific edge cases encoded for long-forgotten business reasons. These make rewrites really hard, as bugs and edge cases have long since become features or behaviors dependent apps are coded to rely on. It takes a ton of extra analysis work to understand what to keep and what to trash.
3. The COBOL apps run in a challenging environment that's also not well understood today: all the jobs in JCL, ISPF to manage things, and a system like CICS for input/output. It's a huge change beyond just writing code in a regular IDE.
And of course, if you started a bank today you'd do the whole cycle all over again: shiny new tech that in a decade or two is legacy that nobody dares to touch. Because stuff like this is usually industry-wide: risk aversion translates into tech debt in the long term.
I suspect that the only thing that will cure this is for technology to stop being such a moving target. Once we reach that level we can maybe finally call it engineering, accept some responsibility (and liability) and professionalize. Until then this is how it will be.
There's a comment in this thread from a former consultant that completed a 4 week COBOL bootcamp before being sent to a client to write code!
But if you already have an old house that's working it's worth considering survivorship bias if you're building a new house to replace it. That new house is much less likely to survive that bias filter. Getting the new house as good as the old house will likely take you far more time and money than you expected.
And the same is true with software. For example, including outside libraries in your application makes the code much easier to write. You don't have to reproduce functionality. But libraries typically come as kitchen sinks. You may not have wanted a garbage disposal, but your library came with it, and for the next X years your application is around, you have to make sure that disposal doesn't catch fire. From the security side you have to make sure there is no code interaction with the disposal, or any of the other 'household' functions it includes, that you're not testing for.
At least in what I do for a living, which is in the field of code security reviews (I don't do these reviews myself, but I work with the teams that do), the number of security issues that get caught in these new applications in a multi-round process is, at least to me, staggering. It has to be a multi-round process because we see issues we'd called out in previous sessions being reintroduced. Quite often in these larger organizations the programming teams are so unstable that between our first calls and last calls the entire team working on it will have rotated out.
The actual product name for this one is "watsonx Code Assistant for Z", as the author could have found with a simple web search:
https://www.google.com/search?q=watsonx+cobol+java
https://newsroom.ibm.com/2023-08-22-IBM-Unveils-watsonx-Gene...
Disclosure: I work for IBM in "watsonx Orders", an automated order taker for restaurant drive-thrus.
I bet there are plenty of shops that have software they’d love to run on a cheap cloud server, so that they could retire their hyperexpensive big iron.
I have 0 experience in this field, but I'm willing to take a guess that the majority of a Cobol to X developer's work is not (re)writing code, but figuring out what the original code does, what it's supposed to do, and verify that the new code does the same thing. More testing than programming.
The green field isn't everything.
But I agree, it doesn't make sense to risk bugs just for that.
In a way, the ease IS the problem: the runtime environment for COBOL (and other stuff on the mainframe) assumes that the underlying platform and OS deal with the really hard stuff like HA and concurrent data access and resource cost management. Which, on the mainframe, they do.
Now, contrast that with doing the same thing in, say, a Linux container on AWS. From the stock OS, can you request a write that guarantees lockstep execution across multiple cores and cross-checks the result? No. Can you request multisite replication of the action and verified synchronous on-processor execution (not just disk replication) at both sites such that your active-active multisite instance is always in sync? No. Can you assume that anything written will also stream to tape / cold storage for an indelible audit record? No. Can you request additional resources from the hypervisor that cost more money from the application layer and signal the operator for expense approval? No. (Did I intentionally choose features that DHT technology could replace one day? Yes, I did, and thanks for noticing.)
On the mainframe, these aren’t just OS built-ins. They’re hardware built-ins. Competent operators know how to both set them up and maintain them such that application developers and users never even have to ask for them (ideally). Good shops even have all the runtime instrumentation out there too—no need for things like New Relic or ServiceNow. Does it cost omg so much money? Absolutely. Omg you could hire an army for what it costs. But it’s there and has already been working for decades.
God knows it’s not a panacea—if I never open another session of the 3270 emulator, it’ll be too soon. And a little piece of me died inside every time I got dropped to the CICS command line. And don’t even get me started on the EBCDIC codepage.
Folks are like, “But wait, I can do all of that in a POSIX environment with these modern tools. And UTF-8 too dude. Stop crying.” Yup, you sure can. I’ve done it too. But when we’re talking about AI lifting and shifting code from the mainframe to a POSIX environment, the 10% it can’t do for you is… all of that. It can’t make fundamental architectural decisions for you. Because AI doesn’t (yet) have a way to say, “This is good and that is bad.” It has no qualitative reasoning, nor anticipatory scenario analysis, nor decision making framework based on an existing environment. It’s still a ways away from even being able to say, “If I choose this architecture, it’ll blow the project budget.” And that’s a relatively easy, computable guardrail.
If you want to see a great example of someone who built a whole-body architectural replacement for a big piece of the mainframe, check out Fiserv’s Finxact platform. In this case, they replaced the functionality (but not the language) of the MUMPS runtime environment rather than COBOL, but the theory is the same. It took them 3 companies to get it right. More than $100mm in investment. But now it has all the fire-and-forget features that banks expect on the mainframe. Throw it a transaction entry, and It Just Works(tm).
And Finxact screams on AWS which is the real miracle because, if you’ve only ever worked on general-purpose commodity hardware like x86-based Linux machines, you have no clue how much faster purpose-built transaction processors can be.
You know that GPGPU thing you kids have been doing lately? Imagine you'd been working on that since the 1960s, and the competing technology had access to all the advances you had but had zero obligation to service workloads other than the ones it was meant for. That's the mainframe. You're trying to compete with multiple generations of very carefully tuned muscle memory PLUS every other tech advancement that wasn't mainframe-specific PLUS it can present modern OSes as a slice of itself to make the whole thing more approachable (like zLinux) PLUS just in case you get close to beating it, it has the financial resources of half the banks, brokerages, transportation companies, militaries, and governments in the world to finance it. Oh, and there's a nearly two-century-old company with a moral compass about 1% more wholesome than the Devil's, whose entire existence rests on keeping a mortal lock on this segment of systems, and which has received either the first- or second-most patents of any company in the world every year for decades.
It’s possible to beat but harder than people make it out to be. It makes so many of the really hard architectural problems “easy” (for certain definitions of the word easy that do not disallow for “and after I spin up a new instance of my app, I want to drink poison on the front lawn of IBM HQ while blasting ‘This Will End in Tears’ because the operator console is telling me to buy more MIPs but my CIO is asking when we can migrate this 40-year old pile of COBOL and HLASM to the cloud”).
Mainframes aren’t that hard. Nearly everyone who reads HN would be more than smart enough to master the environment, including the ancient languages and all the whackado OS norms like simulating punchcard outputs. But they’re also smart enough to not want to. THAT is the problem that makes elimination of the mainframe intractable. The world needs this level of built-in capability, but you have to be a bit nuts to want to touch the problem.
I have been to this hill. I can tell you I am not signing up to die on it, no matter how valuable it would be if we took the hill.
Honestly, this should be the top comment in the thread.
The issue isn't COBOL being a hard language to learn or to translate to Java or not enough programmers or companies not being willing to pay people enough to work with it.
The issue is the 50 years worth of business logic, added incrementally, over the years, with no documentation, blended into the original source, for reasons no one still working there remembers, as you stated. It's IF-ELSE statements all the way down and no one wants to touch a single one of them for fear of breaking something whose conditions might not even manifest themselves for months or years with no real way of even regression testing it.
COBOL is still used for far more reasons than technical debt; there is good reason for the language, and I doubt Java is even capable of replacing it. Even if an AI could write a 100% perfect Java version of the COBOL, the Java would fall flat on its face. COBOL and other languages like it are very performant, optimized over more than half a century.
So transpilers do work and are in production, but our biggest competitor is inertia.
Why is this input file loaded and rechecked 3 times? Because 30 years ago a file load failed, breaking end-of-quarter reports. This was the fix: if we can read that file three times and it doesn't change, then we know it's good.
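Roughly this shape, presumably (a sketch reconstructed from that description; the paragraph names and checksum mechanics are guesses):

    *> read the input file three times; proceed only if a checksum
    *> computed over the contents is identical across all three reads
    PERFORM READ-AND-CHECKSUM-INPUT
    MOVE WS-CHECKSUM TO WS-CHECKSUM-1
    PERFORM READ-AND-CHECKSUM-INPUT
    MOVE WS-CHECKSUM TO WS-CHECKSUM-2
    PERFORM READ-AND-CHECKSUM-INPUT
    IF WS-CHECKSUM = WS-CHECKSUM-1 AND WS-CHECKSUM = WS-CHECKSUM-2
        PERFORM PROCESS-INPUT-FILE
    ELSE
        DISPLAY 'INPUT FILE UNSTABLE - ABORTING RUN'
        MOVE 8 TO RETURN-CODE
        STOP RUN
    END-IF

And that's the inertia in a nutshell: deleting it would mean proving a 30-year-old failure mode can't come back.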
I’d certainly want to be paid more to deal with the LLM output than the original source code.
I don't know COBOL and I do know Java, but that doesn't enter into it from my perspective.
Shouldn't it, though? Maybe you (as an individual) would charge (or expect to be paid) more to maintain LLM code than bespoke code, but banks know they won't have to pay as much to you (as a member of the collective of Java engineers) as they would to a COBOL dev.
Really, the implicit assumption is that COBOL devs are fewer and more expensive than Java devs.
At least this is how we did it at my old work where we were replacing a mainframe system.
In principle though, this is stuff that an LLM could do - assist with extracting some of this business logic and speed this process up (while porting it to a language that is easier to work with). While the LLM doesn't necessarily understand the business context, it can do basic analysis which will speed it up massively. It can also generate test cases across the whole codebase.
I remember one mainframe I supported: there was an explosion on the same block which took out most of the buildings in the area. It was bad enough that the building which housed the mainframe was derelict. But that mainframe chugged along like nothing happened. I can't remember what hardware or TSS it was running, but I would guarantee that none of the platforms I've supported since would have fared nearly as well (though I did have some SPARC boxes at one company that survived more than 10 years of continual use and zero downtime -- they were pretty special machines too).
Individual organizations can consciously choose to slow down. Which works for a while in terms of boosting quality and productivity. But over the long run they inevitably fall behind and an upstart competitor with a new business model enabled by new software technology eventually eats their lunch.
Modern software engineers would hate that kind of red tape because we've been conditioned to want shorter feedback loops. Heck, I hated it back then too and I wasn't even accustomed to seeing my results instantly like I am now. It takes a special kind of person to enjoy the laborious administrative overhead of writing detailed specs before you write even a single line of code.
Yeah. Not good enough. For the life of me I cannot remember the name, but I remember we used another IBM tool to translate COBOL to Java, and what it generated was horrific dogshit Java that was also 80-90% of the way there. Thing is, unless you have very comprehensive testing laid out (lol. Find me a mainframe shop with comprehensive testing and I'll smash my testicles in a hydraulic press), it's impossible to know what it missed, and thus the 80-90% of the way there was effectively 0.
We never did push forward with this tooling, but I strongly suspect that had we, the ROI would have been vastly net negative after horrendous business impacts.
I don't understand why people are claiming this locks your career into only ever writing COBOL. I know plenty of people who started their careers with Perl, but have been Java, JavaScript, TypeScript, Go, or Rust programmers at different points as their careers progressed.
Someone overseeing a project migrating a COBOL monolith over time to a set of decoupled services in some other language would easily have a strong story for the timeless need of improving application architecture.
So you've got a situation at a lot of places where you'd need to replace 50%+ of your staff if you wanted to convert to modern tooling, while at the same time it is harder and harder to replace the staff that leave and know the old set of tools. That cannot continue indefinitely. And until you cross some invisible threshold in the number of people that are comfortable in a modern software development environment, things can continue like they are for a long time.
Ultimately this is driven by the strange (for the tech industry) demographics of mainframe teams, in that they are often still dominated by people that entered the industry before microcomputers were ubiquitous. They may only know "the mainframe way" to do things, because they entered the industry the old-school way by stacking tapes in a data center or moving up from accounting, back before CS degrees and LeetCode and all that. It's only like that in mainframe shops because everywhere else has had massive growth (as in from 0 -> however many programmers they have now) and skews much younger as a result of computing not even having been adopted in any significant way until (for example) the late 80s/early 90s.
It's this cultural aspect of Mainframes (due to the history of being the oldest computing platform still running in production) that remains mostly unexamined and is not well understood by those on the outside. These are unique challenges that mostly don't have to do with the well known and arguably solved problem of "how can I convert some COBOL code into logically equivalent <any other language> code".
One final point is that COBOL is ruthlessly efficient (equivalent in speed to C, sometimes with more efficient use of memory for particular kinds of programs it is specialized for) in a way Java is not. Java is only popular on mainframes due to the licensing advantages IBM was forced to give out due to the much higher resource usage, and because their JVM is actually a very good implementation and well-liked by people on non-mainframe teams. If everything starts moving to Java though, I bet those licensing advantages probably go away. So I think more needs to be invested in C/C++ and possibly Go, rather than Java, at least initially. It is possible to write both of those portably and not suffer the huge performance penalty implicit in moving to a Java stack. I suppose this in particular may just be due to me being several years removed from web development, and some of the mainframe attitudes towards code have started to rub off on me.
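To make the efficiency claim concrete (field name invented): fixed-point decimal is a first-class type in COBOL,

    *> 9 decimal digits packed into 5 bytes; on z hardware the
    *> arithmetic maps onto dedicated decimal instructions
    01  WS-AMOUNT    PIC S9(7)V99 COMP-3.

whereas the straightforward Java translation is a heap-allocated BigDecimal per value, with all the allocation and GC traffic that implies for decimal-heavy batch work.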
5 years later, IBM has a few containers (physical containers) in this company's parking lot, adapted as offices, full of engineers doing maintenance and new features on the Clipper code base.
Using traditional algorithms you end up with literal exponential complexity very fast. You also need a human's ability to figure out which new abstractions to create - otherwise you will end up with code that is just as difficult to maintain as the original.
It is worth noting that we now have much better processes and tooling than software developers had in the 60s. Some Cobol systems predate the invention of SQL or database normalization (3NF, BCNF, etc). Never mind the prevalence of unit testing and integration testing, automating those tests in CI, the idea of code coverage and the tooling to measure it, etc. Programming languages have also come a long way, in terms of allowing enforceable separation of concerns within a codebase, testability, refactorability, etc.
Sure, you can still write yourself into a codebase nobody wants to touch. Especially in languages that aren't as good in some of the things I listed (say PHP, Python or JS). But it's now much easier to write codebases that can evolve, or can have parts swapped out for something new that fits new requirements (even if that new part is now in a new language)
Disclaimer: Started career in COBOL on mainframes. Hold Sun Certified Java Architect and Java Programmer certs. Have worked in professional Open Source (mostly Java middleware) for over a decade.
I'm not sure the Java will be more readable. Java's dependency/library spiderweb is much more complex and fragile. COBOL on the mainframe is maintained from a central position. COBOL won't have the attack surface Java will (see previous point about dependencies).
I think it might be better to just train newbies on COBOL.
Caveat: Maybe cloud economics can make a compelling budgetary argument.
The truth is, IBM has been bean counted to death. Their turnover rates are incredibly high, employees have to deal with utilization targets that they can't meaningfully do anything about and lines of communication are incredibly dysfunctional. Their global headcount has been quietly but steadily declining since the dawn of the 2000s and their whole EMEA business is riding on having good ties to the right people in the public sector.
The result is that IBM's customers are increasingly unhappy. I do not see how this company is going to survive in its current form.
Companies should think of their software the way automakers or aircraft manufacturers think of their platforms. Once new feature requests are piling up that are just more and more awkward to bolt onto the old system, you have another department that's already been designing a whole new frame and platform for the next decade. Constantly rolling at a steady pace prevents panic. Where this breaks down is where you get things like the 737 MAX.
The problem is that once you start working through your system, it's a lot harder than people making this sort of pitch would like you to believe. People writing CICS/COBOL now will likely be writing well-structured code with sensible names, taking advantage of threading and with proper interfaces between different programs. They're shipping data on and off the mainframe with maybe REST maybe SOAP maybe MQ. They're storing their data in DB2.
But people writing the parts of the codebase 20 years ago were still regularly dropping down to 390 assembler to get it to run fast enough and, guess what, that code is still there and still being extended. Maybe they were using DB2, maybe they were using VSAM files. They were using maybe MQ, maybe SNA, maybe TCP sockets with bespoke protocols to interact with the outside world. Programs might have just been talking to each other by banging on memory regions.
If they were working in the '80s and '90s, they had all that, but they were probably also experimenting with 4GLs that promised to let anyone build programs. Some of those 4GLs are still around, probably not running CICS. And there was a lot of assembler. Maybe some non-CICS workloads. Passing things around via memory regions happened as often as not.
Oh, and the people who understood any of the business processes that got turned into code are long, long gone. They got laid off to justify the computerisation. The only people who know why things happen a particular way are the people who wrote the code, if they're still alive and working for you.
And so on and so forth. The reason mainframe code bases are hard to pick apart is that they're 50 years of different languages and runtimes and databases and performance tradeoffs and opinions about what good practise is. This is hard. An "AI" COBOL-to-Java translator isn't 80 or 90% of the job, and honestly IBM should be embarrassed to be the ones suggesting that it is.
I know for sure that Avis's reservation system runs on mainframe.
To me, that means that the business logic of rental car checkout and return is so complicated and/or nuanced, it is cheaper for rental car companies to find/retain mainframe developers to keep these running than it would be to re-platform onto commodity hardware.
Also, given that basically every industry is powered by a handful of mainframes, it is surprising that COBOL/Fortran developers aren't making insane money like I thought they were.
Regardless, there have been various clustering/SSI/etc. software layers designed to make Windows/Linux/etc. behave as though they were lock-stepped as well, via software/hypervisor checkpoint restart.
So it is not impossible, but you don't get it out of the box, because most of these applications have moved the fault tolerance into an application + database transactional model, where the higher-level operations aren't completed until there is a positive acknowledgment that a transaction is committed (and the DB, configured for replication, logging, whatever if needed, blocks the commit until it's done).
So yes, that model requires more cognitive overhead for the developer than COBOL batch jobs, which tend to be individually straightforward (in the ones I've seen, the complexity is in how they interact). But the results can be the same without all the fancy HW/OS layers if the DB is clustered/etc.
So many background tasks, much computation
For an example of this happening in a field, look at the glacial pace of advancement in theoretical physics over the last few decades, compared to the 1900s. Or at the pace of development in physics in general in the centuries before.
I felt that I could make an exception for the minor correction of the product name.
It's probably not appropriate for me to comment here on how I like the job.
I would be happy to discuss it privately with you or anyone else who is curious. My email address is in my profile; feel free to drop me a note and I will follow up with you.
There are plenty of new mobility startups managing just fine without any Cobol or other legacy software. The issue companies like Avis have isn't their software but their lack of competitiveness and ambition level. The car rental business is always changing and they have to somehow adapt with it. And they struggle with that. It's not their software that holds them back but their dependence on things not changing. The software is just a reflection of how uncompetitive they are.
You want to translate 'idiomatic COBOL soup' which has been added to over decades into clean structured (possibly OOP) code covered by a robust library of tests.
1) use some kind of deterministic transpilation that creates ugly code which for sure reproduces the same behaviors
2) add tests to cover all those behaviors
3) refactor the ugly code
From my experience with Copilot, I'd guess that an LLM could help a lot with steps 2 and 3, but I wouldn't trust it at all for step 1.
Their hard part was determining which feeds and datasets were still needed by someone, somewhere. Over the decades, 100s were created as needed (ad hoc). No inventory. No tooling (monitoring, logging) to help. It's likely only a handful were still needed, but no telling which handful.
The bosses were extremely risk averse, e.g. "We can't turn that off, someone might be using it."
I suggested slowly throttling all the unclaimed (unknown) stuff over time. Wouldn't break anything outright. But eventually someone would squeal once they noticed their stuff started lagging. Then incrementally turn things off. See if anyone notices. Then outright remove it.
Nope. No can do. Too risky.
appreciate the comment
this exchange is what i show young people to demonstrate why you should never work for such companies (not singling out yours specifically)
humans want to be human
(tho a buddy of mine worked there with great benefits & $)
The actual problem is the missing knowledge from the engineers the CEO downsized and sent their work overseas. This boils down to three things:
1) figuring out the call patterns
2) figuring out the architecture
3) figuring out the business logic
This is also why cowboy coders are dangerous in your organization, while having corporate standards and making your codebase self-consistent (aka "Clean Code") is the most valuable thing you can spend your time on.
source: I made a bunch of dollar bills working on Java / EIS COBOL systems that needed to interact at wire speed, while not paying Big Blue for their EIS Gateway Software.
During the last major attempt to remove the system, they got close. A team of good engineers worked over two years painstakingly updating documentation, transferring and rebuilding systems, and got to the Go-No-Go meeting. They explained everything to the single SME in their 60's who knew best where the bodies were buried, and she seemed satisfied.
When they got to the end of the Go-No-Go, she asked, "Where did you put our payroll?"
COBOL literally sent everyone their paychecks. It was enough of a death blow to find that something that important had been overlooked; the project was abandoned.
*Oracle Toad was in there too, but it felt snappy instead of squatty compared to the COBOL.
(1) Start with experienced programmers who know how to write code
(2) Have them establish good culture: unit testing, end-to-end testing, code coverage requirements, CI (including one on PRs), sane external package policy, single main branch, code reviews, etc...
(3) Make sure that experienced programmers have the final word over what code goes in, and enough time to review a large part of incoming PRs.
Then you can start hiring other programmers and they will eventually be producing good code (or they'll get frustrated with "old-timers not letting me do stuff" and leave). You can have amazing code which can be refactored fearlessly or upgraded. You could even let interns work on prod systems and not worry about breaking it (although they will take some time to merge their PRs...)
The critical step of course is (3)... If there are no experienced folks guiding the process, or if they have no time, or if they are overridden by management so the project can ship faster, then someone disables the coverage check or merges crappy PRs which don't actually verify anything. And then the formerly-nice project slowly starts to rot...