I imagine it's the one place where LLMs would absolutely shine. COBOL jobs are usually very verbose and full of boilerplate, but what they do is mostly straightforward batch processing. That makes them ripe for automation with LLMs.
The flip side is that banks are usually very conservative about technology (for good reason).
For example, if I prompt ChatGPT: "Write me a BF program that produces the alphabet, but inverts the position of J & K", it will deterministically fail. I've never even seen it produce a BF program that emits the alphabet the normal way. By contrast, I can run a genetic-programming (GP) search over an example of the altered alphabet string, using simple MSE as the fitness function, and evolve a BF program that actually emits the expected output.
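Here's a minimal sketch of that GP loop in Python, to make the idea concrete. The interpreter, mutation operators, and hyperparameters (population size, step cap, truncation selection) are illustrative assumptions, not a tuned implementation:

    import random

    TARGET = b"ABCDEFGHIKJLMNOPQRSTUVWXYZ"   # alphabet with J and K swapped
    OPS = "+-<>[]."                           # ',' omitted: no input needed

    def run_bf(code, max_steps=10000):
        """Interpret a BF program; return output bytes, or None if invalid."""
        stack, jumps = [], {}
        for i, c in enumerate(code):          # pre-match brackets
            if c == '[':
                stack.append(i)
            elif c == ']':
                if not stack:
                    return None               # unbalanced ']' -> invalid
                j = stack.pop()
                jumps[i], jumps[j] = j, i
        if stack:
            return None                       # unbalanced '[' -> invalid
        tape, ptr, pc, steps, out = [0] * 256, 0, 0, 0, []
        while pc < len(code) and steps < max_steps:
            c = code[pc]
            if c == '+':   tape[ptr] = (tape[ptr] + 1) % 256
            elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
            elif c == '>': ptr = (ptr + 1) % len(tape)
            elif c == '<': ptr = (ptr - 1) % len(tape)
            elif c == '.': out.append(tape[ptr])
            elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
            elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
            pc += 1
            steps += 1
        return bytes(out)

    def fitness(code):
        """MSE between program output and target bytes; lower is better."""
        out = run_bf(code)
        if out is None:
            return float('inf')
        err = sum((a - b) ** 2 for a, b in zip(out, TARGET))
        err += 255 ** 2 * abs(len(out) - len(TARGET))  # length mismatch
        return err / len(TARGET)

    def mutate(code):
        """Point mutation: replace, delete, or insert one instruction."""
        r = random.random()
        if r < 0.4 and code:                  # replace one instruction
            i = random.randrange(len(code))
            return code[:i] + random.choice(OPS) + code[i + 1:]
        if r < 0.7 and code:                  # delete one instruction
            i = random.randrange(len(code))
            return code[:i] + code[i + 1:]
        i = random.randrange(len(code) + 1)   # insert one instruction
        return code[:i] + random.choice(OPS) + code[i:]

    def evolve(pop_size=200, generations=50000):
        pop = [''.join(random.choices(OPS, k=30)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            if fitness(pop[0]) == 0:
                return pop[0]                 # exact output match
            elite = pop[:pop_size // 5]       # truncation selection
            pop = elite + [mutate(random.choice(elite))
                           for _ in range(pop_size - len(elite))]
        return pop[0]

    if __name__ == "__main__":
        best = evolve()
        print(best, run_bf(best))

In practice this wants fitness caching, crossover, and a lot of patience before it converges on the full 26-character target, but even this naive loop makes steady progress, because MSE over output bytes gives the search a smooth signal that "predict the next token" doesn't.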
The BPE tokenizer seems like a big part of the problem when you consider the byte-per-instruction model, but fundamentally I don't think there is a happy path even if we didn't need to tokenize the corpus. The expressiveness of the language is virtually non-existent. Namespaces, type names, member names, attributes, etc., are a huge part of what allows an LLM to lock on to the desired outcome. Getting even one byte wrong in BF is catastrophic for the program's meaning, whereas you can get a lot of bytes wrong in C/C++/C#/Java/Go/etc. (e.g., member names) and still have the function do exactly the same thing.
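To make the fragility concrete (reusing run_bf from the sketch above): below is a known-good BF program that prints A-Z, followed by the same program with a single leading '+' deleted. One missing byte shifts every character of the output:

    ALPHABET = "+" * 13 + "[>+++++<-]" + "+" * 26 + "[>.+<-]"
    print(run_bf(ALPHABET))      # b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    # Delete one '+' from the multiplier loop's counter: the start cell
    # ends up at 60 instead of 65, so every output byte is off by 5.
    print(run_bf(ALPHABET[1:]))  # b'<=>?@ABCDEFGHIJKLMNOPQRSTU'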