It should be able to make an OS. It should be able to write drivers. It should be able to port code to new platforms. It should be able to transpile compiled binaries (which are just programs in a different language) across architectures.
Sure seems we are very far from that, but really these are breadth-based knowledge tasks with extensive examples / training sources. It SHOULD be something LLMs are good at, not new/novel/deep/difficult problems. What I described is labor-intensive and complicated, but not "difficult".
And would any corporate AI allow that?
We should be pretty paranoid about centralized control attempts, especially in tech. This is a ... fragile ... time.
You can feed it assembly listings, or bytecode that the decompiler couldn't handle, and get back solid results.
And corporate AIs don't really have a fuck to give, at least not yet. You can sic Claude on obvious decompiler outputs, or a repo of questionable sources with a "VERY BIG CORPO - PROPRIETARY AND CONFIDENTIAL" header in every single file, and it'll sift through it - no complaints, no questions asked. And if that data somehow circles back into the training eventually, then all the funnier.
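For the curious, the loop is roughly this. A minimal sketch, assuming the official `anthropic` Python SDK with an ANTHROPIC_API_KEY in the environment and `objdump` on PATH; the model ID and prompt wording are placeholders, not anyone's canonical pipeline:

```python
import subprocess

import anthropic

# Disassemble the target binary; objdump -d is the stock GNU tool for this.
# Real-world listings can blow past the context window, so you'd typically
# chunk this per-function rather than dump the whole binary at once.
listing = subprocess.run(
    ["objdump", "-d", "--no-show-raw-insn", "./target_binary"],
    capture_output=True, text=True, check=True,
).stdout

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute whatever model you have access to
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": (
            "Reconstruct idiomatic C source for the following x86-64 "
            "disassembly. Preserve the control flow and note any "
            "assumptions you make about types:\n\n" + listing
        ),
    }],
)

print(response.content[0].text)
```

Same shape works for bytecode or half-digested decompiler output: paste the listing in, ask for clean source back, iterate on the bits it gets wrong.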
I haven't heard much from the major projects yet, but I'm not ear-to-the-ground.
I guess that's what's disappointing. So far it's all (to quote n-gage) webshit and corpo-code you see being used for this, to your point.