Calling big endian "commonly used by modern processor types" when s390x is really the only one left is a bit of a stretch ;D
(Comments about everyone's favorite niche/dead BE architecture in 3… 2… 1…)
If you can't live without knowing: my stake in dismissing big endian architectures is that I can't, in fact, dismiss them, because I have users on them. And it's incredibly painful to test, because while my users have such hardware, actually buying a good test platform or CI system is close to impossible. (It ended up being Freescale T4240-QDS devkits off eBay. Not a good sign when the best system you can get is from a company that no longer exists.)
And at some point it's a question of network protocols/encodings being designed around a "network byte order" that was fixed as big endian back in the 80s. When almost everything is LE, maybe new protocols should just stick with LE as well.
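(For what it's worth, picking LE doesn't even require htonl()-style helpers; writing and reading bytes explicitly is endian-proof on any host. A quick sketch, helper names made up:)

    #include <stdint.h>
    #include <stdio.h>

    /* hypothetical helpers -- encode/decode a u32 as little-endian
       explicitly, byte by byte, so the wire bytes are identical on BE
       and LE hosts (no htonl(), no memcpy of raw host integers) */
    static void put_le32(uint8_t buf[4], uint32_t v)
    {
        buf[0] = (uint8_t)(v);
        buf[1] = (uint8_t)(v >> 8);
        buf[2] = (uint8_t)(v >> 16);
        buf[3] = (uint8_t)(v >> 24);
    }

    static uint32_t get_le32(const uint8_t buf[4])
    {
        return (uint32_t)buf[0]
             | (uint32_t)buf[1] << 8
             | (uint32_t)buf[2] << 16
             | (uint32_t)buf[3] << 24;
    }

    int main(void)
    {
        uint8_t buf[4];
        put_le32(buf, 0x12345678);
        /* prints "78 56 34 12" on any host, big or little endian */
        printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
        return (get_le32(buf) != 0x12345678);
    }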
qemu CPU emulation exists too, but that's painfully slow for an actual CI run, and I'm not sure I trust it enough with e.g. AF_NETLINK translation to use the "-user" variant on top of an LE host rather than booting a full Linux (or even BSD).
And in the very best case, proper testing would pit BE and LE systems "against" each other; if I run tests on BE against itself there's a good risk of mis-encodings on send being mis-decoded back on receive and thus not showing as breakage…
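To make that concrete, a contrived sketch (names are mine) of the kind of bug a same-endianness round trip can never catch:

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* "encoder" that just memcpys the host-order integer */
    static void encode_u32(uint8_t buf[4], uint32_t v)
    {
        memcpy(buf, &v, 4);   /* bug: host byte order leaks onto the wire */
    }

    static uint32_t decode_u32(const uint8_t buf[4])
    {
        uint32_t v;
        memcpy(&v, buf, 4);   /* mirror-image bug on receive */
        return v;
    }

    int main(void)
    {
        uint8_t buf[4];
        encode_u32(buf, 0x12345678);
        assert(decode_u32(buf) == 0x12345678);  /* passes on BE *and* LE */
        /* ...but the bytes in buf differ between BE and LE hosts, so a
           BE sender talking to an LE receiver (or vice versa) mis-decodes */
        return 0;
    }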
… really, it's just a pain to deal with. Even the beauty (in my eyes) of these T4240 ppc64 systems doesn't bridge that :(