
288 points by Twirrim | 4 comments
1. bcoates ◴[] No.41876138[source]
I have mixed feelings about this. On the one hand, it's obviously correct--there is no meaningful use for CHAR_BIT to be anything other than 8.

On the other hand, it seems like some sort of concession to the idea that you are entitled to some sort of just world where things make sense and can be reasoned out given your own personal, deeply oversimplified model of what's going on inside the computer. This approach can take you pretty far, but it's a garden path that goes nowhere--eventually you must admit that you know nothing and the best you can do is a formal argument that conditional on the documentation being correct you have constructed a correct program.

This is a huge intellectual leap, and in my personal experience the further you go without being forced to acknowledge it the harder it will be to make the jump.

That said, physical electronics projects seem to be increasingly popular among the novice set these days... hopefully "read the damn spec sheet" will become the new "read the documentation".

replies(2): >>41876249 #>>41877226 #
2. joelignaatius ◴[] No.41876249[source]
As with any highly used language you end up running into what I call the COBOL problem. It will work for the vast majority of cases except where there's a system that forces an update and all of a sudden a traffic control system doesn't work or a plane falls out of the sky.

You'd have to have some way of testing all previously compiled code (pardon my ignorance if this is somehow obvious) to make sure this macro isn't already relied on. You also risk forking the language with any kind of breaking change like this. How difficult it would be to test whether a previous code base uses the CHAR_BIT macro, and whether it can be updated to the new compiler, sounds non-obvious. Which libraries would then be considered breaking? Would interacting with other compiled code (possibly a stupid question) that used CHAR_BIT also cause problems? Just off the top of my head.

I agree that it sounds nonintuitive. I'd suggest creating a conversion tool first and demonstrating it was safe to use even in extreme cases and then make the conversion. But that's just my unenlightened opinion.

replies(1): >>41876505 #
3. bcoates ◴[] No.41876505[source]
That's not really the problem here--CHAR_BIT is already 8 everywhere in practice, and all real existing code[1] handles CHAR_BIT being 8.

The question is "does any code need to care about CHAR_BIT > 8 platforms?" and the answer, of course, is no. It's just a matter of whether we perform the occult standards ceremony to acknowledge this, or continue to ritually pretend that standards-compliant 16-bit DSPs are a thing.

[1] I'm sure artifacts of 7, 9, 16, 32, etc[2] bit code & platforms exist, but they aren't targeting or implementing anything resembling modern ISO C++ and can continue to exist without anyone's permission.

[2] if we're going for unconventional bitness my money's on 53 (the significand width of an IEEE-754 double), which at least has practical uses in 2024

4. technion ◴[] No.41877226[source]
And yet every time I run an autoconf script I watch as it checks the bits in a byte and saves the output in config.h as though anyone planned to act on it.
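The pattern being described can be sketched as a configure.ac fragment (hypothetical names; `AC_COMPUTE_INT` and `AC_DEFINE_UNQUOTED` are standard autoconf macros, but `BITS_PER_BYTE` is an illustrative name, not one any particular project uses):

```m4
# Compute CHAR_BIT at configure time and record it in config.h,
# even though no downstream code will ever see a value other than 8.
AC_COMPUTE_INT([char_bit], [CHAR_BIT], [#include <limits.h>],
               [char_bit=8])
AC_DEFINE_UNQUOTED([BITS_PER_BYTE], [$char_bit],
                   [Number of bits in a byte.])
```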