
182 points by Twirrim | 1 comment
WalterBright:
D made a great leap forward with the following:

1. bytes are 8 bits

2. shorts are 16 bits

3. ints are 32 bits

4. longs are 64 bits

5. arithmetic is 2's complement

6. IEEE floating point

and a big chunk of the time previously wasted trying to abstract these away (and getting it wrong anyway) was saved. Millions of people cried out in relief!

Oh, and Unicode was the character set. Not EBCDIC, RADIX-50, etc.

cogman10:
Yeah, this is something Java got right as well. It got "unsigned" wrong (there are no unsigned integer types), but it got the primitive widths standardized correctly:

byte = 8 bits

short = 16 bits

int = 32 bits

long = 64 bits

float = 32-bit IEEE 754

double = 64-bit IEEE 754
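
For illustration, a minimal Java sketch of both points: the widths above are fixed by the language spec and exposed as SIZE constants on the wrapper classes, while unsigned arithmetic is only available through the reinterpreting helpers added in Java 8 (the class name here is just for the example):

    public class PrimitiveWidths {
        public static void main(String[] args) {
            // Widths in bits, fixed by the Java Language Specification.
            System.out.println("byte   = " + Byte.SIZE);    // 8
            System.out.println("short  = " + Short.SIZE);   // 16
            System.out.println("int    = " + Integer.SIZE); // 32
            System.out.println("long   = " + Long.SIZE);    // 64
            System.out.println("float  = " + Float.SIZE);   // 32 (IEEE 754 binary32)
            System.out.println("double = " + Double.SIZE);  // 64 (IEEE 754 binary64)

            // The "unsigned" gap: no unsigned primitive types, only
            // helpers (Java 8+) that reinterpret the two's-complement bits.
            int allOnes = -1;
            System.out.println(Integer.toUnsignedLong(allOnes));   // 4294967295
            System.out.println(Integer.toUnsignedString(allOnes)); // 4294967295
        }
    }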

josephg:
Yep. Pity about getting chars / string encoding wrong, though (Java chars are 16 bits, so strings are effectively UTF-16).

But it’s not alone in that mistake; all the languages invented in that era (C#, JavaScript, etc.) made the same choice.
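
A small Java sketch of what the 16-bit char costs in practice: any code point outside the Basic Multilingual Plane is stored as a UTF-16 surrogate pair, so the char-based APIs count code units rather than characters (the string literal is just an example):

    public class SurrogatePairs {
        public static void main(String[] args) {
            String s = "🙂"; // one code point, U+1F642
            System.out.println(s.length());                      // 2 -- UTF-16 code units
            System.out.println(s.codePointCount(0, s.length())); // 1 -- actual characters
            System.out.println((int) s.charAt(0));               // 55357 -- a lone high surrogate
            // Correct iteration has to go through code points, not chars.
            s.codePoints().forEach(cp -> System.out.printf("U+%X%n", cp)); // U+1F642
        }
    }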

paragraft:
What's the right way?
WalterBright:
UTF-8

When D was first implemented, circa 2000, it wasn't clear whether UTF-8, UTF-16, or UTF-32 was going to be the winner. So D supported all three.
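
A quick Java comparison of how the three encodings size the same text (the sample string is arbitrary, and UTF-32 is an extended charset in the JDK rather than part of StandardCharsets):

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class EncodingSizes {
        public static void main(String[] args) {
            String s = "héllo 🙂"; // 7 code points: 6 in the BMP plus one emoji
            System.out.println(s.getBytes(StandardCharsets.UTF_8).length);      // 11 -- 1 to 4 bytes per code point
            System.out.println(s.getBytes(StandardCharsets.UTF_16BE).length);   // 16 -- 2 or 4 bytes per code point
            System.out.println(s.getBytes(Charset.forName("UTF-32BE")).length); // 28 -- always 4 bytes per code point
        }
    }

UTF-8's backward compatibility with ASCII and freedom from byte-order issues are a big part of why it ended up winning.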