
420 points gnabgib | 1 comment
drewg123 No.44000124
I tend to be of the opinion that for modern general-purpose CPUs, micro-optimizations like these are unnecessary: the CPUs are so fast that instructions are almost free.

But do you know what's not free? Memory accesses[1]. So when I'm optimizing things, I focus on making things more cache friendly.

[1] http://gec.di.uminho.pt/discip/minf/ac0102/1000gap_proc-mem_...

replies(14): >>44000191 #>>44000255 #>>44000266 #>>44000351 #>>44000378 #>>44000418 #>>44000430 #>>44000433 #>>44000478 #>>44000639 #>>44000687 #>>44001113 #>>44001140 #>>44001975 #
PaulKeeble No.44000378
The thing about these optimisations (assuming they test as higher performance) is that they can get applied in a library, and then everyone benefits from the speedup that took some hard graft to work out. Very few people bake their own date API nowadays if they can avoid it, since one already exists, and techniques like this speed up every programme whether it's on the critical path or not.
replies(1): >>44000391 #
codexb No.44000391
That's basically compilers these days. It used to be that you could try to optimize your code, inlining things here and there, but these days you're not going to beat the compiler's optimizations.
replies(6): >>44000550 #>>44000584 #>>44000692 #>>44000889 #>>44000980 #>>44001055 #
matheusmoreira No.44000889
These days optimizing compilers are your number one enemy.

They'll "optimize" your code by deleting it. They'll "prove" your null/overflow checks are useless and just delete them. Then they'll "prove" your entire function is useless or undefined and just "optimize" it to a no-op or something. Make enough things undefined and maybe they'll turn the main function into a no-op.

In languages like C, people are well advised to disable some problematic optimizations and explicitly force the compiler to assume some implementation details to make things sane.

replies(1): >>44001068 #
ryao No.44001068
If they prove a NULL check is always false, it means you have dead code.

For example:

  if (p == NULL) return;

  if (p == NULL) doSomething();
It is safe to delete the second one. Even if it is not deleted, it will never be executed.

What is problematic is when they remove something like memset() right before a free operation, when the memset() is needed to sanitize sensitive data like encryption keys. There are ways of forcing compilers to retain the memset(), such as calling a function designed not to be optimized out, e.g. explicit_bzero(). You can see how we took care of this problem in OpenZFS here:

https://github.com/openzfs/zfs/pull/14544
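As a rough illustration of the dead-store problem (secure_free and memset_v are hypothetical names, not taken from the OpenZFS patch): a plain memset() right before free() is, by the compiler's reasoning, a store to memory that is never read again, so it may be elided. Routing the call through a volatile function pointer is a common portable trick to force the write; explicit_bzero() (BSD, glibc >= 2.25) or C11's memset_s() do the same job where available.

```c
#include <stdlib.h>
#include <string.h>

/* The compiler cannot prove what a volatile function pointer points
   to at the call site, so it cannot treat the store as dead code. */
static void *(*const volatile memset_v)(void *, int, size_t) = memset;

/* Zero a buffer holding sensitive data (e.g. a key) before freeing
   it, in a way the optimizer is not entitled to remove. */
static void secure_free(void *p, size_t len) {
    if (p == NULL) return;
    memset_v(p, 0, len);
    free(p);
}
```

Usage is the same as free(), plus the length: `secure_free(key, key_len);`.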

replies(2): >>44001488 #>>44002944 #
Dylan16807 No.44002944
If they prove a check is always false, it means you have dead code or you made a mistake.

It is very very hard to write C without mistakes.

When not-actually-dead code gets removed, the consequences of many mistakes get orders of magnitude worse.