But do you know what's not free? Memory accesses[1]. So when I'm optimizing things, I focus on making things more cache friendly.
[1] http://gec.di.uminho.pt/discip/minf/ac0102/1000gap_proc-mem_...
They'll "optimize" your code by deleting it. They'll "prove" your null/overflow checks are useless and just delete them. Then they'll "prove" your entire function is useless or undefined and just "optimize" it to a no-op or something. Make enough things undefined and maybe they'll turn the main function into a no-op.
In languages like C, people are well advised to disable some of these problematic optimizations and explicitly force the compiler to commit to certain implementation details, just to keep things sane.
For example:
  if (p == NULL) return;
  if (p == NULL) doSomething(); // dead code: the compiler knows p != NULL here
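The better-known trap is the inverse: dereference first, check second. A minimal sketch of that pattern (first_element() is a made-up example, not from any particular project):

  int first_element(int *p)
  {
      int v = *p;     // dereference: from here on the compiler assumes p != NULL
      if (p == NULL)  // "provably" false, so the check may be deleted entirely
          return -1;
      return v;
  }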
What is problematic is when they remove something like memset() right before a free operation, when the memset() is needed to sanitize sensitive data like encryption keys. There are ways of forcing compilers to retain the memset(), such as using functions designed not to be optimized out, like explicit_bzero(). You can see how we took care of this problem in OpenZFS.
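Not the OpenZFS change referenced above, just a generic sketch of the idea, assuming a glibc or BSD libc where explicit_bzero() exists (it is not part of ISO C):

  #include <stdlib.h>
  #include <string.h>

  void destroy_key(unsigned char *key, size_t keylen)
  {
      // a plain memset() here could be dropped as a "dead store" before free();
      // explicit_bzero() is specified to really clear the memory
      explicit_bzero(key, keylen);
      free(key);
  }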
  char *allocate_a_string_please(int n)
  {
      if (n + 1 < n)
          return 0; // overflow
      return malloc(n + 1); // space for the NUL
  }
Unfortunately, we cannot have nice things because of optimizing compilers and the holy C standard.
The compiler "knows" that signed integer overflow is undefined. In practice, it just assumes that integer overflow cannot ever happen and uses this "fact" to "optimize" this program. Since signed integers "cannot" overflow, it "proves" that the condition always evaluates to false. This leads it to conclude that both the condition and the consequent are dead code.
Then it just deletes the safety check and introduces potential security vulnerabilities into the software.
Compiler vendors had to add literal builtins to let people detect overflow conditions and make the compiler generate the code they actually want.
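On GCC and Clang, __builtin_add_overflow() is one such builtin. A sketch of the earlier function rewritten with it (an illustration only, not necessarily how any real project does it):

  #include <stdlib.h>

  char *allocate_a_string_please(int n)
  {
      int total;
      // returns true if n + 1 overflowed, with no undefined behaviour involved
      if (__builtin_add_overflow(n, 1, &total))
          return NULL; // overflow
      return malloc(total); // space for the NUL
  }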
Fighting the compiler's assumptions and axioms gets annoying at some point and people eventually discover the mercy of compiler flags such as -fwrapv and -fno-strict-aliasing. Anyone doing systems programming with strict aliasing enabled is probably doing it wrong. Can't even cast pointers without the compiler screwing things up.
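For instance, the classic type-punning cast below is undefined under the strict aliasing rules, and building with something like "cc -O2 -fwrapv -fno-strict-aliasing" is how people keep the compiler from exploiting that (purely illustrative; it also assumes float and unsigned int have the same size):

  #include <stdio.h>

  int main(void)
  {
      float f = 1.0f;
      unsigned int *bits = (unsigned int *)&f; // violates strict aliasing
      // with -fno-strict-aliasing the compiler must assume f and *bits can alias
      printf("0x%08x\n", *bits);
      return 0;
  }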
> No one does this
Here's a 2018 example.
https://github.com/mruby/mruby/commit/180f39bf4c5246ff77ef71...
https://github.com/mruby/mruby/issues/4062
  while (l >= bsiz - blen) {
      bsiz *= 2;
      if (bsiz < 0)
          mrb_raise(mrb, E_ARGUMENT_ERROR, "too big specifier");
  }
> However with -O2 the mrb_raise is never triggered, since bsiz is a signed integer.
> Signed integer overflows are undefined behaviour and thus gcc removes the check.
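One way to keep the guard is to test before doubling instead of after, so the signed overflow never happens in the first place (a sketch only, not the actual mruby fix; INT_MAX comes from <limits.h>):

  while (l >= bsiz - blen) {
      if (bsiz > INT_MAX / 2) // the next doubling would overflow
          mrb_raise(mrb, E_ARGUMENT_ERROR, "too big specifier");
      bsiz *= 2;
  }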
People have even categorized this as a compiler vulnerability.
https://www.kb.cert.org/vuls/id/162289
> C compilers may silently discard some wraparound checks
And they aren't wrong.
The programmer wrote reasonable code that makes sense and perfectly aligns with their mental model of the machine.
The compiler took this code and screwed it up because it violated assumptions about some abstract C machine nobody really cares about.