
125 points todsacerdoti | 10 comments
1. gwd No.45038833
Kind of weird that NOP actually slows down the pipeline, as I'd think that would be the easiest thing to optimize out of the pipeline, unless instruction fetch is one of the main limiting factors. Is it architecturally defined that NOP will slow down execution?
replies(5): >>45039006 #>>45039234 #>>45039381 #>>45039410 #>>45039488 #
2. IcePic No.45039006
I think so, as in "make sure all other stuff has run before calling the NOP finished". Otherwise, it would just skip past it and it would have no effect if placed in a loop, so it would be eating memory for no use at all.
replies(2): >>45039160 #>>45039320 #
3. bob1029 No.45039160
Eating memory alone may have the desired effect. The memory bandwidth of a CPU is not infinite.
4. Someone No.45039234
I think it would be easy, but still not worth the transistors. Think of it: what programs contain lots of NOPs? Who, desiring to write a fast program, sprinkles their code with NOPs?

It’s not worth optimizing for situations that do not occur in practice.

The transistors used to detect register clearing using XOR foo,foo, on the other hand, are worth it, as lots of code has that instruction, and removing the data dependency (the instruction technically uses the contents of the foo register, but its result is independent of its value) can speed up code a lot.

replies(1): >>45039617 #
5. motorest No.45039320
> I think so, as in "make sure all other stuff has run before calling the NOP finished".

Is this related to speculative execution? The high-level description sounds like NOPs act as sync points.

6. No.45039381
7. pkhuong No.45039410
Yeah, just decode. But that's nice because the effect is independent of the backend's state.
8. adrian_b No.45039488
It depends on the CPU. On some CPUs a NOP might take the same time as an ADD and it might have the same throughput per clock cycle as ADD.

However, there are CPUs among the Intel/AMD CPUs that can execute up to a certain number of consecutive NOPs in zero time, i.e. they are removed from the instruction stream before reaching the execution units.

In general, no instruction set architecture specifies the time needed to execute an instruction. For every specific CPU model you must search its manual to find the latency and throughput for the instruction of interest, including for NOPs.

Some CPUs, like the Intel/AMD CPUs, have multiple encodings for NOP, with different lengths, in order to facilitate instruction alignment. In that case the execution time may not be the same for all kinds of NOPs.

9. adrian_b No.45039617
On CPUs with variable instruction length, like the Intel/AMD CPUs, many programs have a lot of NOPs, which are inserted by the compiler for instruction alignment.

However, those NOPs are rarely executed often, because most sit outside loop bodies. Nevertheless, there are cases when NOPs end up inside big loops, in order to align some branch targets to cache line boundaries.

That is why many recent Intel/AMD CPUs have special hardware for accelerating NOP execution, which may eliminate the NOPs before they reach the execution units.

replies(1): >>45042976 #
10. kevin_thibedeau No.45042976
MIPS required NOP insertion for load data hazards, and for the branch delay slot when a productive instruction couldn't be placed in the slot.
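
For concreteness, a sketch of classic MIPS I code (illustrative, not from the thread) showing both cases: the load delay slot on early MIPS, and a branch delay slot filled with NOP when nothing productive fits.

```asm
        lw      $t1, 0($a0)          # load word
        nop                          # load delay slot: early MIPS had no
                                     # interlock, so $t1 isn't usable yet
        addu    $t2, $t1, $t1        # now safe to use the loaded value

loop:   addiu   $t0, $t0, -1         # decrement counter
        bne     $t0, $zero, loop     # branch back while $t0 != 0
        nop                          # branch delay slot: always executes
```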