https://en.wikipedia.org/wiki/Delay_slot
I'm surprised by how many other architectures use it.
Stanford MIPS was extremely influential, which was undoubtedly a major factor in many RISC architectures copying the delay-slot feature, including SPARC, PA-RISC, and the i860. But the delay slot really only simplifies a narrow range of microarchitectures, those with almost exactly the same pipeline structure as the original. If you want to lengthen the pipeline, you either have to add the interlocks back in or add extra delay slots, breaking binary compatibility. So delay slots fell out of favor fairly quickly in the 80s. Maybe they were never a good tradeoff.
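For anyone who hasn't run into delay slots before, here's a toy C model of the architecturally visible semantics (a made-up mini-ISA for illustration, not real MIPS encodings): the instruction after a taken branch still executes before control reaches the target.

    #include <stdio.h>

    enum op { ADD, BEQZ, PRINT, HALT };
    struct insn { enum op op; int a, b, target; };

    static void run(const struct insn *prog, int *reg)
    {
        int pc = 0;
        int pending = -1;              /* branch target waiting to take effect */

        for (;;) {
            const struct insn *i = &prog[pc];
            int next = (pending >= 0) ? pending : pc + 1;
            pending = -1;

            switch (i->op) {
            case ADD:   reg[i->a] += i->b;                       break;
            case BEQZ:  if (reg[i->a] == 0) pending = i->target; break;
            case PRINT: printf("r%d = %d\n", i->a, reg[i->a]);   break;
            case HALT:  return;
            }
            pc = next;                 /* the insn after a branch always ran first */
        }
    }

    int main(void)
    {
        int reg[4] = {0};
        struct insn prog[] = {
            { BEQZ,  1, 0, 3 },   /* 0: branch to 3 if r1 == 0 (it is)        */
            { ADD,   2, 7, 0 },   /* 1: delay slot -- still executes, r2 += 7 */
            { ADD,   2, 100, 0 }, /* 2: skipped                               */
            { PRINT, 2, 0, 0 },   /* 3: prints r2 = 7, proving the slot ran   */
            { HALT,  0, 0, 0 },
        };
        run(prog, reg);
        return 0;
    }

The "pending" variable is the whole trick: the branch resolves one instruction early, which is exactly the pipeline detail the original MIPS exposed to software.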
One of the main things pushing people to RISC in the 80s was virtual memory, specifically the need to be able to restart a faulted instruction after a page fault. (See Mashey's masterful explanation of why this doomed the VAX in https://yarchive.net/comp/vax.html.) RISC architectures generally didn't have multiple memory accesses or multiple writes per instruction (ARM being a notable exception), so all the information you needed to restart the failed instruction successfully was in the saved program counter.
But delay slots pose a problem here! Suppose the faulting instruction is the delay-slot instruction following a branch. The next instruction to execute after resuming it is either the instruction that was branched to or the instruction at the address after the delay slot, depending on whether the branch was taken. That means either the fault has to be taken before the branch, or the fault handler needs at least the branch-taken bit saved for it. I've never written a page-fault handler for MIPS, SPARC, PA-RISC, or the i860, so I don't know how they handle this, but it seems to imply extra implementation complexity of precisely the kind Hennessy was trying to weasel out of.
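To make the "branch-taken bit" point concrete, here's a rough sketch of the resume logic a fault handler would need. The trap-frame fields are hypothetical, not from any real OS or ISA; real hardware encodes this differently (or avoids the problem by faulting on the branch itself).

    /* Hypothetical trap-frame fields, purely to illustrate the resume logic. */
    struct trap_frame {
        unsigned long faulting_pc;   /* address of the delay-slot instruction */
        unsigned long branch_target; /* target of the preceding branch        */
        int in_delay_slot;           /* did the fault happen in a delay slot? */
        int branch_taken;            /* ...and was the branch taken?          */
    };

    /* After servicing the page fault, the delay-slot instruction itself must
     * be re-executed, but the PC after it depends on whether the branch was
     * taken -- information that is gone unless the hardware saved it. */
    unsigned long next_pc_after_resume(const struct trap_frame *tf)
    {
        if (tf->in_delay_slot && tf->branch_taken)
            return tf->branch_target;
        return tf->faulting_pc + 4;  /* assuming fixed 4-byte instructions */
    }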
The WP page also mentions that MIPS had load delay slots, where the datum you loaded wasn't available in the very next instruction. I'm reminded that the Tera MTA actually had a variable number of load delay slots, specified in a field in the load instruction, to let the compiler give the memory reference as many instructions as it could to come back from RAM over the packet-switching network. (The CPU would then stall your thread if the load took longer than the allotted number of instructions, but the idea was that a compiler that prefetched enough stuff into your thread's huge register set could make such stalls very rare.)
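A toy way to express that idea (names and numbers are mine, not the real MTA instruction format): the load promises a lookahead, and the hardware only stalls by however much the memory latency exceeds that promise.

    #include <stdio.h>

    /* The load instruction declares how many following instructions will not
     * read the destination register ("lookahead").  The thread stalls only if
     * memory is slower than that promise. */
    static int stall_cycles(int lookahead, int memory_latency)
    {
        return memory_latency > lookahead ? memory_latency - lookahead : 0;
    }

    int main(void)
    {
        printf("lookahead 3, latency 2  -> stall %d\n", stall_cycles(3, 2));  /* 0  */
        printf("lookahead 3, latency 20 -> stall %d\n", stall_cycles(3, 20)); /* 17 */
        return 0;
    }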
One more thing about branch delay slots: it seems the original SuperH went for a very minimal solution. It prevents interrupts from being taken between the branch and its delay slot, and not much else. PC-relative accesses are relative to the branch target, and faults are also reported with the branch-target address. As far as I can see, this makes faults in branch delay slots unrecoverable. In SH-3 they patched that by reporting faults in the delay slots of taken branches with the address of the branch itself, so things can be fixed up in the fault handler.
As for SH2, ouch! It got pretty badly screwed by delay slots, eh?
As for SuperH, I don't think they cared too much. The primary use of fault handling is memory paging, and an MMU was only added in the SH-3, which is probably why delay-slot fault recovery was fixed at the same time. Before that, faults were only illegal opcodes or alignment violations, and the answer to those was probably "don't do that".
I didn't remember that the SH2 didn't support virtual memory (perhaps because I've never used SuperH). That makes sense, then.
I think that, for the ways people most commonly use CPUs, it's acceptable if the value you read from a register in a load delay slot is nondeterministic, for example depending on whether you resumed from a page fault or on whether you had a cache miss. It could really impede debugging if it happened in practice, and it could impede reverse-engineering of malware, but I believe such microarchitecture-dependent behavior is actually relatively common. (IIRC you could detect the difference between an 8086 and an 8088 by modifying the next instruction in the program, which would already have been fetched by the 8086 but not the 8088. But I'm guessing that under a single-stepping debugger the 8086 would act like an 8088 in this case.) The solution would probably be "stop lifting your arm like that if it hurts": it's easy enough to just not emit the offending instruction sequences from your compiler.
The case where people really worry about nondeterminism is where it exposes information in a security-violating way, as in Spectre, which isn't even nondeterminism at the register-contents level, just the timing level.
Myself, I have a strong preference for strongly deterministic CPU semantics, and I've been working on a portable strongly deterministic (but not for timing) virtual machine for archival purposes. But clearly strong determinism isn't required for a usable CPU.
Apparently so. Maybe the logic is that the value is available one instruction later if it's a cache hit, but on a miss the whole pipeline stalls anyway and only resumes when the result is available.
One source of non-determinism that stuck around for a long time in various architectures is LL/SC (load-linked/store-conditional) atomics. It mostly doesn't matter, but, for example, the rr recording debugger on AArch64 doesn't work on applications that use them instead of the newer CAS-extension atomics.
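As a concrete example (my sketch, not anything from rr's docs): even an ordinary C11 atomic increment like the one below typically compiles to an LDXR/STXR (LL/SC) retry loop on AArch64 without the LSE extension, and how many times that loop retries depends on interrupts and what other cores are doing to the cache line, which is exactly the kind of thing a record-and-replay tool can't reproduce. With LSE (e.g. -march=armv8.1-a) the compiler can emit a single LDADD instead.

    #include <stdatomic.h>

    /* Plain C11 atomic increment.  Without LSE this becomes an exclusive
     * load/store retry loop whose iteration count varies run to run; with
     * LSE it can be a single atomic instruction, which replays cleanly. */
    void bump(_Atomic long *counter)
    {
        atomic_fetch_add_explicit(counter, 1, memory_order_relaxed);
    }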
Switch(mipsarch): Case 1: Nop.
Case 2: Noop.
Case 10: Noooooooooop.