
72 points rbanffy | 4 comments
blakepelton ◴[] No.45313835[source]
The article quotes the Intel docs: "Instruction ordering: Instructions following a SYSCALL may be fetched from memory before earlier instructions complete execution, but they will not execute (even speculatively) until all instructions prior to the SYSCALL have completed execution (the later instructions may execute before data stored by the earlier instructions have become globally visible)."

More detail here would be great, especially using the terms "issue" and "commit" rather than "execute".

A barrier makes sense to me, but preventing instructions from issuing seems like too strong a requirement. How could anyone tell?
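For what it's worth, the quoted guarantee can be sketched as a toy timing model (illustrative Python, not how a real pipeline is built): later instructions may be *fetched* early, but nothing younger than a SYSCALL issues until everything older than the SYSCALL has completed.

```python
def simulate(program):
    """Toy model of the quoted SYSCALL issue gate.

    `program` is a list of (mnemonic, latency) pairs in program order.
    Returns the cycle at which each instruction issues. For simplicity,
    every instruction issues as early as its gate allows; the only
    constraint modeled is the one from the Intel quote: an instruction
    younger than a SYSCALL cannot issue (even speculatively) until all
    instructions prior to that SYSCALL have completed execution.
    """
    issue = [0] * len(program)
    done = [0] * len(program)
    for i, (name, latency) in enumerate(program):
        gate = 0
        for j in range(i):
            if program[j][0] == "SYSCALL":
                # Wait for everything older than the SYSCALL to finish.
                gate = max(gate, max(done[:j], default=0))
        issue[i] = gate
        done[i] = gate + latency
    return issue

# A slow store before the SYSCALL delays the load after it,
# even though nothing else forces an ordering between them.
print(simulate([("STORE", 5), ("SYSCALL", 1), ("LOAD", 1)]))  # [0, 0, 5]
```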

replies(2): >>45315382 #>>45319204 #
1. convolvatron ◴[] No.45315382[source]
It might have more to do with the difficulty of separating out the contexts of the two execution streams across the rings. Someone may have looked at the cost and complexity of all that accounting and said 'hell no'.
replies(3): >>45316395 #>>45317540 #>>45318409 #
2. BobbyTables2 ◴[] No.45316395[source]
And given Intel’s numerous speculation related vulnerabilities, it must have been quite a rare moment!!!
3. blakepelton ◴[] No.45317540[source]
Yeah, I would probably say the same. It is a bit strange to document this as part of the architecture (rather than leaving it open as a potential future microarchitectural optimization). Is there some advantage for an OS in knowing that the CPU flushes the pipeline on each system call?
4. codedokode ◴[] No.45318409[source]
Is it that difficult to add a "ring" bit to every instruction in the instruction queue? Sorry, I never made an OoO CPU before.
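The catch (as a toy sketch, hypothetical names, nothing like real hardware) is that the ring tag isn't just one bit of metadata: every privilege-sensitive check in the machine now has to consult the *entry's* ring rather than a single global mode bit.

```python
from dataclasses import dataclass

@dataclass
class QueueEntry:
    pc: int
    ring: int  # privilege level the instruction was fetched under

def may_access(entry: QueueEntry, page_is_supervisor: bool) -> bool:
    """Simplified page-permission check.

    With mixed-ring entries in flight, the check depends on the
    per-entry ring, not on "the CPU's current ring" -- one small
    example of the per-instruction accounting the tagging approach
    drags into every load/store path, TLB lookup, fault check, etc.
    """
    return entry.ring == 0 or not page_is_supervisor

print(may_access(QueueEntry(pc=0x1000, ring=3), page_is_supervisor=True))   # False
print(may_access(QueueEntry(pc=0x2000, ring=0), page_is_supervisor=True))   # True
```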