Why isn't the level of preemption a property of the specific event, rather than of some global mode? Some events need to be handled with less latency than others.
Though when it comes to gaming there's a delicate balance: game performance should be prioritized, but not to the point that it locks up the rest of the system for multitasking.
Either way, since this is mostly for idle tasks, there's little value in automating it beyond giving users a simple command they can use in scripts to toggle the various behaviors.
But yeah, thanks for making that distinction. Forgot to touch on the differences.
Make no syscalls and the timer tick still fires; the kernel takes over and does whatever it wants anyway.
NO_HZ_FULL, isolated CPU cores, interrupts routed to some other core, and you can spin at 100% CPU forever on a core. Do games do anything like this?
I haven't heard of it being done with PC games. I doubt the environment would be predictable enough. On consoles tho..?
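For reference, the isolation setup described above is usually done with kernel boot parameters plus CPU pinning. A rough sketch, assuming a kernel built with CONFIG_NO_HZ_FULL (the core numbers and the `spin_worker` binary name are placeholders, not from the thread):

```shell
# Kernel command line additions (e.g. in GRUB_CMDLINE_LINUX):
#   nohz_full=3     - stop the periodic timer tick on core 3 while one task runs there
#   isolcpus=3      - keep the scheduler from placing other tasks on core 3
#   irqaffinity=0-2 - route hardware interrupts to the remaining cores
nohz_full=3 isolcpus=3 irqaffinity=0-2

# After rebooting, pin the busy-spinning process to the isolated core:
taskset -c 3 ./spin_worker
```

With that in place, the task on core 3 can spin without tick interrupts, which is why it's more of a server/embedded trick than something a PC game can rely on.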
Even for non-rendering systems, those still usually run at game tick rates, since running them full-tilt can starve adjacent cores depending on false sharing, cache misses, bus bandwidth limits, and the like.
I can't think of a single title I worked on that did what you describe. Embedded stuff, for sure, but that's a whole different class that's likely not even running a kernel.
From what I recall, we mostly did it for predictability, so that things that might run long wouldn't interrupt deadline-sensitive things (audio, physics, etc.).