
218 points by chmaynard | 1 comment
amelius ◴[] No.41886918[source]
> A higher level of preemption enables the system to respond more quickly to events; whether an event is the movement of a mouse or an "imminent meltdown" signal from a nuclear reactor, faster response tends to be more gratifying. But a higher level of preemption can hurt the overall throughput of the system; workloads with a lot of long-running, CPU-intensive tasks tend to benefit from being disturbed as little as possible. More frequent preemption can also lead to higher lock contention. That is why the different modes exist; the optimal preemption mode will vary for different workloads.

Why isn't the level of preemption a property of the specific event, rather than of some global mode? Some events need to be handled with less latency than others.
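
For what it's worth, on kernels built with PREEMPT_DYNAMIC the global mode can at least be inspected (and switched) at runtime through debugfs. A minimal sketch, assuming debugfs is mounted at the usual path and the kernel supports it:

    /* Sketch: on a PREEMPT_DYNAMIC kernel the active preemption model is
       exposed via debugfs, e.g. "none voluntary (full)". Needs root and
       debugfs mounted at /sys/kernel/debug. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/sys/kernel/debug/sched/preempt", "r");
        if (!f) { perror("fopen"); return 1; }
        char buf[128];
        if (fgets(buf, sizeof buf, f))
            printf("preemption modes (current in parens): %s", buf);
        fclose(f);
        return 0;
    }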

replies(7): >>41887042 #>>41887125 #>>41887151 #>>41887348 #>>41887690 #>>41888316 #>>41889749 #
acters ◴[] No.41887042[source]
Mostly because such a system would result in infighting among programs that all want to be prioritized as important. To be fair, it would mostly be larger companies taking advantage of it for a "better" user experience. That makes it important to either keep the number of running applications to a minimum or simply control priorities manually for the short bursts most users experience. If anything, CPU-intensive tasks are more likely to be badly written code than some really effective use of resources.

Though when it comes to gaming, there is a delicate balance: game performance should be prioritized, but not allowed to lock up the system, for multitasking's sake.

Either way, considering this is mostly for idle tasks, there is little point in automating it beyond giving users a simple command they can script to toggle the various behaviors.

replies(1): >>41887105 #
biorach ◴[] No.41887105[source]
You're talking about user-space preemption. The person you're replying to, and the article, are about kernel preemption.
replies(2): >>41887285 #>>41887298 #
withinboredom ◴[] No.41887285[source]
Games run in a tight loop; they don't (typically) yield execution. Without preemption, a game will use 100% of all the resources all the time, given the chance.
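
A minimal sketch of the shape I mean (poll_input/update/render are hypothetical stand-ins, not any real engine's API):

    #include <stdbool.h>

    /* Hypothetical stand-ins for real input/update/render work. */
    static bool poll_input(void) { return true; }
    static void update(double dt) { (void)dt; }
    static void render(void) { }

    int main(void) {
        /* Typical frame loop: never blocks, never calls sched_yield();
           only kernel preemption (the timer tick) takes the core away. */
        while (poll_input()) {
            update(1.0 / 60.0);  /* fixed timestep for simplicity */
            render();            /* no vsync or sleep: pegs the CPU */
        }
        return 0;
    }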
replies(3): >>41887340 #>>41887455 #>>41889403 #
Tomte ◴[] No.41887340{3}[source]
Games run in user space. They don't have to yield (that's cooperative multitasking); they are preempted by the kernel, and they don't have a say about it.
replies(1): >>41887487 #
harry8 ◴[] No.41887487{4}[source]
Make a syscall for I/O, and the kernel takes over and runs whatever it likes for as long as it likes.

Make no syscalls, and a timer tick still arrives: the kernel takes over and does whatever as well.

With NO_HZ_FULL, isolated CPU cores, and interrupts routed to some other core, you can spin at 100% CPU forever on a core. Do games do anything like this?
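
A sketch of the spinning part, assuming a core (3 here, purely as an example) was isolated at boot with something like nohz_full=3 isolcpus=3 rcu_nocbs=3:

    /* Sketch: pin this process to (hypothetically) isolated core 3 and
       spin. With nohz_full/isolcpus set at boot and IRQs steered to
       other cores, the loop runs essentially undisturbed. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);  /* example core; must match the isolated set */
        if (sched_setaffinity(0, sizeof set, &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        for (;;) {
            /* busy-poll work here: 100% of the core, no syscalls */
        }
    }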

replies(1): >>41887526 #
biorach ◴[] No.41887526{5}[source]
Pinning to a core like this is done in areas like HPC and HFT. In general you want good assurance that your hardware matches your expectations, plus some kernel tuning.

I haven't heard of it being done for PC games; I doubt the environment would be predictable enough. On consoles, though..?
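
For the "hardware matches your expectations" part, you can at least ask the kernel which cores it considers isolated; a small sketch reading the standard sysfs file (empty output means no isolation was configured):

    /* Sketch: print the kernel's isolated-CPU list, e.g. "3" or "2-3".
       Reads a standard sysfs file; no special privileges needed. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/sys/devices/system/cpu/isolated", "r");
        if (!f) { perror("fopen"); return 1; }
        char buf[256];
        if (fgets(buf, sizeof buf, f))
            printf("isolated cpus: %s", buf);
        fclose(f);
        return 0;
    }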

replies(2): >>41887998 #>>41889427 #
vvanders ◴[] No.41889427{6}[source]
We absolutely pinned on consoles; anywhere you have fixed, known hardware, tuning for that specific hardware usually nets you some decent benefits.

From what I recall we mostly did it for predictability, so that things that might run long wouldn't interrupt deadline-sensitive work (audio, physics, etc.).
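
On Linux the rough equivalent would be pinning the deadline-sensitive thread and giving it a real-time priority; a sketch, assuming CAP_SYS_NICE (or root) for the SCHED_FIFO part — console SDKs expose their own APIs for this:

    /* Sketch: pin a deadline-sensitive worker (audio mixing, say) to
       one core and run it under SCHED_FIFO so longer-running work
       elsewhere can't delay it. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *audio_worker(void *arg) {
        (void)arg;
        /* hypothetical: mix and submit audio buffers on a fixed cadence */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, audio_worker, NULL);

        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);  /* example core reserved for audio */
        pthread_setaffinity_np(t, sizeof set, &set);

        struct sched_param sp = { .sched_priority = 50 };
        int rc = pthread_setschedparam(t, SCHED_FIFO, &sp);
        if (rc != 0)  /* without privileges this fails; thread stays on CFS */
            fprintf(stderr, "pthread_setschedparam: error %d\n", rc);

        pthread_join(t, NULL);
        return 0;
    }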

replies(1): >>41889548 #
biorach ◴[] No.41889548{7}[source]
Nice, thank you