
218 points by chmaynard | 1 comment
amelius No.41886918
> A higher level of preemption enables the system to respond more quickly to events; whether an event is the movement of a mouse or an "imminent meltdown" signal from a nuclear reactor, faster response tends to be more gratifying. But a higher level of preemption can hurt the overall throughput of the system; workloads with a lot of long-running, CPU-intensive tasks tend to benefit from being disturbed as little as possible. More frequent preemption can also lead to higher lock contention. That is why the different modes exist; the optimal preemption mode will vary for different workloads.

Why isn't the level of preemption a property of the specific event, rather than of some global mode? Some events need to be handled with less latency than others.

replies(7): >>41887042, >>41887125, >>41887151, >>41887348, >>41887690, >>41888316, >>41889749
1. ajross No.41888316
> Why isn't the level of preemption a property of the specific event, rather than of some global mode? Some events need to be handled with less latency than others.

How do you know which thread is needed to "handle" this particular "event" though? I mean, maybe you're about to start a high priority video with low latency requirements[1]. And due to a design mess your video player needs to contact some random auth server to get a DRM cookie for the stream.

How does the KERNEL know that the auth server is on the critical path for the backup camera? That's a human-space design issue, not a scheduler algorithm.

[1] A backup camera in a vehicle, say.