
218 points chmaynard | 1 comment
amelius ◴[] No.41886918[source]
> A higher level of preemption enables the system to respond more quickly to events; whether an event is the movement of a mouse or an "imminent meltdown" signal from a nuclear reactor, faster response tends to be more gratifying. But a higher level of preemption can hurt the overall throughput of the system; workloads with a lot of long-running, CPU-intensive tasks tend to benefit from being disturbed as little as possible. More frequent preemption can also lead to higher lock contention. That is why the different modes exist; the optimal preemption mode will vary for different workloads.

Why isn't the level of preemption a property of the specific event, rather than of some global mode? Some events need to be handled with less latency than others.
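
For what it's worth, per-task latency needs can already be expressed to some degree through scheduling policy rather than a global preemption mode. A minimal sketch, assuming a Linux userspace with the POSIX scheduling API (the priority value 50 is arbitrary):

    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Ask for real-time (SCHED_FIFO) scheduling so this task
         * preempts ordinary SCHED_OTHER tasks; the kernel's global
         * preemption mode still bounds how quickly that can happen. */
        struct sched_param sp;
        memset(&sp, 0, sizeof(sp));
        sp.sched_priority = 50;  /* arbitrary mid-range RT priority */

        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");  /* needs CAP_SYS_NICE or root */
            return 1;
        }
        puts("running under SCHED_FIFO, priority 50");
        return 0;
    }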

replies(7): >>41887042 #>>41887125 #>>41887151 #>>41887348 #>>41887690 #>>41888316 #>>41889749 #
AtlasBarfed ◴[] No.41887690[source]
Aren't we in the massively multi-core era?

I guess it's nice to keep Linux relevant to older single-CPU architectures, especially with regard to embedded systems.

But if Linux is going to be targeted primarily at modern CPU architectures, can't it basically assume that there is a single CPU available to evaluate priority, and leave the CPU-intensive tasks bound to other cores? A sketch of that split is below.

I mean, this has to be what high/low core splits are for, outside of mobile efficiency.
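
For illustration, here is a minimal sketch of binding a CPU-intensive process away from a designated low-latency core, using the Linux affinity API (the CPU numbers and the 2+2 split are hypothetical):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        /* Pin this CPU-intensive process to CPUs 2-3, leaving CPUs
         * 0-1 free for latency-sensitive work (hypothetical split). */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);
        CPU_SET(3, &set);

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        puts("bound to CPUs 2-3");
        return 0;
    }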

replies(2): >>41887805 #>>41889649 #
1. yndoendo ◴[] No.41889649[source]
Massive multi-core is subjective. Most industrial computers are still low-core; you are more likely to find dual- or quad-core machines in these environments. More cores cost more money and increase the cost of automation. Look at Advantech's computer specifications for this segment of the market.

Software PLCs will bind to a core that is not exposed to the OS environment, so a dual-core machine shows up as single-core and a quad-core as tri-core.
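
A quick way to see this effect is to compare the configured and online CPU counts. A minimal sketch, assuming glibc's sysconf extensions (whether a reserved core appears in either count depends on how it was hidden from the kernel):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* On a box where a software PLC has reserved a core, the
         * counts the OS reports can be lower than the physical
         * core count (exact behavior depends on how the core was
         * hidden from the kernel). */
        long conf   = sysconf(_SC_NPROCESSORS_CONF);
        long online = sysconf(_SC_NPROCESSORS_ONLN);

        printf("configured CPUs: %ld\n", conf);
        printf("online CPUs:     %ld\n", online);
        return 0;
    }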