

345 points splitbrain | 28 comments
OsrsNeedsf2P ◴[] No.41837682[source]
I love how simple this is - barely 100 lines of C++ (ignoring comments). That's one thing that makes me prefer X11 over Wayland.
replies(8): >>41837906 #>>41838181 #>>41838339 #>>41838393 #>>41838489 #>>41838500 #>>41838693 #>>41844644 #
asveikau ◴[] No.41838339[source]
The code is a little weird. There is no Xlib event loop; it calls sleep(100) in a loop until it hits SIGINT. That will have high CPU usage for no reason.
replies(3): >>41838378 #>>41844664 #>>41848003 #
1. diath ◴[] No.41838378[source]
It will not. Even adding just a 1 ms sleep in a loop will drop CPU usage to barely noticeable levels; 10 wakeups a second is barely anything for any CPU from the past three decades.
replies(5): >>41838399 #>>41839735 #>>41840345 #>>41845898 #>>41848081 #
2. asveikau ◴[] No.41838399[source]
Not my experience at all. Granted I haven't tried writing a loop like this in 20ish years, because once you spot that mistake you don't tend to make it again, and CPUs are better now.

Another thing to note is when you call sleep with a low value it may decide not to sleep at all, so this loop just might be constantly doing syscalls in a tight loop.

replies(1): >>41838510 #
3. diath ◴[] No.41838510[source]
> Not my experience at all. Granted I haven't tried writing a loop like this in 20ish years, because once you spot that mistake you don't tend to make it again, and CPUs are better now.

You can trivially verify it by running the following. I have personally been using "sleep for 1ms in a loop to prevent CPU burn" for years and never noticed it having any impact; it's not until I go into microseconds that I start noticing my CPU doing more busy work.

    // g++ -std=c++20 -o sleep sleep.cpp
    #include <chrono>
    #include <thread>

    int main(int, char **)
    {
        while (true) {
            std::this_thread::sleep_for(std::chrono::milliseconds{1});
        }
        return 0;
    }
> Another thing to note is when you call sleep with a low value it may decide not to sleep at all, so this loop just might be constantly doing syscalls in a tight loop.

On what system? AFAIK, if your sleep time is low enough, it gets rounded up to a multiple of the OS clock resolution, not skipped entirely. On Linux, it will use nanosleep(2), and I cannot see any mention of the sleep not suspending the thread at all for low values.

replies(2): >>41839461 #>>41843456 #
4. asveikau ◴[] No.41839461{3}[source]
If memory serves, Windows treats a sleep under the scheduler quantum length as a yield. It may take you off the cpu if there's something else to run but it may not. Meanwhile burning up cycles may prevent low power states.

At any rate, back to the code at hand, there are many ways to block on SIGINT without polling. But it's also hugely odd that this code does not read events from the X11 socket while it does so. This is code smell, and a poorly behaved X client.

replies(1): >>41841429 #
5. thwarted ◴[] No.41839735[source]
This is what the pause(2) syscall was made for, waiting for a signal forever.
6. Too ◴[] No.41840345[source]
It’s a good way to drain your battery on mobile devices, even if usage looks low.

Not that this matters for this particular tool.

replies(1): >>41840661 #
7. erickj ◴[] No.41840661[source]
> Not that this matters for this particular tool.

Then the code is perfectly appropriate.

replies(1): >>41840806 #
8. quotemstr ◴[] No.41840806{3}[source]
It's a bad example for others and a bad habit to get into. If every program did this, we'd have trouble getting CPUs into deep idle states.
replies(1): >>41841156 #
9. enriquto ◴[] No.41841156{4}[source]
It's an irrelevant implementation detail. This is for a live call. You are streaming video at the same time, so there's no point in worrying about idling.

I'd even say that it's a good example for others, because the equivalent code with the event loop would be slightly more complicated (maybe 5 more lines?). Striving for "doing things right" when the wrong thing is perfectly appropriate would be a bad example.

replies(3): >>41841764 #>>41843788 #>>41845979 #
10. orbisvicis ◴[] No.41841429{4}[source]
I thought that Linux behaved the same, but I'm not finding any proof in `man 2 nanosleep`...
replies(2): >>41842283 #>>41847422 #
11. asveikau ◴[] No.41841764{5}[source]
My guess is that somebody coded that event-loop-less X client not really familiar with the language and how to write Xlib apps. I partially assume this because C, C++ and especially Xlib are becoming less popular over time, so finding skilled practitioners to write it idiomatically is relatively rare now. This basic event loop stuff is something that maybe belongs in a library. So they just wrote library grade functionality themselves, badly. The commentary here is getting defensive about doing things the wrong way, coming up with lots of post hoc justification.
12. eqvinox ◴[] No.41842283{5}[source]
You can't find that proof because Linux does the opposite. Unless your task is SCHED_REALTIME, all timers have a little bit of slack at the end that allows the kernel to group wakeup events. You can configure this (for non-RT tasks) with prctl(PR_SET_TIMERSLACK).

https://lxr.linux.no/#linux+v6.7.1/kernel/time/hrtimer.c#L20...

https://www.man7.org/linux/man-pages/man2/PR_SET_TIMERSLACK....

replies(1): >>41845644 #
13. 01HNNWZ0MV43FF ◴[] No.41843456{3}[source]
> never noticed

I'd love to see numbers with a Kill-A-watt between the PC and the wall

replies(1): >>41844219 #
14. drdaeman ◴[] No.41843788{5}[source]
> You are streaming video at the same time, so there's no point in worrying about idling.

I'd argue it's completely opposite of this. You're streaming video, already putting some significant stress on the system. No reason to waste time (even if it's a minuscule amount) to make things worse.

> Striving for "doing things right" when the wrong thing is perfectly appropriate would be a bad example.

And that's how we ended with e.g. modern IoT that kinda sorta works but accumulation of minor bad decisions (and some less minor bad decisions for sure) ends up making the whole thing a hot mess.

replies(1): >>41848024 #
15. winrid ◴[] No.41844219{4}[source]
Why? Running an empty loop a thousand times a second is literally almost nothing to any cpu released in the past 20yrs at least
replies(2): >>41844714 #>>41850243 #
16. EasyMark ◴[] No.41844714{5}[source]
Running an empty loop with no sleep or other yield type operation will peg one of your cores if you pin it to that core.

  int main() {
      while (1) {}
  }
compile that with -O0 and see what happens.
replies(2): >>41845249 #>>41846966 #
17. winrid ◴[] No.41845249{6}[source]
When did we say anything about no sleep or yield? That's completely different. Read the thread.
18. orbisvicis ◴[] No.41845644{6}[source]
Sorry! I was looking for documentation that on Linux sleep(0) yields.
replies(1): >>41846639 #
19. funcDropShadow ◴[] No.41845898[source]
Whether the CPU is busy because of a loop with a sleep depends on the ratio of the sleep time to the time spent doing the rest of one loop iteration. Doing stuff in a loop iteration that takes 1 min and then adding a 1 ms sleep will not drop CPU usage by a measurable amount.
replies(1): >>41848001 #
20. eqvinox ◴[] No.41846639{7}[source]
There is no code in nanosleep that converts it into a yield, and in fact a nanosleep(0) is a nanosleep(50µs) with the default timer slack value. If you want to yield, call sched_yield() …
replies(1): >>41847428 #
21. lupusreal ◴[] No.41846966{6}[source]
"A thousand times a second" obviously implies sleep or yield unless your computer is old enough to be your grandfather.
22. gpderetta ◴[] No.41847422{5}[source]
It used to be the case that glibc implemented nanosleep with a spin loop for small values below the scheduling quantum. It was explicitly documented to do so.

This was changed sometime in the last 20 years, probably as battery-powered devices became more prevalent and CPUs implemented more advanced sleep states.

23. orbisvicis ◴[] No.41847428{8}[source]
I looked into this a bit further and it seems the wakeup falls in the range [0, 50µs]. [1] explains that if there is a pre-existing timer interrupt at 0, then the queue will be resumed at 0. But yes, given no other timers it will resume at 50µs.

1. https://people.kernel.org/joelfernandes/on-workings-of-hrtim...

24. account42 ◴[] No.41848001[source]
The question is about waiting, i.e. when you have no real work to do. If you have significant work to do then there is no point in sleeping until that work is done.
25. account42 ◴[] No.41848024{6}[source]
Sleeping for 100ms between checking for events will not produce a noticeable CPU load. The only reason this would drain the battery is because it can prevent the CPU from entering deeper powersaving states - but even for that 100ms is an eternity and video streaming will prevent that anyway.
26. larschdk ◴[] No.41848081[source]
Sure, if that is the only program, but it is not. This kind of thinking drains batteries faster than necessary, drains the cache, and reduces CPU efficiency. sleep() is a wasteful system call, a kludge at best, and is never the correct solution to a synchronization problem.
27. 01HNNWZ0MV43FF ◴[] No.41850243{5}[source]
For science, literal proving hypotheses science
replies(1): >>41871409 #
28. winrid ◴[] No.41871409{6}[source]
Well, I already run my setup through a Kill-A-Watt. I don't see a difference with Python, which you could argue should be 10-100x less efficient:

    import time

    while True:
        time.sleep(0.001)
Also, the script itself bounces between 0% and 1% CPU usage.