
setBigTimeout

(evanhahn.com)
210 points by cfj | 10 comments
1. bufferoverflow ◴[] No.41885979[source]
setTimeout is stranger than you think.

We recently had a failed unit test because setTimeout(fn, 1000) triggered at 999ms. That test had run more than a hundred times before just fine. Until one day it didn't.

replies(5): >>41885984 #>>41886191 #>>41886471 #>>41886483 #>>41886563 #
2. jonathanlydall ◴[] No.41885984[source]
Interesting.

Maybe the system clock did a network time synchronisation during the setTimeout window.

3. gregoriol ◴[] No.41886191[source]
I don't think there is any guarantee that setTimeout will fire at exactly 1000ms. I didn't expect it to fire earlier, though; it definitely could fire later.
replies(1): >>41893177 #
4. _flux ◴[] No.41886471[source]
I wonder if your 999ms was measured using wall-clock time or a monotonic time source? I imagine a wee time correction at an inopportune time could make this happen.
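The distinction matters when measuring a timer. A minimal sketch for Node or a modern browser (variable names are mine): Date.now() follows the wall clock, which NTP can step backwards, while performance.now() is monotonic and never runs backwards.

```javascript
// Wall-clock start vs monotonic start for the same interval.
const wallStart = Date.now();        // can jump if NTP adjusts the clock
const monoStart = performance.now(); // monotonic: never decreases

setTimeout(() => {
  // After a backwards clock step, wallElapsed can come out under 1000ms
  // even though monoElapsed shows the full interval really passed.
  const wallElapsed = Date.now() - wallStart;
  const monoElapsed = performance.now() - monoStart;
  console.log({ wallElapsed, monoElapsed });
}, 1000);
```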
5. steve_adams_86 ◴[] No.41886483[source]
Why does your unit test need to wait one second? Or are you controlling the system time, but it still had that error?
replies(1): >>41893165 #
6. xnorswap ◴[] No.41886563[source]
setTimeout has no guarantees, and even if it did, your unit tests shouldn't depend on it.

Flaky unit tests are a scourge. The top causes of flaky unit tests in my experience:

    - wall clock time ( and timezones )
    - user time ( and timeouts )
    - network calls
    - local I/O
These are also, generally speaking, a cause of unnecessarily slow unit tests. If your unit test is waiting 1000ms, then it's taking 1000ms longer than it needs to.

If you want to test that your component waits, then mock setTimeout and verify it's called with 1000 as a parameter.
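A framework-free sketch of that idea (test-runner fake timers, e.g. Jest's, work the same way under the hood; `scheduleRetry` is a made-up component): swap setTimeout for a recording stub, run the code under test, and assert on the recorded delay instead of actually waiting.

```javascript
// Record scheduled timers instead of running them.
const recorded = [];
const realSetTimeout = globalThis.setTimeout;
globalThis.setTimeout = (fn, delay) => {
  recorded.push({ fn, delay });
  return 0; // fake timer id
};

// Hypothetical component under test: schedules a retry after one second.
function scheduleRetry(retry) {
  setTimeout(retry, 1000);
}

scheduleRetry(() => console.log('retrying'));

// The test asserts on intent (a 1000ms timer was requested),
// not on real elapsed time — so it runs in microseconds.
console.assert(recorded.length === 1, 'expected one scheduled timer');
console.assert(recorded[0].delay === 1000, 'expected a 1000ms delay');

globalThis.setTimeout = realSetTimeout; // always restore the real timer
```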

If you want to test how your component waiting interacts with other components, then schedule, without timers, the interactions of effects as a separate test.

Fast, reliable unit tests are difficult, but a fast, reliable unit test suite is like having a super-power. It's the difference between driving along a winding mountainside road with a small gravel trap and one lined with Armco barriers. Even though the safe driving speed may be the same in both cases, having the barriers there gives you the confidence to actually go at that speed.

Doing everything you can to improve the reliability and speed of your unit test suite will pay off in developer satisfaction. Every time the suite fails because of a test that had nothing to do with the changes under test, a bit more of a résumé gets drafted.

replies(1): >>41886921 #
7. jffhn ◴[] No.41886921[source]
>Fast reliable unit tests are difficult

Not difficult if you build your code (not just the test suite) around scheduling APIs (and queue implementations, etc.) that can be implemented using virtual time instead of CPU/wall clock time (I call that soft vs hard time).

Actually I find it a breeze to create such fast and deterministic unit tests.
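A minimal sketch of such a "soft time" scheduler (my own illustration, not the commenter's code): tasks are ordered by a virtual clock, and advance() runs everything due up to a target instant. No real time passes, so a test covering thousands of simulated seconds finishes instantly.

```javascript
// Virtual-time scheduler: time only moves when advance() is called.
class VirtualScheduler {
  constructor() {
    this.now = 0;     // virtual milliseconds
    this.tasks = [];  // pending { at, fn }, kept sorted by due time
  }

  schedule(delayMs, fn) {
    this.tasks.push({ at: this.now + delayMs, fn });
    this.tasks.sort((a, b) => a.at - b.at); // earliest first
  }

  advance(ms) {
    const target = this.now + ms;
    // Run every task due at or before the target instant, in order.
    while (this.tasks.length && this.tasks[0].at <= target) {
      const task = this.tasks.shift();
      this.now = task.at; // jump the clock to the task's due time
      task.fn();
    }
    this.now = target;
  }
}
```

For example, after schedule(1000, fn), advance(999) leaves fn unfired and advance(1) fires it — deterministically, with no wall-clock wait.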

8. bufferoverflow ◴[] No.41893165[source]
How else would you test if something happens after 1 second or not?
replies(1): >>41896686 #
9. bufferoverflow ◴[] No.41893177[source]
Same. I expected it could take a few ms longer. But less? Apparently that's a thing.
10. steve_adams_86 ◴[] No.41896686{3}[source]
By mocking the system time and manually advancing it by a set amount. Otherwise your tests actually take seconds rather than a few milliseconds.

Not a great example, but here is something I did recently that tests time based state (thousands of seconds) but the suite passes in tens of milliseconds.

It also uses a random number generator with a deterministic mode to allow random behaviour in production, but deterministic results in tests (unrelated, but also handy).
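A sketch of the deterministic-RNG idea (not the repo's actual code): a small seeded PRNG such as mulberry32 stands in for Math.random() in tests, so "random" behaviour replays identically on every run.

```javascript
// mulberry32: a tiny 32-bit seeded PRNG returning floats in [0, 1).
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6D2B79F5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Same seed, same sequence — every run, on every machine.
const rng = mulberry32(42);
console.log(rng(), rng());
```

In production you inject Math.random; in tests you inject mulberry32 with a fixed seed.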

https://github.com/steveadams/minesweeper-store/blob/main/sr...

Some writing about the repo in case you’re curious: https://steve-adams.me/building-minesweeper-with-xstate-stor...