Yes, it's nice and flexible - but it also introduces some dangerously subtle bugs.
const attackerControlled = ...;
if (attackerControlled < 60_000) {
throw new Error("Must wait at least 1min!");
}
setTimeout(() => {
console.log("Surely at least 1min has passed!");
}, attackerControlled);
The attacker could set the value to a comically large number and the callback would execute immediately. This also seems to be true for NaN. The better solution (imo) would be to throw an error, but I assume we can't due to backwards compatibility.

And no, they're not all that rare. There are a bunch that are capped at 2^32, such as this timeout, apparently, plus all the bit shift operations.
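The overflow can be demonstrated directly with JavaScript's own 32-bit coercion, which is roughly what happens to the delay internally (a sketch; the exact clamping behavior is up to the implementation):

```javascript
// setTimeout delays are treated as 32-bit signed integers; `| 0` performs
// the same ToInt32 wraparound, so a huge value can come out tiny or negative,
// and negative delays are clamped to 0 (i.e. "fire immediately").
const attackerControlled = 2 ** 32 + 5; // passes the `< 60_000` guard above
const coerced = attackerControlled | 0; // wraps around to 5
console.log(coerced); // 5 — far less than the intended minute
console.log(NaN | 0); // 0 — NaN likewise becomes an immediate timeout
```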
The problem here is having an attacker control a security sensitive timer in the first place.
But I totally understand it not being a priority if the situation is: setTimeout(() => {}, 500000000) not working in X years.
https://git.sr.ht/~evanhahn/setBigTimeout/tree/main/item/mod...
I thought all numbers in JavaScript were basically some variation of double precision floating points, if so, why is setTimeout limited to a smaller 32bit signed integer?
If this is true, then if I pass something like "0.5", does it round the number when casting it to an integer? Or does it execute the callback after half a millisecond, like you would expect?
If your code would misbehave outside a certain range of values and your input might span a larger range, you should be checking your input against the range that's valid. Your sample code simply doesn't do that, and that's why there's a bug.
That the bug happens to involve a timer is irrelevant.
Except for the fact that this behaviour is surprising.
> you should be checking your input against the range that's valid. Your sample code simply doesn't do that, and that's why there's a bug.
Indeed, so why doesn't setTimeout internally do that?
Given that `setTimeout` is a part of JavaScript's ancient reptilian brain, I wouldn't be surprised it doesn't do those checks just because there's some silly compatibility requirement still lingering and no one in the committees is brave enough to make a breaking change.
(And then, what should setTimeout do if delay is NaN? Do nothing? Call immediately? Throw an exception? Personally I'd prefer it to throw, but I don't think there's any single undeniably correct answer.)
Given the trend to move away from the callbacks, I wonder why there is no `async function sleep(delay)` in the language, that would be free to sort this out nicely without having to be compatible with stuff from '90s. Or something like that.
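A sketch of what such a helper could look like (the range check and the choice to throw a RangeError are my assumptions, not anything specced):

```javascript
const MAX_DELAY = 2 ** 31 - 1; // setTimeout's effective 32-bit signed cap

function sleep(delay) {
  // Reject anything the underlying timer can't faithfully represent,
  // instead of silently wrapping or firing immediately.
  if (!Number.isFinite(delay) || delay < 0 || delay > MAX_DELAY) {
    throw new RangeError(`sleep: invalid delay ${delay}`);
  }
  return new Promise((resolve) => setTimeout(resolve, delay));
}
```

Usage: `await sleep(500)` waits half a second, while `sleep(NaN)` or `sleep(2 ** 32)` throws instead of misbehaving.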
Welcome to Node.js v22.7.0.
Type ".help" for more information.
> setTimeout(() => console.log('reached'), 3.456e9)
Timeout { <contents elided> }
> (node:64799) TimeoutOverflowWarning: 3456000000 does not fit into a 32-bit signed integer.
Timeout duration was set to 1.
(Use `node --trace-warnings ...` to show where the warning was created)
reached
I'm surprised to see that setTimeout returns an object - I assume at one point it was an integer identifying the timer, the same way it is on the web. (I think I remember it being so at one point.)

Maybe not contrived, but definitely insecure by definition. Allowing user control of rates is definitely useful & a power devs will need to grant, but it should never be direct control.
No matter how many layers of abstraction you put in between, you're still eventually going to be passing a value to the setTimeout function that was computed based on something the user inputted, right?
If you're not aware of these caveats about extremely high timeout values, how do any layers of abstraction in between help you prevent this? As far as I can see, the only prevention is knowing about the caveats and specifically adding validation for them.
Or maybe I'm missing your point.
Or comes from a set of known values. This stuff isn't that difficult.
This doesn't require prescient knowledge of high timeout edge cases. It's generally accepted good security practice to limit business logic execution based on user input parameters. This goes beyond input validation & bounds on user input (both also good practice but most likely to just involve a !NaN check here), but more broadly user input is data & timeout values are code. Data should be treated differently by your app than code.
To generalise the case more, another common case of a user submitting a config value that would be used in logic would be string labels for categories. You could validate against a known list of categories (good but potentially expensive) but whether you do or not it's still good hygiene to key the user submitted string against a category hashmap or enum - this cleanly avoids using user input directly in your executing business logic.
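A minimal sketch of that keying pattern (the map and names are illustrative):

```javascript
// User input selects from a fixed map; the raw string never flows into logic.
const CATEGORY_DELAYS = Object.freeze({
  free: 60_000,
  base: 10_000,
  important: 1_000,
});

function delayFor(userCategory) {
  // Object.hasOwn avoids matching inherited keys like "constructor".
  if (!Object.hasOwn(CATEGORY_DELAYS, userCategory)) {
    throw new Error(`Unknown category: ${userCategory}`);
  }
  return CATEGORY_DELAYS[userCategory]; // always one of our own trusted values
}
```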
Rate limits are implemented with e.g., token buckets which fill to a limit at a fixed rate. Timed tasks would then on run try to take a token, and if none is present wait for one. This would then be dutifully enforced regardless of the current state of scheduled tasks.
Only consideration for the timer itself would be to always add random jitter to avoid having peak loads coalesce.
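As a sketch of the bucket itself (the class and parameter names are mine, not from any particular library):

```javascript
// Minimal token bucket: refills at `refillPerSec` up to `capacity`;
// each scheduled task must successfully take a token before running.
class TokenBucket {
  constructor(capacity, refillPerSec) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSec = refillPerSec;
    this.last = Date.now();
  }

  tryTake() {
    const now = Date.now();
    // Top up based on elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.refillPerSec
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false; // caller waits and retries (with jitter, per above)
  }
}
```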
I've had too many sleep functions not work as they should to still rely on this, especially on mobile devices and webpages where background power consumption is a concern. It doesn't excuse new bad implementations but it's also not exactly surprising
[1]: https://github.com/DvdGiessen/virtual-clock/blob/master/src/...
To be clear, I am not trying to be mean, I'm just curious to hear why I would pick this over cf.
Sometimes the wait is over before I find the responsible code, and sometimes it does check server-side, but that's just part of the fun...
That doesn't mean it's fine to wait and leave it until the last minute, but we have quite a few last minutes left at this point.
Pre-GitHub, one of the most popular web git viewers (cgit) used "tree" in this way. Never found that to be confusing.
(In git, the listing of the files and directories at a particular commit is called a "tree". So it's correct. Just not as intuitive as you, personally, would like.)
Although not the OP, this is what I would mean by indirect control. Pseudocode:

if userAccountType === "free" then rate = longRate
if userAccountType === "base" then rate = infrequentRate
if userAccountType === "important" then rate = frequentRate

Obviously rate determination would probably be more complicated than just userAccountType.
We recently had a failed unit test because setTimeout(fn, 1000) triggered at 999ms. That test had run more than a hundred times before just fine. Till one day it didn't.
Maybe the system clock did a network time synchronisation during the setTimeout window.
[1] https://html.spec.whatwg.org/multipage/timers-and-user-promp...
As I understand it, the precision of such timers has been limited a bit in browsers to mitigate some Spectre attacks (and maybe others), but I imagine it would still be fine for this purpose.
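For measuring elapsed time in a test, the monotonic clock is the safer choice, since unlike Date.now() it can't jump backwards on an NTP sync (a small sketch; the browser precision coarsening mentioned above still applies):

```javascript
// performance.now() is monotonic: a later reading is never smaller than an
// earlier one, even if the wall clock gets adjusted mid-measurement.
const start = performance.now();
for (let i = 0; i < 1e6; i++) {} // some work to time
const elapsed = performance.now() - start;
console.log(elapsed >= 0); // true — guaranteed by monotonicity
```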
This is fun, though. JS is a bucket of weird little details like this.
Flaky unit tests are a scourge. The top causes of flaky unit tests in my experience:
- wall clock time ( and timezones )
- user time ( and timeouts )
- network calls
- local I/O
These are also, generally speaking, a cause of unnecessarily slow unit tests. If your unit test is waiting 1000ms, then it's taking 1000ms longer than it needs to.

If you want to test that your component waits, then mock setTimeout and verify it's called with 1000 as a parameter.
If you want to test how your component waiting interacts with other components, then schedule, without timers, the interactions of effects as a separate test.
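A framework-free sketch of that mocking approach (the scheduleRetry component is hypothetical):

```javascript
// Hypothetical component under test: waits 1000ms before retrying.
function scheduleRetry(retry) {
  setTimeout(retry, 1000);
}

// Swap in a recording stub, assert on the delay, then restore the original.
const recorded = [];
const realSetTimeout = globalThis.setTimeout;
globalThis.setTimeout = (fn, delay) => {
  recorded.push(delay);
};
scheduleRetry(() => {});
globalThis.setTimeout = realSetTimeout;
console.log(recorded); // [1000] — verified without waiting a real second
```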
Fast reliable unit tests are difficult, but a fast reliable unit test suite is like having a super-power. It's like driving along a windy mountainside road: the difference between one with a small gravel trap and one lined with armco barriers. Even though in both cases the safe driving speed may be the same, having the barriers there will give you the confidence to actually go at that speed.
Doing everything you can to improve the reliability and speed of your unit test suite will pay off in developer satisfaction. Every time a test suite fails because of a test failing that had nothing to do with the changes under test, a bit more of a resume gets drafted.
The browser devs have decided it's acceptable to change the behaviour of setTimeout in some situations.
https://developer.chrome.com/blog/timer-throttling-in-chrome...
C’est l’histoire d’un homme qui tombe d’un immeuble de 50 étages.
Le mec, au fur et à mesure de sa chute, il se répète sans cesse pour se rassurer:
"Jusqu’ici tout va bien."
"Jusqu’ici tout va bien."
"Jusqu’ici tout va bien..."
Mais l’important c’est pas la chute, c’est l’atterrissage.
Translated: There's this story of a man falling off a 50-story building. All along his fall, the guy repeats to himself in comfort:
"So far, so good"
"So far, so good"
"So far, so good..."
What matters though is not the fall, but the landing.
- Hubert, in La Haine (1995), Mathieu Kassovitz

Not difficult if you build your code (not just the test suite) around scheduling APIs (and queue implementations, etc.) that can be implemented using virtual time instead of CPU/wall clock time (I call that soft vs hard time).
Actually I find it a breeze to create such fast and deterministic unit tests.
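A tiny illustration of the virtual-time idea (this is my own sketch, not the API of the virtual-clock library linked above):

```javascript
// Timers register against a virtual clock; tests advance time explicitly,
// so a "1000ms" wait completes in microseconds and is fully deterministic.
class VirtualClock {
  constructor() {
    this.now = 0;
    this.tasks = [];
  }
  setTimeout(fn, delay) {
    this.tasks.push({ at: this.now + delay, fn });
  }
  advance(ms) {
    this.now += ms;
    const due = this.tasks.filter((t) => t.at <= this.now);
    this.tasks = this.tasks.filter((t) => t.at > this.now);
    due.sort((a, b) => a.at - b.at).forEach((t) => t.fn());
  }
}
```

A test can then schedule work at 1000ms, call advance(999) and assert nothing fired, then advance(1) and assert it did - with no real waiting at all.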
Just JS being JS: setTimeout(()=>{}, Infinity) executes immediately
At least if your definition of “correct” is “does the thing most similar to the thing I’m extending/replicating”. In fact you might believe it’s a bug to do otherwise, and JS (I’m no expert) doesn’t give a way to run off the event loop anyway (in all implementations). Although I’d be amused to see someone running even a 90 day timer in the browser. :)
I think a very precise timeout would want a different name, to distinguish it from setTimeout’s behavior.
The problem is that when setBigTimeout is invoked with a floating-point number (and numbers are floating-point in JS by default), it keeps computing the time left till trigger in floating point. But FP numbers are weird:
> 1e16 - 1 == 1e16
true
At some point, they don't have enough precision to represent exact differences, so they start rounding, and this gets more and more inaccurate as the value increases. For correct behavior, remainingDelay needs to be stored as a BigInt.

Of course, this problem is mostly theoretical, as it starts happening at around 2^83 milliseconds, which doesn't even fit in a 64-bit time_t, and it's not like humanity will exist by then. But still!
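The difference is easy to demonstrate side by side (a sketch):

```javascript
// Number loses integer precision past 2^53, so subtracting 1 can be a no-op;
// BigInt is arbitrary-precision, so the same subtraction stays exact.
console.log(1e16 - 1 === 1e16); // true — the 1 is rounded away
console.log(10n ** 16n - 1n === 10n ** 16n); // false — BigInt keeps it exact
```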
What's funny is you think that about the caller of setTimeout but not setTimeout itself :)
As a side note, why do you use this weird non-Github, non-Gitlab, non-Bitbucket sketchy looking git host? I can see the code obviously, but it makes me worry about supply chain security.
It’s made by Drew Devault who is mostly well-respected in the hacker community, and it’s made exactly to be an alternative to BigCo-owned source hosts like GitHub, Gitlab and Bitbucket.
It’s definitely suboptimal though, even if it is documented.
Latest news is that he authored/published a controversial character assassination on Richard Stallman while trying and failing to stay anonymous. Then some further digging after this unmasking found he's into pedophilic anime. Sitting on his computer uploading drawings of scantily-clad children to NSFW subreddits.
No-one with any decency can respect that behavior, it's disgusting.
Not a great example, but here is something I did recently that tests time based state (thousands of seconds) but the suite passes in tens of milliseconds.
It also uses a random number generator with a deterministic mode to allow random behaviour in production, but deterministic results in tests (unrelated but also handy)
https://github.com/steveadams/minesweeper-store/blob/main/sr...
Some writing about the repo in case you’re curious: https://steve-adams.me/building-minesweeper-with-xstate-stor...