

setBigTimeout

(evanhahn.com)
210 points by cfj | 11 comments
n2d4 ◴[] No.41880898[source]
The default behaviour of setTimeout seems problematic. It could be used for an exploit, because code like this might not work as expected:

    const attackerControlled = ...;
    if (attackerControlled < 60_000) {
      throw new Error("Must wait at least 1min!");
    }

    setTimeout(() => {
      console.log("Surely at least 1min has passed!");
    }, attackerControlled);

The attacker could set the value to a comically large number (anything past 2^31 − 1 ms, about 24.9 days) and the callback would execute immediately. The same is true for NaN. The better solution (imo) would be to throw an error, but I assume we can't due to backwards compatibility.
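One defensive sketch (the wrapper name and error message are hypothetical; the 2**31 − 1 limit is the documented clamp threshold in Node and browsers): reject out-of-range delays up front instead of letting setTimeout silently clamp them.

```javascript
// Maximum delay setTimeout honours: 2**31 - 1 ms (~24.9 days).
// Larger values, negative values, and NaN get clamped, so the
// callback fires almost immediately instead of after the delay.
const MAX_TIMEOUT_MS = 2 ** 31 - 1;

// Hypothetical wrapper: throw on unsafe delays rather than clamping.
function safeSetTimeout(callback, delayMs) {
  if (!Number.isFinite(delayMs) || delayMs < 0 || delayMs > MAX_TIMEOUT_MS) {
    throw new RangeError(`Unsafe timeout delay: ${delayMs}`);
  }
  return setTimeout(callback, delayMs);
}
```

This also catches the NaN case from the snippet above, since `NaN < 60_000` is false and would sail past the original check.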
replies(6): >>41881042 #>>41881074 #>>41881774 #>>41882110 #>>41884470 #>>41884957 #
1. arghwhat ◴[] No.41881042[source]
A scenario where an attacker can control a timeout, where having the callback run sooner than one minute later would lead to security failures, but having it run days later is perfectly fine so that no upper-bound check is required, seems… quite a contrived edge case.

The problem here is having an attacker control a security sensitive timer in the first place.

replies(2): >>41881665 #>>41888952 #
2. a_cardboard_box ◴[] No.41881665[source]
The exploit could be a DoS attack. I don't think it's that contrived to have a service that runs an expensive operation at a fixed rate, controlled by the user, limited to 1 operation per minute.
replies(2): >>41882211 #>>41883672 #
3. lucideer ◴[] No.41882211[source]
> I don't think it's that contrived to have a service that runs an expensive operation at a fixed rate, controlled by the user

Maybe not contrived, but definitely insecure by definition. Allowing user control of rates is definitely useful & a power devs will need to grant, but it should never be direct control.

replies(1): >>41882396 #
4. shawnz ◴[] No.41882396{3}[source]
Can you elaborate on what indirect control would look like in your opinion?

No matter how many layers of abstraction you put in between, you're still eventually going to be passing a value to the setTimeout function that was computed based on something the user inputted, right?

If you're not aware of these caveats about extremely high timeout values, how do any layers of abstraction in between help you prevent this? As far as I can see, the only prevention is knowing about the caveats and specifically adding validation for them.

replies(2): >>41882973 #>>41885728 #
5. lucideer ◴[] No.41882973{4}[source]
> that was computed

Or comes from a set of known values. This stuff isn't that difficult.

This doesn't require prescient knowledge of high-timeout edge cases. It's generally accepted good security practice to limit business-logic execution based on user-input parameters. This goes beyond input validation & bounds checks on user input (both also good practice, and here most likely just a NaN check): more broadly, user input is data & timeout values are code. Data should be treated differently by your app than code.

To generalise the case more: another common case of a user submitting a config value that gets used in logic is string labels for categories. You could validate against a known list of categories (good, but potentially expensive), but whether you do or not it's still good hygiene to key the user-submitted string against a category hashmap or enum; this cleanly avoids using user input directly in your executing business logic.
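A minimal sketch of that keying pattern (the category names and fields are hypothetical): the user string only ever selects from a server-defined map, and unknown keys fall back to a default.

```javascript
// Server-defined categories; user input never flows past this map.
// A Map (rather than a plain object) also sidesteps inherited keys
// such as "__proto__" or "constructor".
const CATEGORIES = new Map([
  ["news", { label: "News", feed: "/feeds/news" }],
  ["sports", { label: "Sports", feed: "/feeds/sports" }],
]);

// Unknown or malicious input falls back to a safe default.
function resolveCategory(userInput) {
  return CATEGORIES.get(userInput) ?? CATEGORIES.get("news");
}
```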

6. arghwhat ◴[] No.41883672[source]
A minimum delay on an individual task is not a useful rate limit: I could, for example, schedule a bunch of tasks to happen far in the future but all at once.

Rate limits are implemented with, e.g., token buckets, which fill to a limit at a fixed rate. A timed task would then, on run, try to take a token, and if none is present, wait for one. The limit is then dutifully enforced regardless of the current state of scheduled tasks.

The only consideration for the timer itself would be to always add random jitter, to avoid having peak loads coalesce.
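A sketch of that token-bucket shape plus the jitter helper (all names are hypothetical; the clock is injected so refill can be exercised without real waiting):

```javascript
// Token bucket: refills at `refillPerSecond` up to `capacity`;
// a task may run only when it can take a whole token.
class TokenBucket {
  constructor(capacity, refillPerSecond, now = () => Date.now()) {
    this.capacity = capacity;
    this.refillPerSecond = refillPerSecond;
    this.now = now;
    this.tokens = capacity;
    this.last = now();
  }

  tryTake() {
    const t = this.now();
    // Lazily credit tokens for the time elapsed since the last call.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((t - this.last) / 1000) * this.refillPerSecond
    );
    this.last = t;
    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}

// Jitter for the timer itself, so scheduled peaks don't coalesce.
function withJitter(delayMs, fraction = 0.1) {
  return delayMs + Math.random() * delayMs * fraction;
}
```

A task that fails `tryTake()` would reschedule itself for `withJitter(retryDelay)` rather than running immediately.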

replies(1): >>41885533 #
7. lemagedurage ◴[] No.41885533{3}[source]
I don't think it's that far-fetched for a developer to implement a rate limiter with setTimeout, where a task can only be executed if a timeout is not already running. The behaviour in the article is definitely a footgun in that scenario.
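A sketch of that footgun (names hypothetical): the guard looks correct, but if `cooldownMs` exceeds 2**31 − 1 (say, attacker-influenced), setTimeout clamps it and the guard resets almost immediately.

```javascript
// Naive limiter: allow a task only while no cooldown timer is pending.
// If cooldownMs overflows the 32-bit limit, the timeout fires almost
// immediately, `pending` resets, and the rate limit is defeated.
let pending = false;
function runLimited(task, cooldownMs) {
  if (pending) return false;
  pending = true;
  setTimeout(() => { pending = false; }, cooldownMs);
  task();
  return true;
}
```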
replies(1): >>41886626 #
8. bryanrasmussen ◴[] No.41885728{4}[source]
>Can you elaborate on what indirect control would look like in your opinion?

although not the OP, this is what I would mean by indirect control:

    // rate is chosen server-side from the account type;
    // the user never supplies the number itself
    let rate;
    if (userAccountType === "free") rate = longRate;
    else if (userAccountType === "base") rate = infrequentRate;
    else if (userAccountType === "important") rate = frequentRate;

obviously rate determination would probably be more complicated than just userAccountType

9. ◴[] No.41886626{4}[source]
10. chacham15 ◴[] No.41888952[source]
I would imagine the intent behind this would be that the attacker has indirect control over the timeout. E.g. a password check that delays you between attempts, doubling the length of time you have to wait after each failed attempt. With this bug in place, the attacker would simply wait out the timeouts until the delay exceeded ~25 days (2^31 − 1 ms), at which point they could brute-force the password check back to back.
replies(1): >>41890611 #
11. arghwhat ◴[] No.41890611[source]
A login backoff should be capped to a number of hours rather than allowed to grow to a month, though. I also have a hard time seeing this implemented as a setTimeout for every failed login attempt, instead of storing a last-attempt time and counter in a user database and comparing times when login is called.

It’s definitely suboptimal though, even if it is documented.
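A sketch of that database-backed approach (all names hypothetical): store the failure count and last-attempt time, compute a capped backoff on each login call, and never schedule a timer at all.

```javascript
const MAX_BACKOFF_MS = 60 * 60 * 1000; // cap at one hour, not a month

// Exponential backoff derived from the stored failure count, capped.
function backoffMs(failedAttempts) {
  return Math.min(MAX_BACKOFF_MS, 1000 * 2 ** failedAttempts);
}

// `user` stands in for a database row: { failedAttempts, lastFailedAt }.
// Checked at login time, so no long-lived timer can overflow.
function canAttemptLogin(user, now = Date.now()) {
  if (user.failedAttempts === 0) return true;
  return now - user.lastFailedAt >= backoffMs(user.failedAttempts);
}
```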