
27 points | roggenbuck | 2 comments

I wanted a safer alternative to RegExp for TypeScript that uses a linear-time engine, so I built Regolith.

Why: Many CVEs happen because JavaScript and TypeScript libraries are vulnerable to Regular Expression Denial of Service (ReDoS) attacks. I learned about this problem while doing undergraduate research and found that some ecosystems have built-in protection (Rust's regex crate guarantees linear-time matching) while the default engines in JavaScript, TypeScript, and Python do not. This library attempts to mitigate these vulnerabilities for TypeScript and JavaScript.

How: Regolith uses Rust's regex library under the hood to prevent ReDoS attacks. That library implements a regex engine whose worst-case running time is linear in the size of the input. A ReDoS attack occurs when a malicious input causes a backtracking regex engine to explore far too many overlapping ways of matching the pattern, so evaluating the pattern takes an extremely long time and can cause latency or downtime for a service. By designing the engine to take at most linear time, we prevent these attacks at the library level, and software using the library inherits those safety properties.
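A minimal illustration of the backtracking blow-up described above (plain TypeScript against the built-in RegExp, not Regolith's API):

```typescript
// Classic ReDoS-prone pattern: nested quantifiers give a backtracking
// engine exponentially many ways to split a run of "a"s between the
// inner and outer "+" before it can conclude there is no match.
const evil: RegExp = /^(a+)+$/;

// Matching inputs are cheap...
console.log(evil.test("aaaa")); // true

// ...the blow-up happens on near-misses. Uncommenting the line below
// makes a backtracking engine grind for an extremely long time, while
// a linear-time engine rejects it after a single left-to-right pass.
// evil.test("a".repeat(40) + "!");
```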

I'm really fascinated by making programming languages safer and I would love to hear any feedback on how to improve this project. I'll try to answer all questions posted in the comments.

Thanks! - Jake Roggenbuck

semiquaver | No.45035198

  > Regolith attempts to be a drop-in replacement for RegExp and requires minimal (to no) changes to be used instead
vs

  > Since Regolith uses Rust bindings to implement the Rust Regex library to achieve linear time worst case, this means that backreferences and look-around aren't available in Regolith either.
Obviously it cannot be a drop-in replacement if the regex dialect differs. That it has a compatible API is not the only relevant factor. I’d recommend removing the top part from the readme.

Another thought: since backreferences and lookaround are the features in JS regexes which _cause_ ReDOS, why not just wrap vanilla JS regex, rejecting patterns including them? Wouldn’t that achieve the same result in a simpler way?
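A rough sketch of that wrapper idea (hypothetical code, not part of Regolith — and, as other replies in this thread note, rejecting these two features alone would not rule out every slow pattern):

```typescript
// Hypothetical filter: reject patterns containing lookaround or
// backreferences, then fall back to the native RegExp engine.
// Caveat: this is a naive textual scan; it does not account for
// escaped parens or backslashes inside character classes.
function assertNoBacktrackingFeatures(pattern: string): void {
  // Lookaround groups: (?= (?! (?<= (?<!
  if (/\((\?=|\?!|\?<=|\?<!)/.test(pattern)) {
    throw new SyntaxError("lookaround is not allowed");
  }
  // Numeric backreferences \1..\9 and named backreferences \k<name>
  if (/\\[1-9]|\\k</.test(pattern)) {
    throw new SyntaxError("backreferences are not allowed");
  }
}

function safeRegExp(pattern: string, flags?: string): RegExp {
  assertNoBacktrackingFeatures(pattern);
  return new RegExp(pattern, flags);
}
```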

bawolff | No.45035253
> Another thought: since backreferences and lookaround are the features in JS regexes which _cause_ ReDOS,

This is incorrect. Other features can cause ReDoS.

The other problematic features have linear-time algorithms that could be used, but generally are not (I assume for better average-case performance).

thomasmg | No.45035608
Right. An example regex that can be slow is CSV parsing [1]:

  .*,.*,.*,.*,.*  (and so on)

I believe a timeout is a better (simpler) solution than trying to prevent 'bad' patterns. I use this approach in my own (tiny, ~400-line) regex library [2]: execution is capped at roughly 100 operations per input byte, so there is no need to measure wall-clock time, which can be inaccurate.
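The budget idea can be sketched with a toy backtracking matcher (literals, '.', and postfix '*' only — illustrative code, not the linked library's):

```typescript
// Toy backtracking matcher with an operation budget. Instead of timing
// wall-clock execution, every recursive step spends one unit from a
// budget of ~100 operations per input byte, and the matcher aborts
// (rather than hanging) when a pathological pattern exhausts it.
class BudgetExceeded extends Error {}

function match(pattern: string, input: string, opsPerByte = 100): boolean {
  let budget = Math.max(1, input.length) * opsPerByte;

  function go(p: number, i: number): boolean {
    if (--budget < 0) throw new BudgetExceeded("operation budget exhausted");
    if (p === pattern.length) return i === input.length;
    const star = pattern[p + 1] === "*";
    const firstMatches =
      i < input.length && (pattern[p] === "." || pattern[p] === input[i]);
    if (star) {
      // Try zero occurrences, then consume one char and retry the star.
      return go(p + 2, i) || (firstMatches && go(p, i + 1));
    }
    return firstMatches && go(p + 1, i + 1);
  }

  return go(0, 0);
}
```

A failing match against many overlapping stars (e.g. `a*a*a*a*a*a*b` on a long run of "a"s) blows through the budget and throws instead of stalling the process.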

[1]: https://stackoverflow.com/questions/2667015/is-regex-too-slo...

[2]: https://github.com/thomasmueller/bau-lang/blob/main/src/test...

bawolff | No.45044056
PHP tended toward this approach too. It did lead to security vulnerabilities, though: code treated a timeout the same as "no match", so attackers crafted inputs complicated enough to time out and thereby skip the security check. (Part of this is on PHP for signaling the difference between a timeout and no match as null vs. false, instead of just throwing an exception.)
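That fail-open pitfall is easy to reproduce in any language (hypothetical illustration; the timeout here is simulated with a length check):

```typescript
// A scanner that reports three outcomes: true = attack pattern found,
// false = clean, null = the engine "timed out" (simulated here).
function scanForAttack(input: string): boolean | null {
  if (input.length > 10) return null; // pretend the regex engine timed out
  return /evil/.test(input);
}

// BUG (fail-open): null is falsy, so a timeout falls through the same
// branch as "clean" — an attacker just sends an input big enough to
// time out and sails past the check.
function isAllowedBuggy(input: string): boolean {
  return !scanForAttack(input);
}

// Fail-closed: distinguish the timeout outcome explicitly, e.g. by
// throwing, so it can never be confused with "no match".
function isAllowedSafe(input: string): boolean {
  const verdict = scanForAttack(input);
  if (verdict === null) throw new Error("scan timed out; refusing to decide");
  return !verdict;
}
```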