
Pitfalls of Safe Rust

(corrode.dev)
168 points pjmlp | 2 comments
nerdile ◴[] No.43603402[source]
Title is slightly misleading but the content is good. It's the "Safe Rust" in the title that's weird to me. These pitfalls apply to Rust as a whole; you don't avoid them by writing unsafe Rust code. They also aren't unique to Rust.

A less baity title might be "Rust pitfalls: Runtime correctness beyond memory safety."

replies(1): >>43603739 #
burakemir ◴[] No.43603739[source]
It is consistent with the way the Rust community uses "safe": as "passes static checks and thus protects from many runtime errors."

This regularly drives C++ programmers mad: the statement "C++ is all unsafe" is taken as some kind of hyperbole, attack or dogma, while the intent may well be to factually point out the lack of statically checked guarantees.

It is subtle but not inconsistent that strong static checks ("safe Rust") may still leave the possibility of runtime errors. So there is a legitimate, useful broader notion of "safety" where Rust's static checking is not enough. That's a bit hard to express in a title - "correctness" is not bad, but maybe a bit too strong.
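To make that concrete, here's a minimal sketch (my own illustration, not from the article) of safe Rust that passes every static check and still fails at runtime. Out-of-bounds indexing and failed parses are caught, but caught dynamically, as panics or `Err` values:

```rust
fn main() {
    let v = vec![1, 2, 3];

    // Safe Rust can't stop this index from being out of range at
    // compile time; v.get() surfaces the failure as a value instead
    // of a panic.
    assert_eq!(v.get(10), None);

    // A failed parse is likewise a runtime condition, not a type error.
    assert!("not a number".parse::<i32>().is_err());

    // The panicking variants compile cleanly all the same:
    //     let x = v[10];                        // panics at runtime
    //     let n: i32 = "nope".parse().unwrap(); // panics at runtime
    println!("statically checked, dynamically fallible");
}
```

So "safe" here rules out undefined behavior, not incorrect behavior: the program above is memory safe even when it panics.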

replies(5): >>43603865 #>>43603876 #>>43603929 #>>43604918 #>>43605986 #
quotemstr ◴[] No.43603876[source]
Safe Rust code doesn't have accidental remote code execution. C++ often does. C++ people need to stop pretending that "safety" is some nebulous and ill-defined thing. Everyone, even C++ people, knows perfectly damn well what it means. C++ people are just miffed that Rust built it while they slept.
replies(2): >>43604117 #>>43604960 #
surajrmal ◴[] No.43604117{3}[source]
Accidental remote code execution isn't limited to just memory safety bugs. I'm a huge rust fan but it's not good to oversell things. It's okay to be humble.
replies(1): >>43604340 #
dymk ◴[] No.43604340{4}[source]
RCEs are almost exclusively due to buffer overruns. Sure, there are examples where that's not the case, but it's not really an exaggeration or hyperbole when you're comparing it to C/C++.
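For illustration (my sketch, not part of the original comment): the classic overrun that fuels C/C++ RCEs turns into a deterministic, catchable panic in safe Rust, because every slice access is bounds-checked:

```rust
fn main() {
    let buf = vec![0u8; 4];
    let i = 7; // stands in for an attacker-controlled index in the C scenario

    // In C, buf[i] would read past the allocation, silently corrupting
    // memory or leaking data an exploit can build on. In safe Rust the
    // access is bounds-checked and the thread panics instead.
    let result = std::panic::catch_unwind(|| buf[i]);
    assert!(result.is_err());

    println!("out-of-bounds read became a panic, not a silent overrun");
}
```

The failure mode matters: a panic is a crash an attacker can at worst use for denial of service, not a memory-corruption primitive to chain into code execution.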
replies(1): >>43604711 #
thayne ◴[] No.43604711{5}[source]
Almost exclusively isn't the same as exclusively.

Notably the log4shell[1] vulnerability wasn't due to buffer overruns, and happened in a memory safe language.

[1]: https://en.m.wikipedia.org/wiki/Log4Shell

replies(2): >>43605126 #>>43605170 #
FreakLegion ◴[] No.43605170{6}[source]
In fact "exclusively" doesn't belong in the statement at all. A very small number of successful attacks use exploits at all, and of those, most target injection vulnerabilities (often simple command injection) like Log4Shell.

If you think back to the big breaches over the last five years, though -- SolarWinds, Colonial Pipeline, Uber, Okta (and through them Cloudflare), Change Healthcare, etc. -- all of these were basic account takeovers.

To the extent that anyone has to choose between investing in "safe" code and investing in IT hygiene, the correct answer today is IT hygiene.

replies(1): >>43605676 #
surajrmal ◴[] No.43605676{7}[source]
Can you back up your "very small number" with some data? I don't think it lines up with my own experience here. It's really not an either-or matter. Good security requires a multifaceted approach. Memory safety is definitely a worthwhile investment.
replies(1): >>43606732 #
FreakLegion ◴[] No.43606732{8}[source]
What do you count as data? I can keep naming big breaches that didn't involve exploits, like the Caesars and MGM ransomware attacks, or Russia getting deep into Microsoft. There aren't good public data sets, though.

As an example of a bad data set for this conversation, the vast majority of published CVEs have never been used by an attacker. CISA's KEVs give a rough gauge of this, with a little north of 1300 since 2021, and that includes older CVEs that are still in use, like EternalBlue. Some people point to the cardinality of CVE databases as evidence of something, but that doesn't hold up to scrutiny of actual attacks. And this is all before filtering down to memory safety RCE CVEs.

Probably the closest thing to a usable data set here would be reports from incident response teams like Verizon's, but their data is of course heavily biased towards the kinds of incidents that require calling in incident response teams. Last year they tagged something like 15% of breaches as using exploits, and even that is a wild overestimate.

> Memory safety is definitely a worthwhile investment.

In a vacuum, sure, but Python, Java, Go, C#, and most other popular languages are already memory safe. How much software is actively being written in unsafe languages? Back in atmosphere, there's way more value in first making sure all of your VPNs have MFA enabled, nobody's using weak or pwned passwords, employee accounts are deactivated when they leave the company, your help desk has processes to prevent being social engineered, and so on.

replies(1): >>43607918 #
thayne ◴[] No.43607918[source]
> How much software is actively being written in unsafe languages?

Well, let's see. Most major operating system kernels for starters. Web browsers. OpenSSL. Web servers/proxies like Apache, Nginx, HAProxy, IIS, etc. GUI frameworks like Gtk, Qt, parts of Flutter. And so on.