Of course, that's always the tradeoff with Rust. You're trading a _lot_ of time spent up-front for time saved in increments down the road.
As a concrete example, I'm in the process of building up a Rust database to replace the Postgres solution one of my applications is using. Partly because I'm a psycho, and partly because I've gotten query times down from 20 seconds on Postgres to 50ms with Rust (despite my best efforts to optimize Postgres).
Since it's a mostly-ACID, async database, this involves some rather unpleasant interactions with the Rust borrow checker. I've had to refactor a significant portion of the code probably five times by now. The lack of feedback during the process is a huge pain point, as the article points out (though I'm not sure what the solution to that would be). Even if you _think_ you know the rules, you probably don't, and you're not going to find out until two hours later.
The second most painful point has to be the 'static lifetime, which comes up a lot when dealing with threading and async. For me it's when I need to use spawn_blocking inside an async function. Of course, the compiler has no way of knowing _when_ or _if_ spawn_blocking will finish, so it needs any borrows to be 'static. But in practice that means having to write all kinds of awkward workarounds in what should otherwise be simple code. I certainly understand the _why_, and I'm sure in X years it'll be fixed, but right now ... g'damn.
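The shape of the workaround, as a toy sketch (assuming tokio; the Index type here is made up, not from the actual project): since spawn_blocking needs a 'static closure, you can't hand it a borrow, so you end up moving an Arc clone in instead.

```rust
use std::sync::Arc;

// Made-up type standing in for some big, read-mostly structure.
struct Index {
    postings: Vec<u64>,
}

async fn count_matches(index: &Arc<Index>, needle: u64) -> usize {
    // spawn_blocking requires a 'static closure, so we can't just borrow
    // `index` into it; moving a cheap Arc clone in is the usual workaround.
    let index = Arc::clone(index);
    tokio::task::spawn_blocking(move || {
        // CPU-heavy work runs on the blocking thread pool.
        index.postings.iter().filter(|&&p| p == needle).count()
    })
    .await
    .expect("blocking task panicked")
}
```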
That said, the borrow checker _has_ improved. I think my last major Rust project was before the borrow checker upgrade (non-lexical lifetimes), when a borrow of a local variable lasted until the end of its scope, so you had to throw a lot of stuff inside separate blocks. We also have a lot more elided lifetimes now. Just empirically from this project, I'd say the borrow checker and I had only about 30% of the fisticuffs we did in the past.
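To pick the textbook example of what that upgrade fixed (not code from the project): a borrow now ends at its last use instead of at the end of the scope, so this compiles without the extra block it used to need.

```rust
fn main() {
    let mut scores = vec![1, 2, 3];

    // Pre-NLL, the borrow below was considered alive until the end of the
    // enclosing scope, so you'd wrap it in its own { ... } block.
    let first = &scores[0];
    println!("first = {first}");

    // With non-lexical lifetimes the borrow ends at its last use above,
    // so this mutation is accepted without any extra nesting.
    scores.push(4);
}
```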
Personally, I think the tradeoff is worth it. It won't be for everyone, or every project. But 20s to 50ms query time, with a ton of safety guarantees to ensure the valuable data running through the database is well cared for? Worth every line of refactored code.
Asides:
* The project in question: https://github.com/fpgaminer/tagstormdb
* I also replaced some of my large JSON responses with FlatBuffers. FlatBuffers is a bit of a PITA, but when you're trying to shuffle 4 million integers over to the webapp, being able to do almost zero decoding on the browser side and read them directly as a Uint32Array is gold.
* It's a miracle I got away with the search parser in the project. I use Pest, and both the tree it spits out and the AST I build from it hold references. Yet sprinkling a little 'a on the impls and structs did the trick (roughly the pattern in the first sketch after this list).
* Dynamic dispatch has also improved, as far as I can tell; it used to always involve some weird lifetimes when the return values needed to borrow stuff (second sketch after this list).
* ChatGPT o1 is a lot better at Rust than 4 or 4o. I've gotten a lot more useful stuff out of o1 this time around, including fewer hallucinations. It's still weaker than Python/TypeScript/etc., with maybe 2-3 compile errors that need to be fixed each time, but still better. Sonnet completely failed every time I tried it :/ (both through Copilot and the web). o1 in Copilot _could_ be amazing, since I can directly attach all my code, but the o1 in Copilot _feels_ weaker. I'm fairly sure the 4o Copilot uses is finetuned, and possibly smaller, so it too always felt weaker; seems like o1 is the same deal. Still really useful for the TypeScript side of things, but for Rust I had to farm out to the web and just copy-paste the files each time.
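For the search-parser aside: the pattern looks roughly like this (toy grammar, not the project's actual Pest rules). Every AST node borrows slices of the input string, so the structs and impls all carry the same 'a.

```rust
// Toy sketch of the borrowed-AST pattern, not the real tagstormdb parser.
#[derive(Debug)]
enum Expr<'a> {
    Tag(&'a str),
    And(Box<Expr<'a>>, Box<Expr<'a>>),
}

impl<'a> Expr<'a> {
    // Parse a toy "a & b & c" query, keeping &str slices into `input`
    // instead of allocating owned copies for every node.
    fn parse(input: &'a str) -> Expr<'a> {
        match input.split_once('&') {
            Some((lhs, rhs)) => Expr::And(
                Box::new(Expr::Tag(lhs.trim())),
                Box::new(Expr::parse(rhs)),
            ),
            None => Expr::Tag(input.trim()),
        }
    }
}

fn main() {
    let query = String::from("landscape & night");
    let ast = Expr::parse(&query);
    println!("{ast:?}"); // the AST borrows from `query`; nothing is copied
}
```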
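And for the dynamic-dispatch aside: the kind of signature that used to feel awkward now mostly works with an anonymous '_ lifetime tying the boxed return value to &self (again a made-up trait, not the project's API).

```rust
// Made-up trait, just to show a trait-object method whose return borrows.
trait TagSource {
    fn tags(&self) -> Box<dyn Iterator<Item = &str> + '_>;
}

struct InMemory {
    tags: Vec<String>,
}

impl TagSource for InMemory {
    fn tags(&self) -> Box<dyn Iterator<Item = &str> + '_> {
        Box::new(self.tags.iter().map(String::as_str))
    }
}

fn print_all(source: &dyn TagSource) {
    // Dynamic dispatch through the trait object; the returned iterator
    // borrows from `source` for the duration of the loop.
    for tag in source.tags() {
        println!("{tag}");
    }
}

fn main() {
    let store = InMemory { tags: vec!["sunset".into(), "raw".into()] };
    print_all(&store);
}
```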