One caveat though - using a normal std Mutex within an async environment is an antipattern and should not be done - you can cause all sorts of issues, and I believe even deadlock your entire program. You should be using tokio's sync primitives (e.g. tokio's Mutex), which can yield back to the runtime when they need to block. Otherwise the thread that's running the future blocks waiting for that mutex and never drives any other futures, which isn't how tokio is designed to work.
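A minimal sketch of the difference, assuming tokio with the usual rt/sync/time/macros features (the function and variable names here are made up for illustration):

```rust
use std::sync::Arc;
use tokio::sync::Mutex;

// Hypothetical shared state updated by many concurrent tasks.
async fn update_shared(counter: Arc<Mutex<u64>>) {
    // tokio's lock() is a future: if the mutex is contended, this task yields
    // back to the runtime instead of parking the worker thread.
    let mut guard = counter.lock().await;
    *guard += 1;

    // Holding the guard across an .await is fine with tokio::sync::Mutex.
    // With std::sync::Mutex the guard is !Send (so this future couldn't even
    // be spawned), and a contended lock() would block the whole worker thread.
    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
}

#[tokio::main]
async fn main() {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..8)
        .map(|_| tokio::spawn(update_shared(counter.clone())))
        .collect();
    for h in handles {
        h.await.unwrap();
    }
    println!("final count: {}", *counter.lock().await);
}
```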
So the compiler warns you about one problem, but you also have to know not to call blocking functions from an async function.
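When the blocking call can't be avoided, tokio's mechanism for this is spawn_blocking, which runs the closure on a dedicated blocking thread pool. A sketch, with a made-up blocking function standing in for a synchronous driver or heavy CPU work:

```rust
use tokio::task;

// Hypothetical blocking call - the name is illustrative only.
fn expensive_blocking_call() -> String {
    std::thread::sleep(std::time::Duration::from_millis(500));
    "result".to_string()
}

async fn handle_request() -> String {
    // Calling expensive_blocking_call() directly here would stall the worker
    // thread and every other future scheduled on it; spawn_blocking moves it
    // off the async workers instead.
    task::spawn_blocking(expensive_blocking_call)
        .await
        .expect("blocking task panicked")
}

#[tokio::main]
async fn main() {
    println!("{}", handle_request().await);
}
```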
If anything, that's a disadvantage. You want your health monitoring to be the canary, not something that keeps on trucking even when the system is no longer doing useful work. (See the classic safety-critical software failure of 'I need a watchdog... I'll just feed it regularly from an isolated task'.)
/healthz
/very_common_operation
/may_deadlock_server
Normally, /may_deadlock_server doesn't get enough traffic to cause problems (say it sees 10 RPS while /very_common_operation sees 1000 RPS, and the server runs fine). However, a sudden influx of requests to /may_deadlock_server - not even a large one, on the order of a few hundred requests - may cause your service to deadlock. Do you still want the server to lock up completely and forever, waiting for a healthz timeout to reboot the service? What if healthz stays fine but the whole service goes from 10ms response times to 200ms - just enough to cause problems, but not enough to make healthz actually unavailable? And all this just because /may_deadlock_server saw a spike in traffic. Note also that the failing healthz check only restarts your service; it won't mitigate the traffic spike if it's sustained. Now consider that /may_deadlock_server is a trivial gadget for an attacker to DoS your site.

Or do you want the web server to keep responding healthily and rely on metrics and alerts to tell you that /may_deadlock_server is taking a long time to handle requests / impacting performance? Your health monitoring is an absolute last step for automatically mitigating an issue, and it only helps if the bug is some transient stuck state - if the service restarts into the same conditions that led to the starvation, you're just going to end up in an infinite reboot loop, which is worse.
Healthz is not an alternative to metrics and alerting - it's a last stopgap measure to try to automatically get out of a bad situation. But it can also make things worse if the cause lies outside the service's own state - so generally you want the service to remain available when a reboot wouldn't fix the problem.