I'll bite, just so you get a real answer instead of the very correct but annoying "don't worry about it right now" answers everyone else is going to provide!
We have a Rails monolith that sends our master database between 2,000 and 10,000 queries per second depending on the time of year; we're a seasonal bike business with more traffic in the summer. About 5% of queries are inserts/updates/deletes, the rest are reads.
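For what it's worth, those numbers come straight out of the database's status counters. A minimal sketch of how I'd sample them from a Rails console (the 10-second window is arbitrary; the Com_* counters are the standard MySQL/MariaDB ones):

```ruby
# Rough queries-per-second and read/write mix from MariaDB's global status
# counters, sampled over a 10-second window (run from a Rails console).
def query_counters
  ActiveRecord::Base.connection
    .select_rows("SHOW GLOBAL STATUS WHERE Variable_name IN " \
                 "('Com_select','Com_insert','Com_update','Com_delete')")
    .to_h
    .transform_values(&:to_i)
end

before = query_counters
sleep 10
after = query_counters

deltas = after.merge(before) { |_key, a, b| a - b }
reads  = deltas["Com_select"]
writes = deltas["Com_insert"] + deltas["Com_update"] + deltas["Com_delete"]

puts format("~%d queries/sec, %.1f%% writes",
            (reads + writes) / 10,
            100.0 * writes / (reads + writes))
```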
MariaDB (a MySQL flavor), with all reads and writes sent to the master. Two slaves: one for live failover, the other sitting on a ZFS volume for backups, shipping snapshots off to rsync.net (they are awesome, BTW).
We run all our own hardware. The database machines have 512GB of RAM and dual 24-core EPYC 74F3 processors, backed by a 4-drive RAID 10 Linux software RAID volume on Micron 9300 NVMe drives. These machines also house a legacy MongoDB cluster (actually a really, really nice and easy-to-maintain key/value store, which is how we use it) on a separate RAID volume, an Elasticsearch cluster, and a Redis cluster. The Redis cluster is often doing 10,000 commands a second against a 20GB dataset, and the Elasticsearch cluster is a 3TB full-text + geo search database that does about 150 queries a second.
In other words, MySQL isn't single-tenant on these machines, though it does get the drives backing the MySQL database to itself.
We don't have any caching as it pertains to database queries. Yes, we shove some expensive-to-compute data into Redis and use that as a cache, but a cache miss wouldn't hit our database; it would recalculate the value on the fly from GPS data. I'd expect to 3-5x our current traffic before considering caching more seriously, but I'll probably once again just upgrade machines instead. I've been saying this for 15 years....
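To make that concrete, it's a plain cache-aside pattern where the miss path recomputes instead of querying MySQL. A rough sketch with redis-rb; the key name and compute_route_stats_from_gps are made up for illustration:

```ruby
require "redis"
require "json"

REDIS = Redis.new # assumes a reachable Redis; connection details omitted

# Stand-in for the real CPU-bound calculation over raw GPS points.
def compute_route_stats_from_gps(route_id)
  { "route_id" => route_id, "distance_km" => 0.0 } # placeholder value
end

# Cache-aside: serve from Redis when present; on a miss, recompute from GPS
# data (never fall back to a MySQL query) and store the result with a TTL.
def route_stats(route_id)
  key = "route_stats:#{route_id}"
  if (cached = REDIS.get(key))
    JSON.parse(cached)
  else
    stats = compute_route_stats_from_gps(route_id)
    REDIS.set(key, stats.to_json, ex: 3600) # expire after an hour so stale data ages out
    stats
  end
end
```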
At the end of 2024 I went on a really fun quest to cut our DB size from 1.4TB down to about 500GB, along with a bunch of query performance improvements (removing unnecessary writes with small refactors, adding better indexes, dropping unneeded ones, changing strings to enums in places, etc.). I spent about one week of very enjoyable, fast-paced work to accomplish this while everyone was out for Christmas break (my day job is now mostly management), and would probably need another two weeks to go after the other ~30% of performance improvements I have in mind.
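For flavor, the string-to-enum and index changes look roughly like this as a Rails migration; the table, column, and index names here are invented, not our real schema:

```ruby
# Sketch of the kind of migration involved: shrink a hot string column into a
# MySQL ENUM, drop an index nothing uses, and add a composite index that
# matches the actual query pattern. Names are illustrative only.
class TightenOrdersTable < ActiveRecord::Migration[7.1]
  def up
    # VARCHAR status -> compact ENUM (values must already be constrained in the app)
    change_column :orders, :status, "ENUM('pending','paid','shipped','cancelled')", null: false

    # An index the slow query log showed was never used
    remove_index :orders, name: "index_orders_on_updated_at"

    # Composite index for the common "orders for a customer, newest first" query
    add_index :orders, [:customer_id, :created_at]
  end

  def down
    change_column :orders, :status, :string, null: false
    add_index :orders, :updated_at
    remove_index :orders, [:customer_id, :created_at]
  end
end
```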
All this is to serve a daily average of 200-300 HTTP requests per second to our backend, a mix of website visitors and users of our mobile apps. I saw a 1,000 rps steady-state peak last year and wasn't worried about anything. I wouldn't be surprised if we could get up to 5,000 rps to our API with the current setup and a little tuning.
The biggest table by storage and by row count has 300 million rows and is, I think, about 150GB including indexes, though I've had a few tables eclipse a billion rows before rearchitecting things. Basically, if you use the DB for analytics, things get silly, but you can go a long way before thinking "maybe this should go in its own datastore like ClickHouse".
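If you want numbers like that for your own DB, information_schema has them (approximate for InnoDB, but close enough); e.g. from a Rails console:

```ruby
# Approximate per-table footprint (data + indexes) from information_schema.
# InnoDB's row counts and lengths are estimates, but fine for a rough picture.
sql = <<~SQL
  SELECT table_name AS tbl,
         table_rows AS approx_rows,
         ROUND((data_length + index_length) / 1024 / 1024 / 1024, 1) AS total_gb
  FROM information_schema.tables
  WHERE table_schema = DATABASE()
  ORDER BY (data_length + index_length) DESC
  LIMIT 10
SQL

ActiveRecord::Base.connection.select_all(sql).each do |row|
  puts format("%-40s %15s rows %8s GB", row["tbl"], row["approx_rows"], row["total_gb"])
end
```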
Also, it's not just queries per second, but also row operations per second. MySQL is really, really fast. We had some hidden performance issues, and fixing them took us from 10,000,000 row ops per second down to about 200,000 row ops per second right now. This didn't noticeably change query performance; MySQL was perfectly happy just doing a ton of full table scans all over the place for some things....
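If you want to see your own row-op rate, the Handler_* status counters are where it shows up; same sampling idea as the query counters above:

```ruby
# Sample row-operation counters over a 10-second window. A huge delta on
# Handler_read_rnd_next relative to the keyed handlers is the classic sign
# that queries are doing full table scans.
def handler_counters
  ActiveRecord::Base.connection
    .select_rows("SHOW GLOBAL STATUS LIKE 'Handler_read%'")
    .to_h
    .transform_values(&:to_i)
end

before = handler_counters
sleep 10
after = handler_counters

after.merge(before) { |_key, a, b| a - b }
     .sort_by { |_name, delta| -delta }
     .each { |name, delta| puts format("%-26s %12d ops / 10s", name, delta) }
```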