“Sane scraper” doesn’t have a definition or anyone to enforce it. Similarly, APIs aren’t magic - if you make things publicly available, people will harvest them, whether that’s with a 90s-style bot making individual requests or a headless browser that runs the same JavaScript you use to make API calls.
The other thing to think about is the lack of enforcement: you can’t complain to the bot police when some dude in China decides to harvest your data, and if you try blocking by user-agent or IP you’ll play whack-a-mole trying to stay ahead of bot operators who will spoof the former and churn the latter. Once you develop an appreciation for why security people talk about validating correctness rather than trying to enumerate badness, you’ll end up with a combination of rate-limiting and broader blocking for the same reason: denying by default scales, while chasing individual bad actors doesn’t. Yes, it’s no fun, but the problem isn’t the sites; it’s the people abusing the free services we’ve been given.
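To make the rate-limiting half concrete, here’s a minimal token-bucket sketch in Python, keyed by client IP. The class name, rate, and burst values are illustrative assumptions on my part, not anything the sites above actually run, and in practice you’d usually enforce this at the proxy or CDN layer rather than in application code:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: each request costs one token,
    tokens refill continuously up to a burst cap."""

    def __init__(self, rate: float = 5.0, burst: float = 10.0):
        self.rate = rate    # tokens refilled per second (illustrative)
        self.burst = burst  # maximum bucket size (illustrative)
        # key -> (tokens remaining, timestamp of last update)
        self.buckets = defaultdict(lambda: (burst, time.monotonic()))

    def allow(self, key: str) -> bool:
        tokens, last = self.buckets[key]
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[key] = (tokens - 1.0, now)
            return True
        self.buckets[key] = (tokens, now)
        return False

limiter = TokenBucket()
if not limiter.allow("203.0.113.7"):
    print("429 Too Many Requests")  # deny by default once the budget is spent
```

The point of the bucket shape is exactly the validate-correctness mindset: every client gets the same budget regardless of who they claim to be, so there’s nothing to spoof and no list of bad IPs to maintain.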