In that case, no matter what we're using, there's going to be a critical single point of failure. The best I could suggest at that point would be to have records in your zone that round-robin across different cloud providers, but that comes with its own challenges.
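To make the "own challenges" part concrete, here's a minimal sketch of what round-robin across providers buys you and what it doesn't. The IPs are documentation-range placeholders, and the rotation mimics how many authoritative servers shuffle answers:

```python
# Hypothetical A records for one name, one per cloud provider
# (documentation-range IPs, not real infrastructure).
RECORDS = ["203.0.113.10",   # provider A
           "198.51.100.20",  # provider B
           "192.0.2.30"]     # provider C

def round_robin(records, n):
    """Yield n rotated answer sets, the way round-robin DNS
    rotates the order of A records between responses."""
    for i in range(n):
        k = i % len(records)
        yield records[k:] + records[:k]

# The catch: most clients just take the first answer, so if provider A
# is down, roughly 1/3 of lookups still land on a dead IP until you
# pull the record -- and until cached answers expire.
first_answers = [answers[0] for answers in round_robin(RECORDS, 6)]
```

Round robin spreads traffic, but it doesn't detect failure: each provider keeps receiving its share of first answers whether it's up or not.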
I believe there are some articles around describing how AWS plans for failure, and how the fallback mechanism actually reduces load on the system rather than making it worse. A good answer here would require in-depth investigation of the expected failover mode.
For instance, just to make it concrete: what sort of failure mode are you expecting with the Route 53 health check? Depending on that, there could be different recommendations.
As for the OP's point, though, I'm going to assume the health checks need to stay within/from AWS, because third-party health checks could dilute the point of using the in-house AWS health-check service in the first place.
One problem we've run into with "DNS is a single point of failure" is that there isn't a clear best strategy for failing over to a different cloud at the DNS routing level.
I'm not the foremost expert on ASNs and BGP, but from my understanding that would require multi-cloud collaboration to keep multiple CDNs resolving, and that feels like it would need several layers of physical infrastructure plus significant cost to implement correctly, relative to the ROI for our customers.
There's a corollary here for me: keep it as simple as possible while still achieving the result. Maybe there is a multi-cloud strategy, but the strategies I've seen still rely on having the DNS zone in one provider that fails over or round-robins to specific infra in specific locations.
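For reference, that single-zone pattern in Route 53 terms looks roughly like the change batch below: a PRIMARY record gated on a health check and a SECONDARY pointing at another cloud. The zone ID, health-check ID, and IPs are placeholders; you'd hand this to boto3's `route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=change_batch)`. Note the zone itself still lives in one provider, which is exactly the limitation above:

```python
# Sketch of Route 53 failover routing: PRIMARY in AWS, SECONDARY in
# another cloud. All IDs and addresses are placeholders.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "SetIdentifier": "primary-aws",
                "Failover": "PRIMARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
                # Primary only serves while this health check passes.
                "HealthCheckId": "PLACEHOLDER-HEALTH-CHECK-ID",
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "SetIdentifier": "secondary-other-cloud",
                "Failover": "SECONDARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "198.51.100.20"}],
            },
        },
    ]
}
```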
Third-party health checks have less a problem of "tainting" and more of causing further complications: the more complexity you add to resolving your real state, the harder it is to get it right.
For instance, one thing we keep going back and forth on is "After the incident is over, is there a way for us to stay failed over and not automatically fail back?"
And the answer for us so far is "not really". There are a lot of bad options, all of which could have catastrophic impacts if we don't get them exactly right, and none has come with significant benefits yet. But I like to think I have an open mind here.
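The "stay failed over" behavior we keep circling can be sketched as a one-way latch: automation is allowed to fail over, but only a human can fail back. This is purely illustrative (the class and method names are made up, not any AWS feature):

```python
# Minimal sketch of "sticky" failover: health checks flip us to the
# secondary, but failing back requires an explicit operator action.
class FailoverLatch:
    def __init__(self):
        self.failed_over = False

    def observe(self, primary_healthy: bool) -> str:
        # One-way automation: an unhealthy primary latches us over,
        # but a healthy primary does NOT automatically unlatch.
        if not primary_healthy:
            self.failed_over = True
        return "secondary" if self.failed_over else "primary"

    def operator_reset(self):
        # Failing back is a deliberate, human decision.
        self.failed_over = False

latch = FailoverLatch()
before = latch.observe(True)      # "primary"
during = latch.observe(False)     # incident: "secondary"
after = latch.observe(True)       # primary recovered, still "secondary"
latch.operator_reset()
restored = latch.observe(True)    # back to "primary"
```

The hard part in practice isn't the latch itself, it's where that bit of state lives and who is allowed to flip it, because the latch is now its own thing that can fail.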
[1] But it's DNS; the expectation is that some resolvers, hopefully very few of them, will cache data as if your TTL were measured in days. IMHO, if you want to move all your traffic within a defined timeframe, DNS alone is not sufficient.
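Some back-of-envelope arithmetic for why a defined timeframe is out of reach. This is a toy model under stated assumptions: well-behaved resolvers expire uniformly over one TTL, and a small "sticky" fraction (the 2% default is an assumption, not a measurement) ignores the TTL entirely within the window:

```python
def residual_traffic(elapsed_s, ttl_s, sticky_fraction=0.02):
    """Approximate fraction of traffic still hitting the old IP after
    a DNS change. Well-behaved resolvers drain linearly over one TTL;
    'sticky' resolvers are modeled as never expiring in the window."""
    well_behaved = max(0.0, 1 - elapsed_s / ttl_s) * (1 - sticky_fraction)
    return well_behaved + sticky_fraction

halfway = residual_traffic(30, 60)     # mid-TTL with a 60s TTL
hour_later = residual_traffic(3600, 60)  # long after the TTL expired
```

Even a full hour after the change, the sticky tail is still there, so the floor on "traffic moved" is set by misbehaving resolvers, not by your TTL.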
Proper HA is owning your own IP space and anycast-advertising it from multiple IXes/colos/clouds to multiple upstreams / backbone networks. BGP hold times act like a dead man's switch and will ensure traffic stops being routed in that direction within a few seconds of a total outage (assuming suitably short timers), plus your own health automation should disable those advertisements when certain things happen. Of course, you need to deal with the engineering complexity of your traffic coming in to multiple POPs at once, and it won't be cheap at all (to start, you're looking at ~10kUSD capex for a /24 of IP space, plus whatever the upstreams charge you monthly), but it will be very resilient to pretty much any single point of failure, including AWS disappearing entirely.
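The per-POP side of that setup can be sketched as an FRR-style BGP config. The ASNs, neighbor address, and prefix are placeholders from documentation/private ranges, and the timers are deliberately tuned short; the BGP defaults (keepalive 60s / hold 180s on many implementations) would take minutes, not seconds, to fire:

```
router bgp 64512
 bgp router-id 203.0.113.1
 ! Upstream transit at this POP; the other POPs advertise the same
 ! /24 to their own upstreams -- that's the anycast part.
 neighbor 192.0.2.1 remote-as 64510
 ! keepalive 3s / hold 9s: if this POP goes silent, the upstream's
 ! hold timer expires and withdraws our routes within ~9 seconds --
 ! the dead man's switch described above.
 neighbor 192.0.2.1 timers 3 9
 address-family ipv4 unicast
  network 203.0.113.0/24
 exit-address-family
```

Your health automation then complements the hold timer by withdrawing the `network` statement (or shutting the neighbor) on partial failures the timer can't see, like the POP being up but the app behind it being dead.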