If you've ever implemented IS-IS or OSPF before, like 80% of the work is "LSP flooding", which is just the process that gets updates about available links from one end of the network to another as fast as possible without drowning the links themselves in update messages. Flooding algorithms don't build consensus, unlike Raft quorums, which intrinsically have a centralized set of authorities that keep a single source of truth for all the valid updates.
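To make "flooding" concrete, here's a toy sketch of the core rule: accept an advertisement only if its sequence number is strictly newer than what you've stored, then re-flood it to every neighbor except the one you heard it from. The `Router` class and in-memory links are illustrative, not anything from a real IS-IS or OSPF implementation:

```python
# Minimal link-state flooding sketch. Names (Router, lsdb, receive)
# are made up for illustration.

class Router:
    def __init__(self, name):
        self.name = name
        self.neighbors = []      # directly connected Routers
        self.lsdb = {}           # link-state database: origin -> (seq, payload)

    def link(self, other):
        self.neighbors.append(other)
        other.neighbors.append(self)

    def receive(self, origin, seq, payload, via=None):
        # The flooding rule: only strictly-newer sequence numbers are
        # accepted; stale and duplicate copies are dropped, which is
        # what keeps the flood from looping forever.
        known_seq = self.lsdb.get(origin, (-1, None))[0]
        if seq <= known_seq:
            return
        self.lsdb[origin] = (seq, payload)
        for n in self.neighbors:
            if n is not via:     # don't echo back to the sender
                n.receive(origin, seq, payload, via=self)

# A triangle plus a stub router; one node originates an advertisement:
a, b, c, d = (Router(x) for x in "abcd")
a.link(b); b.link(c); c.link(a); c.link(d)
a.receive("a", 1, {"links": ["b", "c"]})   # a originates its own LSP
```

After that one call, every router's `lsdb` holds the same entry for `"a"`, and each link carried the update at most twice.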
An OSPF router uses those updates to build a forwarding table with a single-source shortest-path-first routine, but there's nothing to say you couldn't instead use the same notion of publishing weighted advertisements of connectivity to, for instance, build a table mapping incoming HTTP requests to backends that can field them.
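That "shortest path first" step is just Dijkstra's algorithm run over the flooded link-state database. A minimal sketch, assuming a toy dict-of-dicts LSDB rather than any real OSPF structure; swap the link costs for, say, advertised backend load and the same routine ranks backends instead of routes:

```python
# Single-source shortest-path-first over a toy link-state database.
# The lsdb shape here is illustrative, not an OSPF data structure.

import heapq

def spf(lsdb, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue             # stale heap entry, already beaten
        for neighbor, weight in lsdb.get(node, {}).items():
            new_cost = cost + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Each node's weighted adjacencies, as flooded by its advertisements:
lsdb = {
    "a": {"b": 1, "c": 5},
    "b": {"a": 1, "c": 2},
    "c": {"a": 5, "b": 2},
}
print(spf(lsdb, "a"))   # {'a': 0, 'b': 1, 'c': 3}
```

Note the path a→b→c (cost 3) beats the direct a→c link (cost 5), which is the whole point of running SPF instead of just reading adjacency off the wire.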
The point is, if you're going to do distributed consensus, you've got a dilemma: either you're going to have the Ents moot in a single forest, close together, and round trip updates from across the globe in and out of that forest (painfully slow to get things in and out of the cluster), or you're going to try to have them moot long distance (painfully slow to have the cluster converge). The other thing you can do, though, is just sidestep this: we really don't have the Raft problem at all, in that different hosts on our network do not disagree with each other about whether they're running particular apps; if worker-sfu-ord-1934 says it's running an instance of app-4839, I pretty much don't give a shit if worker-sfu-maa-382a says otherwise; I can just take ORD's word for it.
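One way to sketch that "take ORD's word for it" rule: each worker versions its own report, and merging two views of the cluster is just per-origin last-writer-wins. No quorum anywhere, because no two nodes are ever authoritative for the same key. The view shape and version field here are made up for illustration:

```python
# Per-origin last-writer-wins merge: each worker is the sole
# authority for its own app list, so conflicts can't happen.
# A view maps worker -> (version, set_of_apps); illustrative only.

def merge(view_a, view_b):
    merged = dict(view_a)
    for worker, (version, apps) in view_b.items():
        # Per origin, the higher-versioned self-report simply wins.
        if worker not in merged or version > merged[worker][0]:
            merged[worker] = (version, apps)
    return merged

ours   = {"worker-ord": (7, {"app-4839"}), "worker-maa": (2, {"app-11"})}
theirs = {"worker-ord": (9, {"app-4839", "app-52"})}
print(merge(ours, theirs))   # ord's newer report wins; no vote needed
```

Because merge order doesn't matter, any two nodes that have seen the same set of reports converge to the same view, which is all the "consistency" this problem actually requires.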
That's the intuition behind why you'd want to do something like SWIM update propagation rather than Raft for a global state propagation scheme.
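Concretely, SWIM doesn't flood at all: updates piggyback on the ping/ack traffic the failure detector is already sending, and each node re-gossips a given update only a bounded number of times, O(log n) in the cluster size. A toy sketch of just the piggyback bookkeeping; the class, method names, and the 3x retransmit multiplier are made up for illustration (the SWIM paper leaves that multiplier tunable):

```python
# SWIM-style piggyback dissemination sketch: bounded retransmission
# per update, so per-node load stays flat as the cluster grows.

import math

class Member:
    def __init__(self, name, cluster_size):
        self.name = name
        # Re-gossip each update O(log n) times; 3x is an arbitrary
        # tunable standing in for SWIM's lambda * log(n).
        self.max_sends = 3 * math.ceil(math.log2(cluster_size)) + 1
        self.updates = {}        # update -> times we've piggybacked it

    def learn(self, update):
        self.updates.setdefault(update, 0)

    def piggyback(self):
        # Attach not-yet-retired updates to the next outgoing ping.
        batch = [u for u, sent in self.updates.items() if sent < self.max_sends]
        for u in batch:
            self.updates[u] += 1
        return batch

m = Member("m0", cluster_size=16)
m.learn("worker-ord runs app-4839")
pings = 0
while m.piggyback():             # the update rides along until retired
    pings += 1
print(pings)                     # 13: bounded load, then it stops
```

The update stops being re-sent after a fixed number of pings, but by then it has reached everyone with high probability; that probabilistic guarantee is the trade you make for not needing Raft's quorum round trips.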
But if you're just doing service discovery for a well-bounded set of applications (like you would be if you were running engineering for a single large company and their internal apps), Raft gives you some handy tools you might reasonably take advantage of --- a key-value store, for instance. You're mostly in a single data center anyways, so you don't have the long-distance-Entmoot problem. And HashiCorp's tools will federate out across multiple data centers; the constraints you inherit by doing that federation mostly don't matter for a single company's engineering, but they're extremely painful if you're servicing an unbounded set of customer applications and providing each of them a single global picture of their deployments.
Or we're just holding it wrong. Also a possibility.