There are about a thousand different ways to update files on servers.
You can build OS packages and push those however you like.
You can use rsync.
You could push the files over dist, if you want; there's a sketch of that just below.
You could probably do something cool with BitTorrent (though maybe that trend is over?).
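Of those, dist is the most Erlang-flavored. A minimal sketch, assuming the nodes are already connected and the module is compiled locally (push_module and the node list are whatever yours are):

    %% Grab the local object code for Mod and load it on every node.
    push_module(Mod, Nodes) ->
        {Mod, Bin, File} = code:get_object_code(Mod),
        [{N, rpc:call(N, code, load_binary, [Mod, File, Bin])} || N <- Nodes].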
If you write Makefiles to push, you can use make -j X to get low-effort parallelization, which works fine if your node count isn't too big and you don't need updates to land as close to instantly as possible.
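The same flat fan-out is easy from Erlang itself if you'd rather skip make: spawn one worker per host and wait for them all. A sketch, assuming passwordless ssh; the host list, rsync flags, and paths are placeholders:

    %% One pusher process per host; results collected in host order.
    push_all(Hosts) ->
        Parent = self(),
        [spawn_link(fun() ->
             Out = os:cmd("rsync -az deploy/ " ++ H ++ ":/srv/app/"),
             Parent ! {done, H, Out}
         end) || H <- Hosts],
        [receive {done, H, Out} -> {H, Out} end || H <- Hosts].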
Erlang source and beam files don't tend to get very large, and most people's dist clusters aren't very large either; I don't think I've seen anyone posting big cluster numbers lately, but I'd be surprised if anyone was pushing to 10,000 nodes at once. Assuming the nodes are well connected, pushing to 10,000 of them takes some prep, but not that much.

If you're driving it from your laptop, you probably want an intermediate pusher node in your datacenter: push once from home/office internet to the pusher node, then fork a bunch of pushers in the datacenter to push to the other hosts. If you've got multiple locations and you're feeling fancy, put a pusher node at each location and push to the one nearest you; it pushes to the pusher at each other location, and those push to the individual nodes.
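In dist terms the two-hop version is tiny too: the beam crosses your slow link once, then fans out on datacenter bandwidth. A sketch (the names are made up, and the module containing fan_out/4 has to already be loaded on the pusher node):

    %% One copy over the WAN to the pusher...
    push_via(Pusher, Nodes, Mod) ->
        {Mod, Bin, File} = code:get_object_code(Mod),
        rpc:call(Pusher, ?MODULE, fan_out, [Nodes, Mod, Bin, File]).

    %% ...then N copies over the local network.
    fan_out(Nodes, Mod, Bin, File) ->
        rpc:multicall(Nodes, code, load_binary, [Mod, File, Bin]).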
Other issues are more pressing, like making sure you write your code so it's hotload friendly, and maybe testing that to confirm you won't use the immense power of hotloading to very rapidly crash all your server processes.
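For hand-rolled processes, hotload friendly mostly means the fully qualified self-call. A toy example:

    -module(counter).
    -export([start/0, loop/1]).

    start() -> spawn(?MODULE, loop, [0]).

    loop(N) ->
        receive
            {bump, From} ->
                From ! {count, N + 1},
                %% Fully qualified call: jumps to the newest loaded
                %% version of the module on the next message. A bare
                %% loop(N + 1) would pin the process to the old code,
                %% and the VM kills processes still running old code
                %% when a second load purges it.
                ?MODULE:loop(N + 1);
            stop ->
                ok
        end.

OTP behaviours like gen_server handle this for you (plus code_change/3 for migrating state); the foot-gun is mostly in hand-rolled receive loops like the one above.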