I’m still new to the Erlang/Elixir community and haven’t run it in prod yet, but this is my take coming from Java, Node, and Python.
You can build OS packages and push those however you like.
You can use rsync.
You could push the files over dist, if you want (sketch below).
You could probably do something cool with BitTorrent (maybe that trend is over?).
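As a sketch of the dist option (the module here is made up; the shell helper nl(Mod) does roughly the same thing interactively), you grab the object code locally and load the same binary on every connected node:

    %% Minimal sketch of pushing a module's object code to all connected
    %% nodes over dist; module and function names are hypothetical.
    -module(dist_push).
    -export([push/1]).

    push(Mod) ->
        {Mod, Bin, File} = code:get_object_code(Mod),
        %% Load the same binary on every node we're connected to.
        {Replies, BadNodes} =
            rpc:multicall(nodes(), code, load_binary, [Mod, File, Bin]),
        {Replies, BadNodes}.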
If you write Makefiles to push, you can use make -j X for low-effort parallelization, which works fine as long as your node count isn't too big and you don't need updates to land as close to instantly as possible.
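For the Makefile route, here's roughly what I mean, with a made-up host list and release path; each host gets its own phony target so make -j can run several rsyncs at once (recipe lines need a real tab):

    # Hypothetical hosts and paths; adjust to your release layout.
    HOSTS := app1 app2 app3
    RELEASE := _build/prod/rel/myapp

    .PHONY: push $(HOSTS)

    push: $(HOSTS)

    $(HOSTS):
    	rsync -az $(RELEASE)/ deploy@$@:/opt/myapp/

Then make -j 8 push copies to up to 8 hosts at a time.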
Erlang source and beam files don't tend to get very large, and most people's dist clusters aren't very large either; I haven't seen anyone posting big cluster numbers lately, and I'd be surprised if anyone was pushing to 10,000 nodes at once.

Assuming the nodes are well connected, even pushing to 10,000 takes some prep, but not that much. If you're driving it from your laptop, you probably want an intermediate pusher node in your datacenter, so you push once from home/office internet to the pusher node and then fork a bunch of pushers in the datacenter to push to the other hosts. If you've got multiple locations and you're feeling fancy, run a pusher node at each location and push to the one nearest you; it pushes to the pusher at each other location, and those push to the individual nodes.
Other issues are more pressing, like making sure you write your code so it's hotload-friendly, and maybe actually testing that so you don't use the immense power of hotloading to very rapidly crash all your server processes.
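The part that usually bites people is state: code_change/3 is where a gen_server migrates its old in-memory state to the new shape, and it only runs during a proper suspend/change_code upgrade (release_handler/appup), not on a bare l(Mod). A minimal sketch with a made-up module and state record:

    %% Hotload-friendly gen_server sketch; module name and state record
    %% are hypothetical. code_change/3 migrates old state to the new shape.
    -module(counter).
    -behaviour(gen_server).
    -export([start_link/0, init/1, handle_call/3, handle_cast/2, code_change/3]).

    -record(state, {count = 0, started_at}).

    start_link() -> gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

    init([]) -> {ok, #state{started_at = erlang:system_time(second)}}.

    handle_call(get, _From, State) -> {reply, State#state.count, State}.

    handle_cast(bump, State) -> {noreply, State#state{count = State#state.count + 1}}.

    %% The older version kept a bare integer as its state; wrap it into the record.
    code_change(_OldVsn, Count, _Extra) when is_integer(Count) ->
        {ok, #state{count = Count, started_at = erlang:system_time(second)}};
    code_change(_OldVsn, State, _Extra) ->
        {ok, State}.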
I wish I'd had a pusher node the day a colleague was using almost all the office's upstream bandwidth on a video call while my bosses were giving a demo, and the fix I'd coded for an issue discovered during the demo couldn't be deployed via Capistrano.