- Azure network latency is about 85 microseconds.
- AWS network latency is about 55 microseconds.
- Both can do better, but only in special circumstances, such as with RDMA NICs in HPC clusters.
- Cross-VPC (AWS) and cross-VNet (Azure) latency is basically identical to same-network latency. Some people claimed it's terribly slow, but I didn't see that in my tests.
- Cross-zone latency is 300-1,200 microseconds due to the inescapable speed-of-light delay between physically separate datacenters.
- VM-to-VM bandwidth is over 10 Gbps (>1 GB/s) for both clouds, even for the smallest two-vCPU VMs!
- Azure Premium SSD v1 latency ranges from about 800 to 3,000 microseconds, many times worse than the network latency.
- Azure Premium SSD v2 latency is about 400 to 2,000 microseconds, which isn't that much better, because:
- Local SSD caches in Azure are so much faster than remote disk that Premium SSD v1 (which supports caching) turned out almost always faster in practice than Premium SSD v2 (which doesn't).
- Again in Azure, both the local SSD "cache" and the local "temp disks" have latency as low as 40 microseconds, on par with a modern laptop NVMe drive. We found that switching to the latest-gen VM SKU and turning on "read caching" for the data disks was the magic "go-fast" button for databases... without the risk of losing our data.
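If you want to put numbers like the ones above on your own VMs, a small TCP echo round-trip test is enough to approximate application-visible network latency. The sketch below is mine, not from the article: it runs server and client on loopback for self-containedness, whereas a real cross-VM or cross-zone test would run the echo server on a second VM and connect to its address. `TCP_NODELAY` is set so small writes aren't batched by Nagle's algorithm.

```python
import socket
import statistics
import threading
import time

def echo_server(listener: socket.socket) -> None:
    """Accept one connection and echo everything back until the peer closes."""
    conn, _ = listener.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

def measure_rtt_us(host: str, port: int, iterations: int = 500) -> float:
    """Median round-trip time, in microseconds, of a small TCP echo."""
    samples = []
    payload = b"x" * 32
    with socket.create_connection((host, port)) as s:
        # Disable Nagle so each tiny write goes out immediately.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(iterations):
            t0 = time.perf_counter()
            s.sendall(payload)
            s.recv(64)
            samples.append((time.perf_counter() - t0) * 1_000_000)
    return statistics.median(samples)

if __name__ == "__main__":
    # Loopback demo only; replace with a listener on a second VM for real tests.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
    print(f"median RTT: {measure_rtt_us('127.0.0.1', port):.0f} us")
```

Loopback numbers will be far lower than real network numbers; the point is the harness, which you run unchanged against a remote echo server to compare same-zone, cross-zone, and cross-VNet paths.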
We investigated the various local-SSD VM SKUs in both clouds, such as Azure's Lasv3 series, and as the article mentioned, the performance delta didn't blow my skirt up; the data loss risk made these not worth the hassle.
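Disk latency can be approximated in a similarly simple way: time small synchronous writes, where each sample is one write plus an `fsync` forcing it to the device. This sketch is mine (the function name is an assumption, not from the article), and a thorough test would use a tool like `fio` with direct I/O, but even this rough harness makes the gap between a ~40-microsecond local SSD and a multi-millisecond remote Premium SSD obvious.

```python
import os
import statistics
import tempfile
import time

def fsync_write_latency_us(path: str, iterations: int = 200,
                           block_size: int = 4096) -> float:
    """Median latency, in microseconds, of a 4 KiB write + fsync to `path`."""
    buf = os.urandom(block_size)
    samples = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        for _ in range(iterations):
            t0 = time.perf_counter()
            os.pwrite(fd, buf, 0)
            os.fsync(fd)  # force the write down to the underlying device
            samples.append((time.perf_counter() - t0) * 1_000_000)
    finally:
        os.close(fd)
    return statistics.median(samples)

if __name__ == "__main__":
    # Point `path` at a file on the disk you want to measure
    # (data disk, temp disk, local SSD, ...); a temp file is used here.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        print(f"median fsync write latency: "
              f"{fsync_write_latency_us(path):.0f} us")
    finally:
        os.unlink(path)
```

Run it once per mount point to compare the remote data disk against the local temp disk on the same VM.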