"Privacy First: Guaranteed. We will never sell your data or use it to target ads. Period. We will never log your IP address (the way other companies identify you). And we’re not just saying that. We’ve retained KPMG to audit our systems annually to ensure that we're doing what we say.
Frankly, we don’t want to know what you do on the Internet—it’s none of our business—and we’ve taken the technical steps to ensure we can’t."
* no logging
* DNS over HTTPS
In the same breath, they insinuate that Google both sells and exploits the DNS usage data from its 8.8.8.8 and 8.8.4.4 resolvers.
Now, audits are generally not worth very much (even, or perhaps especially, from a Big Four firm like KPMG), but for this type of thing (verifying that a company isn't doing something it promised not to do) they're about the best we have.
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=47 time=214.866 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=47 time=173.416 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=45 time=256.007 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=45 time=196.638 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=45 time=294.694 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=45 time=314.883 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=47 time=335.099 ms
(From Singapore)
Google's 8.8.8.8, by contrast, comes in at under 4 ms.
64 bytes from 1.1.1.1: icmp_seq=0 ttl=60 time=2.099 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=60 time=2.073 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=60 time=1.963 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=60 time=2.089 ms
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=60 time=1.908 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=60 time=1.888 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=60 time=1.993 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=60 time=1.891 ms
From SG too. Could it be... just you?
Things are a bit quicker in the US:
64 bytes from 1.1.1.1: icmp_seq=1 ttl=60 time=0.421 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=0.645 ms
BTW, if you want to use DNS over HTTPS on Linux/Mac, I strongly recommend dnscrypt-proxy v2 (the Golang rewrite), https://github.com/jedisct1/dnscrypt-proxy, and putting e.g. 'cloudflare' in its TOML config file to make use of it.
The Baseline Requirements agreed between web browser vendors and root Certificate Authorities dictate how a CA can figure out whether an applicant is allowed a certificate for a particular name. For dnsNames this is the Ten Blessed Methods; for ipAddress the rules are a bit... eh, rusty, but the idea is that you can't get one for the dynamic IP your cable provider lends you for 24 hours, while somebody who really controls an IP address can. They're uncommon, but not rare; maybe a dozen a day are issued?
Your web browser requires that the name in the URL exactly matches the name in the certificate. So if you visit https://some-dns-server.example/ the certificate needs to be for some-dns-server.example (or *.example) and a certificate for 1.1.1.1 doesn't work, even if some-dns-server.example has IP address 1.1.1.1 - so this cert is only useful because they want people actually typing https://1.1.1.1/ into browsers...
[edited, I have "Servers" on the brain, it's _Subject_ Alternative Name, you can use them to name email recipients, and lots of things that aren't servers]
ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=59 time=3.111 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=3.172 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=3.301 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=3.018 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=3.218 ms
^C
--- 1.1.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 3.018/3.164/3.301/0.096 ms
fwiw Google DNS is around the same, 2.942 ms average.
What is intriguing to me is why Cloudflare is offering this. Perhaps it is to provide data on traffic that is 'invisible' to them, as in it doesn't currently touch their network. Possibly as a sales-lead generator.
Or is the plan to become dominant and then use DNS blackholing to shut down malware that is a threat to their systems?
https://stackoverflow.com/questions/1095780/are-ssl-certific...
Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time<1ms TTL=57
Reply from 8.8.8.8: bytes=32 time=1ms TTL=57
Reply from 8.8.8.8: bytes=32 time<1ms TTL=57
Reply from 8.8.8.8: bytes=32 time<1ms TTL=57
Pinging 1.1.1.1 with 32 bytes of data:
Reply from 1.1.1.1: bytes=32 time=6ms TTL=57
Reply from 1.1.1.1: bytes=32 time=6ms TTL=57
Reply from 1.1.1.1: bytes=32 time=6ms TTL=57
Reply from 1.1.1.1: bytes=32 time=6ms TTL=57
(Switzerland)
Cloudflare is somewhat right: Means, Motive and Opportunity - but for a conviction you have to prove someone acted on the Opportunity. Google's Motive is tempered by the severe risk of losing trust.
Cloudflare can make an argument that they are fundamentally better positioned and that this is all they do. As with all US-based operations, the NSA may cook up some convincing counterarguments, and we may never know.
This is the entry for the cert used:
DNS Name=*.cloudflare-dns.com
IP Address=1.1.1.1
IP Address=1.0.0.1
DNS Name=cloudflare-dns.com
IP Address=2606:4700:4700:0000:0000:0000:0000:1111
IP Address=2606:4700:4700:0000:0000:0000:0000:1001
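You can pull that SAN list yourself; a minimal sketch in Python (assuming 3.7+, where the ssl module matches an IP passed as server_hostname against ipAddress SANs):

    import socket
    import ssl

    # Connect to 1.1.1.1:443 and print the SAN entries of the presented cert.
    ctx = ssl.create_default_context()
    with socket.create_connection(("1.1.1.1", 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="1.1.1.1") as tls:
            for kind, value in tls.getpeercert()["subjectAltName"]:
                print(kind, value)

This should print the same DNS Name and IP Address entries listed above.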
To be explicit: This is not Cloudflare's fault and we should blame the manufacturer of the router, or the ISP for deploying their custom "friendly" settings. But it is what it is.
[mason@iMac-Pro-No-5 fubastardo (master)]$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=56 time=2.310 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=2.287 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=2.103 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=2.785 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=56 time=2.276 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=56 time=2.646 ms
^C
--- 1.1.1.1 ping statistics ---
6 packets transmitted, 6 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 2.103/2.401/2.785/0.236 ms
[mason@iMac-Pro-No-5 fubastardo (master)]$
[mason@iMac-Pro-No-5 fubastardo (master)]$
[mason@iMac-Pro-No-5 fubastardo (master)]$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=56 time=2.217 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=1.837 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=1.838 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=2.010 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=1.827 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=56 time=2.056 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=56 time=1.807 ms
^C
--- 8.8.8.8 ping statistics ---
7 packets transmitted, 7 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.807/1.942/2.217/0.145 ms
[mason@iMac-Pro-No-5 fubastardo (master)]$
Yes, but...
This only works if they don't use SNI[1]. If they use SNI then you just get the default cert. They might have more certs for other hostnames served on that IP address.
Because crappy software (looking at you here, OpenSSL) makes writing SANs into a Certificate Signing Request way harder than it needs to be, a lot of CAs (including Let's Encrypt) will take a CSR that says "My Common Name is foo.example" and sigh, and issue a cert which adds the SAN dnsName foo.example, because they know that's what you want. Really, somebody should fix the software one of these days.
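FWIW, the third-party Python `cryptography` package makes the SAN part far less painful than raw OpenSSL config files; a sketch (assuming cryptography >= 3.1; foo.example is just a placeholder name):

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Build a CSR that names foo.example in a dnsName SAN explicitly,
    # instead of hoping the CA copies the Common Name over for us.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "foo.example")]))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("foo.example")]),
                       critical=False)
        .sign(key, hashes.SHA256())
    )
    print(csr.public_bytes(serialization.Encoding.PEM).decode())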
In older Windows versions, SChannel (Microsoft's implementation of SSL/TLS) doesn't understand ipAddress, and thinks the correct way to match an ipAddress against a certificate is to turn the address into ASCII text of dotted decimals and compare that to the dnsName entries. This, unsurprisingly, is not standards compliant.
It's good to see a CA not trying to fudge this, but the consequence is probably that if you have older Windows (XP? Maybe even something newer) these certs don't check out as valid for the site. Eh. Upgrade already.
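To illustrate the mismatch with a toy sketch (not SChannel's actual code): the legacy check compares the dotted-decimal string against dnsName entries and fails, while the compliant check consults the ipAddress entries.

    # SAN list as Python's ssl module would report it for this cert (abridged).
    san = [("DNS", "*.cloudflare-dns.com"), ("IP Address", "1.1.1.1")]
    ip = "1.1.1.1"

    # Legacy, non-compliant: match the textual IP against dnsNames -> "invalid".
    print(any(k == "DNS" and v == ip for k, v in san))         # False
    # Standards-compliant: match against ipAddress SANs -> valid.
    print(any(k == "IP Address" and v == ip for k, v in san))  # True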
https://developers.cloudflare.com/1.1.1.1/dns-over-https/
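The JSON flavour documented there is easy to poke at from the Python standard library; a sketch (endpoint and field names are per those docs, so treat them as an assumption that may change):

    import json
    import urllib.request

    # Ask Cloudflare's DNS-over-HTTPS endpoint for an A record, JSON-style.
    req = urllib.request.Request(
        "https://cloudflare-dns.com/dns-query?name=example.com&type=A",
        headers={"accept": "application/dns-json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)

    for record in answer.get("Answer", []):
        print(record["name"], record["TTL"], record["data"])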
Is there a technical reason the DNS-over-HTTPS resolvers need their upstream resolvers to be looked up by name and not IP?
We talked to the APNIC team about how we wanted to create a privacy-first, extremely fast DNS system. They thought it was a laudable goal. We offered Cloudflare's network to receive and study the garbage traffic in exchange for being able to offer a DNS resolver on the memorable IPs. And, with that, 1.1.1.1 was born.
Who’s behind this?
1.1.1.1 is a partnership between Cloudflare and APNIC.
Cloudflare runs one of the world’s largest, fastest networks. APNIC is a non-profit organization managing IP address allocation for the Asia Pacific and Oceania regions.
Cloudflare had the network. APNIC had the IP address (1.1.1.1). Both of us were motivated by a mission to help build a better Internet. You can read more about each organization’s motivations on our respective posts: Cloudflare Blog / APNIC Blog.
Host Loss% Snt Last Avg Best Wrst StDev
1. 192.168.1.254 0.0% 75 1.3 1.6 1.1 14.8 1.6
2. bbXXX-XXX-XXX-XX.singnet.com.sg 0.0% 75 3.4 2.8 1.9 18.7 2.5
3. 202.166.123.134 0.0% 75 3.2 3.5 2.7 15.9 2.0
4. 202.166.123.133 0.0% 75 3.0 3.0 2.4 6.6 0.7
5. ae8-0.tp-cr03.singnet.com.sg 0.0% 75 3.1 3.3 2.8 6.9 0.7
6. ae4-0.tp-er03.singnet.com.sg 0.0% 75 2.9 3.1 2.6 6.7 0.5
7. 203.208.191.197 0.0% 75 7.8 4.6 2.9 18.3 3.6
8. 203.208.149.138 0.0% 75 3.0 7.5 2.7 67.2 13.4
9. 203.208.153.126 0.0% 75 182.8 186.9 174.4 327.7 20.5
203.208.172.226
203.208.172.178
203.208.158.50
203.208.152.214
203.208.173.106
203.208.149.58
203.208.149.30
10. ix-xe-0-1-2-0.tcore2.pdi-palo-alto.as6453.net 0.0% 74 201.4 190.5 183.9 210.1 5.9
11. if-ae-5-2.tcore2.sqn-san-jose.as6453.net 0.0% 74 181.4 184.7 179.4 197.9 4.6
12. if-ae-1-2.tcore1.sqn-san-jose.as6453.net 0.0% 74 177.8 177.3 172.0 190.0 4.8
13. 63.243.205.106 0.0% 74 179.2 184.2 179.1 196.2 4.5
14. 1dot1dot1dot1.cloudflare-dns.com 0.0% 74 191.9 184.7 172.4 202.3 6.6
Looks like Singtel has some bad routing rules for Cloudflare, and it's routing to the USA rather than hitting a local PoP.
Might send Cloudflare a quick email, as they'll probably want Singtel to correct this.
Come on, CloudFlare. You guys know better than that. Please stop breaking the (local) internet.
To compare the two, together with Google's DNS as a reference, from a fast connection:
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=3.62 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=60 time=3.60 ms
64 bytes from 9.9.9.9: icmp_seq=5 ttl=60 time=9.20 ms
...and from a slower (home) connection: 64 bytes from 1.1.1.1: icmp_seq=5 ttl=58 time=11.1 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=59 time=11.9 ms
64 bytes from 9.9.9.9: icmp_seq=5 ttl=59 time=34.2 ms
Note that I just used the speed of every fifth packet instead of the average over five packets, in order to keep the comment relatively short and more humanly readable than "rtt min/avg/max/mdev".
https://quad9.net has been serving me well.
Won't sell != Won't collect
> We will never log your IP address (the way other companies identify you)
Never log IP != Never log anything
Bonus: The way other companies identify you ~= There are other ways
Edit: Looks like many people assume I'm nitpicking. So here are more specific questions:
* Is logging a hash of the IP considered "not logging the IP"? (See the sketch after this list.)
* Can a combination of timestamp, packet info other than the end IP (latency, hops, etc.), geoIP, and other factors be used for deep intelligence?
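On the first question: a hash of an IPv4 address is a pseudonym, not anonymization, because the input space is only 2^32 values and the full table of hashes is cheap to precompute. A quick sketch:

    import hashlib
    import ipaddress

    def hash_ip(ip: str) -> str:
        # What a "we only log a hash of your IP" service might store.
        return hashlib.sha256(ip.encode()).hexdigest()

    logged = hash_ip("203.0.113.7")

    # Brute-force inversion; a /24 here for brevity, but all 2**32 IPv4
    # hashes can be precomputed quickly on commodity hardware.
    for candidate in ipaddress.ip_network("203.0.113.0/24"):
        if hash_ip(str(candidate)) == logged:
            print("recovered:", candidate)
            break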
>"4.2.1.6. Subject Alternative Name The subject alternative name extension allows identities to be bound to the subject of the certificate. These identities may be included in addition to or in place of the identity in the subject field of the certificate. Defined options include an Internet electronic mail address, a DNS name, an IP address, and a Uniform Resource Identifier(URI). Other options exist, including completely local definitions."[1]
Edit: 1.0.0.1 also takes me to the router configuration screen. And there's no configuration setting for it. :(
ping 1.1.1.1
Reply from 1.1.1.1: bytes=32 time=366ms TTL=58
Reply from 1.1.1.1: bytes=32 time=366ms TTL=58
Reply from 1.1.1.1: bytes=32 time=365ms TTL=58
Reply from 1.1.1.1: bytes=32 time=365ms TTL=58
ping 8.8.8.8
Reply from 8.8.8.8: bytes=32 time=402ms TTL=59
Reply from 8.8.8.8: bytes=32 time=373ms TTL=59
Reply from 8.8.8.8: bytes=32 time=373ms TTL=59
Reply from 8.8.8.8: bytes=32 time=374ms TTL=59
--- 8.8.8.8 ping statistics ---
23 packets transmitted, 20 received, 13% packet loss, time 22093ms
rtt min/avg/max/mdev = 37.756/51.634/75.856/12.714 ms
--- 1.1.1.1 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6007ms
rtt min/avg/max/mdev = 38.920/43.627/52.355/4.547 ms
same same:
$ dig +short @8.8.8.8 icnerd-1e5f.kxcdn.com
p-rumo00.kxcdn.com.
188.42.31.172
$ dig +short @1.1.1.1 icnerd-1e5f.kxcdn.com
p-rumo00.kxcdn.com.
188.42.31.172
$ dig +short @9.9.9.9 icnerd-1e5f.kxcdn.com
con-na00.kvcdn.com.
p-ussj00.kxcdn.com.
209.58.130.199
$ dig +short @9.9.9.10 icnerd-1e5f.kxcdn.com
con-na00.kvcdn.com.
p-ussj00.kxcdn.com.
Cloudflare is a for-profit corporation--you know, "duty to shareholders" and all that. We must assume, almost by definition, that they actually have their own self-interests at heart.
They match your requests with IBM's X-Force threat intelligence database and give you filtered results.
https://www.theregister.co.uk/2017/11/20/quad9_secure_privat...
1.1.1.1/1.0.0.1 rtt min/avg/max/mdev = 198.036/199.739/202.978/2.319 ms
8.8.8.8/8.8.4.4 rtt min/avg/max/mdev = 12.798/13.681/14.408/0.673 ms
114.114.114.114/114.114.115.115 rtt min/avg/max/mdev = 15.508/25.381/38.815/9.842 ms
[:~] % ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=22.0 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=21.1 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=21.8 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=21.0 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=21.8 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=59 time=21.2 ms
^C
--- 1.1.1.1 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5009ms
rtt min/avg/max/mdev = 21.023/21.509/22.031/0.399 ms
[:~] % ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=59 time=26.4 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=59 time=26.6 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=59 time=26.7 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=59 time=26.4 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=59 time=26.7 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=59 time=25.9 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5010ms
rtt min/avg/max/mdev = 25.925/26.501/26.790/0.344 ms
So what do you see as the threat profile?
Indeed, see the recent KPMG scandal:
https://www.marketwatch.com/story/kpmg-indictment-suggests-m...
$> ping 1.1
PING 1.1 (1.0.0.1) 56(84) bytes of data.
64 bytes from 1.0.0.1: icmp_seq=1 ttl=55 time=28.3 ms
64 bytes from 1.0.0.1: icmp_seq=2 ttl=55 time=33.0 ms
64 bytes from 1.0.0.1: icmp_seq=3 ttl=55 time=43.6 ms
64 bytes from 1.0.0.1: icmp_seq=4 ttl=55 time=41.7 ms
64 bytes from 1.0.0.1: icmp_seq=5 ttl=55 time=56.5 ms
64 bytes from 1.0.0.1: icmp_seq=6 ttl=55 time=38.4 ms
64 bytes from 1.0.0.1: icmp_seq=7 ttl=55 time=34.8 ms
64 bytes from 1.0.0.1: icmp_seq=8 ttl=55 time=45.7 ms
64 bytes from 1.0.0.1: icmp_seq=9 ttl=55 time=45.2 ms
64 bytes from 1.0.0.1: icmp_seq=10 ttl=55 time=43.1 ms
I pay them to access the internet; any further information they gather about my internet activity brings no benefit to me.
~% ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=11.0 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=10.9 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=10.5 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=10.0 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=13.0 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=57 time=10.1 ms
^C
--- 1.1.1.1 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5006ms
rtt min/avg/max/mdev = 10.037/10.953/13.052/1.010 ms
~% ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=14.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=14.5 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=13.5 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=13.2 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=56 time=14.0 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=56 time=14.8 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5008ms
rtt min/avg/max/mdev = 13.260/14.151/14.823/0.585 ms
I'm in Wellington.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=37.9 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=36.9 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=36.7 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=56 time=35.9 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=35.4 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=35.2 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=35.2 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=35.7 ms
2606:4700:4700::1111
2606:4700:4700::1001
Not as memorable, unfortunately.
See: http://www.revolutionwifi.net/revolutionwifi/2011/03/explain...
Of course an ISP or nation could block/reroute the IP 1.1.1.1 too, so maybe it doesn't matter. Neither way would allow MITM, I was just thinking about ways oppressive ISPs/nations could stop DNS-over-HTTPS from working.
The "good" news is that this isn't being used for anything you really need - imagine if 1.1.1.1 had been delegated and now it was the resolution for www.facebook.com or indeed news.ycombinator.com ...
The bad news is that idiots do not learn from their mistakes - that's Dunning-Kruger. The people who built your device don't understand why this was the Wrong Thing™ and won't now seek to do better in the future. If we're lucky they'll go out of business, but that's the best we can hope for.
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=64 time=2.793 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=3.010 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=2.789 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=64 time=2.963 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=64 time=2.954 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=64 time=1.330 ms
^C
--- 1.1.1.1 ping statistics ---
6 packets transmitted, 6 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.330/2.640/3.010/0.592 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=61 time=6.531 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=61 time=5.956 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=61 time=7.300 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=61 time=7.457 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=61 time=6.796 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=61 time=6.785 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 6 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 5.956/6.804/7.457/0.494 ms
Assigning an IP address you don't own on a local network usually means that you cut off access to the actual owner of that address. You might not (immediately) notice it because you don't need to access anything that's located there. But it will set you up for unpleasant surprises in the future when your users (or yourself) want to access a resource that happens to be located there.
RFC 1918 <https://tools.ietf.org/html/rfc1918> provides explicit IP ranges you should use for private resources (10.x.x.x, 172.16.x.x through 172.31.x.x, 192.168.x.x), which are not routed over the Internet and within which your organization is responsible for avoiding IP address conflicts.
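Python's stdlib ipaddress module knows these ranges, if you want a quick check of what's safe to use internally:

    import ipaddress

    for addr in ["10.1.2.3", "172.16.0.1", "192.168.1.254", "1.1.1.1"]:
        print(addr, ipaddress.ip_address(addr).is_private)
    # 1.1.1.1 prints False: it is publicly routable, so assigning it
    # locally shadows the real host.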
Benchmarking Results for the interested: (sorted worst first, P value is bottom-X-percent)
1.1.1.1:
P00.5=48.2ms (55.8ms VPN)
P50.0=32.8ms (37.0ms VPN)
P95.0=29.1ms (33.0ms VPN)
P99.5=29.1ms (32.7ms VPN)
8.8.8.8:
P00.5=225.4ms (71.5ms VPN)
P50.0=48.0ms (53.6ms VPN)
P95.0=44.1ms (51.3ms VPN)
P99.5=43.8ms (50.7ms VPN)
I've noticed I measured with my VPN on, so I put the VPN measurements in brackets behind the nominal values. The 8.8.8.8 benchmark is a bit odd, but I repeated it several times with 100 iterations each and this is basically what I get.
The article mentions QUIC as being something that might make HTTPS faster than standard TLS. I guess over time DNS servers can start encoding DNS-over-HTTPS requests and responses as JSON, like Google's impl, though there is no spec that I've seen yet that actually defines that format.
Can someone explain what the excitement around DNS-over-HTTPS is all about, and why DNS-over-TLS isn’t enough?
EDIT: I should mention that I started implementing this in trust-dns, but after reading the spec became less enthusiastic about it and more interested in finalizing my DNS-over-TLS support in the trust-dns-resolver. The client and server already support TLS, I couldn't bring myself to raise the priority enough to actually complete the HTTPS impl (granted it's not a lot of work, but still, the tests etc, take time).
Other data that can be logged:
- timestamp - this can be very revealing when correlated with other datasets.
- ASN - can sometimes act like a fingerprint on its own, and assists in correlating other data (e.g. the timestamp)
- any identifiable variation in the structure or behavior between different DNS resolver implementations. See nmap's "-O" option, which detects the OS from the TCP/IP protocol implementation.
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=19.6 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=19.9 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=55 time=19.8 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=55 time=19.7 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=55 time=19.8 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=55 time=19.7 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=55 time=19.8 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=55 time=19.7 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=55 time=19.8 ms
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=0.390 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=0.565 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=0.472 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=0.556 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=0.560 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=57 time=0.573 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=57 time=0.359 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=57 time=0.575 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=57 time=0.543 ms
64 bytes from 1.1.1.1: icmp_seq=10 ttl=57 time=0.548 ms
From Zagreb, Croatia.
I guess that new Cloudflare PoP is paying off.
Edit: formatting
Cloudflare serves sites visited from China (at least those not using their China-requires-an-ICP-license service) from their US West Coast locations, where the big three Chinese telcos will peer for free.
The OP did not say that cloudflare is "saying" that. The OP very clearly said they are "insinuating" it. And yes under the heading "DNS's Privacy Problem" the post mentions:
"With all the concern over the data that companies like Facebook and Google are collecting on you,..."
I think that juxtaposition of this statement under a bolded heading of "DNS's Privacy Problem" is very much insinuating that.
>"The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all."
> Cloudflare's business has never been built around tracking users or selling advertising. We don't see personal data as an asset; we see it as a toxic asset. While we need some logging to prevent abuse and debug issues, we couldn't imagine any situation where we'd need that information longer than 24 hours.
How about aggregate stats? Will CloudFlare be keeping track of any long term usage statistics per domain?
I'm not talking about tracking the person making the request. I'm referring to tracking the hostnames that are being resolved. Given the near 1:1 mapping between users accessing a website and DNS resolution for that website[1], wide-scale usage of something like this gives decent analytics on net usage of any website even if it's not served by CloudFlare.
[1]: Assuming the DNS response cache times are low enough that a new user session to a website would require a fresh DNS request to resolve the website's IP.
Or will this DNS service, like their DDoS service, be at the whim of their CEO?
Third-party services like this will also have a huge range of queries cached, so the response time will definitely be better than having a Raspberry Pi with little free memory try to cache all that.
So the difference is how long the logs are kept, and possibly what the log data is used for.
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=3.57 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=3.30 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=3.31 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=58 time=3.21 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=58 time=3.21 ms
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=3.15 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=3.17 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=2.34 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=2.93 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=3.19 ms
MyRepublic:
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=60 time=1.88 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=60 time=1.93 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=60 time=1.96 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=60 time=1.85 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=60 time=1.85 ms
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=59 time=1.86 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=59 time=1.66 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=59 time=1.40 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=59 time=1.38 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=59 time=1.60 ms
Looks like Google DNS is still a little bit faster.
The input bar is a search bar in modern browsers.
While Cloudflare has been pretty neutral about censoring sites in the past (notably, pirate sites), the Daily Stormer incident put them in a tough spot[1].
They talk a bit about Project Galileo (the link is broken BTW, it should be https://www.cloudflare.com/galileo), but their examples do not mention topics that would be controversial in western societies, and the site is quite vague. Would they also protect sites like sci-hub, for example?
While I would rather use a DNS not owned by Google, I have never seen any site blocked by them, including sites with a nation-wide block. I hope that Cloudflare is able to do the same thing.
1: https://torrentfreak.com/cloudflare-doesnt-want-daily-storme...
My understanding of Cloudflare's policies, though, is that with the exception of exceptionally objectionable content, Cloudflare only takes sites down in response to a court order. I don't know if it has been established that DNS is something which operators have a proactive obligation to censor, but I imagine it's the kind of thing Cloudflare would go to court over.
1- https://www.vox.com/policy-and-politics/2017/8/14/16143820/g...
Is there a service that Quad9 offers that does not have the blocklist or other security?
The primary IP address for Quad9 is 9.9.9.9, which includes the blocklist, DNSSEC validation, and other security features. However, there are alternate IP addresses that the service operates which do not have these security features. These might be useful for testing validation, or to determine if there are false positives in the Quad9 system.
Secure IP: 9.9.9.9 Provides: Security blocklist, DNSSEC, No EDNS Client-Subnet sent. If your DNS software requires a Secondary IP address, please use the secure secondary address of 149.112.112.112
Unsecure IP: 9.9.9.10 Provides: No security blocklist, DNSSEC, sends EDNS Client-Subnet. If your DNS software requires a Secondary IP address, please use the unsecure secondary address of 149.112.112.10
Note: Use only one of these sets of addresses – secure or unsecure. Mixing secure and unsecure IP addresses in your configuration may lead to your system being exposed without the security enhancements, or your privacy data may not be fully protected
--------------------------
IPV6: https://quad9.net/faq/#Is_there_IPv6_support_for_Quad9
Is there IPv6 support for Quad9?
Yes. Quad9 operates identical services on a set of IPv6 addresses, which are on the same infrastructure as the 9.9.9.9 systems.
Secure IPv6: 2620:fe::fe Blocklist, DNSSEC, No EDNS Client-Subnet
Unsecure IPv6: 2620:fe::10 No blocklist, DNSSEC, send EDNS Client-Subnet
Ping statistics for 1.1.1.1:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 1ms, Maximum = 2ms, Average = 1ms
Ping statistics for 8.8.8.8:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 25ms, Maximum = 27ms, Average = 26ms
Ping statistics for 8.8.4.4:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 26ms, Maximum = 28ms, Average = 27ms
It's a lot harder to do that with DNS-over-HTTPS because it looks like normal traffic.
That said, in this case ISPs can just null route the IP address of the obvious main resolvers such as 1.1.1.1. I imagine most of the benefit is surely to people who can spin up their own resolvers.
e.g. in Python (this is the validation in CPython's ipaddress module):

    octets = ip_str.split('.')
    if len(octets) != 4:
        raise AddressValueError("Expected 4 octets in %r" % ip_str)
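For contrast, the BSD-derived parser in the very same stdlib accepts the shorthand (a sketch; socket.inet_aton wraps the platform's inet_aton, so behavior is platform-dependent):

    import ipaddress
    import socket

    # The classful shorthand parses: "1.1" becomes 1.0.0.1...
    print(socket.inet_ntoa(socket.inet_aton("1.1")))  # -> 1.0.0.1

    # ...while the strict parser above rejects it outright.
    try:
        ipaddress.ip_address("1.1")
    except ValueError as exc:
        print(exc)  # '1.1' does not appear to be an IPv4 or IPv6 address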
As an aside, it's strange that HTTPS everywhere has been pushed aggressively by many here under the bogeyman of ISP adware and spying, while completely ignoring the much larger adware and privacy threats posed by the stalking of Google, Facebook and others. It is disingenuous and insincere.
Chrome shows a security warning when I try to access it; ping is <1 ms when I ping the IP address.
Bogons are a list of prefixes that most ISPs blackhole as there is usually never any legitimate traffic bound for those destinations. RFC1918 addresses, for example.
I can't reach 1.1.1.1 either, but 1.0.0.1 works fine. Maybe try that.
"We committed to never writing the querying IP addresses to disk and wiping all logs within 24 hours."
"While we need some logging to prevent abuse and debug issues, we couldn't imagine any situation where we'd need that information longer than 24 hours. And we wanted to put our money where our mouth was, so we committed to retaining KPMG, the well-respected auditing firm, to audit our code and practices annually and publish a public report confirming we're doing what we said we would."
> rfc 8336
I'll have to read up on this, thanks for the link.
> h2 coalescing
DNS is already capable of using TCP/TLS (and, by its nature, UDP) for multiple DNS requests at a time. Is there some additional benefit we get here?
> h2 push
This one is interesting, but DNS already has optimizations built in for things like CNAME and SRV record lookups, where the IP is implicitly resolved when available and sent back with the original request. Is this adding something additional to those optimizations?
> caching
DNS has caching built-in, TTLs on each record. Is there something this is providing over that innate caching built into the protocol?
> it starts to add up to a very interesting story.
I'd love to read about that story, if someone has written something, do you have a link?
Also, a question that occurred to me, are we talking about the actual website you're connecting to being capable of preemptively passing DNS resolution to web clients over the same connection?
Thanks!
If someone controls the routers, isn't this nearly useless?
So, for example, all mobile 4G providers could laugh at this and build nearly as good a database of every site you visit?
This is a very exciting development, thank you for posting this.
Universal free speech is not laudable, it's suicidal. If your free speech doesn't protect you from those who want to take it away, they will win, on a long enough time horizon. They only need to win once.
> I imagine most of the benefit is surely to people who can spin up their own resolvers.
There are already many easily run DNS resolvers available. Is there a benefit you see in operating them over HTTPS that improves on that?
https://github.com/google/namebench
I remember that when I was in France it was a huge speedup over the provider's default DNS.
For example, from my network, Google is averaging a faster response by ~0.5 ms:
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=28.0 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=19.2 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=19.1 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=19.0 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=20.5 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=59 time=19.6 ms
^C
--- 1.1.1.1 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5010ms
rtt min/avg/max/mdev = 19.043/20.950/28.072/3.226 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=19.1 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=54 time=20.1 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=54 time=20.6 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=54 time=21.1 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=54 time=21.9 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=54 time=19.4 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5008ms
rtt min/avg/max/mdev = 19.114/20.414/21.922/0.988 ms
However, if I do DNS lookups against a few major domains, Google is actually slower by ~2 ms:
$ for domain in microsoft.com google.com cloudflare.com facebook.com twitter.com; \
do cloudflare=$(dig @1.1.1.1 ${domain} | awk '/msec/{print $4}'); \
google=$(dig @8.8.8.8 ${domain} | awk '/msec/{print $4}');\
printf "${domain}:\tcloudflare ${cloudflare}ms\tgoogle ${google}ms\n";\
done
microsoft.com: cloudflare 22ms google 23ms
google.com: cloudflare 19ms google 22ms
cloudflare.com: cloudflare 19ms google 23ms
facebook.com: cloudflare 21ms google 20ms
twitter.com: cloudflare 19ms google 21ms
You'd have to run a bunch of queries to see if there is an actual impact vs. just an outlier (e.g. the first ping response from Cloudflare); just wanted to point it out.
I'm not sure what you meant in point (a), but of course DNS cannot be parallelized with HTTP, since the browser doesn't know where to connect until DNS completes. Also, DNS requests for subresources can't start until the referring resource has been loaded. So you could easily see a few serialized DNS requests in the long pole for loading a web site.
Also note that the timings above were ping times. An actual DNS query will have to recurse if the result is not cached at the DNS server, which in these days of 60-second TTLs is not uncommon. Cloudflare, though, happens to be the authoritative DNS for quite a few web sites, in which case no recursion is necessary.
DNS resolving offers no such terms and no such reason to make such a claim. I don't see that playing here. And bear in mind, when the CEO did it, he wrote about how dangerous it was that companies had that power. I don't feel other companies running other DNS services hold that level of concern or awareness.
When you consider that their "competitor" in the space of free DNS resolvers with easy-to-remember IPs is Google, who recently tried blocking the word "gun" in Google Shopping... it's hard not to see the introduction of a Cloudflare DNS resolver as at least a net positive for resisting censorship. And more options is almost always better.
Because the BRs say that the subject Common Name, if present (which it usually will be for really crappy software that still doesn't implement standards from _last god-damn century_) must be chosen from the list of SANs, these certificates will have an IP address as their CN, plus an ipAddress SAN.
Here is an example, which my records say had an IP address as its only name, but at time of writing crt.sh is timing out for me so forgive me if this some completely unrelated cert and I've pasted the wrong one:
I've been a long-time user of OpenDNS's public DNS service (and have come to adore it greatly). Another recent entrant to this space worth mentioning is the Global Cyber Alliance's [0] Quad9 DNS service, launched in Q4 2017.
This looks to me like a good move by Cloudflare, business-model-wise, given the increasing awareness among the general public of the dangers of privacy breaches -- aside from the supposed boost in network speed piggybacking off of Cloudflare's extensive network of server farms [1].
Whether the service delivers on its bold claims, however, remains to be seen. I'm going to go give this a shot now.
[0] https://www.globalcyberalliance.org/initiatives/quad9.html [1] https://www.cloudflare.com/network/
wrt coalescing/origin/secondary-certificates: it's a powerful notion to consider your recursive resolver's ability to serve other HTTP traffic on the same connection. That has implications for anti-censorship and traffic analysis.
Additionally, the ability to push DNS information that it anticipates you will need, outside the real-time moment of an individual query, has some interesting properties.
DoH right now is limited to the recursive resolver case. But it does lay the groundwork for other http servers being able to publish some DNS information - that's something that needs some deep security based thinking before it can be allowed, but this is a step towards being compatible with that design.
wrt caching - some apps might want a custom dns cache (as firefox does), but some may simply use an existing http cache for that purpose without having to invent a dns cache. leveraging code is good. There are lots of other little things like that which http brings for free - media type negotiation, proxying, authentication, etc..
After currency, it's close to being the second killer app for blockchain.
Anything else, as in anything centralized, will be vulnerable to random state actor censorship, be they China, the Google, USG, Turkey or any other deplorables and is therefore broken.
Namecoin was an early attempt at that (almost as old as bitcoin), but it came in too early.
Time to restart that train.
Cloudflare has a large number of PoPs and is increasing them rapidly. If the service is distributed to them all, then the authoritative server is likely to give a response similar to the one it would have provided if the client subnet had been explicitly included, since the Cloudflare PoP sending the request will be located, network-wise, close to the client that originally made the request. This isn't always going to be true, but the slightly higher odds that you will not connect to the optimal location for the service you are connecting to are probably worth the increase in privacy.
That was a lie. It was a commenter on an article.
http://www.cbc.ca/news/business/canada-revenue-kpmg-secret-a...
> Network operators have been licking their chops for some time over the idea of taking their users' browsing data and finding a way to monetize it.
The "1.1.1.1 stops ISPs/Starbucks from selling your browsing history" pitch is untrue and, given Cloudflare's expertise, seems disingenuous.
HTTPS transmits domain names unencrypted in the TLS handshake, to support SNI. So even if DNS lookups are completely hidden, my ISP can still log all the domains I visit by inspecting my HTTPS connections.
And the domain log from my web requests is more valuable than my DNS log. Advertisers and data aggregators can see the true timing and frequency of my browsing history, whereas a DNS log is affected by router/OS/browser lookup caching.
Secondly, the point of an audit is not really about digging into the engineering. So although they will need people who have some idea what DNS is, they don't need experts - this isn't code review. The auditors tend to spend most of their time looking at paperwork and at policy - so e.g. we don't expect auditors to discover a Raspberry Pi configured for packet logging hidden in a patchbay, but we do expect them to find out whether "Delete logs every morning" is merely an ambition, with nobody whose job it is to actually do that, nor anybody whose job it is to check it got done.
Anti-censorship so long as Matthew Prince doesn't have a bad morning.
I run my own DNS-over-TLS resolver at a trusted hosting provider. It upstreams to a selection of roots for which I have reasonable trust. My resolver does DNS-over-TLS, DNS-over-HTTPS, and plain DNS. Multiple listening ports for the secure stuff so that I have something that works for most circumstances.
Cloudflare has no interest in censorship -- the whole reason the Daily Stormer thing was such a big deal was because it's the only time Cloudflare has ever terminated a customer for objectionable content. Be sure to read the blog post to understand: https://blog.cloudflare.com/why-we-terminated-daily-stormer/
(Disclosure: I work for Cloudflare but I'm not in a position to set policy.)
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=55 time=22.0 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=55 time=19.7 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=55 time=17.6 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=55 time=20.2 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=55 time=18.2 ms
^C
--- 1.1.1.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 17.691/19.610/22.080/1.559 ms
[normal@inspiron ~]$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=7.12 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=5.28 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=8.24 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=5.28 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=56 time=4.01 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=56 time=6.37 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5007ms
rtt min/avg/max/mdev = 4.014/6.053/8.240/1.380 ms
Reading a little between the lines here, would you say that at some point we effectively replace the existing DNS resolution graph with something implemented entirely over http? Where features like forwarding and proxying would have more common off the shelf tooling?
I can start to see a picture here that looks to be more about common/shared code, and less about actual features of the underlying protocols.
If it was easy, it would have been done during the TLS 1.3 process, but after a lot of discussion we're down to basically "Here is what people expect 'SNI encryption' would do for them, here's why all the obvious stuff can't achieve that, and here are some ugly, slow things that could work, now what?"
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=56 time=19.145 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=18.927 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=19.258 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=20.000 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=20.428 ms
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=53 time=21.351 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=53 time=18.606 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=53 time=19.451 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=53 time=19.084 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=53 time=18.989 ms
I don't think it's intended to say anything about Google specifically. Keep in mind that there are many other DNS services out there, and some of them are known for being pretty scummy, e.g. replacing NXDOMAIN results with "smart search" / ad pages.
> a.b
> Part a specifies the first byte of the binary address. Part b is interpreted as a 24-bit value that defines the rightmost three bytes of the binary address. This notation is suitable for specifying (outmoded) Class A network addresses.
Also Cloudflare gets vastly more negative opinions that they don't check enough and serve too many unsavory sites so it seems there's no way to win with the HN crowd.
I did not mean that I was worried that CloudFlare's DNS would start blocking sites whose content they disagree with (although that would also be worrisome).
I'm worried that copyright holders might be able to use the Daily Stormer case as a precedent to force CloudFlare to stop offering services to infringing sites.
If they are able to do that, I can also see them attempting to force CloudFlare to remove DNS entries as well.
1.1.1.1 round-trip min/avg/max/stddev = 10.984/12.221/14.909/1.239 ms
8.8.8.8 round-trip min/avg/max/stddev = 11.022/12.702/15.102/1.317 ms
$ ping 0
PING 0 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms
Of course, don't expect this to work universally. A lot of software will try to be clever with input validation, and fail.
Tangentially related: https://fosdem.org/2018/schedule/event/email_address_quiz/
$ cat 127.1.c
#include <stdio.h>
#include <arpa/inet.h>
int main(int argc, char *argv[])
{
struct in_addr addr;
if (inet_aton(argv[1], &addr))
printf("%08x\n", addr.s_addr);
return 0;
}
$ make 127.1 CFLAGS=-Wall
cc -Wall 127.1.c -o 127.1
$ ./127.1 1.1
01000001
$ ./127.1 127.1
0100007f
KPMG's risk department - the lawyers' lawyers - appears to be violently allergic to their customers disclosing any report to outside parties. Based on my experience you can get a copy, but first you and the primary customer need to submit some paperwork. And among the conditions you need to agree with is that you don't redistribute the report or its contents.
Disclosure: I deal with security audits and technical aspects of compliance.
https://yro.slashdot.org/story/18/02/05/1944225/cloudflare-t...
I'm halfway up to newcastle getting ~10ms across the board, 1.1.1.1, 8.8.8.8, and 192.231.203.132.
Of course performance on each is a different matter.
1.1.1.1 is giving the best response times @ 8-11ms.
Internode's is giving decent @ 10-14ms
8.8.8.8 is a bit wonky, sometimes I hit a 10ms route once they cache it, but propagation is very slow and most responses are 140-180ms.
$ ping -c 10 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=60 time=1789.957 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=60 time=19.620 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=60 time=9.372 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=60 time=11.585 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=60 time=20.660 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=60 time=11.808 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=60 time=12.784 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=60 time=11.908 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=60 time=11.373 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=60 time=11.992 ms
--- 1.1.1.1 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 9.372/191.106/1789.957/532.962 ms
$ ping -c 10 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=60 time=1308.156 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=60 time=17.557 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=60 time=13.043 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=60 time=16.217 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=60 time=15.033 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=60 time=15.132 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=60 time=14.157 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=60 time=16.100 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=60 time=15.600 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=60 time=13.837 ms
--- 8.8.8.8 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 13.043/144.483/1308.156/387.893 ms
• My ISP can spoof DNS responses.
• My ISP can sniff DNS requests.
• My ISP can sniff SNI.
• My ISP can look up reverse DNS on the IPs I visit.
DNS over TLS is nice—I just set up Unbound on my router to use 1.1.1.1@853 and 1.0.0.1@853 as forwarding zones. That eliminates the first bullet, at the cost of allowing CloudFlare to track my DNS requests.
I wonder how easy it is to route DNS‐over‐TLS over Tor?
> Saving detailed results to /var/folders/j8/vd7q07z7r_5wt0s2mq00vgn/T/namebench_2018-04-01_1856.csv
> default 18:56:37.001803 +0200 namebench Opening /var/folders/j8/vd7q07z7r_5wt0s2mq00vgn/T/namebench_2018-04-01_1856.html
So a VPS with enough storage plus Unbound and you're pretty much done in regards to "privacy first" and "trust".
CloudFlare Google DNS Quad9 OpenDNS
NewYork 2 msec 1 msec 2 msec 19 msec
Toronto 2 msec 28 msec 17 msec 27 msec
Atlanta 1 msec 2 msec 1 msec 19 msec
Dallas 1 msec 9 msec 1 msec 7 msec
San Francisco 3 msec 21 msec 15 msec 20 msec
London 1 msec 12 msec 1 msec 14 msec
Amsterdam 2 msec 6 msec 1 msec 6 msec
Frankfurt 1 msec 9 msec 2 msec 9 msec
Tokyo 2 msec 2 msec 81 msec 77 msec
Singapore 2 msec 2 msec 1 msec 189 msec
Sydney 1 msec 130 msec 1 msec 165 msec
Very impressive, CloudFlare.
"to audit our code and practices annually and publish a public report confirming we're doing what we said we would."
I run an investment fund (hedge fund) and we are completing our required annual audit (not by KPMG). It is quite thorough: they manually check balances in our bank accounts directly with the bank, they verify balances directly off the blockchain (it's a crypto fund) and have us prove ownership of keys by signing messages, etc. And they do do due diligence (lots of doodoo there) that we are not doing scammy things like the equivalent of having a Raspberry Pi attached to the network. Now this is extremely tough, of course, and they are limited in what they can accomplish there, but the thought does cross their minds. All firms are different, but from what we've seen most auditors do a decently good job most of the time. Their reputation can only be hit so many times before their name is no longer valuable as an auditor's.
What does this mean? I have 8.8.8.8/8.8.4.4 set and they work fine for resolving things on my local network?
I can even connect to things with avahi like `xxyyzz.local`.
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=13.8 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=14.6 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=13.7 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=14.1 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=13.7 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=59 time=15.3 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=46 time=43.5 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=46 time=42.3 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=46 time=43.1 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=46 time=42.0 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=46 time=42.4 ms
What's being studied?
Fun fact: CCNA classes regularly use 1.1.1.1 as a router-id. Really good reason now not to configure it via a loopback address.
Microsoft Windows [Version 10.0.16299.309]
(c) 2017 Microsoft Corporation. All rights reserved.
C:\Users\ram>tracert 1.1.1.1
Tracing route to 1dot1dot1dot1.cloudflare-dns.com [1.1.1.1] over a maximum of 30 hops:
1 6 ms 11 ms 5 ms 192.168.1.1
2 5 ms 5 ms 23 ms 10.4.224.1
3 * * * Request timed out.
4 15 ms 7 ms 10 ms 103.56.229.1
5 * * * Request timed out.
6 45 ms 56 ms 44 ms 115.255.252.225
7 86 ms 84 ms 87 ms 62.216.144.77
8 169 ms 173 ms 175 ms xe-2-0-4.0.cjr01.sin001.flagtel.com [62.216.129.161]
9 174 ms 174 ms 169 ms ge-2-0-0.0.pjr01.hkg005.flagtel.com [85.95.25.41]
10 173 ms 174 ms 170 ms xe-3-2-2.0.ejr04.seo002.flagtel.com [62.216.130.25]
11 171 ms 173 ms 170 ms 1dot1dot1dot1.cloudflare-dns.com [1.1.1.1]
Trace complete.
C:\Users\ram>tracert 8.8.8.8
Tracing route to google-public-dns-a.google.com [8.8.8.8] over a maximum of 30 hops:
1 88 ms 305 ms 98 ms 192.168.1.1
2 13 ms 98 ms 102 ms 10.4.224.1
3 * * * Request timed out.
4 * 16 ms * 10.200.200.1
5 9 ms 3 ms 8 ms 209.85.172.217
6 11 ms 5 ms 9 ms 108.170.251.103
7 40 ms 33 ms 37 ms 209.85.246.164
8 * 90 ms 89 ms 209.85.241.87
9 89 ms 86 ms 89 ms 216.239.51.57
10 * * * Request timed out.
11 * * * Request timed out.
12 * * * Request timed out.
13 * * * Request timed out.
14 * * * Request timed out.
15 * * * Request timed out.
16 * * * Request timed out.
17 * * * Request timed out.
18 * * * Request timed out.
19 87 ms 82 ms 87 ms google-public-dns-a.google.com [8.8.8.8]
Trace complete.
C:\Users\ram>tracert resolver2.opendns.com
Tracing route to resolver2.opendns.com [208.67.220.220] over a maximum of 30 hops:
1 3 ms 7 ms 8 ms 192.168.1.1
2 12 ms 11 ms 41 ms 10.4.224.1
3 * * * Request timed out.
4 21 ms 21 ms 51 ms 103.56.229.1
5 * 62 ms 12 ms 115.248.235.150
6 * 408 ms 65 ms 115.255.252.229
7 43 ms 49 ms 40 ms 14.142.22.201.static-Mumbai.vsnl.net.in [14.142.22.201]
8 * 41 ms 57 ms 172.23.78.237
9 46 ms 32 ms 29 ms 172.19.138.86
10 73 ms 46 ms 42 ms 115.110.234.50.static.Mumbai.vsnl.net.in [115.110.234.50]
11 41 ms 64 ms 44 ms resolver2.opendns.com [208.67.220.220]
Trace complete.
C:\Users\ram>
1) A VPN gives you privacy but this prevents your ISP from even knowing you're using a VPN, correct?
2) This is a change you make to your wifi router, correct?
3) What if you're not on wifi, or you're using public wifi? Is it possible to still benefit from this?
Thanks in advance. I'll wait for my answers off the air :)
I suspect that Cloudflare and Google DNS both have POPs in Dallas, which accounts for the similar numbers to my private resolver. My point is, low latencies to datacenter-located resolver clients is great but the advantage is reduced when consumer internet users have to go across their ISP's long private fiber hauls to get to a POP. Once you're at the exchange point, it doesn't really matter which provider you choose. Go with the one with the least censorship, best security, and most privacy. For me, that's the one I run myself.
Side note: I wish AT&T was better about peering outside of their major transit POPs and better about building smaller POPs in regional hubs. For me, that would be Kansas City. Tons of big ISPs and content providers peer in KC but AT&T skips them all and appears to backhaul all Kansas traffic to DFW before doing any peering.
There are a couple of different approaches. One is DNS-over-TLS. That takes the existing DNS protocol and adds transport layer encryption. Another is DNS-over-HTTPS. It includes security but also all the modern enhancements like supporting other transport layers (e.g., QUIC) and new technologies like server HTTP/2 Server Push. Both DNS-over-TLS and DNS-over-HTTPS are open standards. And, at launch, we've ensured 1.1.1.1 supports both.
We think DNS-over-HTTPS is particularly promising — fast, easier to parse, and encrypted.
2) yup
3) often. You can set it on your computer, but some public WiFi systems will block it.
I meant that DNS requests are parallelized within the browser. Once it loads the initial resource (html), there might be 10 more dependencies it needs at various different URLs under different domain names. It's usually loading all these dependencies that make up the vast majority of the load time on a complex web page.
Those subsequent DNS requests can of course be made in parallel, so if your DNS latency is 20ms then you're adding ~20ms, not 10 x 20ms.
Even then, DNS is probably making up a small fraction of the overall load time. If a complex page is taking, say, 3000ms to load and render, then adding 20-40ms of DNS time is not going to make a perceptible difference.
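A toy illustration of why the added cost is roughly one round trip rather than the sum (hostnames are placeholders):

    import concurrent.futures
    import socket
    import time

    names = ["example.com", "example.org", "example.net", "iana.org"]

    start = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor() as pool:
        addrs = dict(zip(names, pool.map(socket.gethostbyname, names)))
    elapsed_ms = (time.monotonic() - start) * 1000

    print(addrs)
    print(f"{elapsed_ms:.1f} ms total")  # tracks the slowest lookup, not the sum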
I'm not a lawyer, though.
I think it's great if people are running their own DNS. :) But I'm certainly not mad that Cloudflare's offering yet another public alternative. As I said, more choices is better.
This doesn't answer whether or not cloudflare will be able to protect against someone intercepting their traffic and recording dns lookups independently, but that's a problem for any dns provider.
ping 1.1.1.1: ~22ms
ping 8.8.8.8: ~19ms
dig @1.1.1.1: ~45ms
dig @8.8.8.8: ~70ms
Disclaimer: Eyeballed averages over a few samples. A more rigorous test of DNS lookup times would be cool to see.
Disclosure: I work for Cloudflare, but not on DNS.
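In that spirit, here's a stdlib-only sketch that times actual lookups rather than ICMP (a hand-rolled A-record query; no retries or error handling, so strictly illustrative):

    import socket
    import struct
    import time

    def avg_query_ms(server: str, name: str, tries: int = 20) -> float:
        # Minimal DNS packet: header (ID, RD flag, QDCOUNT=1) + QNAME + QTYPE/QCLASS.
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
        query = header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2)
        samples = []
        for _ in range(tries):
            t0 = time.monotonic()
            sock.sendto(query, (server, 53))
            sock.recv(512)
            samples.append((time.monotonic() - t0) * 1000)
        return sum(samples) / len(samples)

    for server in ("1.1.1.1", "8.8.8.8"):
        print(server, f"{avg_query_ms(server, 'example.com'):.1f} ms avg")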
My ISP
Pinging 168.210.2.2 with 32 bytes of data:
Reply from 168.210.2.2: bytes=32 time=1ms TTL=58
Reply from 168.210.2.2: bytes=32 time=1ms TTL=58
Reply from 168.210.2.2: bytes=32 time=1ms TTL=58
Reply from 168.210.2.2: bytes=32 time=1ms TTL=58
Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=18ms TTL=54
Reply from 8.8.8.8: bytes=32 time=18ms TTL=54
Reply from 8.8.8.8: bytes=32 time=18ms TTL=54
Reply from 8.8.8.8: bytes=32 time=18ms TTL=54
CloudFlare
Pinging 1.1.1.1 with 32 bytes of data:
Reply from 1.1.1.1: bytes=32 time=22ms TTL=246
Reply from 1.1.1.1: bytes=32 time=22ms TTL=246
Reply from 1.1.1.1: bytes=32 time=22ms TTL=246
Reply from 1.1.1.1: bytes=32 time=21ms TTL=246
It's also a PITA to change this on each device.
~ ping -c 10 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=1.15 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=1.15 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=64 time=1.06 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=64 time=1.04 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=64 time=1.03 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=64 time=1.01 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=64 time=1.02 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=64 time=1.07 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=64 time=1.00 ms
64 bytes from 1.1.1.1: icmp_seq=10 ttl=64 time=0.848 ms
--- 1.1.1.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9009ms
rtt min/avg/max/mdev = 0.848/1.042/1.153/0.086 ms
~ ping -c 10 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=6.82 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=6.72 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=6.39 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=6.73 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=56 time=6.55 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=56 time=6.14 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=56 time=6.24 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=56 time=6.22 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=56 time=6.19 ms
64 bytes from 8.8.8.8: icmp_seq=10 ttl=56 time=6.30 ms
--- 8.8.8.8 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9011ms
rtt min/avg/max/mdev = 6.149/6.433/6.826/0.248 ms
1) Level3
2) DynGuide
3) UltraDNS
4) OpenDNS
5) Quad9
6) CloudFlare
7) Google
The caveat that a "good amount of servers support the protocol" isn't very clear: how many is a "good amount"? Does that hold true now? Unsupported servers appear to fall back to traditional DNS resolution, per the diagram; is this not the case with the HTTP/TLS implementations?
My only complaint is when you connect to public wifi that requires you to view some captive-portal page: accept the ToS, sign in with your room number, airline wifi, etc. These usually break when you don't use their automatically provided DNS servers, requiring you to remove your preferred DNS entries, wait for the wifi popup to open, do the required thing, and put back your preferred DNS servers. I end up just keeping the defaults, and that's a shame.
Wish there were a good solution. Any tips?
DNSCloak • DNSCrypt DoH client by Sergey Smirnov https://itunes.apple.com/ca/app/dnscloak-dnscrypt-doh-client...
It supports DNSCrypt, DNSSEC and DNS-over-HTTPS, the IAP are for tips :)
It works via running a VPN server on your device.
To change your normal plaintext DNS resolver just tap the circle-i on your WiFi network.
Try browsing to https://[2606:4700:4700::1111] with desktop Safari. (It's a known issue and we're working with Apple to get it fixed.)
brew install dnscrypt-proxy
Change line 25 in /usr/local/etc/dnscrypt-proxy.toml to server_names = ['cloudflare']
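For context, the relevant part of that file looks roughly like this (a sketch of the v2 config; the exact line number and defaults vary by version):

# /usr/local/etc/dnscrypt-proxy.toml
listen_addresses = ['127.0.0.1:53']   # local stub listener your system resolver will point at
server_names = ['cloudflare']         # resolve only via Cloudflare's entry in the built-in server list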
sudo brew services restart dnscrypt-proxy
Then change your DNS server to 127.0.0.1 (run Network pref panel, unlock, Advanced, DNS).

I also have remote access enabled for my family members so I can diagnose and make changes like this directly on their modem.
I’m not insinuating that “joe public” is dumb. He just doesn’t need to care about DNS on his local network, there’s software that handles it for him.
The point would be to keep Cloudflare from being able to track my DNS requests.
> "We committed to never writing the querying IP addresses to disk ..."
A DNS resolver does need to record the querying IP for at least a few moments because, you know, they have to respond to your query.
However, I don't know why they changed that sentence; it could be for other reasons too.
I'm taking exception with Cloudflare's announcement, which makes a pitch to end users that CF can protect your domain history from ISP snooping, then links to a two-minute setup guide for people with "no technical skill". They really can't protect your domain history, and I feel bad for people using this service who have been led to believe otherwise.
AFAIK there is nothing in the TLS 1.3 draft [1] about SNI encryption. There are other draft proposals for SNI encryption that build on top of TLS 1.3 [2]. It's a hard problem and there are no deployed solutions I'm aware of.
[1] https://tools.ietf.org/html/draft-ietf-tls-tls13-28
[2] https://tools.ietf.org/html/draft-ietf-tls-sni-encryption-00
From my vantage point, 1.1.1.1 is inaccessible, while 1.0.0.1 seems to work just fine.
Comments on the blog post blame this on "various reasons" but, at least in my case, this seems to be a Cloudflare issue:
$ ping -c 5 -q 1.0.0.1
PING 1.0.0.1 (1.0.0.1) 56(84) bytes of data.
--- 1.0.0.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 34.955/35.737/37.492/0.936 ms
$ ping -c 5 -q 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
--- 1.1.1.1 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4102ms
$ traceroute 1.0.0.1
traceroute to 1.0.0.1 (1.0.0.1), 30 hops max, 60 byte packets
[...]
3 * * *
4 12.83.79.61 (12.83.79.61) 28.126 ms 28.663 ms 29.110 ms
5 cgcil403igs.ip.att.net (12.122.132.121) 35.854 ms 37.532 ms 37.510 ms
6 ae16.cr7-chi1.ip4.gtt.net (173.241.128.29) 33.997 ms 29.083 ms 29.647 ms
7 xe-0-0-0.cr1-det1.ip4.gtt.net (89.149.128.74) 37.758 ms 35.165 ms 36.620 ms
8 cloudflare-gw.cr0-det1.ip4.gtt.net (69.174.23.26) 36.946 ms 37.343 ms 38.574 ms
9 1dot1dot1dot1.cloudflare-dns.com (1.0.0.1) 38.385 ms 36.621 ms 37.157 ms
$ traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 60 byte packets
[...]
3 * * *
4 12.83.79.61 (12.83.79.61) 30.388 ms 12.83.79.41 (12.83.79.41) 30.601 ms 31.280 ms
5 cgcil403igs.ip.att.net (12.122.132.121) 37.602 ms 37.873 ms 37.808 ms
6 ae16.cr7-chi1.ip4.gtt.net (173.241.128.29) 33.441 ms 29.788 ms 29.678 ms
7 xe-0-0-0.cr1-det1.ip4.gtt.net (89.149.128.74) 35.266 ms 35.124 ms 33.921 ms
8 cloudflare-gw.cr0-det1.ip4.gtt.net (69.174.23.26) 35.294 ms 35.949 ms 35.455 ms
9 * * *
10 * * *
11 * * *
12 *^C
EDIT: I have AT&T-provided CPE that I have to use due to 802.1X. If I log into the device (over HTTP) and use the built-in (web-based) diagnostics tools, I am able to successfully ping 1.1.1.1 from the device itself:
ping successful: icmp seq:0, time=2.364 ms
ping successful: icmp seq:1, time=1.085 ms
ping successful: icmp seq:2, time=1.160 ms
ping successful: icmp seq:3, time=1.245 ms
ping successful: icmp seq:4, time=0.739 ms
These RTTs are way too low, however. The RTT for a ping to the CPE's next-hop/default gateway comes in at, minimum, ~20 ms.

When pinging 1.1.1.1 from my (pfSense-based) router sitting directly behind the modem, however, no replies come back from the modem to the router (confirmed via pcap on the upstream-facing interface).
Thus, it looks like this is an issue with the AT&T CPE (5268AC).
It's worth pointing out that KPMG was Wells Fargo's independent auditor while the bank recently committed fraud on a massive scale by creating more than a million fake deposit accounts and 560,000 credit card applications for customers without their knowledge or approval.[1]
Calling KPMG a "well-respected auditing firm" when they failed to detect over a million fake bank accounts is a joke. See:
https://www.reuters.com/article/wells-fargo-kpmg/lawmakers-q...
[1] https://www.warren.senate.gov/files/documents/2016-10-27_Ltr...
(this is one of the advantages of https vs straight tls)
If I send a request to 1.0.0.1 for a specific RR that I'm 99.9% certain isn't cached (although I didn't check the query logs on the authoritative DNS servers to verify a request actually came in), the response contains the (expected) TTL of 14400.
If I then send the same request to 1.1.1.1, I get a response that is identical except with a TTL of 3591 seconds.
According to the timestamps in my client, the second request was made nine seconds after the first one (3591+9=3600), hence my question: is Cloudflare "overriding" the TTL I explicitly set on this specific RR (14400s) with a different TTL (i.e., 3600s)?
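If anyone wants to reproduce this, comparing the answer TTLs directly is enough; a sketch using example.com as a stand-in for the RR in question:

$ dig +noall +answer example.com @1.0.0.1
$ dig +noall +answer example.com @1.1.1.1
$ sleep 10; dig +noall +answer example.com @1.1.1.1   # on a cache hit, the TTL should have ticked down ~10s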
It's not uncommon to retain logs like that for debugging purposes, abuse prevention purposes, etc, but then to go back later and wipe them or anonymize them.
tracert 1.1.1.1
Tracing route to 1dot1dot1dot1.cloudflare-dns.com [1.1.1.1]
over a maximum of 30 hops:
1 1 ms 1 ms 1 ms 1dot1dot1dot1.cloudflare-dns.com [1.1.1.1]
tracert 1.0.0.1
Tracing route to 1dot1dot1dot1.cloudflare-dns.com [1.0.0.1]
over a maximum of 30 hops:
1 3 ms <1 ms <1 ms 192.168.1.254
2 48 ms 18 ms 34 ms 99-153-196-1.lightspeed.stlsmo.sbcglobal.net [99.153.196.1]
3 19 ms 17 ms 17 ms 64.148.120.125
4 29 ms 24 ms 18 ms 71.144.225.112
5 19 ms 18 ms 18 ms 71.144.224.85
6 19 ms 18 ms 19 ms 12.83.40.161
7 26 ms 27 ms 26 ms cgcil403igs.ip.att.net [12.122.132.121]
8 27 ms 24 ms 28 ms ae16.cr7-chi1.ip4.gtt.net [173.241.128.29]
9 32 ms 31 ms 31 ms xe-0-0-0.cr1-det1.ip4.gtt.net [89.149.128.74]
10 31 ms 31 ms 31 ms cloudflare-gw.cr0-det1.ip4.gtt.net [69.174.23.26]
11 31 ms 31 ms 35 ms 1dot1dot1dot1.cloudflare-dns.com [1.0.0.1]

In a browser, 1.1.1.1 comes back as connection refused. 1.0.0.1 loads.
iMac ~ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=64 time=0.688 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=0.814 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=1.153 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=64 time=0.752 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=64 time=0.755 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=64 time=0.789 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=64 time=0.876 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=64 time=0.869 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=64 time=0.830 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=64 time=1.387 ms
--- 1.1.1.1 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.688/0.891/1.387/0.204 ms
Pinging 8.8.8.8 averages 8ms. CloudFlare must have a POP here in Nashville?

Am I able to buy one for my own website? If so, how? If not, why not? I couldn't even get past the DigiCert cert selection page since a wildcard cert can't have SANs, and a SAN cert can't contain a wildcard. The only thing I haven't tried yet is supplying my own CSR.
Isn't that the entire point of such an audit? To be able to present it to outside third-parties?
For example, Mozilla (per the CA/B requirements) requires audits for root CAs. The CA must provide a link to the audit on the auditor's public web site -- forwarding a copy or hosting it on their own isn't sufficient.
$ ping -c 5 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=60 time=1.606 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=60 time=1.562 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=60 time=1.540 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=60 time=1.574 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=60 time=1.564 ms
--- 1.1.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 1.540/1.569/1.606/0.022 ms
$ ping -c 5 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=57 time=9.068 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=8.923 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=8.974 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=8.916 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=8.931 ms
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 8.916/8.962/9.068/0.057 ms
I.e., if I set everything to use 1.1.1.1, will all my devices know to use the secure protocols, or will it be regular old insecure DNS?
https://gist.github.com/mcmanus/766a9564a51325b6543644983539...
Please, please, please add some basic "features" (like Google does) that will help when troubleshooting resolution!
For example, the following will show the unicast IP address of the server you're hitting when using 8.8.8.8:
$ dig @8.8.8.8 txt o-o.myaddr.l.google.com. +short
Additionally, with one other DNS query, we can get a list of what netblocks are being used (for Google Public DNS) in what datacenters/locations:
$ dig @8.8.8.8 txt locations.publicdns.goog. +short
(This same info, along with a small shell script to format it nicely, is available on their web site [0] as well.)
[1] https://blog.cloudflare.com/announcing-1111/
[2] https://1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=11.6 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=11.2 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=10.8 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=11.1 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=10.9 ms
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=15.0 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=15.9 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=15.0 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=15.1 ms
FTTC, southern EU.
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=10.8 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=11.3 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=10.7 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=56 time=10.9 ms
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=60 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=60 time=11.3 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=60 time=11.1 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=60 time=10.5 ms
Suppose you were a Wells Fargo depositor and a Wells Fargo teller opened a fake account in your name without consulting you. What harm did you suffer?
How massive is this fraud if you measure it in a more useful way than "number of accounts"?
Yep, exactly. Using 1.0.0.1, everything works. Using 1.1.1.1, nothing (ping, DNS, HTTPS) does.
EDIT: See earlier comment; looks like an issue w/ the AT&T-provided CPE (5268AC).
From the article: The only question that remained was when to launch the new service? This is the first consumer product Cloudflare has ever launched, so we wanted to reach a wider audience. At the same time, we're geeks at heart. 1.1.1.1 has 4 1s. So it seemed clear that 4/1 (April 1st) was the date we needed to launch it.
And in the West? Do they protect MRAs and Christians?
I love how their view of political targeting is limited to what the West wants to impose on all countries. Yet the organization "A Voice For Men" was flagged as hate speech for funding the movie The Red Pill (2016), the most censored movie of 2017 in the West. If they haven't identified them as political oppression victims, they don't know much about Free Speech.
I like this. Do the root servers support this too?
1. They looked the other way when 100+ million of public money was laundered out of South Africa.
2. The scheme literally stole money destined to uplift poor rural communities
3. To top it off, a portion of the money was used to write off an extravagant wedding as a business expense.
4. When a junior auditor raised his concerns about the audit he was shut down.
http://amabhungane.co.za/article/2017-06-29-guptaleaks-the-d...
http://amabhungane.co.za/article/2017-06-30-guptaleaks-the-d...
http://amabhungane.co.za/article/2017-11-26-guptaleaks-kpmg-...
6. They put out false reports that were partly used as motivation to get rid of ministers fighting corruption.
https://www.timeslive.co.za/politics/2017-09-15-kpmg-cans-sa...
KPMG was not the only multinational firm complicit in fleecing the South African taxpayer of billions. See:
Mckinsey:
http://amabhungane.co.za/article/2017-09-14-how-mckinsey-and...
SAP: http://amabhungane.co.za/article/2017-07-24-guptaleaks-anoth...
T-systems:
http://amabhungane.co.za/article/2017-11-14-exclusive-gupta-...
The harm to WF shareholders was inflated metrics that overstated the value of the company.
The whole point of KPMG was to validate these types of metrics for shareholders.
It's all plain-text over UDP. This is easily exploited for various purposes: spoofing (DDoS attacks), surveillance (such as by ISPs), hijacking/tampering, censorship, privacy concerns, and so on.
As everything else relies on DNS, the DNS must also be secure.
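This is easy to see for yourself: anyone on the path (or on your LAN segment) can read every query in the clear. Assuming tcpdump is available:

$ sudo tcpdump -n -i any 'udp port 53'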
We have had to supply information to KPMG “IT Auditors” at a client due to some software we wrote.
In most cases the auditors are young grads who have never worked in an actual IT/software dev team. So they have a very naive view and never ask the right questions. If one wanted to hide something, it would be super easy.
Among other things, KPMG issued a later-withdrawn report that was used to undermine the well-respected finance minister, so that a more malleable person could be installed, while also auditing the Guptas during their worst excesses.
Lest we choose to dismiss this as crimes in an insignificant country, KPMG SA has been part of the worldwide group since the 70's, and South Africa's supposedly high auditing standards were a source of national pride.
The story seems to have gone dead after some senior leaders fell on their swords, but six months ago, there was serious talk about the firm being shut down in South Africa.
Are you joking? The fake accounts were set up in order to bilk customers out of money in the form of overdrafts fees and penalties.
"Some customers noticed the deception when they were charged unexpected fees, received credit or debit cards in the mail that they did not request, or started hearing from debt collectors about accounts they did not recognize. But most of the sham accounts went unnoticed, as employees would routinely close them shortly after opening them. Wells has agreed to refund about $2.6 million in fees that may have been inappropriately charged."[1]
It's also probably impossible to quantify the time customers lost having to deal with this. But I think it's safe to say it was significant.
>"How massive is this fraud if you measure it in a more useful way than "number of accounts"
OK, let's use dollar amounts as a metric: $2.6 million in fees, levied against your own customers? And considering Wells Fargo found an additional 1.4 million previously undisclosed fake accounts as recently as August [2], and that the regulatory probe has now widened beyond their retail banking unit and now includes their private wealth division [3], I would say pretty fucking massive.
It's really interesting that you seek to trivialize the scope and severity of a story you seem to know so very little about.
[1] https://www.nytimes.com/2016/09/09/business/dealbook/wells-f...
[2] http://money.cnn.com/2017/08/31/investing/wells-fargo-fake-a...
[3] https://www.barrons.com/articles/federal-probe-expands-to-we...
[user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
"74.125.46.8"
"edns0-client-subnet 92.223.114.166/32"
[user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
"74.125.46.11"
"edns0-client-subnet 176.36.247.0/24"
[user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
"74.125.74.3"
"edns0-client-subnet 94.181.44.185/32"
[user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
"74.125.46.8"
"edns0-client-subnet 92.223.114.166/32"
[user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
"74.125.74.3"
"edns0-client-subnet 94.181.44.185/32"
"Approximately 85,000 of the accounts opened incurred fees, totaling $2 million. Customers' credit scores were also likely hurt by the fake accounts.[43] The bank was able to prevent customers from pursuing legal action as the opening of an account mandated customers enter into private arbitration with the bank."
"The bank paid $110 million to consumers who had accounts opened in their names without permission in March 2017." The money repaid fraudulent fees and paid damages to those affected."[1]
That's 85,000 of what you call "non-existent" fees totaling 2 million dollars. And whether or not those were secondary effects of the fraud is completely immaterial.
It's a rather bizarre position to want to defend a bank that not only defrauded its customers but has also admitted to doing so. But you are entitled to that. What you aren't entitled to however is your own alternative facts.
[1] https://en.wikipedia.org/wiki/Wells_Fargo_account_fraud_scan...
> "The bank was able to prevent customers from pursuing legal action as the opening of an account mandated customers enter into private arbitration with the bank."
That's really not going to work if the customer didn't intend to open the account. The fact that (by your numbers) average damages among those who were damaged at all were up to $23.50 may have had more to do with lack of legal action by customers.
Edit - and to speak more to the topic at hand, there were plenty of people at the firm I worked with who absolutely had the technical expertise to perform such an in depth audit. They are simply engaged when higher levels of assurance are required. What level of scrutiny should your auditors provide your bathroom time monitoring system?
$ ping 1.1.1.1
Pinging 1.1.1.1 with 32 bytes of data:
Reply from 1.1.1.1: bytes=32 time=4ms TTL=60
Reply from 1.1.1.1: bytes=32 time=4ms TTL=60
Reply from 1.1.1.1: bytes=32 time=4ms TTL=60
Reply from 1.1.1.1: bytes=32 time=4ms TTL=60
$ ping 8.8.8.8
Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=27ms TTL=60
Reply from 8.8.8.8: bytes=32 time=27ms TTL=60
Reply from 8.8.8.8: bytes=32 time=27ms TTL=60
Reply from 8.8.8.8: bytes=32 time=28ms TTL=60
Seems that 1.1.1.1 is even faster than my local ISP's primary DNS: $ ping 202.180.64.10
Pinging 202.180.64.10 with 32 bytes of data:
Reply from 202.180.64.10: bytes=32 time=11ms TTL=61
Reply from 202.180.64.10: bytes=32 time=11ms TTL=61
Reply from 202.180.64.10: bytes=32 time=11ms TTL=61
Reply from 202.180.64.10: bytes=32 time=11ms TTL=61
We run a homogeneous architecture -- that is, every machine in our fleet is capable of handling every type of request. The same machines that currently handle 10% of all HTTP requests on the internet, and handle authoritative DNS for our customers, and serve the DNS F root server, are now handling recursive DNS at 1.1.1.1. These machines are not sitting idle. Moreover, this means that all of these services are drawing from the same pool of resources, which is, obviously, enormous. This service will scale easily to any plausible level of demand.
In fact, in this kind of architecture, a little-used service is actually likely to be penalized in terms of performance because it's spread so thin that it loses cache efficiency (for all kinds of caches -- CPU cache, DNS cache, etc.). More load should actually make it faster, as long as there is capacity, and there is a lot of capacity.
Meanwhile, Cloudflare is rapidly adding new locations -- 31 new locations in March alone, bringing the current total to 151. This not only adds capacity for running the service, but reduces the distance to the closest service location.
In the past I worked at Google. I don't know specifically how their DNS resolver works, but my guess is that it is backed by a small set of dedicated containers scheduled via Borg, since that's how Google does things. To be fair, they have way too many services to run them all on every machine. That said, they're pretty good at scheduling more instances as needed to cover load, so they should be fine too.
In all likelihood, what really makes the difference is the design of the storage layer. But I don't know the storage layer details for either Google's or Cloudflare's resolvers so I won't speculate on that.
The "backup" IPv4 address is 1.0.0.1 rather than, say, 1.1.1.2, and why they needed APNIC's help to make this work
In theory you can tell other network providers "Hi, we want you to route this single special address 1.1.1.1 to us" and that would work. But in practice most of them have a rule which says "The smallest routes we care about are a /24" and 1.1.1.1 on its own is a /32. So what gets done about that is you need to route the entire /24 to make this work, and although you can put other services in that /24 if you _really_ want, they will all get routed together, including failover routing and other practices. So, it's usually best to "waste" an entire /24 on a single anycast service. Anycast is not exactly a cheap homebrew thing, so a /24 isn't _that_ much to use up.
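You can check the covering announcement yourself; Team Cymru's whois gateway maps an IP to its origin ASN and BGP prefix (assuming the service is reachable from your network; the leading space inside the quotes is required by their syntax):

$ whois -h whois.cymru.com " -v 1.1.1.1"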
We have a few former KPMG employees. They have many stories to tell, about everything from glass ceilings to harassment.
I don't see any free peering?
8.8.8.8 - ping 7ms dig 14ms
8.8.4.4 - ping 7ms dig 16ms
1.1.1.1 - ping 7ms dig 16ms
1.0.0.1 - ping 6ms dig 15ms
9.9.9.9 - ping 6ms dig 17ms
CF & Google about the same for me. Good to have an alternative in CF though, and certainly a very memorable IP :)

If you've ever been audited for some other reason, you'll know they find lots of things, and then you fix them, and that's "fine". But well, is it fine? Or should we acknowledge that they found lots of things and what those things were, even if you subsequently fixed them? The CA/B says you have several months to hand over your letter after the audit period. Guess what those months are spent doing...
When I try, my browser tells me:
Bad cert ident from 1.1.1.1: dNSName=*.cloudflare-dns.com cloudf: accept? (y or n)
Only a handful of small specialist firms actually just move bits in the UK. Every single UK ISP big enough to advertise on television is signed up to filter traffic and block things for being "illegal" or maybe if Hollywood doesn't like them, or if they have "naughty" words mentioned, or just because somebody slipped. If you're thinking "Not mine" and it runs TV adverts then, oops, nope, you're wrong about that and have had your Internet censored without realising it. I wonder how ISPs got their bad reputation...
And of course, like tests, no audit can prove correctness, only can find flaws.
Does anyone have a better solution for this?
(Also — why no IPv6 DNS?)
You want to protect free speech by taking it away because if you don't then someone might use free speech to take away free speech.
First, speech is not an action that can violate your rights. Sticks and stones, etc. And no, just because communication can help organize your political opposition does not mean the speech itself is violating your rights. Actions and legislation do that.
Second, deciding that some things are allowed and some aren't, and then enforcing those arbitrary decisions through violence by the state, certainly can violate those rights. And it gets easier and goes further every time.
I suppose you think that limited free speech is a thing that can persist. I strongly disagree. The idea of universal free speech exists because any attempt to regulate it leads to the loss of all of it fairly quickly, if not instantly; they only need to win once. It exists to protect opinions that are disliked by most, if not all.
I see your argument is basically that if free speech allows for speech that supports the idea of not allowing free speech, then it will fail. And that may be true. That's why constant vigilance is required even, especially, when they try to use people whose opinions almost everyone hates to justify it. There is no final solution.
Some exec to developer: "Hey John, KPMG wrote to us that they will be here on Friday to do an audit; let's just remove those 10 lines that <do whatever you don't want shown in the audit> until the audit finishes."
I don't want to imply anything about Cloudflare here; it's just a comment about how useful that kind of private audit is generally.
Why is it worth pointing out? Please detail the work you've done in establishing that KPMG had access to the data and willfully ignored it.
If you say "We don't have those logs," and you swear to it and a lawyer puts their name on the filing, it's not like Judge Alsup will start pentesting your company to find the one employee who accidentally has Dropbox pointed at an sftp mount of some production server.
They could make their deployment setup completely automated and publish the tooling to github, and have video evidence of them deploying the same SHA-256 stamped tooling to their data centers. They could expose operational details and transactions on their DNS servers as far as possible without revealing identifiable information. They could have regular physical audits by a constantly rotating set of well known and trusted parties (i.e. EFF, Mozilla).
Any censorship immediately leads to massive censorship even if they don't want to expand it. That's why it has to be stopped at the start; not done at all. Dumb pipe or censorship pipe.
It's not worth the complexity of multiple protocols that do the same thing. And it's not worth making the base system insanely complicated so that the magic 4 letters 'http' can show up.
TLS? Yeah, since the simpler secure DNSes failed, we might as well use that. But let's try to keep http complexity contained.
+ DNS-over-TLS for privacy
Publishing the full source code could help a little bit, but not much; one doesn't know what code is actually running.
Now Cloudflare is providing a very fast and privacy-driven DNS, so to me this is a step up from others (Quad9, OpenDNS being formidable alternatives)
Say you're on a public WiFi and don't want your DNS queries visible on the network: there's also DNS-over-HTTPS (which Cloudflare and a couple of others support), which doesn't use the DNS protocol on the wire and would make a POST request to, say, https://1.1.1.1/.well-known/dns-query instead.
Also with HTTPS, ISPs won't see the full URL, just that a secure connection was made to that domain.
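A minimal sketch of such a query, using the JSON flavor of Cloudflare's endpoint (a GET shown here for brevity; the name/type parameters are just examples):

$ curl -s 'https://cloudflare-dns.com/dns-query?ct=application/dns-json&name=example.com&type=A'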
Edit: in SSH2 the server authentication happens in the first cryptographic message from the server (for obvious efficiency reasons), and thus for doing SNI-style certificate selection there would have to be some plaintext server ID in the first client message; but the security of the protocol does not require that, as long as the in-tunnel authentication is mutual (it is for things like kerberos).
The 2 people that I was in contact with were both competent and experienced. Definitely not "young grads who have never worked in an actual IT/software dev team" as someone claimed elsewhere.
Cloudflare runs from 151 (and growing rapidly) locations worldwide. Without edns-client-subnet, the upstream DNS server will probably respond according to the geolocation of the Cloudflare location you're talking to -- which is probably pretty close to you, and therefore will probably produce a good outcome for you, while largely avoiding the privacy concerns.
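You can watch ECS behavior yourself; dig can attach the option explicitly (198.51.100.0/24 is just a documentation prefix):

$ dig example.com @8.8.8.8 +subnet=198.51.100.0/24
$ dig example.com @1.1.1.1 +subnet=198.51.100.0/24   # since Cloudflare doesn't do ECS, expect no ECS scope echoed back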
64 bytes from 1.1.1.1: icmp_seq=0 ttl=60 time=0.966 ms
Outstanding.
64 bytes from 8.8.8.8: icmp_seq=0 ttl=59 time=25.478 ms
Not so great.
As it happens, an internal memo was "leaked" to the media wherein Prince admitted he pulled the plug on The Daily Stormer because they are "assholes" and admitted that "The Daily Stormer site was bragging on their bulletin boards about how Cloudflare was one of them."[1] These forums are also what served as the area for readers to comment on articles. Ergo, he acknowledged that he knew his statement about the Daily Stormer "team" claiming CloudFlare supported their ideology was a lie.
You also have to go back in time and consider the context in which The Daily Stormer was successively de-platformed. The site had been publishing low-brow racist commentary including jokes about pushing Jews into ovens and referring to Africans as various simian species for years. It was, however, a single article wherein they mocked the woman who died at the Charlottesville, VA conflict between the alt-right and antifa that led to the widespread outrage that resulted in the The Daily Stormer being temporarily kicked off the internet.[2]
At the same time that Cloudflare was banning the Daily Stormer, they were (and still are, AFAIK) providing services to pro-pedophilia and ISIS web sites. The Daily Stormer itself pointed out not only the hypocrisy of this situation but also the risk it created to CloudFlare's continued safe harbor protections.[3]
[0]: https://blog.cloudflare.com/why-we-terminated-daily-stormer/
[1]: https://gizmodo.com/cloudflare-ceo-on-terminating-service-to...
[2]: https://www.independent.co.uk/life-style/gadgets-and-tech/da...
[3]: https://web.archive.org/web/20180401233331/https://dailystor...
No it wouldn’t.
Don't be daft.
While hiring them doesn't prove that Cloudflare's code and practices are sound, it does reduce the risk that they aren't.
We regularly receive government grants, and the best audit experiences I've had were with the small, EU-funded auditors. They have a high level of integrity and technical/financial knowledge. But that is a very specific niche.
$ ping -c 4 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=29.0 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=27.7 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=30.5 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=56 time=28.6 ms
--- 1.1.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 27.731/28.993/30.573/1.028 ms
$ ping -c 4 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=27.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=30.7 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=55 time=28.5 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=55 time=30.6 ms
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 27.772/29.409/30.710/1.280 ms
I'm starting to feel I should change ISPs...

Also, no need to hardcode that address - DHCP will happily serve it up. It also has the hostname metadata.google.internal and the (disfavored for security reasons) bare short hostname metadata.
https://www.bloomberg.com/news/articles/2017-09-22/kpmg-unde...
https://www.telegraph.co.uk/business/2017/09/15/kpmg-south-a...
https://www.reuters.com/article/us-kpmg-safrica/kpmgs-south-...
http://www.bbc.com/news/business-41283462
It's also been extensively covered in the South African media.
https://www.reuters.com/article/us-kpmg-safrica-exclusive/ex...
In fact, people wrote that DNS address on walls just to get around the government's censorship, so you wouldn't be helping the government.
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=7.65 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=8.53 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=10.2 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=8.04 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=7.92 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=59 time=7.85 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=59 time=7.88 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=59 time=7.73 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=59 time=7.73 ms
          ping    dig
1.1.1.1    3.2      4
1.0.0.1    2.9      4
8.8.8.8   36.5     40
8.8.4.4   36.3     42

These are only averages though, and by testing a bit more with uncached domains I found the first hit will take a lot longer with cloudflare than with google.

Cloudflare:
64 bytes from 1.1.1.1: icmp_seq=0 ttl=128 time=2 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=128 time=2 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=128 time=2 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=128 time=9 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=128 time=2 ms
Google:
64 bytes from 8.8.8.8: icmp_seq=0 ttl=54 time=12 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=11 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=54 time=13 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=54 time=45 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=54 time=14 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=54 time=11 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=54 time=34 ms
Quad9:
64 bytes from 9.9.9.9: icmp_seq=0 ttl=53 time=10 ms
64 bytes from 9.9.9.9: icmp_seq=1 ttl=53 time=69 ms
64 bytes from 9.9.9.9: icmp_seq=2 ttl=53 time=14 ms
64 bytes from 9.9.9.9: icmp_seq=3 ttl=53 time=58 ms
64 bytes from 9.9.9.9: icmp_seq=4 ttl=53 time=52 ms
One thing I noticed is that when I first pinged 1.1.1.1 I got 14ms, which then quickly dropped to ~3ms consistently:
64 bytes from 1.1.1.1: icmp_seq=0 ttl=128 time=14 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=128 time=14 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=128 time=2 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=128 time=3 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=128 time=1 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=128 time=4 ms
You might direct your questions at your ISP instead as it appears that someone may be intercepting your DNS requests.
To elaborate a bit: the differences in the (74.125.x.x) IP addresses being returned are somewhat normal and would usually be attributed to simple load balancing (as d33 pointed out). That is, 8.8.8.8 is actually a load balancer with several servers (including 74.125.46.8, 74.125.46.11, and 74.125.74.3) behind it.
The differences seen in the returned "edns0-client-subnet", however, are, well, "interesting".
As you've directed the requests to 8.8.8.8 directly (as opposed to your system's default resolver, whatever that is), the response returned for "edns0-client-subnet" should normally either be your own IP address or a supernet that includes it. (In my case, for example, the value is the static IP address (/32) of my own resolver.) When sending multiple requests such as you have, the "edns0-client-subnet" shouldn't really be changing from one request/response to the next; at the least, the values shouldn't change this much.
The fact that the responses are changing would seem to indicate that Google DNS servers are receiving the requests from different IP addresses when they should, in fact, all be coming from the same IP address (yours). These changes would lead me to suspect that someone (i.e., your ISP) is intercepting your DNS requests and "transparently proxying" them on your behalf.
If your ISP is using CGNAT (and issues you a private IP address) or something similar, that might explain it. Otherwise, I would be suspicious.
gregs-Air:~ greg$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=55 time=51.035 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=55 time=52.024 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=55 time=52.945 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=55 time=77.263 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=55 time=53.427 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=55 time=57.311 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=55 time=192.017 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=55 time=174.206 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=55 time=142.224 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=55 time=288.815 ms
^C
--- 1.1.1.1 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 51.035/114.127/288.815/77.996 ms
gregs-Air:~ greg$ curl ifconfig.co
174.125.4.196
gregs-Air:~ greg$
$ openssl s_client -connect 1.1.1.1:443 </dev/null 2>&1 | openssl x509 -noout -text | grep "CN=\|DNS"
Issuer: C=US, O=DigiCert Inc, CN=DigiCert ECC Secure Server CA
Subject: C=US, ST=CA, L=San Francisco, O=Cloudflare, Inc., CN=*.cloudflare-dns.com
DNS:*.cloudflare-dns.com, IP Address:1.1.1.1, IP Address:1.0.0.1, DNS:cloudflare-dns.com, IP Address:2606:4700:4700:0:0:0:0:1111, IP Address:2606:4700:4700:0:0:0:0:1001
Pinging 1.1.1.1 with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for 1.1.1.1:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=52 time=241.529 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=52 time=318.034 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=52 time=337.291 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=52 time=255.748 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=52 time=247.765 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=52 time=235.611 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=52 time=239.427 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=52 time=247.911 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=52 time=260.911 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=52 time=281.153 ms
64 bytes from 1.1.1.1: icmp_seq=10 ttl=52 time=300.363 ms
64 bytes from 1.1.1.1: icmp_seq=11 ttl=52 time=234.296 ms
Sucks that VDSL2 no longer supports fastpath, not that I could use it on an ADSL line due to bonding anyway :/
*.internal queries can be sent to the local nameserver, for example, while others can be forwarded to the public nameserver.
Minimal unbound.conf example:
forward-zone:
name: "."
forward-addr: 1.1.1.1
forward-zone:
name: "internal"
forward-addr: 10.0.0.1
Unbound also supports DNS-over-TLS, although stubby's implementation is much better. It's usually ideal to forward to a local stubby instance instead.

--- 1.1.1.1 ping statistics ---
26 packets transmitted, 26 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 56.440/62.916/106.933/10.084 ms
--- 8.8.8.8 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 27.454/30.733/33.344/1.456 ms
--- 9.9.9.9 ping statistics ---
13 packets transmitted, 13 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 29.041/35.952/75.558/11.780 ms
Definitely not what I was expecting...
CloudFlare:
$ ping -c 240 -i 0.25 1.1.1.1
...
--- 1.1.1.1 ping statistics ---
240 packets transmitted, 240 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 16.271/17.286/25.105/1.236 ms
Google Public DNS:
$ ping -c 240 -i 0.25 8.8.8.8
...
--- 8.8.8.8 ping statistics ---
240 packets transmitted, 240 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 5.092/10.083/35.949/2.426 ms
OpenDNS:
$ ping -c 240 -i 0.25 208.67.222.222
...
--- 208.67.222.222 ping statistics ---
240 packets transmitted, 240 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 8.596/9.847/25.898/1.788 ms
Level 3:
$ ping -c 240 -i 0.25 4.2.2.2
...
--- 4.2.2.2 ping statistics ---
240 packets transmitted, 240 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 8.479/9.563/18.971/1.336 ms
Comcast's Resolver:
$ ping -c 240 -i 0.25 75.75.75.75
...
--- 75.75.75.75 ping statistics ---
240 packets transmitted, 240 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 8.410/9.717/19.428/1.487 ms
It even looks like OpenDNS and Level 3 are better than Google Public DNS in terms of latency.

Cloudflare also specifically removed that site for a stated reason: that they claimed CF was helping them. That is outside the bounds of the site content itself and is a perfectly fine argument to stop doing business, based on libel and misrepresentation.
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=53 time=188.730 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=53 time=178.453 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=53 time=179.869 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=53 time=177.808 ms
Google:
PING 8.8.8.8 (8.8.8.8): 56 data bytes
Request timeout for icmp_seq 0
64 bytes from 8.8.8.8: icmp_seq=1 ttl=42 time=58.368 ms
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
64 bytes from 8.8.8.8: icmp_seq=5 ttl=42 time=51.636 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=42 time=55.772 ms
Request timeout for icmp_seq 7
64 bytes from 8.8.8.8: icmp_seq=8 ttl=42 time=42.365 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=42 time=45.782 ms
Cloudflare seems more stable here.

As a Comcast@Home subscriber in SF, 1.1.1.1 is approximately 3x as fast as Comcast's own DNS (testing using dig).
$ ping -n 1.1.1.1
round-trip min/avg/max/stddev = 16.696/18.643/22.571/2.056 ms
$ ping -n 8.8.8.8
round-trip min/avg/max/stddev = 38.410/45.663/57.684/8.075 ms
64 bytes from 1.1.1.1: icmp_seq=0 ttl=57 time=17.580 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=18.025 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=17.780 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=18.231 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=17.906 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=18.447 ms
This is from my residential ADSL2 connection in Sydney:
[Bigs-MacBook-Pro-2:~] bigiain% ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=59 time=21.257 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=59 time=25.831 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=59 time=22.231 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=59 time=21.498 ms
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 21.257/22.704/25.831/1.841 ms
[Bigs-MacBook-Pro-2:~] bigiain% ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=59 time=22.481 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=38.814 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=19.923 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=19.911 ms
^C
--- 1.1.1.1 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 19.911/25.282/38.814/7.882 ms
And this is from an ec2 instance is ap-southeast-2: ubuntu@ip-172-31-xx-xx:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=2.24 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=2.27 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=55 time=2.30 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=55 time=2.26 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=55 time=2.31 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=55 time=2.25 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5007ms
rtt min/avg/max/mdev = 2.244/2.274/2.310/0.066 ms
ubuntu@ip-172-31-xx-xx:~$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=55 time=1.03 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=55 time=1.05 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=55 time=1.05 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=55 time=1.01 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=55 time=1.07 ms
^C
--- 1.1.1.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 1.015/1.046/1.076/0.035 ms
> We will be destroying all “raw” DNS data as soon as we have performed statistical analysis on the data flow. We will not be compiling any form of profiles of activity that could be used to identify individuals, and we will ensure that any retained processed data is sufficiently generic that it will not be susceptible to efforts to reconstruct individual profiles. Furthermore, the access to the primary data feed will be strictly limited to the researchers in APNIC Labs, and we will naturally abide by APNIC’s non-disclosure policies.
So it's a 5-year research program, with options to extend. To me, that means they intend to keep DNS data for up to 5 years (or longer) before performing statistical analysis and processing on it. Here is APNIC Labs's privacy policy http://labs.apnic.net/privacy.shtml and APNIC's privacy policy https://www.apnic.net/about-apnic/corporate-documents/docume...
So much for "privacy-first".
--- 1.1.1.1 ping statistics ---
100 packets transmitted, 100 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 10.536/13.084/19.910/3.284 ms
--- 8.8.4.4 ping statistics ---
100 packets transmitted, 100 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 10.931/15.141/32.453/6.498 ms
--- 1.0.0.1 ping statistics ---
100 packets transmitted, 100 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 10.219/16.709/29.498/6.960 ms
--- 9.9.9.9 ping statistics ---
100 packets transmitted, 100 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 10.290/22.336/43.267/10.238 ms
--- 208.67.222.222 ping statistics ---
100 packets transmitted, 100 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 12.985/22.786/46.929/10.036 ms
--- 208.67.220.220 ping statistics ---
100 packets transmitted, 100 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 16.273/27.225/49.783/10.246 ms
--- 8.8.8.8 ping statistics ---
100 packets transmitted, 100 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 10.581/35.527/125.641/33.204 ms
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=59 time=22.806 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=23.321 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=24.379 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=59 time=25.869 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=59 time=24.485 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=59 time=24.165 ms
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=57 time=23.005 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=22.867 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=24.461 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=23.680 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=35.581 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=21.033 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=57 time=41.634 ms
PING 1.1.1.1 (1.1.1.1): 56 data bytes
--- 1.1.1.1 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.335/1.431/1.517/0.053 ms
Do you mean OpenDNS?
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=60 time=5.044 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=60 time=6.447 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=60 time=6.371 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=60 time=6.308 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=60 time=7.317 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=60 time=5.989 ms
Your pings also show the same thing: TTL 128 vs 53. I tried on my laptop and got something similar. traceroute to 1.1.1.1 is 1 hop, which is wrong; 1.0.0.1 shows a few hops.
`dig google.com @1.1.1.1` doesn't work for me.
The standard Comcast black-box router/modem I have has a mean ping of ~9ms, and a min of ~3ms, so yeah, I'd have to agree.
(I get ~28ms to 1.1.1.1.)
The highlight to me is that they not only say they won't collect data that could be used to identify individuals, but seem to realize that even seemingly anonymized data can be traced back to individuals, hence the further claim.
I'm inclined to give APNIC the benefit of the doubt; they're a nonprofit and a fundamental part of the Internet's addressing structure, but it'd be nice to get a bit more detail from them on what they _do_ collect.
dig -4 +short myip.opendns.com a @resolver1.opendns.com
dig -6 +short myip.opendns.com aaaa @resolver1.ipv6-sandbox.opendns.com
dig -4 +short o-o.myaddr.l.google.com txt @8.8.8.8
dig -6 +short o-o.myaddr.l.google.com txt @2001:4860:4860::8888
to get back my IPv4/IPv6 addresses. I'd love for Cloudflare to offer the same, especially if they can do it faster. Does anyone know if they already have something like this?
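Cloudflare appears to have an equivalent; if I'm remembering their documentation right (treat this as unverified), it's a CHAOS-class TXT record:

$ dig +short ch txt whoami.cloudflare @1.1.1.1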
ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=1.36 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=1.32 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=1.34 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=58 time=1.38 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=58 time=1.37 ms
ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=1.33 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=1.38 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=1.35 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=1.36 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=56 time=1.35 ms
This is exactly what I'm seeing with the small amount of testing I'm doing against google to compare vs cloudflare.
Sometimes google will respond in 30ms (cache hit); more often than not it has to do at least a partial lookup (160ms), and sometimes even go further (400ms).
The worst I'm encountering on 1.1.1.1 is around 200ms for a cache miss.
Basically, what it looks like is that google is load balancing my queries and I'm getting poor performance because of it - I'm guessing they simply need to kill some of their capacity to see increased cache hits.
Anecdotally I'm at least seeing better performance out of 1.1.1.1 than my ISP's (internode) which has consistently done better than 8.8.8.8 in the past.
Also anecdotally, my short 1-2 month trial of using systemd-resolved is now coming to a failed conclusion, I suspect I'll be going back to my pdnsd setup because it just works better.
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=58 time=111.781 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=102.982 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=102.206 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=110.135 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=58 time=110.085 ms

$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=58 time=6.886 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=5.475 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=58 time=5.674 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=58 time=5.557 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=58 time=7.066 ms

$ ping 9.9.9.9
PING 9.9.9.9 (9.9.9.9): 56 data bytes
64 bytes from 9.9.9.9: icmp_seq=0 ttl=58 time=5.880 ms
64 bytes from 9.9.9.9: icmp_seq=1 ttl=58 time=5.534 ms
64 bytes from 9.9.9.9: icmp_seq=2 ttl=58 time=5.251 ms
64 bytes from 9.9.9.9: icmp_seq=3 ttl=58 time=5.194 ms
64 bytes from 9.9.9.9: icmp_seq=4 ttl=58 time=5.698 ms
https://medium.com/@nykolas.z/dns-resolvers-performance-comp...
Is there anything better than https://www.dnsoverride.com/ (found via google)?
$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
Request timeout for icmp_seq 5
Request timeout for icmp_seq 6
Request timeout for icmp_seq 7
Request timeout for icmp_seq 8
Request timeout for icmp_seq 9
Request timeout for icmp_seq 10
$ ping 1.0.0.1
PING 1.0.0.1 (1.0.0.1): 56 data bytes
64 bytes from 1.0.0.1: icmp_seq=0 ttl=50 time=167.359 ms
64 bytes from 1.0.0.1: icmp_seq=1 ttl=50 time=165.791 ms
64 bytes from 1.0.0.1: icmp_seq=2 ttl=50 time=165.846 ms
64 bytes from 1.0.0.1: icmp_seq=3 ttl=50 time=166.755 ms
64 bytes from 1.0.0.1: icmp_seq=4 ttl=50 time=166.694 ms
64 bytes from 1.0.0.1: icmp_seq=5 ttl=50 time=166.088 ms
64 bytes from 1.0.0.1: icmp_seq=6 ttl=50 time=166.460 ms
64 bytes from 1.0.0.1: icmp_seq=7 ttl=50 time=166.668 ms
64 bytes from 1.0.0.1: icmp_seq=8 ttl=50 time=166.753 ms
64 bytes from 1.0.0.1: icmp_seq=9 ttl=50 time=165.670 ms
64 bytes from 1.0.0.1: icmp_seq=10 ttl=50 time=166.816 ms
Seems not China friendly :-(

Thankfully I noticed quickly, so I knew what the problem would be.
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=55 time=14.053 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=55 time=12.715 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=55 time=13.615 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=55 time=14.018 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=55 time=12.261 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=55 time=11.428 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=55 time=11.950 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=55 time=13.034 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=55 time=13.679 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=55 time=12.415 ms
64 bytes from 1.1.1.1: icmp_seq=10 ttl=55 time=12.088 ms
PING 1.0.0.1: 64 data bytes
--- 1.0.0.1 ping statistics ---
14 packets transmitted, 14 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 120.784/126.222/128.433/2.036 ms
1.1.1.1 timed out; must be blocked by my ISP.
There are troubleshooting utilities in the CHAOS class, e.g. dig @1.1.1.1 id.server ch txt
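A couple of other classic CHAOS-class names, though support varies by resolver implementation (many refuse to answer these):

$ dig @1.1.1.1 version.bind ch txt +short
$ dig @1.1.1.1 hostname.bind ch txt +short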
--- 1.1.1.1 ping statistics ---
rtt min/avg/max/mdev = 30.507/32.155/36.020/1.419 ms
--- 8.8.8.8 ping statistics ---
rtt min/avg/max/mdev = 19.618/21.572/23.009/0.991 ms
The traceroutes are inconclusive but they kind of look like Google has a POP in Fukuoka and CloudFlare is only in Tokyo.

edit: Namebench was broken for me, but running GRC's DNS Benchmark my ISP's own resolver is the fastest, then comes Google 8.8.8.8, then Level3 4.2.2.[123], then OpenDNS, then NTT, and then finally 1.1.1.1.
[user@v-fed-1 ~]$ dig txt o-o.myaddr.test.l.google.com @8.8.8.8 +short
"173.194.98.4"
"edns0-client-subnet 94.181.44.185/32"
[user@v-fed-1 ~]$ dig txt o-o.myaddr.test.l.google.com @8.8.8.8 +short
"173.194.98.4"
"edns0-client-subnet 94.181.44.185/32"
[user@v-fed-1 ~]$ dig txt o-o.myaddr.test.l.google.com @8.8.8.8 +short
"173.194.98.4"
"edns0-client-subnet 94.181.44.185/32"
[user@v-fed-1 ~]$ dig txt edns-client-sub.net @8.8.8.8 +short
"{'ecs_payload':{'family':'1','optcode':'0x08','cc':'RU','ip':'94.181.44.0','mask':'24','scope':'0'},'ecs':'True','ts':'1522656335.56','recursive':{'cc':'FI','srcip':'74.125.74.4','sport':'40964'}}"
[user@v-fed-1 ~]$ dig txt edns-client-sub.net @8.8.8.8 +short
"{'ecs_payload':{'family':'1','optcode':'0x08','cc':'RU','ip':'94.181.44.0','mask':'24','scope':'0'},'ecs':'True','ts':'1522656336.4','recursive':{'cc':'US','srcip':'74.125.46.4','sport':'51510'}}"
[user@v-fed-1 ~]$ dig txt edns-client-sub.net @8.8.8.8 +short
"{'ecs_payload':{'family':'1','optcode':'0x08','cc':'RU','ip':'94.181.44.0','mask':'24','scope':'0'},'ecs':'True','ts':'1522656337.96','recursive':{'cc':'US','srcip':'74.125.46.4','sport':'54992'}}"
127.1 is a DNS-over-HTTPS proxy.
[user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @127.1 +short
"173.194.98.11"
"edns0-client-subnet 94.181.44.0/24"
[user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @127.1 +short
"173.194.98.11"
"edns0-client-subnet 94.181.44.0/24"
[user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @127.1 +short
"173.194.98.6"
"edns0-client-subnet 193.151.48.130/32
Same story from another (business) connection.
[user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
"74.125.74.3"
"edns0-client-subnet 37.113.134.30/32"
[user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
"74.125.46.4"
"edns0-client-subnet 85.29.165.14/32"
[user@v-fed-1 ~]$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short
"173.194.98.13"
"edns0-client-subnet 77.234.25.49/32"
Query times and reachability from 58 locations: 3 locations still can't reach 1.1.1.1, but for most users the cached response is faster from Cloudflare.
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=57 time=5.531 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=4.420 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=5.450 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=5.438 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=4.231 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=5.933 ms
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=57 time=6.440 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=4.574 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=4.684 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=4.992 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=5.942 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=5.955 ms
This means you'll have a really hard time trying to get rid of SNI system-wide, given how many minor apps make their own HTTPS connections (granted, on Android or iOS they probably use a common API, but not on a desktop computer).
== CloudFlare ==
Ping statistics for 1.1.1.1:
Minimum = 10ms, Maximum = 10ms, Average = 10ms
Ping statistics for 2606:4700:4700::1111:
Minimum = 40ms, Maximum = 40ms, Average = 40ms
== OpenDNS ==
Ping statistics for 208.67.222.222:
Minimum = 38ms, Maximum = 38ms, Average = 38ms
Ping statistics for 2620:0:ccc::2:
Minimum = 34ms, Maximum = 34ms, Average = 34ms
I would be interested to hear from Google how much ping traffic 8.8.8.8 gets ...
I know that I will quickly ping 8.8.8.8 as a very quick and dirty test of whether the network is up ... it's just faster to type than any other address I could test with.
Have a ton of respect for David Ulevitch and the whole OpenDNS team. While OpenDNS started with an ad-supported business model, they've completely pivoted away from that. Now that they're part of Cisco, I believe their nearly exclusive revenue stream today is their Umbrella product which is a network security product aimed at businesses. While I don't know for sure, I'd be highly surprised if OpenDNS were selling browsing data.
I think there's a good way to put this to the test: establish a DNS "mixer" that randomly directs each DNS request to 1.1.1.1, 8.8.8.8, or whatever else, and let the public have access to it.
In this way, Cloudflare would bear some small expense from processing these DNS requests (essentially zero) but would receive no information about the original requestor.
It would be interesting to run this experiment, see some real traffic on the DNS mixer ... and then see how Cloudflare responds.
Would they block the mixer?
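For what it's worth, you wouldn't have to write much to prototype that mixer; PowerDNS's dnsdist can already spread queries randomly across upstreams. A sketch (the function names are dnsdist's own; the deployment details are assumptions):
$ cat > mixer.conf <<'EOF'
newServer("1.1.1.1")      -- Cloudflare upstream
newServer("8.8.8.8")      -- Google upstream
setServerPolicy(wrandom)  -- pick an upstream at (weighted) random per query
setLocal("0.0.0.0:53")    -- accept queries from the public
EOF
$ dnsdist -C mixer.conf --supervised
Either upstream would then see the mixer's egress address rather than the original client, which is the point of the experiment.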
A VPN gives you little protection against browser fingerprinting, which may alone leak enough information about you to identify you. Also privacy-by-policy is in no way near privacy-by-design. If you want privacy, use the Tor Browser.
Lots of network hardware (routers, and firewalls if they're not outright blocking it) de-prioritises ICMP (and other types of network control/testing traffic), and the likelihood is that Google (and other free DNS providers) throttle the number of ICMP replies they send.
They're not providing an ICMP reply service, they're providing a DNS service. I had a situation during the week where I had to tell one of our engineers to stop tracking 8.8.8.8 as an indicator of network availability for this reason.
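If you need a liveness probe, test the service they actually provide rather than ICMP. A sketch (the domain and timeouts are arbitrary):
$ dig @8.8.8.8 example.com +time=1 +tries=1 +short || echo 'resolver check failed'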
$ host 1.0.0.1
1.0.0.1.in-addr.arpa domain name pointer 1dot1dot1dot1.cloudflare-dns.com.
$ host 2606:4700:4700::1001
1.0.0.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.7.4.0.0.7.4.6.0.6.2.ip6.arpa domain name pointer 1dot1dot1dot1.cloudflare-dns.com.
I would've expected these to return 1dot0dot0dot1.cloudflare-dns.com.
You are correct that you can do this if you spend one round trip first to set up the channel, and both of the proposals for encrypting SNI in that draft do pay a round trip. Which is why I said they're slow and ugly. And as you noticed, SSH2 and rdesktop do not, in fact, spend an extra round trip to buy this capability; they just go without.
In short, there's really no good solution here, but an amendment to TLS could conceivably make it impossible to narrow down which of the sites hosted at an IP address the user was visiting. That could actually be good enough for traffic to e.g. Cloudflare.
But hey, they say their product is legitimate, so it must be true.
Pinging 1.1.1.1 with 32 bytes of data:
Reply from 1.1.1.1: bytes=32 time=1ms TTL=128
Reply from 1.1.1.1: bytes=32 time=1ms TTL=128
Reply from 1.1.1.1: bytes=32 time=1ms TTL=128
Reply from 1.1.1.1: bytes=32 time=2ms TTL=128
Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=91ms TTL=37
Request timed out.
Reply from 8.8.8.8: bytes=32 time=66ms TTL=37
Request timed out.
Pinging 1.0.0.1 with 32 bytes of data:
Reply from 1.0.0.1: bytes=32 time=146ms TTL=50
Reply from 1.0.0.1: bytes=32 time=144ms TTL=50
Reply from 1.0.0.1: bytes=32 time=142ms TTL=50
Reply from 1.0.0.1: bytes=32 time=140ms TTL=50
The o-o.myaddr.l.google.com domain is a feature of Google's authoritative name servers (ns[1-4].google.com), not of 8.8.8.8. You can send similar queries through 1.1.1.1, where you will see that no EDNS Client Subnet data is provided. That improves the privacy of your DNS but can return less accurate answers, since Google's authoritative servers don't see your IP subnet, only the IP address of the CloudFlare resolver forwarding your query.
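You can see the difference directly by repeating the query from earlier in the thread against both resolvers (a sketch; the exact TXT output varies):
$ dig txt o-o.myaddr.l.google.com @8.8.8.8 +short   # second TXT record carries your subnet via ECS
$ dig txt o-o.myaddr.l.google.com @1.1.1.1 +short   # no edns0-client-subnet record, only the resolver's egress IP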
Your upstream diagnosis seems to suggest otherwise, but perhaps you have an issue with pfBlockerNG? If you're using pfSense with pfBlockerNG + DNSBL IP rules, it populates empty firewall alias files with 1.1.1.1, an address that was falsely assumed to be unused.
Review your aliases and pfBlockerNG alerts. If you see it dropped there, disable the firewall rule option on DNSBL, see screenshot [0]
Additional brief discussion on reddit [1] with comments from the pfBlockerNG author.
[0] https://i.imgur.com/u5q5SP2.png
[1] https://www.reddit.com/r/PFSENSE/comments/88wg6g/issue_with_...
As always, too easy to be misunderstood in comments like these.
Any idea why my ISP redirects this IP?
Edit: Assuming this is the right file: https://github.com/iputils/iputils/blob/master/ping.c, I don't see the reverse-lookup code anywhere. But then, I'm not the most proficient at reading Linux code.
traceroute to 1.1.1.1 (1.1.1.1), 64 hops max, 52 byte packets
1 1dot1dot1dot1.cloudflare-dns.com (1.1.1.1) 1.117 ms 0.710 ms 0.727 ms
//1.1.1.1
It's one more character than a suffix, but as a prefix it's a bit clearer. I've known companies to post LAN hostnames that way, and in written/printed materials it stands out pretty clearly as an address to type.
It also follows the URL standards (no scheme implies the current or default scheme). Many auto-linking tools (such as Markdown or Word) recognize it by default (though the results are sometimes unpredictable, given scheme assumptions). It's also increasingly the recommendation for HTML resources where you want to ensure same-scheme requests; a good example is cross-server/CDN CSS and JS links, which are now typically written as //css-host.example.com/some/css/file.css.
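For instance, in an HTML page (hostname illustrative):
<!-- scheme-relative: inherits http or https from the embedding page -->
<link rel="stylesheet" href="//css-host.example.com/some/css/file.css">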
Verizon's (my ISP's) resolver is still the fastest, but I switched to 1.1.1.1 for the perceived privacy benefit. The speed difference between Verizon, Google, and Cloudflare wouldn't be noticeable for me anyway.
Anyway, there is a Connection > Local IP Network section, but there are no DNS settings anywhere.
>because the Linux project isn't dedicated to auditing the Linux project.
Huh? Code review? Testing? The entire point of open source, especially w.r.t. security, is to have millions of eyes on the source. Heck, with the entire world able to audit and review the source code, people still find bugs that were introduced decades ago.
>It's like calling a home security system pointless if it doesn't detect any forced entries.
I'm afraid that didn't make much sense to me.
Anyway, why are we focusing on irrelevant minutiae of language? I simply asked a commenter to show the work they've done as the basis for their opinion.
However, having a business relationship with another organization is not a right. Hate speakers are not a protected class.
DNS does not operate in the same manner nor with the same assumptions. One can obviously run their own DNS resolver as has been pointed out repeatedly in this thread.
Please list the "pro-pedophilia and ISIS web sites" hosted by Cloudflare.
Edit: There's probably a business opportunity for a registrar/DNS provider/host that operates under 'free speech purism,' though it's hard to say it won't go the way of usenet in that regard.
Cloudflare:
Reply from 1.0.0.1: bytes=32 time=119ms TTL=56
Reply from 1.0.0.1: bytes=32 time=74ms TTL=56
Reply from 1.0.0.1: bytes=32 time=74ms TTL=56
Reply from 1.0.0.1: bytes=32 time=74ms TTL=56
Reply from 1.0.0.1: bytes=32 time=74ms TTL=56
GoogleDNS:
Reply from 8.8.8.8: bytes=32 time=44ms TTL=55
Reply from 8.8.8.8: bytes=32 time=43ms TTL=55
Reply from 8.8.8.8: bytes=32 time=43ms TTL=55
Reply from 8.8.8.8: bytes=32 time=43ms TTL=55
Reply from 8.8.8.8: bytes=32 time=44ms TTL=55
That's irrelevant when we are talking about a company being paid specifically to audit something. The entire world is able to send me food as well, but I don't get mad when none arrives, except when I've paid someone to deliver it.
>I simply asked a commentor to show the work they've done
And it was a dumb question. An auditing company that failed to detect massive fraud either willfully ignored it to sell out or was too incompetent to recognize it.
People can complain and ask for information that I can't provide. That's your right.
I have the same responsibility to provide proof as you do to believe me, even if I provided "proof".
Bother someone else.
Linux is developed almost exclusively by people who get paid for their work. Billions of dollars of real money have been poured in by IBM, Intel, Red Hat, etc. You are thoroughly confused, my friend. Let's stick with the original point.
> An auditing company that failed to detect massive fraud either willfully ignored it to sell out or was too incompetent to recognize it.
So explain how they audited the firm, which data they had access to, and how they were incompetent.
You can't define your way out of providing evidence. "An auditor does X; they couldn't do X; therefore they were incompetent" is schoolyard logic, and it doesn't work. People will ask you to back up your opinion. It's completely fine to say "I don't know"...
What aren't you getting? Developing is not auditing. KPMG wasn't paid to do banking, they were just paid to audit.
>So explain how they audited the firm, explain which data they had access to and how they were incompetent
As an auditing firm, you either demand enough data to do a real audit or you walk away from the deal. So either they didn't have enough data, or they were sell-outs rubber-stamping it. That's just how auditing works.
>People will ask you to backup your opinion. Its completely fine to say I don't know...
It's not an opinion. It's literally what they are paid to do. If I pay for a hamburger and someone just gives me a pile of sand, any bystander can tell that the seller didn't do their job.
If you want more evidence of KPMG incompetence, check out this: https://seekingalpha.com/news/3344058-ge-urged-proxy-advisor...
Using 1.0.0.1 works.
Pinging 1.1.1.1 with 32 bytes of data:
Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
Reply from 1.1.1.1: bytes=32 time=45ms TTL=53
Ping statistics for 1.1.1.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 45ms, Maximum = 45ms, Average = 45ms
Pinging 1.0.0.1 with 32 bytes of data:
Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
Reply from 1.0.0.1: bytes=32 time=46ms TTL=54
Ping statistics for 1.0.0.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 46ms, Maximum = 46ms, Average = 46ms
Pinging 8.8.4.4 with 32 bytes of data:
Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
Reply from 8.8.4.4: bytes=32 time=29ms TTL=56
Ping statistics for 8.8.4.4:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 29ms, Maximum = 29ms, Average = 29ms
Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
Reply from 8.8.8.8: bytes=32 time=21ms TTL=56
Ping statistics for 8.8.8.8:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 21ms, Maximum = 21ms, Average = 21ms
Pinging 208.67.220.220 with 32 bytes of data:
Reply from 208.67.220.220: bytes=32 time=45ms TTL=54
Reply from 208.67.220.220: bytes=32 time=46ms TTL=54
Reply from 208.67.220.220: bytes=32 time=45ms TTL=54
Reply from 208.67.220.220: bytes=32 time=50ms TTL=54
Ping statistics for 208.67.220.220:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 45ms, Maximum = 50ms, Average = 46ms
Pinging 208.67.222.222 with 32 bytes of data:
Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
Reply from 208.67.222.222: bytes=32 time=61ms TTL=54
Ping statistics for 208.67.222.222:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 61ms, Maximum = 61ms, Average = 61ms
It's in the linked archived DS article and I confirmed the information is still true.