arccy
If you're going to host user content on subdomains, then you should probably have your site on the Public Suffix List (https://publicsuffix.org/list/). That should eventually make its way into various services, so they know that a tainted subdomain doesn't taint the entire site...
0xbadcafebee

  In the past, browsers used an algorithm which only denied setting wide-ranging cookies for top-level domains with no dots (e.g. com or org). However, this did not work for top-level domains where only third-level registrations are allowed (e.g. co.uk). In these cases, websites could set a cookie for .co.uk which would be passed onto every website registered under co.uk.

  Since there was and remains no algorithmic method of finding the highest level at which a domain may be registered for a particular top-level domain (the policies differ with each registry), the only method is to create a list. This is the aim of the Public Suffix List.
  
  (https://publicsuffix.org/learn/)
So, once they realized the browsers' cookie-scoping algorithm was inherently flawed, their solution was to maintain a static, hand-curated list of domain suffixes.

God I hate the web. The engineering equivalent of a car made of duct tape.
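For what it's worth, consuming the list is straightforward. A minimal sketch using the third-party tldextract library (pip install tldextract), which bundles a Public Suffix List snapshot; this is not what browsers themselves run:

  import tldextract

  # Under the old "count the dots" rule, a cookie scoped to .co.uk
  # would have been allowed. With the list, co.uk is itself a suffix.
  for host in ("co.uk", "example.co.uk", "user.github.io"):
      parts = tldextract.extract(host)
      # registered_domain is the widest scope a cookie may safely span
      print(f"{host}: suffix={parts.suffix!r} "
            f"registrable={parts.registered_domain!r}")

user.github.io comes back with suffix github.io because GitHub put its user-content domain on the list, which is exactly what the top comment here recommends.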

KronisLV
> Since there was and remains no algorithmic method of finding the highest level at which a domain may be registered for a particular top-level domain

A centralized list like this, covering not just whole registries (e.g. co.uk) but also specific sites (e.g. s3-object-lambda.eu-west-1.amazonaws.com), is kind of crazy: the list will bloat a lot over the years, and it's a security risk for any platform that needs this functionality but would prefer not to leak details of its infrastructure publicly.

We already have the concept of a .well-known directory for talking to a specific site. And subdomains nest, like c.b.a.x, where it's more or less certain that you can't create subdomain b without the involvement of a, so it should be possible to walk the chain.

Example:

  c --> https://b.a.x/.well-known/public-suffix
  b --> https://a.x/.well-known/public-suffix
  a --> https://x/.well-known/public-suffix
Maybe ship the TLD rules with the browsers and leave generic platforms like AWS to describe things themselves. Hell, maybe this could also have been a TXT record in DNS.
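A rough sketch of what that walk could look like. Everything here is hypothetical: the /.well-known/public-suffix path and its newline-separated response format are this comment's proposal, not a registered well-known URI:

  import urllib.request

  def is_public_suffix(hostname: str) -> bool:
      # For c.b.a.x, ask the parent zone b.a.x whether c.b.a.x
      # is a public suffix.
      parent = hostname.split(".", 1)[1]
      url = f"https://{parent}/.well-known/public-suffix"
      try:
          with urllib.request.urlopen(url, timeout=5) as resp:
              declared = resp.read().decode("utf-8", "replace")
              return hostname in declared.splitlines()
      except OSError:
          return False  # no declaration: an ordinary subdomain

The TXT-record variant would be the same walk with a DNS query in place of the HTTPS fetch, and as noted above, the rules for the TLDs themselves would still have to ship with the browser.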
IshKebab
I presume it has to be a curated list, otherwise spammers would use it to evade blocks. If not for that, why not just use DNS?
inopinatus
Whois would be the choice: DNS's less glamorous sibling, purpose-built for the delegated publication of accountability records.
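A minimal sketch of querying it, assuming the standard Unix whois(1) client is on PATH (registry output formats vary wildly, so no parsing is attempted):

  import subprocess

  def whois_record(domain: str) -> str:
      # Shell out to the system whois client; the response is
      # registry-specific free text.
      result = subprocess.run(
          ["whois", domain], capture_output=True, text=True, timeout=30
      )
      return result.stdout

  print(whois_record("co.uk"))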