
379 points mobeigi | 2 comments
DanielHB ◴[] No.41869510[source]
I want to share a story on a somewhat related topic:

anti web-scraping techniques

The most devious version of this I've ever seen left me baffled, astonished, and completely helpless:

The website I was trying to scrape generated a new font (as in a .woff file) on every request. In that font, the letters' positions were randomly shuffled (for example, the 'J' glyph would sit in the 'F' slot of the .woff, and so on), and the text produced by the website was encoded to match that specific font.

So every time you loaded the website you got a completely different font and completely different text, but to the user the text looked fine because the font mapped it back to the original characters. If you tried to copy and paste the text from the website, you would get random garbled text.

The only way I could think of to scrape it would have been to OCR glyphs rendered from the .woff files, but the sheer processing cost of OCR would make mass scraping prohibitively expensive.
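The scheme described above can be sketched in a few lines (a minimal illustration of the idea only; all names are hypothetical, and the actual .woff generation, where the inverse mapping lives in the glyph shapes, is omitted):

```python
import random
import string

def make_scrambled_mapping(rng):
    # Random permutation of the letters: the codepoint sent in the HTML
    # differs from the character the reader perceives, because the
    # per-request font draws the original glyph at the substituted slot.
    chars = list(string.ascii_letters)
    shuffled = chars[:]
    rng.shuffle(shuffled)
    return dict(zip(chars, shuffled))

def encode_text(text, mapping):
    # Server side: substitute codepoints before the page is sent.
    return "".join(mapping.get(c, c) for c in text)

def decode_text(garbled, mapping):
    # What a scraper lacks: the inverse table, which is only recoverable
    # from the glyph outlines in that request's font (hence the OCR idea).
    inverse = {v: k for k, v in mapping.items()}
    return "".join(inverse.get(c, c) for c in garbled)

mapping = make_scrambled_mapping(random.Random(0))
garbled = encode_text("Secret text", mapping)
assert decode_text(garbled, mapping) == "Secret text"
```

Copy-pasting the page yields the garbled string; only the browser, rendering with that request's font, shows the original text.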

replies(7): >>41869674 #>>41869684 #>>41869775 #>>41869796 #>>41869877 #>>41870330 #>>41871277 #
1. wildpeaks ◴[] No.41869877[source]
A downside is that it makes the site unusable for screen readers and hurts SEO. It also adds backend cost (compared to a plain backend serving static files) if the font is generated dynamically, although one could pre-generate a batch of variants and randomly pick one at runtime (which could even be handled by the load balancer) to minimize that cost.
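The pre-generation idea might look like this minimal sketch (names hypothetical; emitting the matching .woff per variant is omitted): build a fixed pool of mappings once at deploy time, then pick one cheaply per request.

```python
import random
import string

def build_mapping(rng):
    # One variant = one shuffled letter table. A real deployment would
    # also pre-generate the matching .woff file for each variant.
    chars = list(string.ascii_letters)
    shuffled = chars[:]
    rng.shuffle(shuffled)
    return dict(zip(chars, shuffled))

# Pre-generate a fixed pool once, instead of one new font per request.
POOL = [build_mapping(random.Random(seed)) for seed in range(50)]

def serve_page(text):
    # Hot path: pick a pre-built variant at random; no font generation.
    mapping = random.choice(POOL)
    return "".join(mapping.get(c, c) for c in text)
```

One trade-off of a finite pool: a determined scraper could fingerprint each served font (e.g. by hashing the file) and pay the OCR cost once per variant rather than once per request.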
replies(1): >>41870860 #
2. ksp-atlas ◴[] No.41870860[source]
Yeah, my immediate thought was that this would be bad for screen readers, plus OCR could easily defeat it.