
550 points | polskibus
black_puppydog No.19117358
I experimented with the Twitter homepage yesterday and found that the transferred HTML, JS, images, etc. when just pressing F5 (with a hot cache!) were considerably larger than a PNG (!) screenshot of the same page.
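One rough way to check a claim like this is to export a HAR capture from the browser's network tab and sum the transferred bytes against the screenshot's file size. A minimal sketch; the HAR fragment and the byte counts below are toy stand-ins for a real capture (Chrome's HAR export records the wire size in the non-standard `_transferSize` field):

```python
import json

def transferred_bytes(har: dict) -> int:
    """Sum bytes sent over the wire for every request in a HAR capture."""
    return sum(
        entry["response"]["_transferSize"]
        for entry in har["log"]["entries"]
    )

# Toy HAR fragment standing in for a real browser export.
har = json.loads("""
{"log": {"entries": [
  {"response": {"_transferSize": 3200000}},
  {"response": {"_transferSize": 1100000}},
  {"response": {"_transferSize": 450000}}
]}}
""")

screenshot_size = 900_000  # hypothetical size of a PNG screenshot, in bytes

total = transferred_bytes(har)
print(total, total > screenshot_size)
```

With these made-up numbers the page transfers roughly five times the screenshot's size, which is the shape of the comparison described above.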

My prediction: the next step will be to move rendering server-side and just send pixels. Then the only thing ad blockers will be able to do (after moving to deep learning or some similarly, ridiculously expensive way of combating this idiocy) is place literal black bars over all the ads.

Now that I've written it down, I'd actually like to have that in uBlock now, so I can see which sites are horribly ad-infested and make my choices accordingly.
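The "literal black bars" idea is simple to sketch: given bounding boxes for detected ad regions (however they were detected), blank those pixels in the rendered frame. A toy version over a nested-list "image", with made-up box coordinates:

```python
def black_out(image, boxes):
    """Overwrite each (x, y, w, h) box with black (0) pixels, in place."""
    for x, y, w, h in boxes:
        for row in image[y:y + h]:
            row[x:x + w] = [0] * w
    return image

# A 4x6 "frame" of white pixels, with one hypothetical ad region
# at (x=1, y=1), 3 pixels wide and 2 pixels tall.
frame = [[255] * 6 for _ in range(4)]
black_out(frame, [(1, 1, 3, 2)])
for row in frame:
    print(row)
```

A real blocker would do the same thing over the browser's compositor output rather than a Python list, but the operation is identical.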

replies(2): >>19117929 #>>19119721 #
evgen No.19119721
Why bother covering the rendered ads when you can just use a bit of machine learning on the client side to recognize the text, pull it back into a decent layout, filter it, and present it back as HTML? When it comes down to it, all websites still need to present something the user's eye and brain can understand, so just intercept before final presentation and treat it as an OCR problem.
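That pipeline (OCR the rendered pixels, classify each recognized block, drop the ads, re-emit HTML) can be sketched around whatever the OCR stage produces. The block format and the `is_ad` verdicts below are both made up for illustration; a real version would sit on something like Tesseract's bounding-box output plus a trained classifier:

```python
from typing import NamedTuple

class Block(NamedTuple):
    text: str
    y: int        # vertical position from the OCR bounding box
    is_ad: bool   # verdict from a hypothetical ad classifier

def rebuild_html(blocks):
    """Drop ad blocks, restore reading order by position, re-emit minimal HTML."""
    kept = sorted((b for b in blocks if not b.is_ad), key=lambda b: b.y)
    return "\n".join(f"<p>{b.text}</p>" for b in kept)

blocks = [
    Block("Buy now!!!", 120, True),
    Block("Article headline", 10, False),
    Block("First paragraph of the story.", 60, False),
]
print(rebuild_html(blocks))
```

The hard parts, of course, are the OCR and the classifier, not the reassembly.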
replies(1): >>19121422 #
black_puppydog No.19121422
I'm not saying it's impossible; I just don't want to tell any dev that this is what they should spend their time on. It's the ultimate late-capitalism job description: fighting the adblocker wars. :(

Personally, when a page gets too annoying to browse with an ad blocker, I just drop the site. The web is (so far) big enough that I don't have to be bored.