
382 points by DamonHD | 1 comment
lynndotpy No.43697899
> Years ago it would've required a supercomputer and a PhD to do this stuff

This isn't actually true. You could do this 20 years ago on a consumer laptop, and you don't need the information you get for free from text moving under a filter either.

What you need is the ability to reproduce the conditions under which the image was generated and pixelated/blurred. If a pixelated block only encompasses, say, 4 characters, then you only need to search over combinations of those 4 characters first. Then you can proceed to the next few characters represented under the next pixelated block.

You can think of pixelation as a bad hash which is very easy to find a preimage for.

No motion necessary. No AI necessary. No machine learning necessary.

The hard part is recreating the environment, though; AI just means you can skip that effort and know-how.
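To make it concrete, here is a minimal sketch of that search in Python with Pillow. Everything in it (the font, the render size, the block size, the alphabet, the file name) is an assumption standing in for the real environment you would have to recreate:

    import itertools
    from PIL import Image, ImageDraw, ImageFont

    # All of these are assumptions you'd have to get right for the real image:
    FONT = ImageFont.truetype("DejaVuSans.ttf", 24)   # assumed font and size
    BLOCK = 16                                        # assumed pixelation block size
    CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789"  # assumed alphabet

    def render(text, size=(96, 32)):
        # Render text the way the original image was (presumably) rendered.
        img = Image.new("L", size, color=255)
        ImageDraw.Draw(img).text((0, 0), text, font=FONT, fill=0)
        return img

    def pixelate(img, block=BLOCK):
        # Average each block x block square: this is the "bad hash".
        small = img.resize((img.width // block, img.height // block),
                           Image.Resampling.BOX)
        return small.resize(img.size, Image.Resampling.NEAREST)

    def distance(a, b):
        # Sum of squared pixel differences between two same-size "L" images.
        return sum((p - q) ** 2 for p, q in zip(a.tobytes(), b.tobytes()))

    def crack_block(target, n_chars=4):
        # Exhaustively try every n_chars-long candidate and keep the one
        # whose pixelation is closest to the target block.
        return min(("".join(c) for c in itertools.product(CHARSET, repeat=n_chars)),
                   key=lambda s: distance(pixelate(render(s)), target))

    # target = Image.open("redacted_block.png").convert("L")  # hypothetical crop
    # print(crack_block(target))

With a 36-character alphabet, 4 characters is about 1.7 million candidates, slow in pure Python but entirely feasible on a laptop. And once a block is cracked, each subsequent block only adds a few unknown characters, so the search stays small.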

replies(4): >>43697947 #>>43698101 #>>43698597 #>>43698629 #
nartho No.43698597
Noob here, can you elaborate on this? If you take, for example, a square of 25px and change the value of each individual pixel to the average color of the group, most of the data is lost, no? If the group of pixels is big enough, can you still undo it?
replies(6): >>43698743 #>>43698999 #>>43699022 #>>43699023 #>>43699026 #>>43711797 #
DougMerritt No.43698743
It's not that you're utterly wrong; some transformations are irreversible, or close to it. Multiplying each pixel's value by 0, assuming the result is exactly 0, is a particularly clear example.

But others are reversible because the information is not lost.

The details vary per transformation; sometimes reversal depends on the transformation having been implemented imperfectly. Other times the data is merely moved around, or scaled by some reversible multiplicative factor. And so on.
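A toy sketch of that distinction, with made-up numbers standing in for pixel values:

    import itertools

    vals = (3, 7, 2, 9)                    # four "pixels" in a block

    zeroed = [v * 0 for v in vals]         # irreversible: every block maps to [0,0,0,0]
    scaled = [v * 4 for v in vals]         # reversible: divide by 4 to undo exactly

    # Block averaging is many-to-one, so it can't be inverted directly...
    avg = sum(vals) // len(vals)
    # ...but the preimage set is small enough to enumerate and search:
    candidates = [c for c in itertools.product(range(10), repeat=4)
                  if sum(c) // 4 == avg]
    print(len(candidates), vals in candidates)   # the true block is among them

Averaging is not invertible as a function, but as the parent comment says, the preimage set can be small enough to enumerate, and extra constraints (here, knowing the values are digits; there, knowing the pixels render real characters) narrow it further.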