> A compression algorithm can then remove high-frequency information, which corresponds to small details, without drastically changing how the image looks to the human eye.
I slightly object to this. Removing small details = blurring the image, which is actually quite noticeable.
For some reason everyone really wants to assume this is true, so for the longest time people would invent new codecs that were prone to exactly this blurriness (in particular wavelet-based ones like JPEG 2000 and Dirac), and then nobody would use them because they were blurry. I think this happens because it's easy to give up on actually looking at the results of your work and to lean on a statistic like PSNR instead, and PSNR turns out to be easy to cheat.
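The PSNR-cheating point can be sketched numerically. A hedged toy example (the two "codecs" below are hypothetical, and the texture is synthetic noise): a codec that blurs fine texture away entirely scores a *higher* PSNR than one that preserves the texture but misaligns it by a single pixel, even though the second result looks far closer to the original.

```python
import numpy as np

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
# Flat mid-gray image plus fine-grained high-frequency texture.
texture = 0.1 * rng.standard_normal((256, 256))
img = 0.5 + texture

# Hypothetical "codec" A: blur the texture away, keeping only the mean.
blurred = np.full_like(img, 0.5)

# Hypothetical "codec" B: keep the texture but shift it one pixel sideways.
# Visually near-identical, yet pixelwise uncorrelated with the original,
# so its squared error is roughly double that of the blurred version.
shifted = 0.5 + np.roll(texture, 1, axis=1)

print(psnr(img, blurred))  # the blurry image wins on PSNR
print(psnr(img, shifted))  # the sharp-but-shifted image loses
```

With uncorrelated noise texture of variance v, the blur's MSE is about v while the one-pixel shift's is about 2v, so the blur wins by roughly 3 dB; which is exactly the failure mode that makes PSNR-optimized codecs drift toward blur.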