Cameras should have just rotated the actual image pixels when saving, instead of cheating. If that's too slow, implement it in hardware, or schedule a deferred process and don't let the images be exported until that's done.
No, it was an elegant hack given all the constraints which mostly no longer exist on modern hardware (although I wouldn't be so sure about really small embedded systems).
Sure, modern cameras will have no issue loading the full JPEG into memory, but how would you have implemented this in cameras that only had enough memory for exactly one line's worth of compression blocks?
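To illustrate the constraint: a 90-degree rotation maps input columns to output rows, so the very first output row depends on a pixel from every input row. A rough sketch (plain Python over a row-major pixel grid, purely illustrative):

```python
def rotate_90_cw(pixels):
    """Rotate a row-major pixel grid 90 degrees clockwise.

    Output row 0 is built from column 0 of *every* input row,
    so an encoder that buffers only one row of 8x8 compression
    blocks at a time cannot produce it while streaming sensor
    data top to bottom.
    """
    h = len(pixels)
    w = len(pixels[0])
    return [[pixels[h - 1 - r][c] for r in range(h)]
            for c in range(w)]

print(rotate_90_cw([[1, 2],
                    [3, 4]]))  # [[3, 1], [4, 2]]
```

Writing the orientation tag instead needs only a few bytes of metadata, which is exactly why it was the pragmatic choice on that class of hardware.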
> or schedule a deferred process and don't let the images be exported until that's done.
Good luck doing this on a battery-powered camera writing directly to an SD card that's expected to be mountable immediately after removal from the camera, without any opaque post-processing step.
What if I want to rotate an image by 90 degrees because my camera didn't correctly detect up & down?
To my understanding, setting the orientation flag is lossless, whereas rewriting the pixel data incurs quality loss (with certain exceptions, such as jpegtran-style transforms on suitably sized images).
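The lossless exceptions exist because of a symmetry in the DCT that JPEG uses: reversing a block of samples only negates the odd-frequency coefficients, so a flip (and, combined with a transpose, a 90-degree rotation) can be applied directly to the stored coefficients without re-running the lossy quantization step. A rough sketch with a naive unnormalized DCT-II:

```python
import math

def dct(x):
    # Naive (unnormalized) DCT-II, as applied per 8-sample row in JPEG.
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N))
            for k in range(N)]

x = [16, 11, 10, 16, 24, 40, 51, 61]
fwd = dct(x)
rev = dct(x[::-1])  # same samples, flipped

# Flipping the input only flips the sign of odd-indexed coefficients:
assert all(abs(rev[k] - (-1) ** k * fwd[k]) < 1e-9 for k in range(8))
```

This only works cleanly when the image dimensions line up with the block grid, which is where the exceptions come in.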
It's true that automatic handling of all input images is difficult, but imo it's important to document.
An example I recently encountered is that in neurological imaging, the axes are patient's right, anterior, superior, whereas in radiology they are patient's left, anterior, superior. Tricky to get right...
http://www.grahamwideman.com/gw/brain/orientation/orientterm...
I suppose it's no coincidence that the native output format of many sensors (or ISPs, to be precise) is divisible by 16 in both width and height.
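That 16-pixel alignment matters because with the common 4:2:0 chroma subsampling, a JPEG MCU covers 16x16 pixels, and lossless transforms only work in every direction when both dimensions are whole multiples of the MCU. A quick check (the function name and the 4:2:0 assumption are mine, not from any camera spec):

```python
def losslessly_rotatable(width, height, mcu=16):
    """Check whether a JPEG of the given size can be rotated
    losslessly in all directions.

    Assumes 4:2:0 chroma subsampling, where an MCU spans
    16x16 pixels; with no subsampling it would be 8x8.
    """
    return width % mcu == 0 and height % mcu == 0

print(losslessly_rotatable(4032, 3024))  # True: a common 12 MP output size
print(losslessly_rotatable(4000, 3000))  # False: 3000 is not a multiple of 16
```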