
565 points | gaws | 7 comments
1. supernova87a ◴[] No.30067296[source]
Do they always take the image from straight on, front? Is the camera panned around with a narrow FOV so that it doesn't see portions of the canvas from an angle? Does the light source move with the camera?

Or, I'm thinking, it's often really interesting to see the 3D texture of a painting -- is that ever something people want to record? For example, with impressionist / pointillist paintings I've seen in person, looking at the canvas from a very low angle, so you can see the brush strokes, is even more interesting than viewing it directly face-on.

replies(3): >>30067320 #>>30067359 #>>30067694 #
2. nullc ◴[] No.30067320[source]
No idea about these images, but in reproduction work it's not uncommon to shoot with the film plane shifted, so that the camera sees the work piece from an angle (while still rendering a flat, rectilinear perspective) to avoid the reflections that would otherwise be seen flat on.
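The geometry of that shifted-film-plane setup can be sketched with a pinhole model: if you move the camera sideways while keeping the film plane parallel to the artwork, the lens (or sensor) shift needed to re-center the subject follows directly from the projection equation. This is a minimal sketch with made-up numbers, not anything from the thread:

```python
def required_shift_mm(focal_mm, lateral_offset_mm, distance_mm):
    """Sensor/film shift needed to re-center a flat subject when the
    camera is moved sideways by `lateral_offset_mm`, with the film
    plane kept parallel to the artwork (so perspective stays flat).

    Pinhole projection: a point at lateral offset x and distance z
    lands on the image plane at f * x / z from the axis.
    """
    return focal_mm * lateral_offset_mm / distance_mm

# E.g. a 100 mm lens, camera offset 300 mm sideways, artwork 2 m away:
shift = required_shift_mm(100, 300, 2000)  # 15 mm of shift
```

Because the film plane never tilts, straight lines stay straight and the whole canvas stays in focus at one distance, which is why this trick works for reproduction.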
3. wildzzz ◴[] No.30067359[source]
I took a geology class in college where we studied a lot of high-res photos of both individual rocks and larger rock formations. They used some sort of moving bed and a microscope for the rocks, so you could get really close up while also seeing larger features. The formations were photographed using a GigaPan mount that lets you attach a camera with a telephoto lens; it steps along both the X and Y axes to capture hundreds of images that it later stitches into a composite.
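That step-and-stitch capture pattern is easy to sketch: given the subject size, the field of view of one frame, and the overlap you want between neighbouring tiles, you can compute how many rows and columns the mount has to step through. A rough sketch (the function and figures are illustrative, not GigaPan's actual planner):

```python
import math

def tile_grid(subject_w, subject_h, fov_w, fov_h, overlap=0.3):
    """Columns and rows of tiles needed to cover a subject, with each
    tile overlapping its neighbours by `overlap` (fraction, 0-1).
    Overlap is what gives the stitcher features to match on."""
    step_w = fov_w * (1 - overlap)   # effective advance per column
    step_h = fov_h * (1 - overlap)   # effective advance per row
    cols = math.ceil((subject_w - fov_w) / step_w) + 1 if subject_w > fov_w else 1
    rows = math.ceil((subject_h - fov_h) / step_h) + 1 if subject_h > fov_h else 1
    return cols, rows

# A 3 m x 2 m formation shot with a 0.4 m x 0.3 m field of view
# and 30% overlap needs an 11 x 10 grid -- 110 frames.
print(tile_grid(3.0, 2.0, 0.4, 0.3))
```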
4. kingcharles ◴[] No.30067694[source]
This explains all the technical details - it is far more advanced than you would imagine:

https://www.youtube.com/watch?v=z_hm5oX7ZlE

replies(2): >>30067980 #>>30068067 #
5. femto ◴[] No.30067980[source]
The implication is that the image tiles are overlapped, so the edge of one tile falls near the center of another. That might provide enough parallax information to do a 3D reconstruction. I wonder if the raw images and metadata are available?
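The parallax idea follows the standard stereo relation: depth is focal length times baseline over disparity, so a surface feature that sits even slightly proud of the canvas shows a slightly larger disparity between two overlapping tiles. A minimal sketch of the arithmetic (all numbers are invented for illustration; a real reconstruction would also need the tile poses and feature matching):

```python
def depth_from_parallax(focal_px, baseline_m, disparity_px):
    """Classic stereo relation: depth = f * B / d.
    `focal_px`: focal length expressed in pixels; `baseline_m`: camera
    translation between two overlapping tiles; `disparity_px`: apparent
    shift of the same surface feature between the two tiles."""
    return focal_px * baseline_m / disparity_px

# If a brushstroke ridge shows half a pixel more disparity than the
# flat canvas around it, the implied relief is tiny but nonzero:
z_canvas = depth_from_parallax(8000, 0.05, 400.0)  # distance to canvas
z_ridge = depth_from_parallax(8000, 0.05, 400.5)   # ridge is closer
relief = z_canvas - z_ridge                         # on the order of 1 mm
```

The catch is that sub-pixel disparity measurement is noisy, so whether the tile overlap alone carries enough signal for paint-level relief depends on the optics and the capture geometry.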
replies(1): >>30068039 #
6. kingcharles ◴[] No.30068039{3}[source]
It might be enough. Would be a hard algorithm to code though, don't you think?

Do you think they should have just used stereo cameras in the first place?

7. modeless ◴[] No.30068067[source]
Wow, that's a lot of work! And yet it seems a shame not to capture more information than a single RGB color value per pixel. I can think of three other types of information you could capture: 3D information (which they did capture, but only at a coarse level), the BRDF (bidirectional reflectance distribution function), and hyperspectral information. Of these the BRDF seems most important for viewing paintings, and I'm surprised that there's no consideration of it given the thoroughness of the rest of the project.

With the BRDF you would be able to show the painting as it would appear under any lighting condition and from any viewing angle. This would make the painting look much more realistic in a VR rendering, for example. The current stitched photo assumes uniform lighting and a straight-on view, and if you simply mapped it to a texture on a plane in a 3D environment it would look flat and lifeless.
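To make the BRDF point concrete: a renderer evaluates, per pixel, how much light leaves the surface toward the viewer given the light direction. The sketch below uses a simple analytic Lambertian-plus-Blinn-Phong model as a stand-in; with a measured BRDF you would replace the two analytic terms with a lookup over (light, view) direction pairs. All parameter values here are invented for illustration:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, light_dir, view_dir, albedo, specular=0.04, shininess=64):
    """Per-pixel shading with a Lambertian diffuse term plus a
    Blinn-Phong specular lobe -- a crude analytic stand-in for a
    measured BRDF, which would be tabulated over directions instead."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = max(dot(n, l), 0.0)
    half = normalize(tuple(a + b for a, b in zip(l, v)))
    spec = specular * max(dot(n, half), 0.0) ** shininess
    return tuple(c * diffuse + spec for c in albedo)

# Straight-on light and view: full diffuse plus the specular peak.
rgb = shade((0, 0, 1), (0, 0, 1), (0, 0, 1), (0.8, 0.2, 0.1))
# Grazing light from the side: the same pixel goes nearly black,
# which is exactly the behavior a flat stitched photo cannot show.
rgb_grazing = shade((0, 0, 1), (1, 0, 0), (0, 0, 1), (0.8, 0.2, 0.1))
```

Combined with a per-pixel normal map (which the coarse 3D capture could seed), this is what would let a VR viewer rake light across the brushstrokes the way you can in person.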