That said, this image is amazing, and lets you see a lot more detail than you can easily manage at the museum.
It uses hundreds of polarized LED lights and cameras, plus lots of image processing, to separate the lighting effects of specular reflectance (glossy shine) from subsurface scattering (glowing skin), so you can reconstruct the 3D image and relight it under different conditions, environments, and viewing angles.
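The polarization trick can be sketched in a few lines. This is a minimal, idealized model, not Debevec's actual pipeline, and the data is synthetic: specular reflection preserves the polarization of the lights, while subsurface scattering depolarizes it, so comparing a cross-polarized capture with a parallel-polarized one separates the two components.

```python
import numpy as np

# Hypothetical toy data standing in for real captures; the names
# "diffuse_true" and "specular_true" are made up for this sketch.
rng = np.random.default_rng(0)
diffuse_true = rng.uniform(0.2, 0.6, (4, 4))   # subsurface-scattered component
specular_true = rng.uniform(0.0, 0.3, (4, 4))  # glossy surface reflection

# Specular reflection keeps the lights' polarization; depolarized diffuse
# light splits evenly between the two polarizer orientations.
cross = 0.5 * diffuse_true                     # cross-polarized capture
parallel = 0.5 * diffuse_true + specular_true  # parallel-polarized capture

# Subtracting the captures isolates the specular lobe; doubling the
# cross-polarized capture recovers the full diffuse component.
specular = np.clip(parallel - cross, 0.0, None)
diffuse = 2.0 * cross
```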
https://en.wikipedia.org/wiki/Paul_Debevec
https://en.wikipedia.org/wiki/Light_stage
"The Light Stage With Paul Debevec" - 360 Video (captured with JauntVR panoramic camera):
https://www.youtube.com/watch?v=xujwI4dimDA
Digitizing Photorealistic Humans Inside USC's Light Stage:
https://www.youtube.com/watch?v=c6QJT5CXl3o
Paul Debevec: Light Fields, Light Stages, and the Future of Virtual Production:
https://www.youtube.com/watch?v=bAe2dUJxe3w
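The relighting idea behind the Light Stage is simple to sketch, because light transport is linear: capture one basis image per light direction, then relight the subject under a new environment as a weighted sum of those basis images. The sketch below uses synthetic arrays, not real captures:

```python
import numpy as np

# Basis images: the subject lit by one light at a time (random stand-ins
# for real light-stage captures; all names here are hypothetical).
rng = np.random.default_rng(1)
n_lights, h, w = 8, 4, 4
basis = rng.uniform(0.0, 1.0, (n_lights, h, w))

# Target environment, sampled as one intensity per light direction.
env_weights = rng.uniform(0.0, 1.0, n_lights)

# Because light transport is linear, the relit image is just the weighted
# sum of the basis images: relit = sum_i w_i * basis_i.
relit = np.tensordot(env_weights, basis, axes=1)
```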
A Light Stage was featured in "The Congress" (2013), Ari Folman's film adaptation of Stanislaw Lem's book "The Futurological Congress":
https://en.wikipedia.org/wiki/The_Congress_(2013_film)
https://en.wikipedia.org/wiki/The_Futurological_Congress
I really love that movie and the book it was based on; both raise some interesting issues. Like Blade Runner's relationship to "Do Androids Dream of Electric Sheep?", the film is quite different from the book, but it shares some of its deep ideas and stands on its own as a great movie.
The Congress Official Trailer (2014) Robin Wright, Jon Hamm HD:
https://www.youtube.com/watch?v=1rNSTizOsws
The scan scene in the Light Stage at USC ICT's motion capture studio was emotionally riveting and technically realistic: Robin Wright plays a partially fictionalized version of herself, with Harvey Keitel as her agent, baring her face and soul to the sparkling, panoptic, all-encompassing emotion-capturing machine.
The Congress (2013) Scan Scene:
https://www.youtube.com/watch?v=pPAl5GwvdY8
"Rouen Revisited" is an earlier interactive kiosk project that Paul Debevec and Golan Levin created in 1996 at Interval Research Corporation, based on photogrammetric modeling techniques he developed at UCB:
https://www.youtube.com/watch?v=Ao3kf0YQ31c
https://acg.media.mit.edu/people/golan/rouen/
>Between 1892 and 1894, the French Impressionist Claude Monet produced nearly 30 oil paintings of the main façade of the Rouen Cathedral in Normandy. Fascinated by the play of light and atmosphere over the Gothic church, Monet systematically painted the cathedral at different times of day, from slightly different angles, and in varied weather conditions. Each painting, quickly executed, offers a glimpse into a narrow slice of time and mood. We are interested in widening these slices, extending and connecting the dots occupied by Monet's paintings in the multidimensional space of turn-of-the-century Rouen. In Rouen Revisited, we present an interactive kiosk in which users are invited to explore the façade of the Rouen Cathedral, as Monet might have painted it, from any angle, time of day, and degree of atmospheric haze. Users can contrast these re-rendered paintings with similar views synthesized from century-old archival photographs, as well as from recent photographs that reveal the scars of a century of weathering and war.
>Rouen Revisited is our homage to the hundredth anniversary of Monet's cathedral paintings. Like Monet's series, our installation is a constellation of impressions, a document of moments and percepts played out over space and time. In our homage, we extend the scope of Monet's study to where he could not go, bringing forth his object of fascination from a hundred feet in the air and across a hundred years of history.
Here's a paper that references his work, "Multifocus HDR VIS/NIR hyperspectral imaging and its application to works of art", about capturing the 3D texture and hyperspectral reflectance field of artwork in a way that lets you dynamically relight it under different conditions and environments, view it interactively in VR, use it in high-quality computer games and renderings, etc.:
https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-27-8-11...
>Multifocus HDR VIS/NIR hyperspectral imaging and its application to works of art
>Abstract: This paper presents a complete framework for capturing and processing hyperspectral reflectance images of artworks in situ, using a hyperspectral line scanner. These capturing systems are commonly used in laboratory conditions synchronized with scanning stages specifically designed for planar surfaces. However, when the intended application domain does not allow for image capture in these controlled conditions, achieving useful spectral reflectance image data can be a very challenging task (due to uncontrolled illumination, high-dynamic range (HDR) conditions in the scene, and the influence of chromatic aberration on the image quality, among other factors). We show, for the first time, all the necessary steps in the image capturing and post-processing in order to obtain high-quality HDR-based reflectance in the visible and near infrared, directly from the data captured by using a hyperspectral line scanner coupled to a rotating tripod. Our results show that the proposed method outperforms the normal capturing process in terms of dynamic range, color and spectral accuracy. To demonstrate the potential interest of this processing strategy for on-site analysis of artworks, we applied it to the study of a vintage copy of the famous painting “Transfiguration” by Raphael, as well as a facsimile of “The Golden Haggadah” from the British Library of London. The second piece has been studied for the identification of highly reflective gold-foil covered areas.
[...]
>5. Conclusions and future work: In this study, a complete framework is introduced for the hyperspectral reflectance capture of a painting in situ, and under high dynamic range conditions. Both the high dynamic range and the focusing problem due to chromatic aberrations have been overcome by using multiple captures with different focus positions and exposure times. A final hyperspectral reflectance cube has been computed using weighting maps calculated for both sample and flat fields and the quality of this cube has been tested and compared with a spectral cube captured in the usual LDR and single focus way. Our results show that the proposed method outperforms the best low dynamic range capture acquired. The sharpness index, as well as the color and spectral metrics show that it is possible to achieve good quality spectral reflectance images using a hyperspectral scanner in non-controlled illumination conditions. Moreover, as an example application, highly reflective golden material has been segmented from a facsimile. Our results show that by applying the proposed framework for capturing and processing, those areas which saturate the sensor in the usual capturing way, can be correctly exposed and segmented using the HDR multifocus capture. In future research, a new version of this framework will be developed including piecewise cube stitching for blending different cubes captured in different regions of big paintings. This will allow us to get closer to the painting and retrieve higher spatial resolution data, whilst still maintaining the spectral resolution and performance achieved in this study. Moreover, we will use the spectral reflectance images computed in this study, together with X-ray fluorescence measurements for the non-invasive pigment identification, in order to help the dating of ancient paintings and other works of art.
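The HDR merge at the heart of a pipeline like this can be sketched as exposure-weighted averaging: each bracketed shot is divided by its exposure time to estimate radiance, and the estimates are averaged with weights that distrust pixels near the sensor's saturation or noise floor. The hat-shaped weighting below is illustrative, not the authors' exact formula, and the scene data is synthetic:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge bracketed exposures (arrays in [0, 1]) into a radiance map."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        # Hat weighting: trust mid-tones, distrust clipped/noisy extremes.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t   # weighted radiance estimate for this shot
        den += w
    return num / np.maximum(den, 1e-8)

# Synthetic scene radiance spanning more range than one exposure can hold,
# and two simulated exposures clipped at the sensor's saturation point.
radiance = np.array([[0.05, 0.4], [1.5, 6.0]])
exposures = [0.1, 1.0]
shots = [np.clip(radiance * t, 0.0, 1.0) for t in exposures]

hdr = merge_hdr(shots, exposures)
```

Here the short exposure supplies the bright pixels that saturate in the long one, and the long exposure supplies the dark pixels that would be noisy in the short one, so the merged result recovers the full radiance range.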