Wow, that's a lot of work! And yet it seems a shame not to capture more information than a single RGB color value per pixel. I can think of three other types of information you could capture: 3D information (which they did capture, but only at a coarse level), the BRDF (bidirectional reflectance distribution function), and hyperspectral information. Of these, the BRDF seems most important for viewing paintings, and I'm surprised there's no consideration of it given the thoroughness of the rest of the project.
With the BRDF you would be able to show the painting as it would appear under any lighting condition and from any viewing angle. This would make the painting look much more realistic in a VR rendering, for example. The current stitched photo assumes uniform lighting and a straight-on view, and if you simply mapped it to a texture on a plane in a 3D environment it would look flat and lifeless.