Starry Night in 3D
Explore Vincent van Gogh’s beloved painting in astonishing detail in this behind-the-scenes look at a new imaging tool.
Kurt Heumiller, Jonathan Muzikar, Robert Kastler
Dec 1, 2020
The ideal way to see Vincent van Gogh’s The Starry Night is in person, right in front of it, on the fifth floor of MoMA. But we in the Imaging and Visual Resources (IVR) department recently found ourselves asking the question, What is the ideal way to capture The Starry Night as an image? Although it’s assumed that the most popular works in a museum’s collection will have the best photographs, these works are often on view to the public year round, limiting our opportunities for imaging them. Last summer, when MoMA was closed for its expansion, IVR had the first chance in 12 years to photograph Van Gogh’s masterpiece while it was being reframed.
Our primary goal was to produce a high-resolution, extremely color-accurate image with ideal lighting that shows off Van Gogh’s signature paint texture and brushstrokes. When we realized that there might not be another opportunity to image the work for a decade, we wondered if we could create more than just a still, 2D image.
Recently we’ve had a growing interest in a 3D imaging technique called photogrammetry. Photogrammetry allows us to create 3D models by using hundreds of photographs to capture every area of an object from multiple angles. Then, through some intensive computation, we can determine with extraordinary precision the location each photo was taken from to calculate the actual shape of the object.
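The geometric heart of photogrammetry is triangulation: once the camera positions are known, the same surface point seen from two (or more) views pins down its location in 3D. Below is a minimal sketch of that idea using the standard linear (DLT) method with two toy pinhole cameras. The matrices and coordinates are invented for illustration; this is not IVR's actual processing pipeline, which uses dedicated photogrammetry software over hundreds of views.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its 2D projections in two calibrated cameras.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) image coordinates of the same point in each view.
    Each view contributes two linear constraints on the homogeneous
    3D point X, giving the system A @ X = 0.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the last right singular vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.2, 0.3, 2.0])    # an imagined peak of impasto in 3D
x1 = P1 @ np.append(point, 1.0)      # project into each camera...
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(point, 1.0)
x2 = x2[:2] / x2[2]

recovered = triangulate(P1, P2, x1, x2)
print(recovered)                     # recovers approximately [0.2, 0.3, 2.0]
```

With hundreds of overlapping photographs instead of two, the same constraints are solved jointly for both the camera poses and millions of surface points.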
Photography set-up in Paintings Conservation. Photographer Jonathan Muzikar sets up lighting for Marc Chagall’s I and the Village
Our photogrammetry projects have mainly focused on sculptures and other obviously 3D objects. But every painting also has dimension and shape, which is most noticeable in works like The Starry Night with its strong impasto creating visible mountains and valleys of brushstrokes.
Because Van Gogh’s canvas could not travel to IVR’s larger studios, where we image most paintings, our colleagues in the Conservation department generously allowed us to set up in their X-ray room. After calibrating the color and lighting and creating a 273-megapixel still image, we moved the camera in close to the work and captured a series of 437 images, each 100 megapixels, while moving the camera up and down and across the canvas.
Animation showing photographer Jonathan Muzikar capturing a few of the 437 images needed to create the 3D model. You can see the painting surrounded by color and measurement targets.
Animation showing how twelve different angles give a better understanding of the shape of the paint.
Next, it took an extremely powerful computer a couple of hours to compute the camera locations and align the 230 gigabytes of data. Over the course of the following week, we worked to refine the alignment and increase the accuracy of the model. It’s a little like trimming the I Love Lucy Christmas tree—cut one side that sticks out and now the other side is unbalanced and needs trimming, so you have to stand back and reevaluate how everything fits together, then trim and repeat. The improved alignment was then handed back to the computer to calculate the shape of the painting in 3D space. This process took over a day for the computer to complete, but resulted in measurements with a precision of approximately three microns (3 µm)—less than half the width of a human blood cell. The resulting Starry Night model is made of over 329 million points in 3D space.
The small area highlighted in the red circle is shown as dots rendered in 3D space below.
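To get a rough sense of what 329 million points means in raw data terms, a quick back-of-the-envelope calculation (assuming plain 32-bit x, y, z coordinates, not IVR's actual file format) puts the coordinates alone near four gigabytes:

```python
points = 329_000_000
bytes_per_point = 3 * 4                # x, y, z stored as 32-bit floats
total_gb = points * bytes_per_point / 1e9
print(f"{total_gb:.1f} GB")            # roughly 3.9 GB, before color or textures
```

That figure covers only positions; the photographic texture data captured alongside the geometry adds considerably more.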
What is this 3D model for?
The initial purpose of the 3D model was to anticipate needs for the next decade. We had hoped that it might be possible to 3D-print a version of the canvas that could be used by the Education department with visually impaired groups. However, the pandemic radically changed our thinking, and instead of focusing on tactile experiences, we began imagining, in collaboration with MoMA’s Digital Media team, a virtual experience that people could enjoy at home.
Digital Media had recently released a 3D model of David Tudor’s Rainforest IV using a platform called Sketchfab, and we began moving forward to do the same with The Starry Night. Those 329 million data points give us an exceptionally detailed 3D model, but that is far more than most smartphones and home computers can handle. The challenge of reducing data while still showing as much detail as possible has been a focus of research in areas like video games and CGI movies. Borrowing some of those techniques, we were able to work with Sketchfab to convert the data points into a mesh of “polygons” and then overlay texture and photographic data onto that in smaller chunks, making it much easier for someone to enjoy the experience, even on their phone.
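One simple way to picture that data reduction is vertex clustering: snap the dense cloud to a coarse grid and keep a single averaged point per occupied cell. This is only a minimal sketch of the general idea; production pipelines such as Sketchfab's use more sophisticated mesh-simplification and texture-streaming techniques, and the numbers below are illustrative, not the project's actual figures.

```python
import numpy as np

def decimate_by_clustering(points, cell_size):
    """Reduce a dense point cloud by snapping points to a voxel grid
    and keeping one averaged point per occupied cell."""
    cells = np.floor(points / cell_size).astype(np.int64)
    # Group points by cell, then average each group.
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    n_cells = inverse.max() + 1
    sums = np.zeros((n_cells, 3))
    counts = np.zeros(n_cells)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]

rng = np.random.default_rng(0)
dense = rng.random((100_000, 3))            # stand-in for a dense scan
sparse = decimate_by_clustering(dense, 0.1)  # 10 x 10 x 10 grid of cells
print(len(dense), "->", len(sparse))         # far fewer points survive
```

Coarsening the grid trades fidelity for speed, which is exactly the balance a web viewer has to strike for a phone versus a desktop.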
We hope this 3D model and its astonishing level of detail helps us all get a little closer to Van Gogh’s world in our own homes.
Dots rendered in 3D space
The conservation and presentation of MoMA’s collection is made possible by Bank of America.