Modern Dream: How Refik Anadol Is Using Machine Learning and NFTs to Interpret MoMA’s Collection
From DeepDream to the metaverse, a group of artists and curators sat down to discuss new ways of creating, sharing, and communicating about art.
This week, on the new-media platform Feral File, artist Refik Anadol presents Unsupervised, an exhibition of works created by training an artificial intelligence model with the public metadata of The Museum of Modern Art’s collection. Spanning more than 200 years of art, from paintings to photography to cars to video games, the Museum’s collection represents a unique data set for an artist who has worked with many different public archives. The AI-based abstract images and shapes in Unsupervised are interpretations of the Museum’s wide-ranging collection.
Starting with the exhibition opening on November 18, new artworks will be revealed and released over three days. Each work will be made available to collectors as nonfungible tokens, or NFTs.
MoMA curators Paola Antonelli and Michelle Kuo sat down with Anadol and Casey Reas, the artist-founder of Feral File, to talk about the ecology of digital images, art in the age of machine learning, and the question: What if a machine tried to create “modern art”?
This conversation has been edited for length and clarity.
Refik Anadol Studio. Unsupervised — Data Universe — MoMA. 2021. Video. Courtesy the artist and Feral File
Paola Antonelli: Refik, how did you start thinking of your Machine Hallucinations series, of which Unsupervised is a part?
Refik Anadol: Five years ago, I was very fortunate to be one of the artists in residence at the Google Artists and Machine Intelligence program. This was the moment of DeepDream’s development, the very first time we were witnessing AI algorithms making an impact on the art and technology communities. I was purely a data artist, I guess, at the time. And I was truly blown away by how AI could profoundly change the thinking around producing art, and give us new tools. I wanted to explore several interrelated questions: Can a machine learn? Can it dream? Can it hallucinate?
Michelle Kuo: So can a machine hallucinate? Normally, machine learning is oriented toward achieving resemblance: How can the machine learn from vast amounts of data and then identify and create something that looks real, that looks like our world? But you are going in the opposite direction, away from resemblance and toward abstraction. How did you start to think about visual datasets and processing them differently?
RA: The first month of my residency at AMI, I found a wonderful open-source cultural archive in Istanbul, called SALT, with 1.7 million documents. Seeing these documents inspired me to think about how I could use my training in both AI and visual arts to creatively engage with vast archives of human experience. Could we apply AI algorithms to a library that is open to everyone? And what would happen if we witnessed a machine learning in front of a human, where the information turns to knowledge and to wisdom? And what would occur if we allowed or helped the machine to create a new relationship between the human and the archive?
So that was the first experiment. And, remarkably, the algorithms came up with some exciting forms. But five years ago, the AI algorithms we used, generative adversarial networks, or GANs, were relatively basic. And the computational resources were very limited compared to where we are now. A GAN is a type of neural network that can generate new content instead of simply analyzing or processing existing content. We have been using these networks to create landscapes, urban scenes, architectures, and even Renaissance paintings that never existed but look unbelievably real or contextualized as they emerge from existing data.
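The adversarial setup Anadol describes can be sketched in a few lines. This is not his studio's code, just a minimal illustrative example: the network names, dimensions, and weights below are invented, and real GANs train both networks against each other over millions of images.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8   # size of the random "seed" vector the generator starts from
IMAGE_DIM = 64   # size of the flattened synthetic "image"

# Generator: maps a random latent vector to a synthetic image.
G_weights = rng.normal(0, 0.1, (LATENT_DIM, IMAGE_DIM))

def generate(z):
    # tanh keeps outputs in [-1, 1], like a normalized image
    return np.tanh(z @ G_weights)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
D_weights = rng.normal(0, 0.1, (IMAGE_DIM,))

def discriminate(image):
    return 1.0 / (1.0 + np.exp(-image @ D_weights))  # sigmoid score

# One adversarial round: the generator hallucinates, the discriminator judges.
z = rng.normal(size=LATENT_DIM)
fake = generate(z)
score = discriminate(fake)

print(fake.shape, float(score))
```

During training, the discriminator's score would be used to update both sets of weights: the discriminator learns to push the score toward 0 for fakes, while the generator learns to push it toward 1, and new images emerge from that tug-of-war.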
By the way, I’m not alone. We worked with Mike Tyka, an artist and researcher, and one of the founders of AMI at Google. And I am part of a diverse team of 14 people who represent 14 languages and 10 countries, imagining together. Casey has also been a mentor; as a professor at UCLA’s Department of Design Media Arts, he speculated about data painting.
But five years later, when we look at the same concept first introduced with DeepDream, the algorithms have become extremely powerful, complex, and, remarkably, much more creative. One of the initial issues was that we didn’t know exactly what the AI was learning from the datasets. But I’m happy to say that we have now developed tools that can shape this data and the outcomes more meaningfully.
Refik Anadol Studio. Three images from the Unsupervised — Machine Hallucinations — MoMA — Dreams series. 2021
PA: What kind of new knowledge have you gained in the process?
RA: In the past five years, we’ve trained more than 100 AI models, and used close to five petabytes of raw data. These are, as far as I know, some of the most challenging datasets ever used outside specific research contexts: clouds, national parks, and urban studies of Seoul, Stockholm, Berlin, Istanbul, New York, and Los Angeles. Every single collective memory of photographic input is extremely unique and extremely site-specific, connected to time and space.
But what is profoundly inspiring is that every single data point in the archive will have its own context and discourse. They are not similar, and they are uniquely valuable.
PA: Do these hallucinations tell us something new that we did not know about either the data sets or reality?
RA: I think so. We are using the past—what’s been recollected—and, through a machine, speculating in the present and for the near future. You are really feeling something multidimensional in time. It’s very Newtonian thinking: If you know how it started, can you predict where it may go?
And I think every single location in latent space resonates with how we perceive what happens in our lives. As humans, when we think about how we learn, and recollect our memories, or where our consciousness comes from, we realize that the machine’s process is not so different from our thinking or imagining. And I think machines help us define this powerful feeling of being able to touch and reconstruct memories.
Casey Reas: I’ll say something briefly that’s speculative. Just as one example, let’s imagine we’re in Paris, it’s sometime in the early 20th century. And there are a few dozen artists who are making work and they’re all looking at each other’s work. And they’re all influenced by each other. And then, of course, there were a lot of things that were possible, but that were never made.
And so what I find really interesting about Refik’s project with MoMA’s dataset, with your collection, is that it speculates about possible images that could have been made, but that were never made before. And when I think about these GANs, I don’t think about them as intelligent in the way that something has consciousness; I think of them the way that the body or even an organ like the liver is intelligent. They’re processing information and permuting it and moving it into some other state of reality.
The data itself is essential to what comes out the other side.
Mike Tyka. Jurogumo. Neural net. 2016
MK: And Casey, you were saying that these don’t look like anything else you’ve seen produced by machine learning.
CR: When I saw those DeepDream images for the first time, it was a foundational moment for me, too. I’ve been studying painting and drawing and photography and cinema for 30 years. I’ve been in museums all over the world. But with the images from DeepDream, I saw something in them that I’d never seen before, something that was completely unexpected, like something no human in thousands of years of image-making ever created. And I think we can debate the quality of those images for a really long time. But it sparked something, some desire to create and explore. And I think, yes, when I look at these images that Refik has produced, I can see references to images and objects that I know are in the MoMA collection, but at the same time, nothing resembles any specific work; instead, I can see hybrids and crossings that I’ve never seen before.
I think some people will say that there’s nothing there that’s not in the original data. But I think, perceptually and qualitatively, there’s a lot there that’s just not in the dataset. There are ways that this other mind is making combinations that are completely unexpected, completely different from our perception.
MK: Because here, the machine is part of every decision: what to prioritize, what features to emphasize, what is important. And so you’re getting all of these totally fantastical images, almost automatist, like automatic writing or drawing. Both chance and automatism have such a strong presence in modern art, in strategies that artists in MoMA’s collection were using throughout Surrealism, Fluxus, and Concrete art. And then here you get a totally new, different layer of automatism that’s happening.
CR: Yes, it’s a whole different set of filters and a whole different set of biases. And for me, that’s the other thing that’s really exciting about it. For example, I have all of these filters that are just wired into my biology and all these filters that I’ve been training for 30 years. And this cuts all the way around them. It may make a connection between a photograph from 1880 and an architectural rendering from 1960 and a painting from 2016, finding patterns and making correlations that nobody has ever made before. And then it’s presenting it back for us to filter again and observe, and to either use as the beginning of something new or to be the end in itself.
And I think what’s interesting about these specific images that Refik is showing, is that they’re not the direct output from the GAN; instead, they’re a part of the artistic process.
When the machine is doing this work of exploring the data sets and remembering and connecting, what’s the role of the artist?
RA: This project is on the shoulders of many giants, like the archive itself, MoMA’s entire digitized collection. And it connects many imaginations across multiple centuries.
The feeling of connectedness is very profound when imagining this incredible archive. In 2011, I visited MoMA for the first time. And I clearly remember there was this amazing design exhibition, curated by Paola, about the communication between people and objects. But when I tried to remember these fantastic objects, I couldn’t enter the latent space of my mind and reconstruct them easily without typing something into a search tool. Humans forget, but machines do not, right? As humans, we have this associative context: we try to go back in time and space and remember the conditions in order to remember the details. Whereas AI is able to truly reconstruct those conditions, however the archive is structured. Going back in time, across multiple decades, reading all these other creations and imaginations and outcomes, and putting them together in this multidimensional landscape is really—I don’t know how else to say it—divine.
PA: In this kind of situation, in which the machine is doing the work of exploring the data sets and remembering and connecting, what is the role of the artist? Is the artist an editor, a curator, an interpreter, an oracle?
RA: For me, art reflects humanity’s capacity for imagination. And if I push my compass to the edge of imagination, I find myself well connected with the machines, with the archives, with knowledge, and the collective memories of humanity. So, like many creators, artists, designers, I feel like I am trying to find ways to connect memories with the future. And it’s not too hard to imagine, for example, that this specific data universe is the entire representation of every single data point that ever existed in the MoMA archive. And they’re all connected, based on their similarities, by the most cutting-edge, recent algorithms.
But, of course, this data never comes in this form; there are certain parameters through which artists still give machines reasons to connect. For this unique AI training process, we’ve been experimenting with a hyperparameter known as the learning rate, which I think has a major creative impact on the Hallucinations. And it’s not fully autonomous; with the color, for example, or the interconnectedness between the data points in the images we’ve created, there are also many supervised parameters that shape the final form. There’s a collaboration between machine and human. With the same data, we can generate infinite versions of the same sculpture, but choosing this moment, and creating this moment in time and space, is the moment of creation.
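The learning rate Anadol mentions is a standard training knob: it scales how far the model moves at each update step. A minimal gradient-descent sketch (not from the studio's pipeline; the toy loss function and values are invented for illustration) shows how changing that single number alone changes where the model ends up:

```python
def train(learning_rate, steps=50):
    """Minimize a toy loss f(x) = (x - 3)^2 by gradient descent."""
    x = 0.0
    for _ in range(steps):
        grad = 2 * (x - 3)         # derivative of the loss at x
        x -= learning_rate * grad  # the learning rate scales each update
    return x

cautious = train(0.01)  # small steps: after 50 steps, still short of the minimum
bold = train(0.4)       # larger steps: converges to the minimum at x = 3

print(round(cautious, 3), round(bold, 3))
```

With identical data and identical steps, the two runs land in different places, which is one way a single hyperparameter can reshape what a model produces. Push the rate too high, though, and training overshoots and diverges rather than converging at all.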
If I push my compass to the edge of imagination, I find myself connected with machines, with archives, with knowledge, and the collective memories of humanity.
PA: It’s interesting because even an artist as technologically “liberated” as you still makes references to traditional forms of art. Do you think you’ll ever be able to untether yourself, and teach us to perceive your art in a completely new way, not using old metaphors?
RA: Heavy, heavy question!
CR: I love it.
RA: I mean, maybe time for some metaverse speculation, though I want to launch a challenge to the world to find an alternative word after Facebook announced its parent company name change to Meta.
PA: Let’s call it something else. Bananaverse!
RA: Paola, to be very honest, I remember your TED talk, when you were talking about how certain critics had a profound reaction to the video games in MoMA’s collection 10 years ago. Video games were not even regarded as a space of imagination then, versus now they are seen as this new universe.... I feel like there’s some connection there.
PA: So do you think that we will be able to stop thinking of painting, sculpture, photography, and everything else, and just be fully in the bananaverse?
RA: I mean, I feel like it will be another option, I guess, in the filter of reality. I think it will be just a new way of seeing, or more like a new door to perception, right? Can we meld multiple mediums purposefully? Can we reconstruct reality based on what we learn, what we may learn?
CR: I think we can collapse it all. I mean, in a software space, there’s no difference between something that started as a photograph and started as a painting, started as a piece of audio; all of those bits can intermingle. So I am a fan of compressing and collapsing those spaces rather than distinguishing them, as we move into the future. But you know, I have one foot in the 20th century, one foot in the 21st century, and I just love art history. Love it, love it. So I’m always going to make those references.
And I think I also have spent 20 years talking to people who don’t want to understand or are not curious about the digital space. And so I found that talking about the history of photography is a bridge that can bring people in. But I’m all for the challenge of only speaking in the present.
There’s no original anymore. What you are looking at on your screen, and what I’m looking at on mine, are identical things.
UPDATE: On December 8, 2021, Refik Anadol, Casey Reas, and Michelle Kuo came together to discuss Unsupervised and the impact of NFTs on contemporary art at a virtual conversation hosted by the Department of Membership. For access to even more programming with artists and curators, plus everyday free admission and exclusive viewing hours, become a member today.