In the fall of 2010, close to 4.8 million articles were downloaded from the password-protected, subscriber-only, nonprofit online academic journal repository JSTOR in an extended cyber hack that used the campus network at MIT. The articles represented roughly 80% of JSTOR’s total cache.
A little over six months earlier, in June 2010, it was discovered that a novel computer virus had infiltrated Natanz, one of Iran’s largest nuclear facilities and the site of its main uranium enrichment plant. Its origins were unclear. Retrospectively dubbed “Stuxnet,” the malware combined a worm, a Windows file-shortcut exploit, and a rootkit, and was designed to sabotage programmable logic controllers (PLCs), the industrial computers that run automated manufacturing and monitoring equipment, by way of the Windows systems used to program them. Stuxnet worked in two waves: the first mapped a blueprint of the plant’s operating systems so that the second could effectively disrupt them. By exploiting previously unknown security flaws, the virus was able to destroy parts of Iran’s nuclear centrifuges while simultaneously relaying normal readings to the plant operators.
On May 28, 2015, we convened the final debate in the Design and Violence series at MoMA. It focused on the darker recesses of designing and disrupting the Internet—hacking, orchestrating DDoS (distributed denial of service) attacks, and malware creation—crystallized by actions such as these.
Debaters Gabriella Coleman and Lawrence Lessig faced off on the motion—“Internet freedom and digital privacy will come about only through the design of better tools for civil disobedience and direct action”—which invited intense dialogue on the ethics and ethos of the World Wide Web and our digital landscape. It was both a celebration and an investigation, as it marked the end of the Design and Violence online curatorial project, and the forthcoming release of the resulting book.
From its outset in fall 2013, the Design and Violence project took “Hack/Infect” as one of its original checklist categories, defining it as follows: “to utilize the structure or code of an object or system against itself either through subversive reconfiguration or by the introduction of an active foreign element.” Following three compelling and successful debates in the spring of 2014, this debate similarly used works from the online project as lenses to explore complex intersections of violence and design. In this instance, Patrick Clair’s infographic Stuxnet: Anatomy of a Virus and Google’s Digital Attack Map were used as springboards. Both projects use design to help us visualize other types of design—malware, digital blockades—that are essentially invisible, extremely complex, and often difficult to understand.
The two debaters are lions in this field. Gabriella Coleman holds the Wolfe Chair in Scientific and Technological Literacy at McGill University and has written extensively on cyber security from the perspective of those who subvert it—including a book on the sprawling hacker collective Anonymous. Her opponent, Lawrence Lessig, trained as a constitutional lawyer and is currently the Roy L. Furman Professor of Law and Leadership at Harvard Law School, and director of the Edmond J. Safra Center for Ethics at Harvard University. Their sparring, and the ensuing audience discussion, were moderated by Design and Violence co-curators Jamer Hunt and Paola Antonelli (coauthor of this post), and organized by Michelle Millar Fisher (the other coauthor of this post).
Online aggression is nothing new. DDoS attacks, for example—in which a network of compromised machines floods a target website, server, or network resource with traffic until it can no longer respond—have been recorded since the late 1990s, and self-propagating malware dates back at least to Robert Tappan Morris’s worm of 1988. The impetus for such acts runs the gamut from anti-capitalist hacktivism to state-sponsored terrorism and corporate espionage.
In our post–Edward Snowden reality, there are no ready answers for where the real “violence” in our online communications and actions lies. As Gabriella Coleman noted when she authored the response to Google’s Digital Attack Map on the Design and Violence site last year, “The patterns and flows [of this data visualization] might reveal broad geopolitical realities. At this point in time, Africa is rarely the target destination for DDoS attacks. A net positive, one might think—until considering that this happy state of affairs is predicated on digital desolation, an entrenched artifact of colonial underdevelopment. Russia, North America, much of Western Europe, and China, on the other hand, are constantly assailed. We can observe that geopolitical power is a magnet for conflict.”
The Internet is perhaps the most radical design of the last quarter century, and both Coleman and Lessig highlighted during the debate that it is not, a priori, a wild Western frontier: untamable, anonymous, untouchable, and unknowable. The Internet is regulated by code—code is its architecture—and whoever writes that code, or uses it to subvert and manipulate the system, may be termed its designer. These designers determine our online experiences, which laws, if any, apply to them, and how such parameters might relate to wider systems of policy and citizenship.
The debate asked whether our digital privacy and freedom (or lack thereof) should be determined from the bottom up, by using the grassroots methods of direct action and civil disobedience, or from the top down, by means of policy, law, and government. The debaters sharpened these ideas by deliberately occupying opposing poles of that spectrum for the duration of the debate.
Coleman (arguing for the motion) maintained that, at least in the current climate of government obfuscation and legal maneuvering, actions speak louder than words. She made a passionate case for tools and direct actions as the primary way to secure a robust culture of civil liberties, suggesting that whatever gains are made in the law will be temporary—hacking “offers us the best hope for long-lasting change.”
Lessig (arguing against the motion) acknowledged that digital civil disobedience has its place, but insisted that it should not be the only tool for ensuring Internet freedom for all. The law, Lessig argued, has been corrupted in part because technologists and the digitally sophisticated have ignored the role it can play in ensuring a healthy online landscape; we should, he suggested, “make it embarrassing for senators to be as ignorant as they are about technology.”
Gray areas and middle grounds were parsed during the ensuing discussion, which centered upon methods to empower citizens and lawmakers as designers of our digital spaces. (It was also acknowledged that while the Internet has reshaped all sorts of behaviors, 4.4 billion of the world’s seven billion people still lack access.) The audience, which at the start of the night had been fairly evenly distributed among “for,” “against,” and “on the fence,” had by night’s end landed mostly in Coleman’s camp—perhaps cynical about the power of law when it is willfully manipulated by governments and institutions, or perhaps galvanized by the lower threshold for involvement in designing DDoS attacks versus laws to regulate them.
What emerged as unquestionable is the insufficiency of current laws and, at best, the woeful inadequacy of current lawmakers. Stuxnet, delivered via USB thumb drive to “air-gapped” computers (and possibly circulating undetected for months or years beforehand), was described in the journal Foreign Policy as having “changed global military strategy in the 21st century,” and was dubbed by various sources “the first weapon made entirely of code.” The game-changing virus, which has been linked to a policy of covert warfare by the U.S. and Israel against Iran’s nuclear armament, is now understood to be just one instance in a family of similar malware, most of it thought to be the work of nation-states rather than individuals or private groups. The creators of Stuxnet remain publicly unidentified and unprosecuted. For its designers, the stakes are not personal: nation-states are, it seems, allowed to design and deploy malware with impunity in furtherance of their foreign policy.
In stark contrast, the JSTOR case became highly personal when Aaron Swartz—a preternaturally accomplished twentysomething responsible for cofounding Reddit and codeveloping RSS in his teens—was identified as responsible for the security breach. At the time of the MIT-JSTOR hack, Swartz was a graduate fellow at nearby Harvard University’s Safra Research Lab on Institutional Corruption (where Lessig was director). He was strongly in favor of open-source knowledge sharing online, and JSTOR’s model—archiving journals that paid academics nothing to publish their articles (and in some cases actually costing the authors steep rights and reproduction fees) and then charging for the service—rankled him. He was first identified as the hacker in January 2011, after local law enforcement and then officials from the U.S. Attorney’s Office got involved. JSTOR declined to press charges when it became clear that Swartz had not leaked any of the academic articles he had taken and had never intended to profit from them in any way. However, the federal government—the same organization alleged to have had a hand in developing Stuxnet and other malware like it—still hit Swartz with charges of violating the Computer Fraud and Abuse Act, which could have landed him in jail for decades and carried a cumulative maximum penalty of $1 million in fines. Swartz became mired in an expensive and stressful court process (as have others in related circumstances). His insistence that his actions were political and not for personal profit fell on deaf ears. On January 11, 2013, Swartz committed suicide in his Brooklyn apartment.
What new laws and tools will we devise as billions come online? In the process of deciding, we’re coding citizenship and designing participation. The debate at MoMA underlined that the design of better tools to allow more informed and wider participation in sustaining the Internet has never seemed more vital. Forms of violence underpin not just the design of radical digital weapons, but also the structures and institutions that control access to them—and mete out the consequences for their deployment.