Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
with Ben Swift and Terhi Nurmikko-Fuller
In this paper we introduce Camera Adversaria: a mobile app designed to disrupt the automatic surveillance of personal photographs by technology companies. The app leverages the brittleness of deep neural networks with respect to high-frequency signals, adding generative adversarial perturbations to users’ photographs. These perturbations confound image classification systems but are virtually imperceptible to human viewers. Camera Adversaria builds on methods developed by machine learning researchers as well as a growing body of work, primarily from art and design, which transgresses contemporary surveillance systems. We map the design space of responses to surveillance and identify an under-explored region where our project is situated. Finally we show that the language typically used in the adversarial perturbation literature serves to affirm corporate surveillance practices and malign resistance. This raises significant questions about the function of the research community in countenancing systems of surveillance.
Eating computers considered harmful
Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems
with Ben Swift and Terhi Nurmikko-Fuller
Contemporary computing devices contain a concoction of numerous hazardous materials. Though users are more or less protected from these substances, recycling and landfilling reintroduce them to the biosphere where they may be ingested by people. This paper calls on HCI researchers to consider these corporeal interactions with computers and critiques HCI's existing responses to the e-waste problem. We propose that whether one would consider eating a particular electronic component offers a surprisingly useful heuristic for whether we ought to be producing it en masse with vanishingly short lifespans. We hypothesize that the adoption of this heuristic might affect user behaviour and present a diet plan for users who wish to take responsibility for their own e-waste by eating it. Finally we propose an alternative direction for HCI researchers to design and advocate for those affected by the material properties of e-waste.
Myopic histories and AI's culture of hyperbole
2019 Winter Institute: History, Culture and Contested Memories: Global and Local Perspectives
Historical accounts of the invention of artificial neural networks (ANNs) offer a narrative that overplays the relevance of neurophysiological origins and conceals a significant methodological genealogy from statistics. This has contributed to fractured perspectives between practitioners, non-practicing scholars and laypersons, and a significant misdirection of intellectual effort. The resurrection of ANNs under the moniker “deep learning” has seen advances in a surprising breadth of applications, from computer vision and translation to generative art and poetry. It is also well understood that ANNs are inscrutable and prone to cultural biases. Legitimate concerns about the application of ANNs in institutions of cultural and political power (e.g. courts, media, social networks, police departments) must compete with voices that anthropomorphise algorithms and make sensationalised claims about the end of work or even the end of humanity. This is spurred on by a culture of hyperbole, a liberal attribution of agency in the literature, and historical accounts that omit methodological development.
The Other Side: Algorithm as Ritual in Artificial Intelligence
Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems
with Ben Swift
Our cultural and scientific understandings of neural networks are built on a set of philosophical ideas which might turn out to be superstitions. Drawing on methodologies of defamiliarisation and performance art which have been adopted by HCI, we present an analog apparatus for the ritualistic performance of neural network algorithms. The apparatus draws on the interaction modes of the Ouija board to provide a system which involves the user in the computation. By recontextualising neural computation, the work creates critical distance with which to examine the philosophical and cultural assumptions embedded in our conception of AI.
Critical Challenges for the Visual Representation of Deep Neural Networks
Human and Machine Learning
with Ben Swift and Henry Gardner
Artificial neural networks have proved successful in a broad range of applications over the last decade. However, significant concerns remain about their interpretability. Visual representation is one way researchers are attempting to make sense of these models and their behaviour. The representation of neural networks raises questions which cross disciplinary boundaries. This chapter draws on a growing collection of interdisciplinary scholarship regarding neural networks. We present six case studies in the visual representation of neural networks and examine the particular representational challenges posed by these algorithms. Finally we summarise the ideas raised in the case studies as a set of takeaways for researchers engaging in this area.