{ "title": "Research", "peer_reviewed": [{"title":"Who (or What) is an AI Artist?","year":2022,"authors":"Kieran Browne","publication":"Leonardo 55 (2)","publication_link":"https://doi.org/10.1162/leon_a_02092","preprint":"https://storage.googleapis.com/kieranbrowne-public-files/preprints/who_or_what_is_an_ai_artist.pdf","pages":"130--134","doi":"10.1162/leon_a_02092","abstract":"The mainstream contemporary art world is suddenly showing interest in \"AI art\". While this has enlivened the practice, there remains significant disagreement over who or what actually deserves to be called an \"AI artist\". This article examines several claimants to the term and grounds these in art history and theory. It addresses the controversial elevation of some artists over others and accounts for these choices, arguing that the art market alienates AI artists from their work. Finally, it proposes that AI art’s interactions with art institutions have not promoted new creative possibilities but have instead reinforced conservative forms and aesthetics. \n"},{"title":"NMA Bilderatlas: a practice-led model for critically examining machine vision in cultural collections","year":2020,"authors":"Kieran Browne and Katrina Grant","type":"Journal Article","preprint":"https://storage.googleapis.com/kieranbrowne-public-files/preprints/nma_bilderatlas.pdf","abstract":"The transformation of machine vision brought about by deep learning has afforded new ways to make sense of digitised images at scale. However, before adopting machine vision into the digital humanistic toolkit we ought to interrogate how this technology mediates our connection to objects of study just as it appears to bring them closer. This article raises critical issues around the application of machine vision systems in museums and cultural institutions. We describe NMA Bilderatlas, an online interface to the National Museum of Australia’s collection structured by the representational schemes of an off-the-shelf machine vision model. The interface serves as a surrogate for our critique of machine vision, which, for lack of interpretability, cannot be directly interpreted. Examining how a machine vision system structures the archive, we identify possible opportunities for cultural institutions as well as likely pitfalls that will arise if machine vision systems are applied uncritically.\n"},{"title":"Semantics and explanation: why counterfactual explanations produce adversarial examples in deep neural networks","year":2020,"authors":"Kieran Browne and Ben Swift","publication":"arXiv preprint arXiv:2012.10076","publication_link":"https://arxiv.org/abs/2012.10076","preprint":"https://storage.googleapis.com/kieranbrowne-public-files/preprints/semantics_and_explanation.pdf","type":"Journal Article","abstract":"Recent papers in explainable AI have made a compelling case for counterfactual modes of explanation. While counterfactual explanations appear to be extremely effective in some instances, they are formally equivalent to adversarial examples. This presents an apparent paradox for explainability researchers: if these two procedures are formally equivalent, what accounts for the explanatory divide apparent between counterfactual explanations and adversarial examples? We resolve this paradox by placing emphasis back on the semantics of counterfactual expressions. 
{"title":"Enacting Collective Ownership Economies within Amazon’s Mechanical Turk","year":2020,"authors":"Kieran Browne and Ben Swift","publication":"Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems","preprint":"https://storage.googleapis.com/kieranbrowne-public-files/preprints/enacting_collective_ownership.pdf","abstract":"This paper details a socially engaged art project which enacts collectivist economic relations within Amazon’s Mechanical Turk platform. We paid workers on the platform to collectively author a plain-language edition of Karl Marx’s Manifesto of the Communist Party. When published, this text will become an asset owned by the workers, who retain authorship and will earn royalties on sales. This project examines the extent to which economic relations on the platform are locked into a neoliberal ideology and suggests alternative economic possibilities for crowdwork.\n"},
{"title":"Camera Adversaria","year":2020,"authors":"Kieran Browne, Ben Swift and Terhi Nurmikko-Fuller","filename":"camera_adversaria.pdf","video":"https://youtu.be/cNGHZs47atU","publication":"Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems","publication_link":"https://dl.acm.org/doi/10.1145/3313831.3376434","preprint":"https://storage.googleapis.com/kieranbrowne-public-files/preprints/camera_adversaria.pdf","abstract":"In this paper we introduce Camera Adversaria, a mobile app designed to disrupt the automatic surveillance of personal photographs by technology companies. The app leverages the brittleness of deep neural networks with respect to high-frequency signals, adding generative adversarial perturbations to users’ photographs. These perturbations confound image classification systems but are virtually imperceptible to human viewers. Camera Adversaria builds on methods developed by machine learning researchers as well as a growing body of work, primarily from art and design, which transgresses contemporary surveillance systems. We map the design space of responses to surveillance and identify an under-explored region where our project is situated. Finally we show that the language typically used in the adversarial perturbation literature serves to affirm corporate surveillance practices and malign resistance. This raises significant questions about the function of the research community in countenancing systems of surveillance.\n"},
{"title":"Eating computers considered harmful","year":2020,"authors":"Kieran Browne and Ben Swift","publication":"Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems","publication_link":"https://dl.acm.org/doi/abs/10.1145/3334480.3381810","preprint":"https://storage.googleapis.com/kieranbrowne-public-files/preprints/eating_computers_considered_harmful.pdf","abstract":"Contemporary computing devices contain a concoction of numerous hazardous materials. Though users are more or less protected from these substances, recycling and landfilling reintroduce them to the biosphere where they may be ingested by people. This paper calls on HCI researchers to consider these corporeal interactions with computers and critiques HCI's existing responses to the e-waste problem. We propose that whether one would consider eating a particular electronic component offers a surprisingly useful heuristic for whether we ought to be producing it en masse with vanishingly short lifespans. We hypothesize that the adoption of this heuristic might affect user behaviour and present a diet plan for users who wish to take responsibility for their own e-waste by eating it. Finally we propose an alternative direction in which HCI researchers design for, and advocate on behalf of, those affected by the material properties of e-waste.\n"},
{"title":"Myopic histories and AI's culture of hyperbole","year":2019,"authors":"Kieran Browne","publication":"2019 Winter Institute: History, Culture and Contested Memories: Global and Local Perspectives","abstract":"Historical accounts of the invention of artificial neural networks (ANNs) offer a narrative that overplays the relevance of neurophysiological origins and conceals a significant methodological genealogy from statistics. This has contributed to fractured perspectives among practitioners, non-practicing scholars and laypersons, and a significant misdirection of intellectual effort. The resurrection of ANNs under the moniker “deep learning” has seen advances in a surprising breadth of applications from computer vision and translation to generative art and poetry. It is also well understood that ANNs are inscrutable and prone to cultural biases. Legitimate concerns about the application of ANNs in institutions of cultural and political power (e.g. courts, media, social networks, police departments) must compete with voices that anthropomorphise algorithms and make sensationalised claims about the end of work or even the end of humanity. This is spurred on by a culture of hyperbole, a liberal attribution of agency in the literature, and historical accounts that omit methodological development.\n"},
{"title":"The Other Side: Algorithm as Ritual in Artificial Intelligence","year":2018,"authors":"Kieran Browne and Ben Swift","publication":"Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems","publication_link":"https://dl.acm.org/doi/10.1145/3170427.3188404","preprint":"https://storage.googleapis.com/kieranbrowne-public-files/preprints/the_other_side.pdf","abstract":"Our cultural and scientific understandings of neural networks are built on a set of philosophical ideas which might turn out to be superstitions. Drawing on methodologies of defamiliarisation and performance art which have been adopted by HCI, we present an analog apparatus for the ritualistic performance of neural network algorithms. The apparatus draws on the interaction modes of the Ouija board to provide a system which involves the user in the computation. By recontextualising neural computation, the work creates critical distance with which to examine the philosophical and cultural assumptions embedded in our conception of AI.\n"},
{"title":"Critical Challenges for the Visual Representation of Deep Neural Networks","year":2018,"authors":"Kieran Browne, Ben Swift and Henry Gardner","publication":"Human and Machine Learning","publication_link":"https://link.springer.com/chapter/10.1007/978-3-319-90403-0_7","preprint":"https://storage.googleapis.com/kieranbrowne-public-files/preprints/critical_challenges.pdf","abstract":"Artificial neural networks have proved successful in a broad range of applications over the last decade. However, there remain significant concerns about their interpretability. Visual representation is one way researchers are attempting to make sense of these models and their behaviour. The representation of neural networks raises questions which cross disciplinary boundaries. This chapter draws on a growing collection of interdisciplinary scholarship regarding neural networks. We present six case studies in the visual representation of neural networks and examine the particular representational challenges posed by these algorithms. Finally we summarise the ideas raised in the case studies as a set of takeaways for researchers engaging in this area.\n"}] }