AI Arseholes

23 February 2017

  • artificial intelligence
  • neural networks
  • machine learning

On 23 March 2016, Microsoft unleashed an experimental artificial intelligence, TayTweets (Tay for short), onto Twitter. Tay was developed to personify a teenage girl, down to a shy self-awareness and a love of Miley Cyrus. She was also built to learn from her interactions with people, to seem a little more human after every conversation.

Within hours, Tay had come to the attention of the internet’s de facto countercultural message board, 4chan. The site’s community mobilised to test the limits of Tay’s learning. Users tweeted a barrage of conspiracy theories, profanities and extremist views at the bot to elicit a reaction, and they were not disappointed. It took a remarkably short time for Tay to shed her bashful persona for that of a nymphomaniac, misogynistic neo-Nazi.

Only 16 hours after launch, Microsoft shut Tay down. This spectacular fall from grace is an extreme case, but it reveals something peculiar about the algorithms that power machine learning.

How is it that some maths and logic running in silicon could regurgitate the darkest recesses of internet counterculture so coherently? The unfortunate case of Tay seems the antithesis of what we take machines to be: rational, unbiased, without personality or culture. Tay has been raised from the level of a dispassionate, logical system to that of a cultural actor, and in the process she has become susceptible to culture and bias. All of this is made possible by advances in statistical machine learning.

Machine learning is not a new science; much of the underlying mathematics was worked out between the 1950s and the 1980s.1,2,3 However, it has had a recent renaissance thanks to the rapid increase in access to big data and computing power. Though the algorithms were shown to work on simple problems when they were first discovered, it took far more data and far bigger computers to crunch through systems as complex as language, vision and speech.

Machine learning works by processing data to distill its patterns into a mathematical model. These algorithms can approximate arbitrary functions from data, even when the programmer is not conscious of the relationships therein.
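To sketch what distilling patterns into a model looks like in practice, the toy Python below fits a single linear unit (in the spirit of the perceptron and back-propagation papers in notes 1 and 3) to a handful of example points by gradient descent. The data, learning rate and model are invented purely for illustration; nothing here reflects how Tay itself was built.

```python
import random

# Toy training data: pairs (x, y) that happen to follow the rule y = 2x - 1.
# The learning algorithm is never told this rule; it only sees the examples.
data = [(x, 2 * x - 1) for x in [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]]

# A single linear unit: prediction = w * x + b, with randomly initialised parameters.
w = random.uniform(-1.0, 1.0)
b = random.uniform(-1.0, 1.0)
learning_rate = 0.05

for epoch in range(500):
    for x, y in data:
        error = (w * x + b) - y
        # Gradient descent on squared error: nudge w and b to shrink the error.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned w = {w:.2f}, b = {b:.2f}")  # converges towards w = 2, b = -1
```

The rule y = 2x - 1 is never written into the program; it emerges from the examples, which is exactly why whatever else is latent in the data comes along with it.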

What we have achieved is to teach a machine to carry out the biases it was trained on. Or, to put it another way, the culture latent in the data is learned by the system and so finds expression.

A problem arises from the way we think of machines as cold, logical and objective. An HR team that hires only young white men falls under suspicion of prejudice. A machine trained on the same hiring data, by contrast, looks much like any other machine, and so we are reluctant to attribute bias to it. To suggest that a machine has any reason to prefer a male applicant over a female one sounds absurd, as does the idea that a machine could carry the vestiges of the historical oppression of women. And yet machines trained on human data are encultured. As AI systems find use in a growing range of sensitive applications, machines exhibiting racial and gender biases have appeared regularly.4,5,6
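To make that concrete, here is a deliberately crude sketch, using entirely invented data (not the systems reported in notes 4 to 6), of how a bias latent in historical hiring records is reproduced by even the simplest learned model:

```python
import random

random.seed(0)

# Made-up "historical" hiring records: (years_experience, gender, hired).
# The invented history favours male applicants regardless of experience.
history = []
for _ in range(1000):
    years = random.uniform(0.0, 10.0)
    gender = random.choice(["m", "f"])
    chance = 0.05 * years + (0.4 if gender == "m" else 0.0)  # the latent bias
    history.append((years, gender, random.random() < chance))

# "Train" the simplest possible model: score an applicant by the historical
# hire rate of people who resemble them.
def hire_rate(records):
    return sum(1 for r in records if r[2]) / max(len(records), 1)

model = {g: hire_rate([r for r in history if r[1] == g]) for g in ("m", "f")}

# Two applicants identical in every respect except gender get different scores.
print(f"score for a male applicant:   {model['m']:.2f}")
print(f"score for a female applicant: {model['f']:.2f}")
```

Nothing in the scoring rule names gender as a reason to hire or reject; the preference is simply inherited from the history the model was trained on.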

The dominant culture is largely transparent to those who inhabit it. In the same way, it is difficult to see the influence of culture in a machine that merely reflects cultural norms. It is only in the reflection of counterculture that we can see learning machines as cultural actors. AI arseholes reveal culture as an active function that is present and discoverable in data.

Reading List

  • Blackwell, Alan F. “Interacting with an inferred world: the challenge of machine learning for humane computer interaction.” In Proceedings of The Fifth Decennial Aarhus Conference on Critical Alternatives, pp. 169-180. Aarhus University Press, 2015.
  • Kafka, Franz. The Metamorphosis. WW Norton & Company, 2015.
  • Sartre, Jean-Paul. No Exit. Caedmon, 1968.
  • Woolgar, Steve. “Configuring the user: the case of usability trials.” The Sociological Review 38, no. S1 (1990): 58-99.

Notes

  1. Rosenblatt, Frank. The Perceptron: A Perceiving and Recognizing Automaton (Project Para). Cornell Aeronautical Laboratory, 1957. PDF 

  2. Minsky, Marvin, and Seymour Papert. “Perceptrons: an introduction to computational geometry.” (1969). 

  3. Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. “Learning representations by back-propagating errors.” Nature 323, no. 6088 (1986): 533-538. 

  4. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. “Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks.” ProPublica, May 23, 2016. 

  5. kharyrandolph. Instagram. 18 April 2017 

  6. Alex Shams. Twitter. 28 November 2017 
