Computing Culture: a humanities approach to artificial neural networks
Kieran Browne — Thesis Proposal Review
The problem
Semantics derived automatically from language corpora contain human-like biases
Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan (2017). Science, 356(6334), 183–186.
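Caliskan et al. quantify these biases with the Word Embedding Association Test (WEAT), which compares how strongly two sets of target words (e.g. flower names vs. insect names) associate with two sets of attribute words (e.g. pleasant vs. unpleasant) in a learned embedding space. A minimal sketch of the test statistic in Python; the load_vectors() helper is hypothetical, standing in for any word-to-vector lookup:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vec):
    # s(w, A, B): how much more strongly word w associates with
    # attribute set A than with attribute set B.
    return (np.mean([cosine(vec[w], vec[a]) for a in A])
            - np.mean([cosine(vec[w], vec[b]) for b in B]))

def weat(X, Y, A, B, vec):
    # WEAT test statistic from Caliskan et al. (2017): total association
    # of target set X minus target set Y with attributes A vs. B.
    return (sum(association(x, A, B, vec) for x in X)
            - sum(association(y, A, B, vec) for y in Y))

# vec = load_vectors(...)  # hypothetical: dict mapping word -> np.array
# weat(flowers, insects, pleasant, unpleasant, vec) > 0 reproduces the
# paper's flowers/insects result; the same test applied to names and
# occupations surfaces racial and gender biases.
```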
The nightmare scenario
Explaining visually
Theory
Johanna Drucker
Humanities Approaches to Graphical Display
Rethinking our approach to visualisation and the assumptions that underpin it
Constructivist notions of data
Representing ambiguity
Designing for interpretation and sense-making
The way we use and understand neural networks is built on realist models of knowledge
Proposed methods
Critique the current visual language of neural networks with care for the assumptions it brings.
Design experimental visualisations and interfaces to explore the possibilities of visual representation.
Questions
Artificial neural networks are a potent technology capable of extracting
and reproducing complex patterns from data, including cultural,
linguistic and visual processes previously beyond the reach of
computation. However, the internal functions of neural networks are
notoriously difficult to understand. The researchers and technicians
who design and train neural networks can rarely explain their behaviour.
This is particularly problematic in cases where neural networks learn,
and thus perpetuate, cultural biases. My research inherits its
theoretical framework from the (digital) humanities, particularly the
ideas of Johanna Drucker regarding graphical display and constructivist
notions of data. I intend to use mixed methods in my research.
This includes critical analysis of existing representations of neural
networks and practice-led research, specifically developing experimental
visualisations and novel exploratory interfaces.
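To make the black-box problem concrete: even in a toy network every parameter is open to inspection, and yet the trained weights answer to no human concept. A minimal sketch, a hand-rolled network learning XOR (illustrative only, not a method proposed in this thesis):

```python
import numpy as np

# A tiny 2-4-1 network trained on XOR by full-batch gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                    # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)         # backprop (squared error)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
print(W1)            # fully visible, but what does any one weight mean?
```

The weights fully determine the network's behaviour and explain none of it; scaled up to millions of parameters, this is the gap the thesis addresses.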
- introduction
- what's a neural network
- what's a black box
- why is the neural network a black box
- the black box getting darker
- semantics
- we want our explanations in terms of meaningful concepts
- extracting semantics (see the probing sketch at the end of this outline)
- problems extracting semantics
- marked and unmarked categories
- networks often uncritically applied to cultural expressive forms
- why is this a problem
- examples of cultural bias in neural networks
- computer science is ill-equipped to handle questions of culture and bias
- code needs criticism
- scope
- theory
- Johanna Drucker
- humanities approaches to graphical display / interface theory
- data vs capta
- precedents (why visualisation)
- knowledge as visually mediated
- when you have a hammer everything looks like a nail
- attempts to visualise neural networks borrow techniques from data visualisation and graphical user interfaces; but neural networks are not data, and not much like ordinary digital technology
- methodologies
- practice led research
- research through making
- interfaces
- visualisations
- critical artworks
- critical approach to neural networks
- critical approach to data (capta)
- redescribing neural networks as cultural computing
- neural networks as cultural actors
- training as enculturing
- what are the semantics of internal representations
- originality
- related work
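The question of what semantics internal representations carry (and the earlier item on extracting semantics) is often approached by probing a learned embedding space, for instance through nearest neighbours. A minimal sketch, assuming vec is a dict mapping words to vectors; load_vectors() is again hypothetical:

```python
import numpy as np

def nearest(word, vec, k=5):
    # Nearest neighbours in embedding space: a crude probe of what
    # "meaning" a representation has absorbed from its training corpus.
    w = vec[word]
    sims = {u: (w @ v) / (np.linalg.norm(w) * np.linalg.norm(v))
            for u, v in vec.items() if u != word}
    return sorted(sims, key=sims.get, reverse=True)[:k]

# vec = load_vectors(...)  # hypothetical word -> vector mapping
# nearest('nurse', vec) often surfaces gendered associations,
# one concrete route to the cultural patterns the outline names.
```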