Camera Adversaria

Kieran Browne
Ben Swift
Terhi Nurmikko-Fuller

Surveillance Capitalism

Surveillance capitalism is not a technology itself, but a logic; one that “imbues technology and commands it into action”.

Shoshana Zuboff. The Age of Surveillance Capitalism: The fight for a human future at the new frontier of power. Profile Books (2019).


Table 1: Ways of resisting surveillance

Image classification

ƒ( [photo] ) →
  .002  house
  .012  banana
  .870  bull
  .010  tennis ball

Photo: Alex Parkes (Unsplash)
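The slide above shows a classifier ƒ mapping an image to a probability distribution over labels. A minimal numpy sketch of that final step, with hypothetical raw scores (logits) standing in for a real network's output; the label set and numbers are illustrative only:

```python
import numpy as np

def softmax(logits):
    """Convert raw classifier scores into a probability distribution."""
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

# Hypothetical logits for f(image) over four labels, echoing the slide.
labels = ["house", "banana", "bull", "tennis ball"]
logits = np.array([-2.0, -0.4, 3.87, -0.6])
probs = softmax(logits)

for label, p in zip(labels, probs):
    print(f"{p:.3f}  {label}")
```

The probabilities sum to one, and the highest-scoring label ("bull" here) becomes the prediction.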

Adversarial Perturbations

ƒ( [photo] + [noise] ) →
  .001  house
  .009  banana
  .960  school bus
  .012  tennis ball

Photo: Alex Parkes (Unsplash)
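An adversarial perturbation is a small, targeted change to the input that flips the classifier's output. A minimal numpy sketch of the fast gradient sign method (Goodfellow et al. 2014) against a toy linear softmax model; the weights, shapes, and epsilon are illustrative, not any real network:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy linear classifier standing in for f; weights are illustrative only.
n_classes, n_pixels = 4, 64
W = rng.normal(size=(n_classes, n_pixels))
x = rng.normal(size=n_pixels)          # a stand-in "image"

p = softmax(W @ x)
true_class = int(p.argmax())

# Fast-gradient-sign step: nudge every pixel by +/- eps in the direction
# that increases the loss on the current prediction.
y = np.zeros(n_classes)
y[true_class] = 1.0
grad_x = W.T @ (p - y)                 # d(cross-entropy)/dx for a linear model
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

p_adv = softmax(W @ x_adv)
print("confidence before:", p[true_class], "after:", p_adv[true_class])
```

Because each pixel moves by at most eps, the perturbation is small, yet the model's confidence in the original label drops; on real deep networks the same idea can change the predicted class entirely, as on the slide.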

Machine vision ≠ human vision

“Universal Adversarial Perturbations”
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. “Universal adversarial perturbations.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.

“With single noise patterns [we are] able to fool a model on up to 90% of the dataset”
Kenneth T Co, Luis Muñoz-González, and Emil C Lupu. "Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks." arXiv preprint arXiv:1810.00470 (2018).
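The striking point of these results is that one fixed, procedurally generated pattern can fool a model on most inputs, with no per-image optimisation. A simplified numpy sketch of that idea: a single sinusoidal pattern (a stand-in for the Perlin/Gabor noise used in the cited work, not their generator) is added unchanged to every image in a batch:

```python
import numpy as np

def procedural_pattern(h, w, freq=8.0, angle=0.7, amplitude=0.05):
    """One fixed sinusoidal pattern; a simplified stand-in for the
    Perlin/Gabor procedural noise used in black-box attacks.
    freq, angle and amplitude are illustrative parameters."""
    ys, xs = np.mgrid[0:h, 0:w] / max(h, w)
    phase = np.cos(angle) * xs + np.sin(angle) * ys
    return amplitude * np.sin(2 * np.pi * freq * phase)

# Generate the pattern once, then apply it unchanged to every image.
pattern = procedural_pattern(32, 32)
images = np.random.default_rng(1).uniform(size=(10, 32, 32))
perturbed = np.clip(images + pattern, 0.0, 1.0)   # keep valid pixel range
print(perturbed.shape)
```

The amplitude bounds the per-pixel change, so the pattern stays visually subtle while being "universal": the same noise is reused across the whole dataset.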


Value-laden language

“Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification”
Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial Attacks on Neural Network Policies. (2017).

Value-laden language

“This linear behavior suggests that cheap, analytical perturbations of a linear model should also damage neural networks”
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and Harnessing Adversarial Examples. (2014).

Value-laden language

“Know your adversary: modeling threats ‘If you know the enemy and know yourself, you need not fear the result of a hundred battles.’ (Sun Tzu, The Art of War, 500 BC)”
Battista Biggio and Fabio Roli. Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning. Pattern Recognition 84 (2018), 317–331.

Value-laden language

“It is important to ensure that such algorithms are robust to malicious adversaries”
Kenneth T Co, Luis Muñoz-González, and Emil C Lupu. Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks. (2018).

“[Design] is ideological and most design affirms the status quo, reinforcing cultural, social, technical and economic expectations...”
Anthony Dunne and Fiona Raby. Design noir: The secret life of electronic objects. Birkhäuser (2001).