Surveillance capitalism is not a technology itself, but a logic; one that “imbues technology and commands it into action”.
Shoshana Zuboff. The Age of Surveillance Capitalism: The fight for a human future at the new frontier of power. Profile Books (2019).
Anthony Dunne and Fiona Raby. Design noir: The secret life of electronic objects. Birkhäuser (2001).
Critical design is “a research through design methodology that foregrounds the ethics of design practice” and “make[s] consumers more critical of their everyday lives”.
Jeffrey Bardzell and Shaowen Bardzell. “What is critical about critical design?” Proceedings of the SIGCHI conference on human factors in computing systems. ACM 2013.
“We took cutting edge research straight out of an academic research lab and launched it, in just a little over six months.”
Chuck Rosenberg. “Improving Photo Search: A Step Across the Semantic Gap.” Google AI Blog https://ai.googleblog.com/2013/06/improving-photo-search-step-across.html
“Until recently, [smartphone] cameras behaved mostly as optical sensors... The next generation of cameras, however, will have the capability to blend hardware and computer vision algorithms that operate as well on an image’s semantic content”
Marc Levoy. “Portrait mode on the Pixel 2 and Pixel 2 XL smartphones.” Google AI Blog https://ai.googleblog.com/2017/10/portrait-mode-on-pixel-2-and-pixel-2-xl.html
“Modern [Deep Learning] image recognition methods ... can recover hidden information from images protected by various forms of obfuscation [including] pixelation, blurring ... and P3 [jpeg encryption which makes] images unrecognizable by humans.”
Richard McPherson, Reza Shokri, and Vitaly Shmatikov. “Defeating image obfuscation with deep learning.” arXiv preprint arXiv:1609.00408 (2016).
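To see why pixelation is a weak defense, it helps to look at what the operation actually does: each k × k block is replaced by its mean, so the block means, and whatever they correlate with, survive for a trained recognizer to exploit. A minimal illustrative sketch (not the paper's code), using plain Python lists as a grayscale image:

```python
# Illustrative sketch: pixelation replaces each k x k block of a
# grayscale image with the block's mean value. Fine detail is lost,
# but the pattern of block means still leaks information that a
# learned model (as in McPherson et al.) can exploit.

def pixelate(img, k):
    """Pixelate img (list of rows of grayscale values) with k x k blocks."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, k):
        for bx in range(0, w, k):
            block = [img[y][x]
                     for y in range(by, min(by + k, h))
                     for x in range(bx, min(bx + k, w))]
            mean = sum(block) / len(block)
            for y in range(by, min(by + k, h)):
                for x in range(bx, min(bx + k, w)):
                    out[y][x] = mean
    return out

# A 2 x 2 image collapses to a single mean value:
print(pixelate([[0, 100], [200, 255]], 2))
```

The point of the sketch is only that pixelation is a deterministic, lossy averaging, not encryption: a classifier trained on pixelated examples learns to map the surviving block means back to identities.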
Machine vision ≠ human vision
“Universal Adversarial Perturbations”
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. “Universal adversarial perturbations.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
“With single noise patterns [we are] able to fool a model on up to 90% of the dataset”
Kenneth T Co, Luis Muñoz-González, and Emil C Lupu. “Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks.” arXiv preprint arXiv:1810.00470 (2018).
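What makes these attacks “universal” is that a single fixed perturbation is computed once and then added to every input, rather than being optimized per image. A hedged sketch of that application step (the names and the clipping range are illustrative; computing a good pattern is the hard part the papers address):

```python
# Sketch of applying a *universal* perturbation: one shared noise
# pattern `delta` is added to every image in a batch, then clipped
# back to the valid pixel range. Per the quotes above, a single
# well-chosen pattern can fool a model on most of a dataset.

def apply_universal_perturbation(images, delta, lo=0.0, hi=1.0):
    """Add one shared perturbation (same shape as each image) to a batch."""
    return [[[min(hi, max(lo, px + d))
              for px, d in zip(row, drow)]
             for row, drow in zip(img, delta)]
            for img in images]

batch = [[[0.2, 0.9]], [[0.5, 0.5]]]   # two tiny 1 x 2 "images"
delta = [[0.3, 0.3]]                    # one fixed noise pattern
print(apply_universal_perturbation(batch, delta))
```

The contrast with per-image attacks matters for the surveillance framing: a universal pattern can be precomputed, printed, or worn, with no access to the target model at inference time.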
“Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification”
Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial Attacks on Neural Network Policies. (2017).
“This linear behavior suggests that cheap, analytical perturbations of a linear model should also damage neural networks”
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and Harnessing Adversarial Examples. (2014).
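The “cheap, analytical perturbation” Goodfellow et al. describe is the fast gradient sign method: for a linear score w·x, the gradient with respect to the input is just w, so stepping each pixel by ε in the direction sign(w) shifts the score by ε·Σ|wᵢ| in one move. A minimal sketch of that linear case (toy numbers, not the paper's experiments):

```python
# Sketch of the FGSM intuition from Goodfellow et al.: for a linear
# score w.x, the input gradient is w itself, so the one-step
# perturbation x + eps * sign(w) raises the score by eps * sum(|w|).
# The paper's point is that this same cheap step also damages
# deep networks, which behave locally much like linear models.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_linear(x, w, eps):
    """One FGSM step against a linear score w.x."""
    return [xi + eps * sign(wi) for xi, wi in zip(x, w)]

def score(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

x = [0.5, -0.2, 0.1]
w = [1.0, -2.0, 0.5]
x_adv = fgsm_linear(x, w, eps=0.1)
# The score shift equals eps * sum(|w|) = 0.1 * 3.5 = 0.35,
# even though each coordinate moved by only 0.1.
print(score(x_adv, w) - score(x, w))
```

The perturbation is imperceptibly small per pixel, yet its effect on the score grows with the input dimension, which is exactly why high-dimensional image classifiers are so exposed.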
“Know your adversary: modeling threats ‘If you know the enemy and know yourself, you need not fear the result of a hundred battles.’ (Sun Tzu, The Art of War, 500 BC)”
Battista Biggio and Fabio Roli. Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning. Pattern Recognition 84 (2018), 317–331.
“It is important to ensure that such algorithms are robust to malicious adversaries”
Kenneth T Co, Luis Muñoz-González, and Emil C Lupu. Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks. (2018).
“...design is ideological and most design affirms the status quo, reinforcing cultural, social, technical and economic expectations...”
Anthony Dunne and Fiona Raby. Design noir: The secret life of electronic objects. Birkhäuser (2001).