Identification, classification and control: close ties analysed in reference to artistic practices in the heart of artificial intelligence


Hugo Felipe Idárraga

Artificial intelligence (AI) is preceded by a long history of efforts to create beings endowed with artificial movement and intelligence. The objective of this article is to show how, throughout this history, which includes current developments in the fields of Machine Learning (ML) and Deep Neural Networks (DNNs), the tasks of surveillance and control through the identification and classification of people, things and events have been central both to the myths and theories about automatons, homunculi, androids, robots and cyborgs, and to the various attempts to make them a reality. It will further be argued that the desire and effort to imagine and create these beings are neither innocent nor circumstantial; rather, they stem to a large extent from a patriarchal vision of the world in which everything that exists must be subjected to a surveillance that guarantees control over the real. To this end, reference will be made, on the one hand, to histories that show the predominance of these tasks and the worldview hidden behind them and, on the other, to artistic practices and aesthetic representations that have questioned the identifying and classifying operations of AI as means of surveillance and control.

Keywords
artificial intelligence, art, identification, classification, surveillance, control, adversarial examples

Article Details

How to Cite
Idárraga, Hugo Felipe. “Identification, classification and control: close ties analysed in reference to artistic practices in the heart of artificial intelligence”. Artnodes, no. 26, pp. 1-9, doi:10.7238/a.v0i26.3361.
Author Biography

Hugo Felipe Idárraga, University of Los Andes

Philosopher from the National University of Colombia, with a Master’s in Communication from the Pontifical Xavierian University. His interests include image theory, computer vision and artificial intelligence. He is currently pursuing a Master’s degree in Digital Humanities at the University of Los Andes.

