Forging Emotions: a deep learning experiment on emotions and art
Affective computing is an interdisciplinary field that studies computational methods that relate to or influence emotion. These methods have been applied to interactive media artworks, but they have focused on affect detection rather than affect generation. Affect generation, in contrast, requires exploring computationally creative methods, which have recently been driven by Generative Adversarial Networks (GANs), a deep learning method. The experiment presented in this paper, Forging Emotions, explores the use of visual emotion datasets and the working processes of GANs for visual affect generation, that is, for generating images that can convey or trigger specified emotions. The experiment concludes that the methodology computer science researchers have used so far to build image datasets describing high-level concepts such as emotions is insufficient, and it proposes instead utilizing emotional networks of associations drawn from psychology research. Forging Emotions also concludes that generating affect visually by merely reproducing basic findings from psychology, such as the use of bright or dark colours, does not seem adequate. Research efforts should therefore aim to understand the structure of trained GANs and to explore compositional GANs in order to produce genuinely novel compositions that can convey or trigger emotions through the subject matter of the generated images.
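Although the article itself includes no code, the kind of setup the abstract refers to, training a GAN on emotion-labelled images so that a specified emotion can steer generation, can be illustrated. The following is a minimal, hypothetical PyTorch sketch of a conditional DCGAN-style model, not the paper's actual method: the eight-class emotion taxonomy, the 32×32 resolution, the network sizes and the random placeholder batch are all illustrative assumptions standing in for a real visual emotion dataset.

```python
import torch
import torch.nn as nn

N_EMOTIONS = 8   # e.g. an eight-category emotion taxonomy (illustrative assumption)
Z_DIM = 100      # latent noise dimension
IMG_CH = 3       # RGB images, 32x32 in this sketch


class Generator(nn.Module):
    """Maps a noise vector plus an emotion label to an image."""
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_EMOTIONS, N_EMOTIONS)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(Z_DIM + N_EMOTIONS, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, IMG_CH, 4, 2, 1), nn.Tanh(),  # -> (B, 3, 32, 32)
        )

    def forward(self, z, labels):
        cond = self.label_emb(labels)                       # (B, N_EMOTIONS)
        x = torch.cat([z, cond], dim=1)[:, :, None, None]   # (B, Z_DIM + N_EMOTIONS, 1, 1)
        return self.net(x)


class Discriminator(nn.Module):
    """Scores image/label pairs as real or generated."""
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_EMOTIONS, N_EMOTIONS)
        self.net = nn.Sequential(
            nn.Conv2d(IMG_CH + N_EMOTIONS, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 1, 4, 1, 0),
        )

    def forward(self, imgs, labels):
        b, _, h, w = imgs.shape
        cond = self.label_emb(labels).view(b, N_EMOTIONS, 1, 1).expand(b, N_EMOTIONS, h, w)
        return self.net(torch.cat([imgs, cond], dim=1)).view(b, -1).mean(dim=1)


G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

# Placeholder batch: in a real experiment these tensors would come from an
# emotion-labelled picture dataset rather than random values.
real_imgs = torch.rand(16, IMG_CH, 32, 32) * 2 - 1
labels = torch.randint(0, N_EMOTIONS, (16,))

# Discriminator step: real image/label pairs should score 1, generated ones 0.
fake_imgs = G(torch.randn(16, Z_DIM), labels).detach()
d_loss = bce(D(real_imgs, labels), torch.ones(16)) + bce(D(fake_imgs, labels), torch.zeros(16))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator accept generated images as real.
g_loss = bce(D(G(torch.randn(16, Z_DIM), labels), labels), torch.ones(16))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In practice the placeholder tensors would be replaced by batches from an affective picture dataset and the two alternating steps repeated over many epochs; the point of the conditional design is that both networks receive the emotion label, which is what ties the generated imagery to a specified emotion.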
Article Details
This work is licensed under a Creative Commons Attribution 4.0 International License.
(c) Amalia Foka, 2023
Copyright
For all articles published in Artnodes that are subject to a Creative Commons Attribution 4.0 International licence, copyright is retained by the author(s). The complete text of the licence can be consulted at http://creativecommons.org/licenses/by/4.0/. You may copy, distribute, transmit and adapt the work, provided you attribute it (authorship, journal name, publisher) in the manner specified by the author(s) or licensor(s).
Authors are responsible for obtaining the necessary licences for the images that are subject to copyright.
Assignment of intellectual property rights
The author non-exclusively transfers the rights to use (reproduce, distribute, publicly broadcast or transform) and market the work, in full or in part, to the journal’s editors in all present and future formats and modalities, in all languages, for the lifetime of the work and worldwide.
I hereby declare that I am the original author of the work. The editors shall thus not be held responsible for any obligation or legal action that may derive from the work submitted in terms of violation of third parties’ rights, whether intellectual property, trade secret or any other right.
Amalia Foka, University of Ioannina
A Greek creative Artificial Intelligence researcher and educator who explores the intersection of computer science and art. Her work uses different artificial intelligence technologies and social media data to generate and study art, and has been presented and published internationally in venues such as the Leonardo Journal (MIT Press), the WRO Media Art Biennale, ISEA, EVA London and many more. She is currently an Assistant Professor in Computer Science: Multimedia Applications for the Arts at the School of Fine Arts, University of Ioannina, Greece, where she teaches courses on creative coding, creative interactive systems, and generative AI art. She holds a BEng in Computer Systems Engineering (1998) and an MSc in Advanced Control (1999) from the University of Manchester Institute of Science & Technology (UMIST) in the United Kingdom, and a PhD in Robotics (2005) from the Department of Computer Science of the University of Crete.
Similar Articles
- Pilar Rosado Rodrigo, Ferran Reverter Comes, Panoramic views on the collective visual heritage through convolutional neural networks. The exhibitions Revolutionary Arkive and Mnemosyne 2.0 by Pilar Rosado, Artnodes: No. 26 (July 2021). NODE 26. AI, Arts & Design: Questioning Learning Machines (Guest Editors: A. Burbano & R. West)
- Sonia Ríos Moyano, Leticia Crespillo Marí, Javier González Torres, Anthropological narratives of machinic otherness at the dawn of posthuman and transhuman theories. A first approach from movies and streaming series, Artnodes: No. 32 (July 2023). NODE 32. Possibles III (Editors: Pau Alsina & Andrés Burbano)
- Stanislav Milovidov, Content policy and access limitations on commercial neural networks as an incentive to artivism, Artnodes: No. 33 (January 2024). NODE 33. Media Artivism: On the Archaeology and History of Digital Culture for Social Change (Guest Editors: Carolina Fernández-Castrillo & Diego Mantoan)
- Martin Caeiro Rodríguez, Antonia María Muñiz de la Arena, Expressive cognition as a relationship experience of art and science in pre-university education, Artnodes: No. 24 (July 2019). NODE 24. After post-truth (Editor: Jorge Luis Marzo)
- Jordan Fraser Emery, Alba Marín, Body and researcher’s gaze with 360° immersive video: an exploratory case study on the artivism in São Paulo, Artnodes: No. 33 (January 2024). NODE 33. Media Artivism: On the Archaeology and History of Digital Culture for Social Change (Guest Editors: Carolina Fernández-Castrillo & Diego Mantoan)
- Marietta Radomska, Mayra Citlalli Rojo Gómez, Margherita Pevere, Terike Haapoja, Non/Living Queerings, Undoing Certainties, and Braiding Vulnerabilities: A Collective Reflection, Artnodes: No. 27 (January 2021). NODE 27. Arts in the Time of Pandemic (Guest Editors: Laura Benítez & Erich Berger)
- Victor Flores, The Metrics of Landscape. Stereo fieldwork by Francisco Afonso Chaves and other Portuguese Explorers, Artnodes: No. 21 (June 2018). NODE 21. Media Archaeology (Editors: Pau Alsina, Ana Rodríguez, Vanina Hofman)
- Fernando Herraiz-García, Silvia de Riba Mayoral, Laura Marchena Ricis, The potential of the arts to imagine another education. Learning from movements, positions and transits in the case of the Pepa Colomer School, Artnodes: No. 29 (January 2022). NODE 29. Ecology of the imagination (Guest Editor: Marina Garcés)
- Miguel Alfonso Bouhaben, Eurocentric artistic research and the epistemic-aesthetic decolonization, Artnodes: No. 21 (June 2018). NODE 21. Media Archaeology (Editors: Pau Alsina, Ana Rodríguez, Vanina Hofman)
- Bruno Caldas Vianna, Generative Art: Between the Nodes of Neuron Networks, Artnodes: No. 26 (July 2021). NODE 26. AI, Arts & Design: Questioning Learning Machines (Guest Editors: A. Burbano & R. West)