Forging Emotions: a deep learning experiment on emotions and art


Amalia Foka
https://orcid.org/0000-0001-5436-991X

Affective computing is an interdisciplinary field that studies computational methods which relate to, arise from, or influence emotion. These methods have been applied to interactive media artworks, but they have focused on affect detection rather than affect generation. Affect generation calls for computationally creative methods, which have recently been driven by Generative Adversarial Networks (GANs), a deep learning technique. The experiment presented in this paper, Forging Emotions, explores the use of visual emotion datasets and the working processes of GANs for visual affect generation, that is, for generating images that can convey or trigger specified emotions. The experiment concludes that the methodology computer science researchers have used so far to build image datasets describing high-level concepts such as emotions is insufficient, and it proposes instead utilizing the networks of emotional associations described in psychology research. Forging Emotions also concludes that merely reproducing basic psychology findings, such as the use of bright or dark colours, does not seem adequate for generating affect visually. Therefore, research efforts should aim to understand the structure of trained GANs and to explore compositional GANs, in order to produce genuinely novel compositions that can convey or trigger emotions through the subject matter of the generated images.

Keywords:

deep learning, affective computing, visual emotion datasets, Generative Adversarial Network (GAN)
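The abstract critiques how image datasets describing emotions are built. Widely used affective picture databases provide normative valence and arousal ratings per image, and one common way to derive discrete emotion labels for conditional generation from such ratings is a coarse quadrant mapping. The sketch below is illustrative only: the function name, rating scale, thresholds, and image IDs are invented for this example, and the paper's argument is precisely that mappings this simple prove insufficient for conveying emotion.

```python
def emotion_quadrant(valence, arousal, midpoint=4.0):
    """Map a normative (valence, arousal) rating pair (assumed 1-7 scale)
    to a coarse quadrant label. Thresholds are hypothetical."""
    if valence >= midpoint:
        return "excited" if arousal >= midpoint else "contented"
    return "distressed" if arousal >= midpoint else "depressed"

# Hypothetical normative ratings: image_id -> (valence, arousal).
ratings = {"img_001": (6.2, 5.8), "img_002": (2.1, 6.0), "img_003": (5.5, 2.3)}

# Derive the (image_id, label) pairs a conditional GAN would train on.
labels = {img: emotion_quadrant(v, a) for img, (v, a) in ratings.items()}
```

A conditional GAN trained on such labels can only reproduce whatever surface regularities (e.g. brightness) correlate with each quadrant, which motivates the paper's turn toward networks of emotional associations instead.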

Article Details

How to Cite
Foka, Amalia. “Forging Emotions: a deep learning experiment on emotions and art”. Artnodes, no. 31, pp. 1-10, doi:10.7238/artnodes.v0i31.402397.
Author Biography

Amalia Foka, University of Ioannina

Amalia Foka is a Greek creative Artificial Intelligence researcher and educator who explores the intersection of computer science and art. Her work uses different artificial intelligence technologies and social media data to generate and study art, and has been presented and published internationally in venues such as Leonardo (MIT Press), the WRO Media Art Biennale, ISEA, and EVA London, among others. She is currently an Assistant Professor in Computer Science: Multimedia Applications for the Arts at the School of Fine Arts, University of Ioannina, Greece, where she teaches courses on creative coding, creative interactive systems, and generative AI art. She holds a BEng in Computer Systems Engineering (1998) and an MSc in Advanced Control (1999) from the University of Manchester Institute of Science & Technology (UMIST) in the United Kingdom, and a PhD in Robotics (2005) from the Department of Computer Science of the University of Crete.

