A matter of meaning, life and death. Text, AI and the human condition

Matteo Stocchetti

The main reason for concern about the social impact of AI textual functions relates not primarily to the labour market or education, but to the political economy of meaning. The starting point of this critique is the analysis of ‘artificial intelligence’ as a metaphor that hides and mystifies fundamental differences between human and AI textual functions. This metaphor reduces intelligence to its computational and instrumental aspects and establishes instrumental rationality as a normative model for human intelligence. Contra these implications, I argue the case for the revaluation of meaning-making and the textual functions of intelligence as an adaptive response to the problem of death, which is uniquely human. These functions are politically relevant because human texts are the tools for the transformation of the subjective experience of life and death into the intersubjective sense of reality. To delegate these functions to advanced forms of computational technology is tempting but risky, because the expurgation of subjectivity and, more broadly, the suppression of the dilemmas constituting the human condition weaken fundamental evolutionary competences, enhance the oppressive potential of instrumental reason, and lead to the unfreedom of the post-political condition.

Keywords
Artificial intelligence, AI, Functional literacy, Death, Meaning of life, Politics of the real, Instrumental rationality

Article Details

How to Cite
Stocchetti, Matteo. “A matter of meaning, life and death. Text, AI and the human condition”. Hipertext.net, 2023, no. 26, pp. 77-81, doi:10.31009/hipertext.net.2023.i26.12.
Author Biography

Matteo Stocchetti, Arcada University of Applied Sciences, Finland

Matteo Stocchetti is Docent of Political Communication at the University of Helsinki and Åbo Akademi University. He is also Principal Lecturer at Arcada University of Applied Sciences. Among his recent publications: Stocchetti, M. (2023). Indeterminacy, Performativity and the ‘Dialectics of the Real’: The Problem of Knowledge in the Analysis of Visual Politics. In D. Lilleker & A. Veneti (Eds.), Research Handbook on Visual Politics (pp. 335-344). Cheltenham: Edward Elgar Publishing; and Stocchetti, M. (2022). Knowledge, Democracy and the Politics of (cyber)fear. Rivista di Digital Politics, 2(3), 349-366. https://www.rivisteweb.it/doi/10.53227/106450

References

Baudrillard, J. (2017). Symbolic Exchange and Death. Trans. Iain Hamilton Grant. Sage. (Original work published 1993)

Becker, E. (1973). The Denial of Death. The Free Press.

Boucher, P. (2021). What if we chose new metaphors for artificial intelligence? EPRS - European Parliamentary Research Service. https://www.europarl.europa.eu/thinktank/en/document/EPRS_ATA(2021)690024

Carey, J. W. (1988). Communication as Culture. Essays on Media and Society. Unwin Hyman.

Dahlgren, P. (2018). Media, Knowledge and Trust: The Deepening Epistemic Crisis of Democracy. Javnost - The Public. Journal of the European Institute for Communication and Culture, 25(1-2), 20-27. https://doi.org/10.1080/13183222.2018.1418819

Dyer-Witheford, N., Mikkola Kjøsen, A., & Steinhoff, J. (2019). Inhuman Power: Artificial Intelligence and the Future of Capitalism. Pluto Press.

Ellul, J. (1964). The Technological Society. Alfred A. Knopf & Random House, Inc.

Feenberg, A. (2009). Critical Theory of Technology. In J. K. B. Olsen, S. A. Pedersen, & V. F. Hendricks (Eds.), A Companion to the Philosophy of Technology (pp. 146-153). Blackwell.

Frankl, V. E. (1984). Man’s Search for Meaning. Washington Square Press. (Original work published 1959)

Freire, P. (2013). Education For Critical Consciousness. Bloomsbury. (Original work published 1974)

Freud, S. (1959). Thoughts for the Times on War and Death. In S. Freud, Collected Papers (pp. 288-317). Basic Books.

Freud, S. (2005). Civilization and Its Discontents. W.W. Norton & Company. (Original work published 1930)

Fromm, E. (2003). The Fear of Freedom. Routledge. (Original work published 1942)

Goffman, E. (1974). Frame Analysis. An Essay on the Organization of Experience. Harper & Row.

Herbrechter, S. (2018). Critical Posthumanism. In R. Braidotti, & M. Hlavajova (Eds.), Posthuman Glossary (pp. 94-96). Bloomsbury.

Horkheimer, M., & Adorno, T. (2002). Dialectic of Enlightenment: Philosophical Fragments. Stanford University Press. (Original work published 1947)

Innis, H. A. (1986). Empire and Communications. Press Porcépic. (Original work published 1950)

Innis, H. A. (2008). The Bias of Communication. University of Toronto Press. (Original work published 1951)

Lee, D. (2020). Birth of Intelligence: From RNA to Artificial Intelligence. Oxford University Press.

McLuhan, M. (1994). Understanding Media: The Extensions of Man. The MIT Press. (Original work published 1964)

Millar, I. (2021). The Psychoanalysis of Artificial Intelligence. Palgrave Macmillan.

Mims, C. (2021, July 31). Why Artificial Intelligence Isn’t Intelligent. The Wall Street Journal. https://www.wsj.com/articles/why-artificial-intelligence-isnt-intelligent-11627704050

Mumford, L. (1934). Technics and Civilization. Routledge and Kegan Paul.

Noble, S. (2023, February 11). A bridge too far: are our AI metaphors failing us? Idalab.de. https://idalab.de/artifical-intelligence-metaphors-are-failing-us

Possati, L. M. (2021). The Algorithmic Unconscious: How Psychoanalysis Helps in Understanding AI. Routledge.

Roberge, J., & Castelle, M. (2021). Toward an End-to-End Sociology of 21st-Century Machine Learning. In J. Roberge, & M. Castelle (Eds.), The Cultural Life of Machine Learning: An Incursion into Critical AI Studies (pp. 1-30). Palgrave Macmillan.

Stocchetti, M. (2009). ‘War Is Love’: Gender and War Narrative in Transnational Broadcasting. In F. Festic (Ed.), Betraying the Event: Constructions of Victimhood in Contemporary Cultures (pp. 117-139). Cambridge Scholars Press.

Willcocks, L. (2020, April 23). Why Misleading Metaphors Are Fooling Managers About the Use of AI. Forbes. https://www.forbes.com/sites/londonschoolofeconomics/2020/04/23/why-misleading-metaphors-are-fooling-managers-about-the-use-of-ai