Generative artificial intelligence and local media: An analysis of territorial bias in five language models

Barbara Sarrionandia
Simón Peña-Fernández
Jesús-Ángel Pérez-Dasilva

The development of large language models (LLMs) has transformed how information is searched for and accessed in digital environments. This study examines how five LLMs (ChatGPT, Claude, Gemini, Copilot, and Perplexity) respond to queries about media-related cases that occurred in the Basque Country between 2023 and 2025, as well as to general questions on current affairs at the local, regional, and national levels. The method applied a set of prompts under controlled technical conditions and evaluated the responses for length, source citation, traceability, and territorial contextualisation. The findings reveal that ChatGPT and Perplexity offer the most extensive, traceable, and contextually rich responses, while Claude and Gemini display notable opacity and limited regional coverage. A systematic tendency to prioritise national media is observed, even in queries concerning local and regional issues, thereby limiting informational diversity. These differences are not merely technical but structural: they stem from corpus design, retrieval architecture, and licensing agreements between media outlets and developers, which together create a territorial information gap in the use of generative AI.
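
To illustrate the kind of scoring this methodology implies, the following Python sketch rates a single model response on the four dimensions named above (length, source citation, traceability, and territorial contextualisation). Everything in it, including the outlet lists, the URL pattern, and the field names, is our own illustrative assumption, not the study's actual coding instrument.

import re
from dataclasses import dataclass

# Hypothetical domain lists used to classify cited sources by territory;
# the study's real classification of Basque vs. national outlets may differ.
LOCAL_OUTLETS = {"berria.eus", "deia.eus", "elcorreo.com", "naiz.eus"}
NATIONAL_OUTLETS = {"elpais.com", "elmundo.es", "abc.es", "rtve.es"}

# Extracts the host part of any cited URL in a response.
URL_PATTERN = re.compile(r"https?://(?:www\.)?([^/\s)]+)")

@dataclass
class ResponseScore:
    words: int                # length of the response in words
    cited_domains: list       # domains extracted from cited URLs
    local_citations: int      # citations pointing to local/regional outlets
    national_citations: int   # citations pointing to Spain-wide outlets
    traceable: bool           # True if at least one cited source can be followed

def score_response(text: str) -> ResponseScore:
    """Rate one model response on length, citation, traceability, territory."""
    domains = URL_PATTERN.findall(text)
    return ResponseScore(
        words=len(text.split()),
        cited_domains=domains,
        local_citations=sum(d in LOCAL_OUTLETS for d in domains),
        national_citations=sum(d in NATIONAL_OUTLETS for d in domains),
        traceable=bool(domains),
    )

if __name__ == "__main__":
    sample = ("Regional daily El Correo (https://www.elcorreo.com/) and "
              "El País (https://elpais.com/) both covered the case.")
    print(score_response(sample))

Aggregating such per-response scores across models and across local, regional, and national queries would surface the territorial imbalance the study reports, for instance a national-to-local citation ratio per model.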

Keywords
Bias, Information sources, Traceability, Local information, Media, Large language models, Generative artificial intelligence

How to Cite
Sarrionandia, Barbara et al. “Generative artificial intelligence and local media: An analysis of territorial bias in five language models”. Hipertext.net, 2025, no. 31, pp. 201-213, doi:10.31009/hipertext.net.2025.i31.15.
Author Biographies

Barbara Sarrionandia, Universidad del País Vasco / Euskal Herriko Unibertsitatea

Barbara Sarrionandia holds a degree in Journalism from the University of Navarra and a Master's Degree in Secondary Education, specialising in Geography and History, from the International University of La Rioja (UNIR). Her research focuses on artificial intelligence, disinformation, and media literacy. Over the past two decades, she has pursued her professional career in media and institutions at both national and international levels.

Simón Peña-Fernández, Universidad del País Vasco / Euskal Herriko Unibertsitatea

Simón Peña-Fernández is Full Professor in the Department of Journalism at the University of the Basque Country (UPV/EHU). His main areas of research are cyberjournalism, digital communication, and artificial intelligence. Together with Koldobika Meso, he is Principal Investigator of the research project “The Impact of Artificial Intelligence and Algorithms on Digital Media, Professionals, and Audiences” (PID2022-138391OB-I00), funded by the Spanish Ministry of Science and Innovation.

Jesús-Ángel Pérez-Dasilva, Universidad del País Vasco / Euskal Herriko Unibertsitatea

Jesús-Ángel Pérez-Dasilva is Full Professor in the Department of Journalism at the University of the Basque Country (UPV/EHU). His research interests include cyberjournalism, social communication, social media, and social innovation. He is Principal Investigator of the project “The Impact of Artificial Intelligence on Basque Media and Media Professionals” (US 23/10), funded by the UPV/EHU.
