Between AI fear and digital agency: technological familiarity and risk perception of generative AI’s epistemic power / M. Barisione, I. Rama, F. Marolla. - In: INFORMATION, COMMUNICATION & SOCIETY. - ISSN 1369-118X. - (2025). [Epub ahead of print] [10.1080/1369118x.2025.2606101]

Between AI fear and digital agency: technological familiarity and risk perception of generative AI’s epistemic power

M. Barisione; I. Rama; F. Marolla
2025

Abstract

This study examines public perceptions of the risks associated with Generative Artificial Intelligence (GenAI), focusing on its potential 'epistemic power' (i.e., its capacity to redefine knowledge production), including its ability to blur the boundaries between real and fully synthetic content, thereby facilitating misinformation and potentially undermining democratic stability. We posit that social media use, GenAI familiarity, and the ability to discern synthetic content result in asymmetric risk perception patterns through mechanisms of digital habituation, agency, control, and fear. An online survey experiment (N = 1,800) shows that social media use and tech-savviness lower perceived risks, while unfamiliarity heightens 'epistemic fear'. Priming with GenAI political content increases risk sensitivity, especially among less AI-savvy respondents. By demonstrating the asymmetric role of exposure in modulating risk perceptions and bringing attention to the complex interplay between human agency and artificial intelligence, this study questions deterministic accounts of the epistemic power attributed to Generative AI technologies. Instead, it emphasizes how individuals' digital trajectories and modes of engagement inform their responses to AI-generated political content, but it also suggests that tech-savvy users may underestimate GenAI's societal threats due to overconfidence, while less experienced users may overestimate them as a result of digital exclusion and epistemic fear.
Keywords: epistemic power; Generative AI; misinformation/disinformation; online survey experiment; social media; societal risks
Sector GSPS-07/A - Sociology of Political Phenomena
Sector GSPS-06/A - Sociology of Cultural and Communication Processes
Sector GSPS-02/A - Political Science
Dec 2025
Article (author)
Files in this record:
Barisione, Rama & Marolla (2025) - Between AI fear and digital agency technological familiarity and risk perception of generative AI s epistemic power.pdf
Access: restricted
Type: Publisher's version/PDF
License: none
Size: 1.28 MB (Adobe PDF)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1206775
Citations
  • PMC: not available
  • Scopus: not available
  • Web of Science: not available
  • OpenAlex: 0