Can Generative AI Adequately Protect Queries? Analyzing the Trade-off Between Privacy Awareness and Retrieval Effectiveness / L. Herranz-Celotti, B. Guembe, G. Livraga, M. Viviani (Lecture Notes in Computer Science). In: Advances in Information Retrieval, edited by C. Hauff, C. Macdonald, D. Jannach, G. Kazai, F.M. Nardini, F. Pinelli, F. Silvestri, N. Tonellotto. Springer, April 2025. ISBN 978-3-031-88714-7, pp. 353-361. Presented at the 47th European Conference on Information Retrieval (ECIR 2025), held in Lucca in 2025 [10.1007/978-3-031-88714-7_34].

Can Generative AI Adequately Protect Queries? Analyzing the Trade-off Between Privacy Awareness and Retrieval Effectiveness

B. Guembe (second author); G. Livraga (penultimate author)
2025

Abstract

As users increasingly include confidential information in their queries—often through longer and more detailed prompts when interfacing with generative Information Retrieval Systems (IRSs) and Artificial Intelligence (AI) tools—effective query protection warrants further investigation. Building on the literature, this paper examines whether generative Large Language Models (LLMs) offer a viable solution compared with various state-of-the-art techniques for safeguarding queries from the user's privacy perspective. In particular, we investigate the effectiveness of different prompts inspired by distinct confusion-based techniques for query protection. Our study assesses how well this solution can protect user privacy while maintaining a satisfactory trade-off with retrieval effectiveness.
Privacy; Query Protection; Generative Artificial Intelligence; Large Language Models
Disciplinary sector: INFO-01/A - Computer Science
Funding:
  • GLACIATION - Green responsibLe privACy preservIng dAta operaTIONs (European Commission)
  • KURAMi - Knowledge-based, explainable User empowerment in Releasing private data and Assessing Misinformation in online environments (Ministero dell'Università e della Ricerca, grant 20225WTRFN_003)
  • SERICS - SEcurity and RIghts in the CyberSpace (Ministero dell'Università e della Ricerca, identification code PE00000014)
April 2025
Book Part (author)
Files in this record:
  • hglv-ECIR2025.pdf - Post-print / accepted manuscript (version accepted by the publisher); restricted access; no license; 145.46 kB; Adobe PDF
  • 978-3-031-88714-7_34.pdf - Publisher's version/PDF; restricted access; no license; 254.34 kB; Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1173791
Citations
  • Scopus: 2