Can Generative AI Adequately Protect Queries? Analyzing the Trade-off Between Privacy Awareness and Retrieval Effectiveness / L. Herranz-Celotti, B. Guembe, G. Livraga, M. Viviani. - In: Advances in Information Retrieval (Lecture Notes in Computer Science) / edited by C. Hauff, C. Macdonald, D. Jannach, G. Kazai, F.M. Nardini, F. Pinelli, F. Silvestri, N. Tonellotto. - [s.l.]: Springer, Apr. 2025. - ISBN 978-3-031-88714-7. - pp. 353-361. Paper presented at the 47th European Conference on Information Retrieval (ECIR 2025), held in Lucca in 2025 [10.1007/978-3-031-88714-7_34].
Can Generative AI Adequately Protect Queries? Analyzing the Trade-off Between Privacy Awareness and Retrieval Effectiveness
L. Herranz-Celotti; B. Guembe; G. Livraga; M. Viviani
2025
Abstract
As users increasingly include confidential information in their queries, often through longer and more detailed prompts when interacting with generative Information Retrieval Systems (IRSs) and Artificial Intelligence (AI) tools, effective query protection deserves further investigation. Against this background, this paper examines whether generative Large Language Models (LLMs) offer a viable solution in light of various state-of-the-art techniques aimed at safeguarding queries from the user's privacy perspective. In particular, we investigate the effectiveness of different prompts inspired by distinct confusion-based techniques for query protection. Our study assesses how well this solution can protect user privacy while maintaining a satisfactory trade-off with retrieval effectiveness.
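The prompts evaluated in the paper are not reproduced in this record. Purely as an illustration, the sketch below shows what a confusion-based, LLM-driven query-protection step could look like: a user query is rewritten (generalized, hidden among decoys, or paraphrased) before being sent to the retrieval system. The template texts, the `protect_query` helper, and the `llm_generate` stand-in are assumptions for illustration only, not the authors' prompts or code.

```python
# Purely illustrative sketch: prompt templates loosely inspired by
# confusion-based query-protection ideas (generalization, cover/decoy
# queries, paraphrasing). These are NOT the prompts evaluated in the paper;
# `llm_generate` is a hypothetical stand-in for any LLM client call.

from typing import Callable

PROMPT_TEMPLATES = {
    "generalize": (
        "Rewrite the following search query so that personal or sensitive "
        "terms are replaced by more general concepts, while the query stays "
        "useful for retrieving relevant documents.\n"
        "Query: {query}\n"
        "Rewritten query:"
    ),
    "decoy": (
        "Produce {k} search queries: one preserving the intent of the "
        "following query and {k_minus_1} plausible but unrelated cover "
        "queries, so an observer cannot tell which one is real.\n"
        "Query: {query}\n"
        "Queries:"
    ),
    "paraphrase": (
        "Paraphrase the following search query with different wording, "
        "removing names, locations, and other identifying details.\n"
        "Query: {query}\n"
        "Paraphrase:"
    ),
}


def protect_query(query: str, strategy: str,
                  llm_generate: Callable[[str], str], k: int = 3) -> str:
    """Fill the chosen template and ask the (user-supplied) LLM to rewrite."""
    prompt = PROMPT_TEMPLATES[strategy].format(
        query=query, k=k, k_minus_1=k - 1)
    return llm_generate(prompt)


if __name__ == "__main__":
    # Toy stand-in that just echoes the query line, so the sketch runs
    # without any external LLM dependency.
    def fake_llm(prompt: str) -> str:
        return next(line for line in prompt.splitlines()
                    if line.startswith("Query:"))

    print(protect_query("symptoms of rare disease X near my home town",
                        "generalize", fake_llm))
```

In the setting outlined by the abstract, the protected query would then replace the original one submitted to the IRS, and both the privacy gain and any loss in retrieval effectiveness would be measured to characterize the trade-off.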
| File | Type | Access | License | Size | Format |
|---|---|---|---|---|---|
| hglv-ECIR2025.pdf | Post-print, accepted manuscript, etc. (version accepted by the publisher) | Restricted access | No license | 145.46 kB | Adobe PDF |
| 978-3-031-88714-7_34.pdf | Publisher's version/PDF | Restricted access | No license | 254.34 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.




