Synthetic and (Un)Secure: Evaluating Generalized Membership Inference Attacks on Image Data / P. Coscia, S. Ferrari, V. Piuri, A. Salman. In: Proceedings of the 22nd International Conference on Security and Cryptography, Vol. 1, edited by S. De Capitani di Vimercati and P. Samarati. SciTePress, June 2025, pp. 287-297. ISBN 978-989-758-760-3. Presented at the 22nd International Conference on Security and Cryptography, Bilbao, 2025. DOI: 10.5220/0013657700003979

Synthetic and (Un)Secure: Evaluating Generalized Membership Inference Attacks on Image Data

P. Coscia; S. Ferrari; V. Piuri; A. Salman
2025

Abstract

Synthetic data are widely employed across diverse fields, including computer vision, robotics, and cybersecurity. However, generative models are prone to unintentionally revealing sensitive information from their training datasets, primarily due to overfitting. In this context, membership inference attacks (MIAs) have emerged as a significant privacy threat. These attacks employ binary classifiers to determine whether a specific data sample was part of the model's training set, thereby discriminating between member and non-member samples. Despite their growing relevance, the interpretation of MIA outcomes can be misleading without a detailed understanding of the data domains involved in both model development and evaluation. To bridge this gap, we analyze a single category (i.e., vehicles) to assess the effectiveness of MIAs in scenarios with limited overlap between data distributions. We first introduce a data selection strategy, based on the Fréchet Coefficient, to filter and curate the evaluation datasets, and then execute membership inference attacks under varying degrees of distributional overlap. Our findings indicate that MIAs are highly effective when the training and evaluation data distributions are well aligned, but their accuracy drops significantly under distribution shifts or when domain knowledge is limited. These results highlight the limitations of current MIA methodologies in reliably assessing privacy risks in generative modeling contexts.
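For illustration, the sketch below shows the kind of binary attack classifier the abstract describes: a model trained on per-sample scores (e.g., losses or reconstruction errors) with known membership labels. This is a minimal, generic example under assumed inputs, not the attack evaluated in the paper; the score distributions used here are hypothetical.

```python
# Minimal membership-inference sketch (NOT the paper's method): a binary
# attack classifier trained on per-sample scores from a target model.
# Assumption: members tend to receive lower loss-like scores than
# non-members; the score arrays below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_attack(member_scores: np.ndarray, non_member_scores: np.ndarray):
    """Train a logistic-regression attacker on 1-D membership scores."""
    X = np.concatenate([member_scores, non_member_scores]).reshape(-1, 1)
    y = np.concatenate([np.ones_like(member_scores),
                        np.zeros_like(non_member_scores)])
    clf = LogisticRegression().fit(X, y)
    auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
    return clf, auc

rng = np.random.default_rng(0)
member_scores = rng.normal(0.8, 0.3, 500)      # hypothetical member losses
non_member_scores = rng.normal(1.2, 0.3, 500)  # hypothetical non-member losses
attacker, auc = fit_attack(member_scores, non_member_scores)
print(f"attack AUC: {auc:.3f}")
```

Fréchet-based similarity measures, including the Fréchet Coefficient used for the paper's data selection strategy, build on the Fréchet (Wasserstein-2) distance between Gaussian feature statistics. The sketch below computes that standard distance as an illustrative proxy for ranking candidate evaluation sets by distributional overlap; the paper's exact Fréchet Coefficient formula is not reproduced here, and the feature sets shown are assumed placeholders.

```python
# Sketch of the Fréchet distance between two feature sets, each modeled
# as a multivariate Gaussian (rows = samples). Lower distance indicates
# better distributional alignment with the training data.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Fréchet (Wasserstein-2) distance between Gaussian feature statistics."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny
        covmean = covmean.real     # imaginary parts; drop them
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Rank hypothetical candidate evaluation sets by overlap with training features.
rng = np.random.default_rng(1)
train_feats = rng.normal(size=(1000, 64))
candidates = {"setA": rng.normal(0.0, 1.0, (800, 64)),
              "setB": rng.normal(0.5, 1.2, (800, 64))}
for name, feats in candidates.items():
    print(name, frechet_distance(train_feats, feats))
```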
Membership Inference Attack; Generative Models; Fréchet Coefficient
Field INFO-01/A - Computer Science
Field IINF-05/A - Information Processing Systems
Jun 2025
Book Part (author)
Files in this record:
secrypt25_compressed.pdf (Adobe PDF, 478.9 kB)
Type: Publisher's version/PDF
License: Creative Commons
Access: open access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1173347
Citations
  • Scopus: 2
  • OpenAlex: 1