
A defense mechanism against label inference attacks in vertical federated learning / M. Arazzi, S. Nicolazzo, A. Nocera. - In: NEUROCOMPUTING. - ISSN 0925-2312. - 624:(2025 Apr 01), pp. 129476.1-129476.13. [10.1016/j.neucom.2025.129476]

A defense mechanism against label inference attacks in vertical federated learning

S. Nicolazzo (second author)
2025

Abstract

Vertical Federated Learning (VFL, for short) is a category of Federated Learning that is gaining increasing attention in the context of Artificial Intelligence. In this paradigm, machine/deep learning models are trained collaboratively among parties holding vertically partitioned data. Typically, in a VFL scenario, the sample labels are kept private from all parties except the aggregating server, that is, the label owner. However, recent work has shown that, by exploiting the gradient information the server returns to the bottom models, an adversary with knowledge of only a small set of auxiliary labels on a very limited subset of training data points can infer the private labels. These attacks are known as label inference attacks in VFL. In this work, we propose a novel framework called KDk (Knowledge Distillation with k-anonymity) that combines knowledge distillation and k-anonymity to provide a defense mechanism against label inference attacks in a VFL scenario. Through an extensive experimental campaign, we demonstrate that applying our approach consistently degrades the performance of the analyzed label inference attacks, even by more than 60%, while leaving the accuracy of the whole VFL pipeline almost unaltered.
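To make the idea concrete, the following is a minimal illustrative sketch (not the authors' code) of the kind of label obfuscation the abstract describes: a teacher's logits are softened with a distillation temperature, and the probability mass is then spread uniformly over the top-k classes so that the true label is hidden inside a k-anonymous group before the target returned to the VFL participants is formed. The function name, the `epsilon` residual-mass parameter, and the exact grouping rule are assumptions for illustration.

```python
import numpy as np

def kdk_soft_labels(teacher_logits, k=3, temperature=2.0, epsilon=0.1):
    """Illustrative sketch of a KD + k-anonymity label transform.

    teacher_logits: (batch, num_classes) array of raw teacher scores.
    Returns soft targets in which the top-k classes share (1 - epsilon)
    uniformly, so no single class stands out as the true label.
    """
    # Knowledge-distillation step: temperature-softened softmax.
    z = teacher_logits / temperature
    z = z - z.max(axis=1, keepdims=True)            # numerical stability
    soft = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

    # k-anonymity step: indices of the k most probable classes per sample.
    topk = np.argsort(soft, axis=1)[:, -k:]

    out = np.zeros_like(soft)
    in_group = np.zeros_like(soft, dtype=bool)
    for i, idx in enumerate(topk):
        in_group[i, idx] = True

    # Share most of the mass uniformly inside the k-anonymous group,
    # and the small remainder uniformly outside it.
    n_rest = soft.shape[1] - k
    out[in_group] = (1.0 - epsilon) / k
    out[~in_group] = epsilon / n_rest
    return out
```

In a VFL setting these flattened targets would replace the raw one-hot labels when computing the gradients sent back to the bottom models, so the gradient signal no longer singles out the true class; the trade-off between attack mitigation and task accuracy is governed by `k`, `temperature`, and `epsilon`.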
Federated learning; Vertical Federated Learning; VFL; Label inference attack; Knowledge distillation; k-anonymity
Settore IINF-05/A - Information Processing Systems
Settore INFO-01/A - Computer Science
   SEcurity and RIghts in the CyberSpace (SERICS)
   Funder: Ministry of University and Research (MUR)
   Grant ID: PE00000014
1 Apr 2025
28 Jan 2025
Article (author)
Files in this product:
File: 14)NEUCOMP.pdf
Access: open access
Description: Article
Type: Publisher's version/PDF
Size: 1.94 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1139095