Federated and edge learning for large language models / F. Piccialli, D. Chiaro, P. Qi, V. Bellandi, E. Damiani. - In: INFORMATION FUSION. - ISSN 1566-2535. - 117:(2025), pp. 102840.1-102840.19. [10.1016/j.inffus.2024.102840]

Federated and edge learning for large language models

V. Bellandi (second-to-last author); E. Damiani (last author)
2025

Abstract

As the demand for sophisticated language models (LMs) continues to grow, the necessity to deploy them efficiently across federated and edge environments becomes increasingly evident. This survey explores the nuanced interplay between federated and edge learning for large language models (LLMs), considering the evolving landscape of distributed computing. We investigate how federated learning paradigms can be tailored to accommodate the unique characteristics of LMs, ensuring collaborative model training while respecting privacy constraints inherent in federated environments. Additionally, we scrutinize the challenges posed by resource constraints at the edge, reporting on relevant literature and established techniques within the realm of LLMs for edge deployments, such as model pruning or model quantization. The future holds the potential for LMs to leverage the collective intelligence of distributed networks while respecting the autonomy and privacy of individual edge devices. Through this survey, the objective is to provide an in-depth analysis of the current state of efficient and privacy-aware LLM training and deployment in federated and edge environments, with the aim of offering valuable insights and guidance to researchers shaping the ongoing discussion in this field.
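The abstract's central mechanism, collaborative model training without centralizing private data, is typically realized via federated averaging. The sketch below is a minimal, self-contained illustration of one such setup (not the paper's code; the linear model, client count, and function names `local_update` and `fedavg` are illustrative assumptions): each simulated edge client runs a few local gradient steps on its private data, and the server aggregates only the resulting weight vectors, weighted by local dataset size.

```python
# Minimal federated-averaging sketch (hypothetical toy model, not from the paper).
# Clients train locally; the server sees only weight vectors, never raw data.
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One client's local training: gradient steps on a least-squares loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def fedavg(client_weights, client_sizes):
    """Server-side aggregation: average weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # ground-truth model the clients' data follows
global_w = np.zeros(2)

for _ in range(20):              # communication rounds
    updates, sizes = [], []
    for _ in range(3):           # three simulated edge clients with private data
        X = rng.normal(size=(40, 2))
        y = X @ true_w + 0.01 * rng.normal(size=40)
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    global_w = fedavg(updates, sizes)

print(np.round(global_w, 2))     # the global model approaches true_w
```

In a real LLM setting the weight vector would be replaced by (often compressed or quantized) parameter updates, which is where the pruning and quantization techniques surveyed in the article come in.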
Edge learning; Edge computing; Federated learning; Large language models; Natural language processing
Settore INFO-01/A - Informatica (Computer Science disciplinary sector)
2025
16-dic-2024
Article (author)
Files in this record:
File: 1-s2.0-S1566253524006183-main.pdf
Access: open access
Description: Article
Type: Publisher's version/PDF
Size: 3.15 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1127635
Citations
  • PMC: not available
  • Scopus: 2
  • Web of Science: 0
  • OpenAlex: not available