Artificial Conversations, Real Results: Fostering Language Detection with Synthetic Data / F. Mohammadi, T. Romano, S. Maghool, P. Ceravolo (CEUR Workshop Proceedings). - In: TRUST-AI 2025: The European Workshop on Trustworthy AI 2025 / [edited by] A. Følstad, D. Apostolou, S. Taylor, A. Palumbo, E. Tsalapati, G. Stamatellos, R. Catelli. - [s.l.]: CEUR-WS, 2025. - pp. 237-246. Presented at TRUST-AI: The European Workshop on Trustworthy AI, organized as part of the 28th European Conference on Artificial Intelligence, Bologna, 2025.

Artificial Conversations, Real Results: Fostering Language Detection with Synthetic Data

F. Mohammadi; S. Maghool; P. Ceravolo
2025

Abstract

Collecting high-quality training data is essential for fine-tuning Large Language Models (LLMs). However, acquiring such data is often costly and time-consuming, especially for non-English languages such as Italian. Recently, researchers have begun to explore the use of LLMs to generate synthetic datasets as a viable alternative. This study proposes a pipeline for generating synthetic data, together with a comprehensive approach to investigating the factors that influence its validity, by examining how model performance is affected by factors such as prompt strategy, text length, and target position in a specific task: inclusive language detection in Italian job advertisements. Our results show that, in most cases and across different metrics, fine-tuned models trained on synthetic data consistently outperformed the other models on both seed and synthetic test datasets. The study discusses the practical implications and limitations of using synthetic data for language detection tasks with LLMs.
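The abstract describes varying three generation factors (prompt strategy, text length, and target position) when producing synthetic examples. As a minimal illustrative sketch only, assuming hypothetical factor values and prompt wording not taken from the paper, the factor grid could be enumerated like this before sending each prompt to an LLM:

```python
from itertools import product

# Hypothetical factor values; the paper's actual strategies, length buckets,
# position encodings, and prompt text are not reproduced here.
STRATEGIES = ["zero-shot", "few-shot"]
LENGTHS = ["short", "medium", "long"]
POSITIONS = ["beginning", "middle", "end"]


def build_generation_prompts(seed_phrase):
    """Enumerate one generation prompt per (strategy, length, position) cell."""
    prompts = []
    for strategy, length, position in product(STRATEGIES, LENGTHS, POSITIONS):
        prompts.append({
            "strategy": strategy,
            "length": length,
            "position": position,
            # Illustrative instruction: ask for an Italian job ad embedding a
            # seed phrase at a controlled position, with an inclusiveness label.
            "prompt": (
                f"[{strategy}] Write a {length} Italian job advertisement that "
                f"places the phrase '{seed_phrase}' at the {position} of the "
                "text. Label the result as inclusive or non-inclusive."
            ),
        })
    return prompts


grid = build_generation_prompts("cercasi ragazzo volenteroso")
print(len(grid))  # 2 strategies x 3 lengths x 3 positions = 18
```

Each dictionary in the grid records the factor combination alongside its prompt, so downstream evaluation can group model performance by strategy, length, or position, as the study's analysis requires.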
Large Language Models; synthetic data; inference; fine-tuning
Academic discipline: INFO-01/A - Computer Science
2025
https://ceur-ws.org/Vol-4132/short45.pdf
Book Part (author)
Files in this record:
short45.pdf (open access)
Description: paper file
Type: Publisher's version/PDF
License: Creative Commons
Size: 3.04 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1206522