
AI-Based Prostate Lesion Segmentation on In-House mp-MRI: Impact of U-Net Variants and PI-RADS Levels / S. Fouladi, F. Darvizeh, R. Di Meo, L. Di Palma, G. Gianini, A. Maiocchi, D. Fazzini, M. Alì. - (2025). (EuSoMII Annual Meeting, Heraklion, Greece, 10–11 October 2025) [10.26226/m.686249b901453d0e51433e66].

AI-Based Prostate Lesion Segmentation on In-House mp-MRI: Impact of U-Net Variants and PI-RADS Levels

S. Fouladi; R. Di Meo; G. Gianini; D. Fazzini
2025

Abstract

Prostate cancer (PCa) remains one of the most commonly diagnosed cancers in men globally, with incidence rates expected to rise in the coming years. Multiparametric MRI (mp-MRI) has become a key technique for detecting clinically significant prostate cancer and assessing its risk, while recent advances in artificial intelligence (AI) have shown strong potential in improving the accuracy and consistency of lesion segmentation. In this study, we investigated the effectiveness of U-Net-based deep learning models for prostate lesion segmentation using an in-house mp-MRI dataset from Centro Diagnostico Italiano (CDI). The dataset includes 311 prostate MRI studies: 60 with PI-RADS 3, 159 with PI-RADS 4, and 92 with PI-RADS 5. Each case contains three MRI sequences: T2-weighted (T2W), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC). To ensure consistency across modalities and enhance model performance, we first applied image registration to align all mp-MRI sequences (T2W, DWI, and ADC) anatomically. Intensity normalization was then performed using min–max scaling for T2W and ADC images, and z-score normalization for the DWI sequence, accounting for their differing signal characteristics. Finally, center-cropping was used to focus on the prostate region, effectively reducing background noise and directing the model's attention to the most relevant anatomical area. We evaluated three U-Net-based architectures (U-Net, Dense U-Net, and Attention U-Net) under two experimental conditions. The first included only PI-RADS 4 and 5 cases, while the second included all cases with PI-RADS 3, 4, and 5. Each model was trained in four configurations: on each individual sequence (ADC, DWI, or T2W) separately, and in a multi-input U-Net configuration in which each sequence was processed by its own dedicated encoder branch. For model development, 80% of the data was used for training and 20% for testing.
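The modality-specific preprocessing described above (min–max scaling for T2W/ADC, z-score normalization for DWI, then center-cropping) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the crop size and the small epsilon guards are assumptions, since the abstract does not specify them.

```python
import numpy as np

def minmax_normalize(vol):
    """Scale intensities to [0, 1] (applied here to T2W and ADC)."""
    lo, hi = vol.min(), vol.max()
    return (vol - lo) / (hi - lo + 1e-8)

def zscore_normalize(vol):
    """Zero-mean, unit-variance scaling (applied here to DWI)."""
    return (vol - vol.mean()) / (vol.std() + 1e-8)

def center_crop(vol, size):
    """Crop the last two axes of a slice or volume to a centered size x size window."""
    h, w = vol.shape[-2], vol.shape[-1]
    top, left = (h - size) // 2, (w - size) // 2
    return vol[..., top:top + size, left:left + size]

# Example: normalize each sequence with its own scheme, then crop all three
# to the same window so the aligned sequences stay in spatial correspondence.
# t2w = center_crop(minmax_normalize(t2w_raw), 160)
# adc = center_crop(minmax_normalize(adc_raw), 160)
# dwi = center_crop(zscore_normalize(dwi_raw), 160)
```

Cropping after registration keeps the three sequences voxel-aligned, so a single crop window covers the prostate in all modalities.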
To improve model generalizability and consistency, we employed 5-fold cross-validation during training. For models trained on PI-RADS 4 and 5 cases, the highest lesion segmentation performance was achieved by Dense U-Net using ADC images, reaching a Dice Similarity Coefficient (DSC) of 69%. This was followed by U-Net with ADC images (68%) and Dense U-Net with DWI images (66%). When PI-RADS 3 cases were also included in the training set, the best results were obtained with Dense U-Net (DSC = 68%) and U-Net (DSC = 66%) using ADC sequences. These findings demonstrate the potential of deep learning models, especially U-Net variants, in achieving accurate lesion segmentation when combined with mp-MRI.
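The evaluation protocol above rests on two standard ingredients: 5-fold cross-validation splits and the Dice Similarity Coefficient. A minimal NumPy sketch of both is given below; the fold assignment and random seed are illustrative assumptions, not the study's actual partitioning.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2 * |P intersect G| / (|P| + |G|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train, val) index arrays for k-fold cross-validation:
    shuffle once, split into k folds, hold each fold out in turn."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

For example, `kfold_indices(311, k=5)` yields five disjoint validation folds covering all 311 studies, and `dice_coefficient` scores each predicted lesion mask against its ground truth.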
Sector MEDS-22/A - Diagnostic Imaging and Radiotherapy
Files in this record:
Saman_Fouladi_Poster.pdf — poster (Publisher's version/PDF, open access, Creative Commons license, Adobe PDF, 618.56 kB)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1220296