
Automatic segmentation of multiple cardiovascular structures from cardiac computed tomography angiography images using deep learning / L. Baskaran, S.J. Al'Aref, G. Maliakal, B.C. Lee, Z. Xu, J.W. Choi, S. Lee, J.M. Sung, F.Y. Lin, S. Dunham, B. Mosadegh, Y. Kim, I. Gottlieb, B.K. Lee, E.J. Chun, F. Cademartiri, E. Maffei, H. Marques, S. Shin, J.H. Choi, K. Chinnaiyan, M. Hadamitzky, E. Conte, D. Andreini, G. Pontone, M.J. Budoff, J.A. Leipsic, G.L. Raff, R. Virmani, H. Samady, P.H. Stone, D.S. Berman, J. Narula, J.J. Bax, H. Chang, J.K. Min, L.J. Shaw. - In: PLOS ONE. - ISSN 1932-6203. - 15:5(2020), pp. e0232573.1-e0232573.13. [10.1371/journal.pone.0232573]

Automatic segmentation of multiple cardiovascular structures from cardiac computed tomography angiography images using deep learning

E. Conte; D. Andreini; G. Pontone
2020

Abstract

Background: Segmentation of cardiovascular images is resource-intensive. We designed an automated deep learning method for the segmentation of multiple structures from Coronary Computed Tomography Angiography (CCTA) images.

Methods: Images from a multicenter registry of patients who underwent clinically-indicated CCTA were used. The proximal ascending and descending aorta (PAA, DA), superior and inferior vena cavae (SVC, IVC), pulmonary artery (PA), coronary sinus (CS), right ventricular wall (RVW) and left atrial wall (LAW) were annotated as ground truth. The U-net-derived deep learning model was trained, validated and tested in a 70:20:10 split.

Results: The dataset comprised 206 patients, with 5.130 billion pixels. Mean age was 59.9 ± 9.4 years, and 42.7% of patients were female. An overall median Dice score of 0.820 (0.782, 0.843) was achieved. Median Dice scores for PAA, DA, SVC, IVC, PA, CS, RVW and LAW were 0.969 (0.979, 0.988), 0.953 (0.955, 0.983), 0.937 (0.934, 0.965), 0.903 (0.897, 0.948), 0.775 (0.724, 0.925), 0.720 (0.642, 0.809), 0.685 (0.631, 0.761) and 0.625 (0.596, 0.749), respectively. Apart from the CS, there were no significant differences in performance between sexes or age groups.

Conclusions: An automated deep learning model demonstrated segmentation of multiple cardiovascular structures from CCTA images with reasonable overall accuracy when evaluated on a pixel level.
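The record does not include the paper's implementation details, but the Dice score used to evaluate the segmentations is a standard overlap metric, 2|A∩B| / (|A| + |B|) for a predicted mask A and ground-truth mask B. A minimal sketch of how it could be computed per structure, using hypothetical toy masks rather than the study's data:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Hypothetical 4x4 masks for illustration only (not from the study)
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice_score(pred, truth))  # 2*3 / (4+3) ≈ 0.857
```

In the study's setting such a score would be computed per annotated structure (PAA, DA, SVC, etc.) on each test image, with the median and interquartile range reported across patients.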
Settore MED/11 - Cardiovascular Diseases
2020
Article (author)
Files in this record:

Automatic.pdf — Research Article; Publisher's version/PDF; open access; 1.32 MB (Adobe PDF)

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/955545
Citations
  • PubMed Central: 4
  • Scopus: 22
  • Web of Science: 19