
“Deep-Onto” network for surgical workflow and context recognition / H. Nakawala, R. Bianchi, L.E. Pescatori, O. De Cobelli, G. Ferrigno, E. De Momi. - In: INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY. - ISSN 1861-6410. - 14:4(2019), pp. 685-696. [10.1007/s11548-018-1882-8]

“Deep-Onto” network for surgical workflow and context recognition

O. De Cobelli
2019

Abstract

Purpose: Surgical workflow recognition and context-aware systems could improve decision making and surgical planning by providing focused information, which may eventually enhance surgical outcomes. While current developments in computer-assisted surgical systems mostly focus on recognizing surgical phases, they lack recognition of the surgical workflow sequence and of other contextual elements, e.g., “Instruments.” Our study proposes a hybrid approach, i.e., combining deep learning and knowledge representation, to facilitate recognition of the surgical workflow. Methods: We implemented the “Deep-Onto” network, an ensemble of deep learning models and knowledge management tools, namely an ontology and production rules. As a prototypical scenario, we chose robot-assisted partial nephrectomy (RAPN). We annotated RAPN videos with surgical entities such as “Step.” We performed several experiments, including an inter-subject variability analysis, to recognize surgical steps. The corresponding subsequent steps, along with other surgical contexts, i.e., “Actions,” “Phase” and “Instruments,” were also recognized. Results: The system recognized 10 RAPN steps with a prevalence-weighted macro-average (PWMA) recall of 0.83, a PWMA precision of 0.74, a PWMA F1 score of 0.76, and an accuracy of 74.29% on 9 RAPN videos. Conclusion: We found that the combined use of deep learning and knowledge representation techniques is a promising approach for the multi-level recognition of the RAPN surgical workflow.
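
To illustrate the reported metrics (this is a minimal sketch, not the authors' code): “prevalence-weighted macro-average” corresponds to scikit-learn's average="weighted" option, where each step's score is weighted by its support (prevalence) in the ground truth. The per-frame step labels below are hypothetical, chosen only for illustration.

# Minimal sketch, assuming scikit-learn is available; not the authors' implementation.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical per-frame step labels for a short RAPN clip (illustrative names).
y_true = ["hilum_dissection", "hilum_dissection", "tumor_excision", "renorrhaphy"]
y_pred = ["hilum_dissection", "tumor_excision", "tumor_excision", "renorrhaphy"]

# average="weighted" averages per-class scores weighted by class support,
# i.e., the prevalence-weighted macro-average reported in the abstract.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
accuracy = accuracy_score(y_true, y_pred)
print(f"PWMA precision={precision:.2f}, PWMA recall={recall:.2f}, "
      f"PWMA F1={f1:.2f}, accuracy={accuracy:.2%}")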
Deep learning; Knowledge representation; Robot-assisted partial nephrectomy; Surgical workflow; Surgery; Biomedical Engineering; Radiology, Nuclear Medicine and Imaging; Health Informatics; Computer Science Applications; Computer Vision and Pattern Recognition; Computer Graphics and Computer-Aided Design
Academic field: MED/24 - Urology
Article (author)
Files in this record:
  File: Nakawala2019_Article_Deep-OntoNetworkForSurgicalWor.pdf
  Access: restricted
  Type: Publisher's version/PDF
  Format: Adobe PDF
  Size: 2.41 MB

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/640588
Citations
  • PubMed Central: 5
  • Scopus: 51
  • Web of Science: 42