Mask then classify: multi-instance segmentation for surgical instruments.

Kurmann, Thomas; Márquez-Neila, Pablo; Allan, Max; Wolf, Sebastian; Sznitman, Raphael (2021). Mask then classify: multi-instance segmentation for surgical instruments. International Journal of Computer Assisted Radiology and Surgery, 16(7), pp. 1227-1236. Springer. DOI: 10.1007/s11548-021-02404-2

Kurmann2021_Article_MaskThenClassifyMulti-instance.pdf - Published Version
Available under License Creative Commons: Attribution (CC-BY).

PURPOSE

The detection and segmentation of surgical instruments is a vital step for many applications in minimally invasive surgical robotics. Previously, the problem has been tackled from a semantic segmentation perspective; however, these methods fail to provide good segmentation maps of instrument types and carry no information about the instance affiliation of each pixel. We propose to overcome this limitation with a novel instance segmentation method that first masks instruments and then classifies them into their respective types. The toy example below contrasts the two output formats.
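To make the distinction concrete, here is a small illustrative sketch (not from the paper; all values and class names are made up) contrasting a semantic label map, which assigns a class to every pixel but cannot separate two instruments of the same type, with an instance map plus a per-instance class lookup, which can.

```python
import numpy as np

# Semantic segmentation: one class id per pixel
# (0 = background, 1 = needle driver, 2 = forceps).
# Two touching needle drivers are indistinguishable here.
semantic = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 1],
    [2, 2, 0, 1],
])

# Instance segmentation: one instance id per pixel, plus a mapping
# from instance id to instrument type. The two needle drivers
# (instances 1 and 2) remain separable.
instances = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 2],
    [3, 3, 0, 2],
])
instance_classes = {1: "needle driver", 2: "needle driver", 3: "forceps"}
```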

METHODS

We introduce a novel method for instance segmentation in which a pixel-wise mask of each instance is found prior to classification. An encoder-decoder network extracts instrument instances, which are then classified separately using the features of the previous stages. Furthermore, we present a method to incorporate instrument priors from surgical robots. A schematic code sketch of this mask-then-classify pipeline follows.
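The following is a minimal, hypothetical sketch of a mask-then-classify pipeline in PyTorch, intended only to illustrate the idea of predicting per-instance masks first and classifying each masked instance from shared encoder features. The module structure, channel sizes, number of instance slots, and the mask-pooled classifier are assumptions for illustration and are not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskThenClassify(nn.Module):
    def __init__(self, num_instances=4, num_classes=7, feat_ch=64):
        super().__init__()
        # Encoder: downsamples the frame to a shared feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsamples to one binary mask logit per instance slot.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat_ch, num_instances, 4, stride=2, padding=1),
        )
        # Classifier applied per instance to mask-pooled encoder features.
        self.classifier = nn.Linear(feat_ch, num_classes)

    def forward(self, x):
        feats = self.encoder(x)                          # (B, C, H/4, W/4)
        masks = torch.sigmoid(self.decoder(feats))       # (B, N, H, W)
        # "Mask then classify": pool encoder features inside each predicted mask.
        small = F.interpolate(masks, size=feats.shape[-2:],
                              mode="bilinear", align_corners=False)
        pooled = (feats.unsqueeze(1) * small.unsqueeze(2)).sum(dim=(-1, -2))  # (B, N, C)
        denom = small.sum(dim=(-1, -2)).clamp(min=1e-6).unsqueeze(-1)         # (B, N, 1)
        class_logits = self.classifier(pooled / denom)                        # (B, N, K)
        return masks, class_logits

# Usage on a random frame: masks are (B, N, H, W), class_logits (B, N, K).
model = MaskThenClassify()
masks, class_logits = model(torch.randn(1, 3, 224, 224))
```

One design point this sketch tries to capture is that classification reuses the encoder features rather than re-processing cropped image regions, so each instance's class prediction is conditioned on its own mask.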

RESULTS

Experiments are performed on the robotic instrument segmentation dataset of the 2017 Endoscopic Vision Challenge. We perform a fourfold cross-validation and show an improvement of over 18% over the previous state of the art. Furthermore, we perform an ablation study that highlights the importance of certain design choices and observe an increase of 10% over semantic segmentation methods.

CONCLUSIONS

We have presented a novel instance segmentation method for surgical instruments that outperforms previous semantic segmentation-based methods. Our method further provides more informative instance-level output while retaining precise segmentation masks. Finally, we have shown that robotic instrument priors can be used to further increase performance.

Item Type:

Journal Article (Original Article)

Division/Institute:

10 Strategic Research Centers > ARTORG Center for Biomedical Engineering Research
10 Strategic Research Centers > ARTORG Center for Biomedical Engineering Research > ARTORG Center - AI in Medical Imaging Laboratory
04 Faculty of Medicine > Department of Head Organs and Neurology (DKNS) > Clinic of Ophthalmology
10 Strategic Research Centers > ARTORG Center for Biomedical Engineering Research > ARTORG Center - Image Guided Therapy > ARTORG Center - Ophthalmic Technology Lab

UniBE Contributor:

Kurmann, Thomas Kevin; Márquez Neila, Pablo; Wolf, Sebastian (B); Sznitman, Raphael

Subjects:

500 Science > 570 Life sciences; biology
600 Technology > 610 Medicine & health

ISSN:

1861-6429

Publisher:

Springer

Language:

English

Submitter:

Sebastian Wolf

Date Deposited:

07 Oct 2021 15:31

Last Modified:

02 Mar 2023 23:35

Publisher DOI:

10.1007/s11548-021-02404-2

PubMed ID:

34143374

Uncontrolled Keywords:

Deep learning; Instance segmentation; Surgical robotics

BORIS DOI:

10.48350/159566

URI:

https://boris.unibe.ch/id/eprint/159566
