Surgical Tool Localization in Endoscopic Videos
- 29th April, 2022: Challenge is live and accepting applications
- 20th July, 2022: Submission instructions uploaded
The ability to automatically detect and track surgical instruments in endoscopic video will enable many transformational interventions. Assessing surgical performance and efficiency, identifying skilled tool use and choreography, and planning the operational and logistical aspects of OR resources are just some of the applications that would benefit. Unfortunately, obtaining the annotations needed to train machine learning models to identify and localize surgical tools is difficult: annotating bounding boxes frame by frame in video is tedious and time-consuming, yet a wide variety of surgical tools and surgeries must be captured for robust training. Moreover, ongoing annotator training is needed to keep pace with surgical instrument innovation.

In robot-assisted surgery, however, potentially informative data such as timestamps of instrument installation and removal can be harvested programmatically. The ability to localize tools using only tool presence labels would significantly reduce the annotation workload needed to train robust detection, localization, and tracking models. In this challenge, we invite the surgical data science community to leverage tool presence data as weak labels for training machine learning models that detect and localize tools with bounding boxes in video frames.
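One common weakly supervised strategy for this kind of task is to train a frame-level presence classifier and then derive boxes from its class activation maps. The sketch below (a hypothetical illustration, not a challenge baseline) shows the last step of such a pipeline: thresholding a coarse activation map and taking the extent of the activated region as a bounding box. The function name and threshold value are our own assumptions.

```python
import numpy as np

def cam_to_bbox(cam, threshold=0.5):
    """Convert a 2D class activation map into one bounding box.

    cam: 2D array of per-pixel activation scores (any scale).
    Returns (x_min, y_min, x_max, y_max) in pixel coordinates,
    or None if no pixel exceeds the threshold.
    """
    # Normalize to [0, 1] so the threshold is scale-independent.
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    mask = cam >= threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy activation map with a hot region spanning rows 2-4, cols 5-8.
cam = np.zeros((10, 12))
cam[2:5, 5:9] = 1.0
print(cam_to_bbox(cam))  # (5, 2, 8, 4)
```

A real system would upsample the activation map to the input resolution and handle multiple instruments per frame, but the threshold-and-extent step is the core of how presence-only supervision can be turned into box predictions.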
This challenge will take place as part of the EndoVis challenge at MICCAI 2022 in Singapore!
Top-performing teams will be awarded cash prizes in multiple categories, totaling up to $8,000. See the prizes page for more details.
Interested in participating in the challenge? Check out the getting started page!
For any questions, please post them on the forum here or use the contact us page to email direct queries.