The challenge submission process will be based on the grand-challenge automated Docker submission and evaluation protocol. Each team will need to produce an “algorithm” container (a separate one for each category) and submit it on the challenge website. Please check out the official grand-challenge documentation on building your own algorithm at https://grand-challenge.org/documentation/create-your-own-algorithm/.
**GitHub repos with sample submission algorithm containers:**
Below are links to GitHub repos that contain example submission containers, along with detailed instructions, for the two categories:
Category 1: https://github.com/aneeqzia-isi/surgtoolloc2022-category-1
Category 2: https://github.com/aneeqzia-isi/surgtoolloc2022-category-2
**Challenge Phases:**
The submission to each category of the challenge is divided into two phases which are detailed below.
**Preliminary testing phase:**
This phase will allow teams to test their algorithm containers on a small dataset (2 videos) and debug any issues with their submissions. Each team will be allowed a maximum of 10 attempts to get their algorithm container working within the grand-challenge environment. Since teams may not be able to see the logs of their algorithm submissions, we ask that teams post a question about any failed submission on the forum; the organizing team will reply to that thread with the logs. The aim of this phase is simply for teams to become familiar with the grand-challenge submission process and to end up with a working algorithm container.
**Final testing phase:**
In this phase, all teams that produced a working algorithm container in the preliminary phase will submit their final algorithm container, which will be run on the complete test dataset. Only one submission per team is allowed in this phase, so teams must make sure their code/container already works in the preliminary phase.
**Prediction format:**
In both categories (classification and detection), the model (packaged into the algorithm container) will need to generate predictions for every frame of each input video and write them to a JSON file. The specific JSON format for each category is given below:
**Category #1 – Surgical tool classification:**
The output JSON file needs to be a list of dictionaries, one per frame of the input video, indicating which of the 14 possible tools are present in that frame. An example is given below:
    [
      {
        "slice_nr": 0,
        "needle_driver": true,
        "monopolar_curved_scissor": true,
        "force_bipolar": false,
        "clip_applier": false,
        "tip_up_fenestrated_grasper": false,
        "cadiere_forceps": false,
        "bipolar_forceps": false,
        "vessel_sealer": false,
        "suction_irrigator": false,
        "bipolar_dissector": false,
        "prograsp_forceps": false,
        "stapler": false,
        "permanent_cautery_hook_spatula": false,
        "grasping_retractor": false
      },
      {
        "slice_nr": 1,
        "needle_driver": true,
        "monopolar_curved_scissor": true,
        "force_bipolar": false,
        "clip_applier": false,
        "tip_up_fenestrated_grasper": false,
        "cadiere_forceps": false,
        "bipolar_forceps": false,
        "vessel_sealer": false,
        "suction_irrigator": false,
        "bipolar_dissector": false,
        "prograsp_forceps": false,
        "stapler": false,
        "permanent_cautery_hook_spatula": false,
        "grasping_retractor": false
      }
    ]
where slice_nr is the frame number.
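As an illustration only, the sketch below shows one way to assemble and save predictions in this format using Python. The helper `build_classification_output` and the set-based per-frame predictions are hypothetical names for this example, and the output filename is illustrative; use whatever I/O paths the Category 1 sample container expects.

```python
import json

# The 14 tool labels that must appear (as booleans) in every per-frame dictionary.
TOOL_LABELS = [
    "needle_driver", "monopolar_curved_scissor", "force_bipolar",
    "clip_applier", "tip_up_fenestrated_grasper", "cadiere_forceps",
    "bipolar_forceps", "vessel_sealer", "suction_irrigator",
    "bipolar_dissector", "prograsp_forceps", "stapler",
    "permanent_cautery_hook_spatula", "grasping_retractor",
]


def build_classification_output(per_frame_predictions):
    """Convert per-frame tool predictions into the required list of dictionaries.

    `per_frame_predictions` is assumed to be a list where element i is the set
    of tool names the model predicts as present in frame i of one video.
    """
    output = []
    for slice_nr, present_tools in enumerate(per_frame_predictions):
        entry = {"slice_nr": slice_nr}
        for tool in TOOL_LABELS:
            entry[tool] = tool in present_tools
        output.append(entry)
    return output


# Hypothetical predictions for the first two frames of a video.
predictions = [
    {"needle_driver", "monopolar_curved_scissor"},
    {"needle_driver", "monopolar_curved_scissor"},
]

# Illustrative filename only -- write to the path expected by the sample container.
with open("surgical-tool-classification.json", "w") as f:
    json.dump(build_classification_output(predictions), f, indent=2)
```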
**Category #2 – Surgical tool classification and localization:**
The output JSON file needs to be a dictionary containing the set of tools detected in each frame, together with the corresponding bounding-box corner coordinates (x, y). Again, a single JSON file is generated for each video, as shown below:
    {
      "type": "Multiple 2D bounding boxes",
      "boxes": [
        {
          "corners": [
            [54.7, 95.5, 0.5],
            [92.6, 95.5, 0.5],
            [92.6, 136.1, 0.5],
            [54.7, 136.1, 0.5]
          ],
          "name": "slice_nr_1_needle_driver",
          "probability": 0.452
        },
        {
          "corners": [
            [54.7, 95.5, 0.5],
            [92.6, 95.5, 0.5],
            [92.6, 136.1, 0.5],
            [54.7, 136.1, 0.5]
          ],
          "name": "slice_nr_2_monopolar_curved_scissor",
          "probability": 0.783
        }
      ],
      "version": { "major": 1, "minor": 0 }
    }
Please note that the third value of each corner coordinate is not needed for predictions, but it must always be set to 0.5 to comply with the Grand Challenge automated evaluation system (which was built to also handle datasets of 3D images). To standardize submissions, the first corner should be the top-left corner of the bounding box, with the remaining corners following in clockwise order. The "type" and "version" entries are likewise required by the grand-challenge automated evaluation system. Please use the "probability" entry to report the confidence score for each detected bounding box.
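As a non-authoritative sketch, the Python snippet below shows one way to build entries that follow these conventions (top-left corner first, clockwise order, third coordinate fixed at 0.5), reproducing the example above. The helper `make_box` and the output filename are hypothetical; match your actual I/O paths to the Category 2 sample container.

```python
import json


def make_box(slice_nr, tool_name, x_min, y_min, x_max, y_max, probability):
    """Build one bounding-box entry in the required format.

    Corners start at the top-left and proceed clockwise; the third coordinate
    is fixed at 0.5 as required by the automated evaluation system.
    """
    return {
        "corners": [
            [x_min, y_min, 0.5],  # top-left
            [x_max, y_min, 0.5],  # top-right
            [x_max, y_max, 0.5],  # bottom-right
            [x_min, y_max, 0.5],  # bottom-left
        ],
        "name": f"slice_nr_{slice_nr}_{tool_name}",
        "probability": probability,
    }


# Hypothetical detections matching the example above.
output = {
    "type": "Multiple 2D bounding boxes",
    "boxes": [
        make_box(1, "needle_driver", 54.7, 95.5, 92.6, 136.1, 0.452),
        make_box(2, "monopolar_curved_scissor", 54.7, 95.5, 92.6, 136.1, 0.783),
    ],
    "version": {"major": 1, "minor": 0},
}

# Illustrative filename only -- write to the path expected by the sample container.
with open("surgical-tool-locations.json", "w") as f:
    json.dump(output, f, indent=2)
```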
**Final Report:**
Along with your algorithm submission, all teams are asked to submit a written report explaining their chosen methodology and the results they obtained. This report will be especially important if your team is one of the finalists, in which case your model will be presented at the MICCAI read-out and in our subsequent publication on the results of the challenge.
To help you with the report, we've created a rough guide for you to follow, but feel free to alter the format to fit your needs. The guide was emailed to all registered participants and is also available here.