MRI / medical imaging benchmark datasets
- The nnU-Net Revisited paper lists the following; in bold, the ones it considers "most suitable for benchmarking":
- **ACDC**, **KiTS**, **AMOS**
- BTCV, LiTS, BraTS
Datasets
ACDC
- Paper: Deep Learning Techniques for Automatic MRI Cardiac Multi-Structures Segmentation and Diagnosis: Is the Problem Solved? (IEEE Xplore)
- cardiac diagnosis
- 150 cardiac MRI recordings (one per patient) from 5 different diagnosis groups
- 3 classes: left ventricle (LV) cavity, right ventricle (RV) cavity, and myocardium
- 4D (3D+time)
- 100/50 patients train/test split
- NIfTI (.nii)
- For each patient:
- raw + ground truth data for two frames (end-diastole and end-systole)
- the full 4D recording of the beating heart (raw only)
- patient metadata (age, weight, diagnosis group, etc.)
- links:
- Website: ACDC Challenge
- Dataset official download link: Human Heart Project
- Their Python code to load/save images in NIfTI (.nii) format and compute metrics (see the Dice sketch below): https://www.creatis.insa-lyon.fr/Challenge/acdc/code/metrics_acdc.py
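The official script computes, among other things, per-class Dice overlap. A minimal sketch of that metric (not the official metrics_acdc.py code, just the idea):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Dice overlap for one class label between predicted and ground-truth label maps."""
    pred_mask = pred == label
    gt_mask = gt == label
    denom = pred_mask.sum() + gt_mask.sum()
    if denom == 0:
        return 1.0  # class absent in both volumes: count as a perfect match
    return 2.0 * np.logical_and(pred_mask, gt_mask).sum() / denom
```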
- Dataset structure
- NIfTI (.nii)
- sample: https://humanheart-project.creatis.insa-lyon.fr/database/#collection/637218c173e9f0047faa00fb/folder/6372204873e9f0047faa160b
- folders: training/patient101/ containing:
- Info.cfg
- metadata about the patient
- MANDATORY_CITATION.md
- patient101_4d.nii.gz
- 4D (3D+time) recording of the beating heart, viewable animated in the brainbrowser viewer
- patient101_frame01_gt.nii.gz
- ground truth data only for frame 01
- patient101_frame01.nii.gz
- raw data only for frame 01
- patient101_frame14_gt.nii.gz
- patient101_frame14.nii.gz
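A minimal sketch of reading one patient with nibabel, assuming the folder layout above, that Info.cfg is a plain "Key: value" text file, and that the ground truth uses the usual ACDC encoding (0 = background, 1 = RV cavity, 2 = myocardium, 3 = LV cavity); treat the paths and the label mapping as assumptions to verify:

```python
from pathlib import Path

import nibabel as nib  # pip install nibabel
import numpy as np

patient_dir = Path("training/patient101")  # assumed path, per the layout above

# Per-patient metadata (diagnosis group, height, weight, frame numbers, ...).
info = {}
for line in (patient_dir / "Info.cfg").read_text().splitlines():
    if ":" in line:
        key, value = line.split(":", 1)
        info[key.strip()] = value.strip()
print(info)

# Full 4D recording of the beating heart: shape (x, y, z, t).
cine = nib.load(patient_dir / "patient101_4d.nii.gz")
print("4D shape:", cine.shape)

# One of the two annotated frames and its ground truth.
frame = nib.load(patient_dir / "patient101_frame01.nii.gz").get_fdata()
gt = nib.load(patient_dir / "patient101_frame01_gt.nii.gz").get_fdata().astype(int)

# Assumed ACDC label convention: 0=background, 1=RV cavity, 2=myocardium, 3=LV cavity.
print("frame shape:", frame.shape, "labels present:", np.unique(gt))
```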
KiTS23
- the challenge ran in 2019, 2021, and 2023
- KiTS23 | The 2023 Kidney Tumor Segmentation Challenge
- The main proceedings of the conference/challenge: Kidney and Kidney Tumor Segmentation: MICCAI 2023 Challenge, KiTS 2023, Held … - Google Books
- The latest publication is from the 2021 challenge
- // (Interesting annotation strategy: a professional places markers around the region, and a non-professional turns them into a clean segmentation shape)
- KITS2023 dataset repo: neheller/kits23: The official repository of the 2023 Kidney Tumor Segmentation Challenge (KiTS23)
- they apply postprocessing to the raw annotations
- Sample: kits23/dataset/case_00194 at main · neheller/kits23
- annotating etc. was done online and the webapp is still live: Annotate | KiTS23
- They used the ulabel anno tool: SenteraLLC/ulabel: A browser-based tool for image annotation
- Structure
- raw images must be separately downloaded from servers!
- NIfTI (.nii)
- 489 train set instances released[^3] — mostly similar to the files from older challenges
- 3 classes: kidney, tumor, cyst
- main metadata for all patients in kits23.json: kits23/dataset/kits23.json at main · neheller/kits23
- directories are case_00000-case_00588
- segmentation.nii.gz is the ground truth as used in the challenge, after postprocessing, the one we need
- ./instances/ has the annotations — the raw things annotated by humans[^4]
- these break brainviewer but not brainbrowser
- files are named [kidney|tumor|cyst]_instance-[1|2|..?]_annotation-[1|2|3].nii.gz
- N instances per class (e.g. most people have two kidneys); all annotations are done by 3 different annotators, then merged into the main segmentation file
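A minimal sketch of inspecting one case with nibabel, assuming the repo layout above, that kits23.json is a list of per-case records, and that the labels follow the commonly cited 1 = kidney, 2 = tumor, 3 = cyst convention (worth verifying against the repo):

```python
import json
from pathlib import Path

import nibabel as nib  # pip install nibabel
import numpy as np

dataset_dir = Path("kits23/dataset")  # assumed location of the cloned repo

# Case-level metadata for every patient.
with open(dataset_dir / "kits23.json") as f:
    cases = json.load(f)
print(len(cases), "cases; metadata keys:", sorted(cases[0].keys()))

case_dir = dataset_dir / "case_00194"

# The merged, postprocessed ground truth actually used in the challenge.
seg = nib.load(case_dir / "segmentation.nii.gz").get_fdata().astype(np.uint8)

# Assumed label convention: 0=background, 1=kidney, 2=tumor, 3=cyst.
for label, name in {1: "kidney", 2: "tumor", 3: "cyst"}.items():
    print(f"{name}: {np.count_nonzero(seg == label)} voxels")

# Raw per-instance, per-annotator masks that were merged into segmentation.nii.gz,
# named like kidney_instance-1_annotation-2.nii.gz.
for path in sorted((case_dir / "instances").glob("*.nii.gz")):
    print(path.name)
```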
[^3]: test set unreleased: How to Obtain Test Data in the KiTS23 Dataset? - KiTS Challenge
[^4]: It's important to note the distinction between what we call "annotations" and what we call "segmentations". We use "annotations" to refer to the raw vectorized interactions that the user generates during an annotation session. A "segmentation," on the other hand, refers to the rasterized output of a postprocessing script that uses "annotations" to define regions of interest.[^kits2023]