Day 797
Object detection / segmentation metrics & evaluation
- Nice description of the official COCO metrics: COCO - Common Objects in Context (a minimal pycocotools invocation is sketched after this list)
- Extremely interesting: Diagnosing Error in Object Detectors, University of Illinois at Urbana-Champaign (linked above in “analysis code”)
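For reference, a minimal sketch of running the official COCO evaluation with pycocotools; `instances_val.json` and `detections.json` are placeholder file names for a COCO-format ground truth file and a detection results file:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder file names: COCO-format ground truth and detection results
coco_gt = COCO("instances_val.json")
coco_dt = coco_gt.loadRes("detections.json")

# iouType="segm" would switch from box IoU to pixel-wise mask IoU
coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP, AP50, AP75, AP by object size, AR, etc.
```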
From SO:
[…] the only difference between mAP for object detection and instance segmentation is that when calculating overlaps between predictions and ground truths, one uses the pixel-wise IoU rather than bounding box IoU.
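To make that concrete, a small NumPy sketch of both IoU variants; the boxes, masks, and values are made up for illustration:

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def mask_iou(a, b):
    """Pixel-wise IoU of two boolean masks of the same shape."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Toy example: identical bounding boxes, very different pixel masks
m1 = np.zeros((10, 10), dtype=bool)
m1[2:8, 2:8] = True                   # solid square blob
m2 = np.zeros((10, 10), dtype=bool)
m2[2:8, 2:8] = np.eye(6, dtype=bool)  # thin diagonal inside the same box
print(box_iou((2, 2, 8, 8), (2, 2, 8, 8)))  # 1.0, the boxes coincide
print(mask_iou(m1, m2))                     # ~0.17, the masks barely overlap
```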
ROC curve / cutoff point
Finding an optimal cutoff point on a ROC curve is largely arbitrary, or rather it depends on what you actually need. There are lots of ways to find one; one common option is sketched below. (Nice list here, but I'd see if I can find a paper with a good overview: data visualization - How to determine best cutoff point and its confidence interval using ROC curve in R? - Cross Validated)
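One common choice from that list is Youden's J (maximize TPR - FPR); a minimal scikit-learn sketch, with purely synthetic labels and scores:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic labels and scores, purely for illustration
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = rng.normal(loc=y_true.astype(float), scale=1.0)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                      # Youden's J statistic per threshold
cutoff = thresholds[np.argmax(j)]  # threshold maximizing TPR - FPR
print(f"Youden's J cutoff: {cutoff:.3f}")
```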
Detectron2 internals
Nice series of posts on how Detectron2 works under the hood: Digging into Detectron 2 — part 1 | by Hiroto Honda | Medium
Paper comparing object detection metrics, with a focus on COCO & open source
Understanding model performance by looking at examples it got wrong but was confident about
The best way to build intuition about how your model performs is by looking at predictions that it was confident about but got wrong. With FiftyOne, this is easy. For example, let's create a view into our dataset looking at the samples with the most false positives.
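A sketch of that workflow against FiftyOne's "quickstart" zoo dataset; the eval_key and the per-sample eval_fp field follow evaluate_detections' documented behavior, but treat this as a sketch rather than the exact snippet from the post:

```python
import fiftyone as fo
import fiftyone.zoo as foz

# Small demo dataset shipping with ground truth and model predictions
dataset = foz.load_zoo_dataset("quickstart")

# Match predictions against ground truth; this writes per-sample
# eval_tp / eval_fp / eval_fn count fields onto the dataset
dataset.evaluate_detections(
    "predictions", gt_field="ground_truth", eval_key="eval"
)

# Samples with the most false positives first: the confident mistakes
view = dataset.sort_by("eval_fp", reverse=True)
session = fo.launch_app(view)
```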
More examples of the same: IoU a better detection evaluation metric | by Eric Hofesmann | Towards Data Science