
Fachbereich Veterinärmedizin



    Publication Database

    Are pathologist-defined labels reproducible?
    Comparison of the TUPAC16 mitotic figure dataset with an alternative set of labels (2020)

    Type
    Talk
    Authors
    Bertram, Christof A. (WE 12)
    Veta, Mitko
    Marzahl, Christian
    Stathonikos, Nikolas
    Maier, Andreas
    Klopfleisch, Robert (WE 12)
    Aubreville, Marc
    Conference
    iMIMIC 2020, MIL3iD 2020, LABELS 2020: Interpretable and Annotation-Efficient Learning for Medical Image Computing
    Lima, Peru, 08.10.2020
    Source
    Interpretable and annotation-efficient learning for medical image computing: Third International Workshop, iMIMIC 2020, Second International Workshop, MIL3iD 2020, and 5th International Workshop, LABELS 2020, held in conjunction with MICCAI 2020, Lima, Peru, October 4–8, 2020, proceedings. Edited by Jaime Cardoso, Hien Van Nguyen, Nicholas Heller et al.
    1st edition, 2020
    Cham: Springer International Publishing, 2020. (Lecture Notes in Computer Science; vol. 12446), pp. 204–213
    ISBN: 978-3-030-61166-8
    Language
    English
    References
    URL (full text): https://link.springer.com/chapter/10.1007/978-3-030-61166-8_22
    DOI: 10.1007/978-3-030-61166-8_22
    Contact
    Institut für Tierpathologie

    Robert-von-Ostertag-Str. 15
    14163 Berlin
    +49 30 838 62450
    pathologie@vetmed.fu-berlin.de

    Abstract

    Pathologist-defined labels are the gold standard for histopathological datasets, despite well-known limitations in consistency for some tasks. To date, several datasets on mitotic figures are available and have been used to develop promising deep learning-based algorithms. Assessing the robustness of those algorithms and the reproducibility of their methods requires testing on several independent datasets. The influence of the different labeling methods used for these available datasets is currently unknown. To address this, we present an alternative set of labels for the images of the auxiliary mitosis dataset of the TUPAC16 challenge. In addition to manual mitotic figure screening, we used a novel, algorithm-aided labeling process that minimized the risk of missing rare mitotic figures in the images. All potential mitotic figures were independently assessed by two pathologists. The novel, publicly available set of labels contains 1,999 mitotic figures (+28.80%) and additionally includes 10,483 labels of cells with high similarity to mitotic figures (hard examples). Using a standard deep learning object detection architecture, we found a significant difference in F1 scores between the original label set (0.549) and the new alternative label set (0.735). Models trained on the alternative set showed higher overall confidence values, suggesting higher overall label consistency. The findings of the present study show that pathologist-defined labels may vary significantly, resulting in notable differences in model performance. Comparisons of deep learning-based algorithms across independent datasets with different labeling methods should therefore be made with caution.
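    The F1 comparison above can be illustrated with a minimal sketch. The function below greedily matches predicted mitotic figure centers to ground-truth centers within a fixed pixel radius and computes the resulting F1 score. The matching radius and all names are illustrative assumptions, not the paper's exact evaluation protocol.

```python
from math import hypot

def f1_score(preds, truths, max_dist=25.0):
    """Greedy one-to-one matching of predicted to ground-truth centers.

    A prediction counts as a true positive if it lies within max_dist
    pixels of a still-unmatched ground-truth center. The 25-pixel radius
    is an illustrative assumption, not taken from the paper.
    """
    unmatched = list(truths)
    tp = 0
    for p in preds:
        # Find the closest unmatched ground truth within the radius.
        best, best_d = None, max_dist
        for t in unmatched:
            d = hypot(p[0] - t[0], p[1] - t[1])
            if d <= best_d:
                best, best_d = t, d
        if best is not None:
            unmatched.remove(best)
            tp += 1
    fp = len(preds) - tp   # predictions with no matching ground truth
    fn = len(truths) - tp  # ground truths missed by the detector
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if truths else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

    With two of three detections near a ground-truth center, precision is 2/3 and recall is 1, giving an F1 of 0.8; differing label sets change which detections count as true positives, which is how the two label sets can yield such different scores for the same architecture.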