
Fachbereich Veterinärmedizin



    Publication Database

    Are fast labeling methods reliable?
    A case study of computer-aided expert annotations on microscopy slides (2020)

    Type
    Talk
    Authors
    Marzahl, Christian
    Bertram, Christof A. (WE 12)
    Aubreville, Marc
    Petrick, Anne
    Weiler, Kristina
    Gläsel, Agnes C.
    Fragoso-Garcia, Marco (WE 12)
    Merz, Sophie (WE 12)
    Bartenschlager, Florian (WE 12)
    Hoppe, Judith (WE 12)
    Langenhagen, Alina (WE 12)
    Jasensky, Anne-Katherine
    Voigt, Jörn
    Klopfleisch, Robert (WE 12)
    Maier, Andreas
    Conference
    MICCAI 2020
    Lima, Peru, October 4–8, 2020
    Source
    Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 : 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part I — Anne L. Martel, Purang Abolmaesumi, Danail Stoyanov, Diana Mateus, Maria A. Zuluaga, S. Kevin Zhou, Daniel Racoceanu, Leo Joskowicz (Eds.)
    1st edition, 2020
    Berlin, Heidelberg: Springer, 2020. Lecture Notes in Computer Science, vol. 12261 — pp. 24–32
    ISBN: 978-3-030-59710-8
    Language
    English
    Links
    URL (full text): https://link.springer.com/book/10.1007%2F978-3-030-59710-8
    DOI: 10.1007/978-3-030-59710-8
    Contact
    Institut für Tierpathologie

    Robert-von-Ostertag-Str. 15
    14163 Berlin
    +49 30 838 62450
    pathologie@vetmed.fu-berlin.de

    Abstract

    Deep-learning-based pipelines have shown the potential to revolutionize microscopy image diagnostics by providing visual augmentations and evaluations to a trained pathology expert. However, to match human performance, these methods rely on the availability of vast amounts of high-quality labeled data, which poses a significant challenge. To circumvent this, augmented labeling methods, also known as expert-algorithm collaboration, have recently become popular. However, the potential biases introduced by this mode of operation and their effects on training deep neural networks are not entirely understood. This work aims to shed light on some of these effects by providing a case study for three pathologically relevant diagnostic settings. Ten trained pathology experts performed a labeling task first without and later with computer-generated augmentation. To investigate different biasing effects, we intentionally introduced errors into the augmentation. In total, the pathology experts annotated 26,015 cells on 1,200 images in this novel annotation study. Backed by this extensive data set, we found that the concordance of multiple experts was significantly higher in the computer-aided setting than in the unaided one. However, a significant percentage of the deliberately introduced false labels went unnoticed by the experts.
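
    The abstract does not state how the inter-expert concordance was quantified. As a purely illustrative sketch in Python, assuming point-style cell annotations and a distance-based matching rule (the data layout, the 25-pixel threshold, and the F1-style score are assumptions for illustration, not the authors' method), pairwise agreement between experts could be computed like this:

        from itertools import combinations
        import math

        # Hypothetical data structure: annotations[expert][image] is a list
        # of (x, y) cell centre points placed by that expert on that image.

        def match_points(a, b, max_dist=25.0):
            """Greedily match each point in `a` to the nearest unmatched
            point in `b` within `max_dist` pixels; return the match count."""
            unmatched_b = list(b)
            matches = 0
            for (ax, ay) in a:
                best, best_d = None, max_dist
                for p in unmatched_b:
                    d = math.hypot(ax - p[0], ay - p[1])
                    if d <= best_d:
                        best, best_d = p, d
                if best is not None:
                    unmatched_b.remove(best)
                    matches += 1
            return matches

        def pairwise_f1(a, b, max_dist=25.0):
            """F1-style concordance of two experts' point sets on one image."""
            if not a and not b:
                return 1.0
            tp = match_points(a, b, max_dist)
            precision = tp / len(a) if a else 0.0
            recall = tp / len(b) if b else 0.0
            if precision + recall == 0:
                return 0.0
            return 2 * precision * recall / (precision + recall)

        def mean_concordance(annotations):
            """Average pairwise F1 over all expert pairs and all images
            (assumes every expert annotated the same set of images)."""
            scores = []
            for e1, e2 in combinations(annotations, 2):
                for image in annotations[e1]:
                    scores.append(pairwise_f1(annotations[e1][image],
                                              annotations[e2][image]))
            return sum(scores) / len(scores)

        # Toy example with two experts and one image:
        annotations = {
            "expert_1": {"img_001": [(10, 10), (50, 52), (90, 90)]},
            "expert_2": {"img_001": [(12, 9), (49, 55)]},
        }
        print(f"mean pairwise concordance: {mean_concordance(annotations):.2f}")

    For the toy data above, the sketch reports a mean pairwise concordance of 0.80; in a real evaluation the matching threshold would have to reflect cell size and image resolution.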