Fachbereich Veterinärmedizin


    Publication Database

    Towards a fully automated surveillance of well-being status in laboratory mice using deep learning:
    starting with facial expression analysis (2020)

    Type
    Journal article / scientific contribution
    Authors
    Andresen, Niek
    Wöllhaf, Manuel
    Hohlbaum, Katharina (WE 11)
    Lewejohann, Lars (WE 11)
    Hellwich, Olaf
    Thöne-Reineke, Christa (WE 11)
    Belik, Vitaly (WE 16)
    Source
    PLOS ONE
    Volume: 15
    Issue: 4
    Pages: e0228059
    ISSN: 1932-6203
    Language
    English
    Links
    URL (full text): https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0228059
    DOI: 10.1371/journal.pone.0228059
    Pubmed: 32294094
    Contact
    Institut für Tierschutz, Tierverhalten und Versuchstierkunde

    Königsweg 67
    14163 Berlin
    +49 30 838 61146
    tierschutz@vetmed.fu-berlin.de

    Abstract

    Assessing the well-being of an animal is hindered by the limitations of efficient communication between humans and animals. Instead of direct communication, a variety of parameters are employed to evaluate the well-being of an animal. Especially in the field of biomedical research, scientifically sound tools to assess pain, suffering, and distress in experimental animals are in high demand for ethical and legal reasons. For mice, the most commonly used laboratory animals, a valuable tool is the Mouse Grimace Scale (MGS), a coding system for facial expressions of pain in mice. We aim to develop a fully automated system for the surveillance of post-surgical and post-anesthetic effects in mice. Our work introduces a semi-automated pipeline as a first step towards this goal. A new data set of images of freely moving, black-furred laboratory mice is used and provided. Images were obtained after anesthesia (with isoflurane or a ketamine/xylazine combination) and surgery (castration). We deploy two pre-trained state-of-the-art deep convolutional neural network (CNN) architectures (ResNet50 and InceptionV3) and compare them with a third CNN architecture without pre-training. Depending on the particular treatment, we achieve an accuracy of up to 99% for recognizing the absence or presence of post-surgical and/or post-anesthetic effects on the facial expression.
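
    The abstract describes transfer learning with ImageNet-pre-trained CNN backbones (ResNet50, InceptionV3) for a binary decision about post-surgical/post-anesthetic effects in facial-expression images. The sketch below is only an illustration of that general approach, not the authors' code: it assumes PyTorch/torchvision and a hypothetical ImageFolder-style dataset with two class folders ("affected" and "unaffected").

    # Minimal sketch (assumed framework: PyTorch/torchvision), fine-tuning a
    # pre-trained ResNet50 for a two-class facial-expression classification task.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms
    from torch.utils.data import DataLoader

    # Standard ImageNet preprocessing for the pre-trained backbone.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Hypothetical dataset path with two class folders: affected/ and unaffected/.
    train_set = datasets.ImageFolder("data/mouse_faces/train", transform=preprocess)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

    # Load an ImageNet-pre-trained ResNet50 and replace the final layer
    # with a two-class head (effect present vs. absent).
    model = models.resnet50(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 2)

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Short fine-tuning loop; a real pipeline would add a validation split
    # and per-treatment evaluation.
    model.train()
    for epoch in range(5):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

    The same scaffold would apply to an InceptionV3 backbone or to a network trained from scratch for comparison, as described in the abstract; the details of the authors' pipeline are given in the full text linked above.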