
Department of Veterinary Medicine



    Publication Database

    Towards an automated facial expression analysis in mice using deep learning (2021)

    Type
    Talk
    Authors
    Hohlbaum, Katharina (WE 11)
    Andresen, Niek
    Wöllhaf, Manuel
    Lewejohann, Lars (WE 11)
    Hellwich, Olaf
    Thöne-Reineke, Christa (WE 11)
    Belik, Vitaly (WE 16)
    Congress
    11th World Congress on Alternatives and Animal Use in the Life Sciences
    Maastricht (online), 23.08. – 02.09.2021
    Source
    ALTEX Proceedings
    Volume: 9
    Issue: 1
    Pages: 28–29
    ISSN: 2194-0479
    Language
    English
    Links
    URL (full text): https://www.wc11maastricht.org/wc11-abstracts/
    Contact
    Institut für Veterinär-Epidemiologie und Biometrie

    Königsweg 67
    14163 Berlin
    +49 30 838 56034
    epi@vetmed.fu-berlin.de

    Abstract

    The Mouse Grimace Scale (MGS), a coding system for analyzing facial expressions of pain in mice, has become widely accepted as an important welfare indicator among experts in laboratory animal science in recent years (Campos-Luna et al., 2019). However, since the method is usually applied manually by human observers, MGS scores can be assumed to be subjective to some extent, and the reliability of the method is therefore frequently questioned. Moreover, if the facial expressions of mice are scored in real time by human observers standing directly in front of the cage, the presence of the humans can influence the animals' behavior: mice are prey animals and often hide signs of negative affective states when humans are present (Stasiak et al., 2003). To circumvent these problems, we aimed to develop facial expression recognition software for mice (Andresen et al., 2019). We used a dataset of images of adult male and female C57BL/6JRj mice that were either anesthetized with isoflurane or ketamine/xylazine, castrated (under isoflurane, meloxicam, and lidocaine/prilocaine), or untreated. The images were divided into two categories, post-surgical/anesthetic effects and no post-surgical/anesthetic effects, and a binary classifier was trained to differentiate between them. We used three convolutional neural network (CNN) architectures (two pre-trained state-of-the-art deep CNNs, ResNet50 and InceptionV3, and one CNN of our own design without pre-training) and achieved an accuracy of up to 99%, on par with human performance. Moreover, Deep Taylor decomposition, a feature visualization technique, indicated that the network's decisions were indeed based mainly on the image areas depicting the mouse faces (Andresen et al., 2019). Our first steps towards fully automated facial expression recognition software are a significant contribution to refining pain and stress assessment in laboratory mice.