
Department of Veterinary Medicine



    Publication Database

    Validation of a Time-Distributed residual LSTM–CNN and BiLSTM for equine behavior recognition using collar-worn sensors (2025)

    Type
    Journal article / scientific contribution
    Authors
    Kirsch, Katharina (WE 11)
    Strutzke, Saskia
    Klitzing, Lara (WE 11)
    Pilger, Franziska
    Thöne-Reineke, Christa (WE 11)
    Hoffmann, Gundula
    Source
    Computers and Electronics in Agriculture : COMPAG online ; an international journal
    Volume: 231
    Article number: 109999
    ISSN: 0168-1699
    Language
    English
    Links
    URL (full text): https://www.sciencedirect.com/science/article/pii/S016816992500105X?via%3Dihub
    DOI: 10.1016/j.compag.2025.109999
    Contact
    Institut für Tierschutz, Tierverhalten und Versuchstierkunde

    Königsweg 67
    14163 Berlin
    +49 30 838 61146
    tierschutz@vetmed.fu-berlin.de

    Abstract

    Equine daily behavior is a key welfare indicator, offering insights into how environmental and training conditions influence health and well-being. Continuous direct behavior observation, however, is labor-intensive and impractical for large-scale studies. While advances in wearable sensors and deep learning have revolutionized human and animal activity recognition, automated wearable sensor systems for recognizing a diverse repertoire of equine daily behaviors remain limited.
    We propose a hierarchical deep learning framework combining a Time-Distributed Residual LSTM-CNN for extracting local spatiotemporal features from short subsegments of sensor data and a bidirectional LSTM (BiLSTM) for capturing long-term temporal dependencies. Our model was validated using approximately 60 h of tri-axial accelerometer and gyroscope data collected from 10 horses wearing collar-mounted sensors. Fifteen daily behaviors were labeled based on video recordings. The model achieved an overall classification accuracy of > 93 % in 10-fold cross-validation and > 85 % in leave-one-subject-out cross-validation. Classification performance was significantly affected by housing conditions and the associated variation in the frequency of behaviors in the dataset.
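    The abstract only names the building blocks of the architecture, so the following is a minimal sketch (not the authors' code) of a hierarchical model of this kind in Keras: a TimeDistributed residual LSTM-CNN extracts a feature vector from each short subsegment, and a BiLSTM aggregates the subsegment features across the whole window before a 15-class softmax. All layer sizes and the window layout (8 subsegments of 50 samples over 6 channels for the tri-axial accelerometer and gyroscope) are illustrative assumptions.

    # Minimal sketch of a Time-Distributed residual LSTM-CNN + BiLSTM classifier.
    # Layer sizes and window layout are assumptions, not the published architecture.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    N_SUBSEGMENTS, SUBSEG_LEN, N_CHANNELS, N_CLASSES = 8, 50, 6, 15

    def residual_lstm_cnn(subseg_len, n_channels):
        """Per-subsegment feature extractor: Conv1D block with a residual
        shortcut, followed by an LSTM that summarizes the subsegment."""
        inp = layers.Input(shape=(subseg_len, n_channels))
        x = layers.Conv1D(64, 5, padding="same", activation="relu")(inp)
        x = layers.Conv1D(64, 5, padding="same")(x)
        shortcut = layers.Conv1D(64, 1, padding="same")(inp)  # match channel count
        x = layers.Activation("relu")(layers.Add()([x, shortcut]))
        x = layers.LSTM(64)(x)  # one feature vector per subsegment
        return models.Model(inp, x)

    # Full window: subsegment sequence -> TimeDistributed extractor -> BiLSTM -> softmax
    window_inp = layers.Input(shape=(N_SUBSEGMENTS, SUBSEG_LEN, N_CHANNELS))
    feats = layers.TimeDistributed(residual_lstm_cnn(SUBSEG_LEN, N_CHANNELS))(window_inp)
    seq = layers.Bidirectional(layers.LSTM(64))(feats)  # long-term temporal context
    out = layers.Dense(N_CLASSES, activation="softmax")(seq)

    model = models.Model(window_inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()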
    This study provides a validated framework for sensor-based automatic behavior recognition in horses, capable of capturing both local spatiotemporal and long-term temporal dependencies from raw sensor data. Our proposed framework enables scalable and reliable monitoring of equine daily behaviors and makes an important contribution to the development of automated, data-driven approaches to equine welfare assessment.
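    For context on the reported leave-one-subject-out result, the sketch below shows one common way to run such an evaluation, holding out each of the 10 horses in turn so the score reflects generalization to unseen animals. The windowed arrays, horse IDs, and the build_model helper are hypothetical placeholders, not the authors' pipeline.

    # Hedged sketch of a leave-one-subject-out evaluation with per-horse grouping.
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.metrics import accuracy_score

    def evaluate_loso(windows, labels, horse_ids, build_model, epochs=20):
        """Train on all horses but one, test on the held-out horse, repeat."""
        loso = LeaveOneGroupOut()
        accuracies = []
        for train_idx, test_idx in loso.split(windows, labels, groups=horse_ids):
            model = build_model()  # fresh model per fold to avoid leakage
            model.fit(windows[train_idx], labels[train_idx],
                      epochs=epochs, verbose=0)
            preds = np.argmax(model.predict(windows[test_idx], verbose=0), axis=1)
            accuracies.append(accuracy_score(labels[test_idx], preds))
        return float(np.mean(accuracies)), accuracies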