Show simple item record

dc.contributor.author  Mitterwallner, Veronika
dc.contributor.author  Peters, Anne
dc.contributor.author  Edelhoff, Hendrik
dc.contributor.author  Mathes, Gregor
dc.contributor.author  Nguyen, Hien
dc.contributor.author  Peters, Wibke Erika Brigitta
dc.contributor.author  Heurich, Marco Dietmar
dc.contributor.author  Steinbauer, Manuel
dc.identifier.citation  Remote Sensing in Ecology and Conservation. 2023.  en_US
dc.description.abstract  As human activities in natural areas increase, understanding human–wildlife interactions is crucial. Big data approaches, like large-scale camera trap studies, are becoming more relevant for studying these interactions. In addition, open-source object detection models are rapidly improving and have great potential to enhance the image processing of camera trap data from human and wildlife activities. In this study, we evaluate the performance of the open-source object detection model MegaDetector in cross-regional monitoring with camera traps. Its performance at detecting and counting humans, animals and vehicles is assessed by comparing the detection results with manual classifications of more than 300 000 camera trap images from three study regions. Moreover, we investigate structural patterns of misclassification and evaluate the model's results for typical temporal analyses conducted in ecological research. Overall, the detection model was highly accurate: 96.0% accuracy for animals, 93.8% for persons and 99.3% for vehicles. Results reveal systematic patterns in misclassifications that can be automatically identified and removed. In addition, we show that the detection model can readily count people and animals on images, underestimating counts by only 0.05 per image for persons and 0.01 per image for vehicles and animals. Most importantly, the temporal pattern in a long-term time series of manually classified human and wildlife activities was highly correlated with the classification results of the detection model (Pearson's r = 0.996, p < 0.001), and diurnal kernel densities of activities were almost equivalent for manual and automated classification. The results thus demonstrate the applicability of the detection model in the image classification process of cross-regional camera trap studies without further manual intervention. Besides greatly accelerating processing, the model is also suitable for long-term monitoring and enables reproducible scientific studies while complying with privacy regulations.  en_US
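The abstract's count comparison (automated minus manual counts per image, e.g. −0.05 for persons) can be sketched from MegaDetector-style JSON output, which lists per-image detections with a category (1 = animal, 2 = person, 3 = vehicle) and a confidence score. This is a minimal illustration, not the paper's pipeline: the file names, records and the 0.8 confidence threshold are invented for the example.

```python
# Hypothetical sketch: turn MegaDetector-style detections into per-image
# counts and compare them with manual counts. Data and threshold invented.

CATEGORIES = {"1": "animal", "2": "person", "3": "vehicle"}  # MegaDetector convention
CONF_THRESHOLD = 0.8  # assumed cut-off, not taken from the paper

def count_detections(image, threshold=CONF_THRESHOLD):
    """Count above-threshold detections per category for one image record."""
    counts = {name: 0 for name in CATEGORIES.values()}
    for det in image.get("detections", []):
        if det["conf"] >= threshold:
            counts[CATEGORIES[det["category"]]] += 1
    return counts

def mean_count_error(auto_counts, manual_counts, category):
    """Mean (automated - manual) count per image for one category."""
    diffs = [a[category] - m[category] for a, m in zip(auto_counts, manual_counts)]
    return sum(diffs) / len(diffs)

# Invented example records in a MegaDetector-like output format.
images = [
    {"file": "img_001.jpg", "detections": [{"category": "1", "conf": 0.95}]},
    {"file": "img_002.jpg", "detections": [{"category": "2", "conf": 0.91},
                                           {"category": "2", "conf": 0.42}]},
]
auto = [count_detections(img) for img in images]
manual = [{"animal": 1, "person": 0, "vehicle": 0},
          {"animal": 0, "person": 2, "vehicle": 0}]

print(mean_count_error(auto, manual, "person"))  # -0.5 on this toy data
```

A negative value means the automated counts fall short of the manual ones, matching the direction of the small underestimates reported in the abstract.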
dc.rights  Attribution-NonCommercial-NoDerivatives 4.0 International  *
dc.subject  camera traps  en_US
dc.subject  human–wildlife interactions  en_US
dc.subject  machine learning  en_US
dc.subject  recreation ecology  en_US
dc.subject  wildlife ecology  en_US
dc.title  Automated visitor and wildlife monitoring with camera traps and machine learning  en_US
dc.title.alternative  Automated visitor and wildlife monitoring with camera traps and machine learning  en_US
dc.type  Peer reviewed  en_US
dc.type  Journal article  en_US
dc.rights.holder  2023 The Authors.  en_US
dc.subject.nsi  VDP::Matematikk og Naturvitenskap: 400::Informasjons- og kommunikasjonsvitenskap: 420::Simulering, visualisering, signalbehandling, bildeanalyse: 429  en_US
dc.source.journal  Remote Sensing in Ecology and Conservation  en_US


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International.