Camera settings and biome influence the accuracy of citizen science approaches to camera trap image classification

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Camera settings and biome influence the accuracy of citizen science approaches to camera trap image classification. / Egna, N.; O'Connor, D.; Stacy-Dawes, J. et al.
In: Ecology and Evolution, Vol. 10, No. 21, 13.11.2020, p. 11954-11965.

Harvard

Egna, N, O'Connor, D, Stacy-Dawes, J, Tobler, MW, Pilfold, N, Neilson, K, Simmons, B, Davis, EO, Bowler, M, Fennessy, J, Glikman, JA, Larpei, L, Lekalgitele, J, Lekupanai, R, Lekushan, J, Lemingani, L, Lemirgishan, J, Lenaipa, D, Lenyakopiro, J, Lesipiti, RL, Lororua, M, Muneza, A, Rabhayo, S, Ole Ranah, SM, Ruppert, K & Owen, M 2020, 'Camera settings and biome influence the accuracy of citizen science approaches to camera trap image classification', Ecology and Evolution, vol. 10, no. 21, pp. 11954-11965. https://doi.org/10.1002/ece3.6722

APA

Egna, N., O'Connor, D., Stacy-Dawes, J., Tobler, M. W., Pilfold, N., Neilson, K., Simmons, B., Davis, E. O., Bowler, M., Fennessy, J., Glikman, J. A., Larpei, L., Lekalgitele, J., Lekupanai, R., Lekushan, J., Lemingani, L., Lemirgishan, J., Lenaipa, D., Lenyakopiro, J., ... Owen, M. (2020). Camera settings and biome influence the accuracy of citizen science approaches to camera trap image classification. Ecology and Evolution, 10(21), 11954-11965. https://doi.org/10.1002/ece3.6722

Vancouver

Egna N, O'Connor D, Stacy-Dawes J, Tobler MW, Pilfold N, Neilson K et al. Camera settings and biome influence the accuracy of citizen science approaches to camera trap image classification. Ecology and Evolution. 2020 Nov 13;10(21):11954-11965. Epub 2020 Oct 6. doi: 10.1002/ece3.6722

Author

Egna, N. ; O'Connor, D. ; Stacy-Dawes, J. et al. / Camera settings and biome influence the accuracy of citizen science approaches to camera trap image classification. In: Ecology and Evolution. 2020 ; Vol. 10, No. 21. pp. 11954-11965.

BibTeX

@article{34e02c66cd874f7e95ce43e7218061f2,
title = "Camera settings and biome influence the accuracy of citizen science approaches to camera trap image classification",
abstract = "Scientists are increasingly using volunteer efforts of citizen scientists to classify images captured by motion-activated trail cameras. The rising popularity of citizen science reflects its potential to engage the public in conservation science and accelerate processing of the large volume of images generated by trail cameras. While image classification accuracy by citizen scientists can vary across species, the influence of other factors on accuracy is poorly understood. Inaccuracy diminishes the value of citizen science derived data and prompts the need for specific best-practice protocols to decrease error. We compare the accuracy between three programs that use crowdsourced citizen scientists to process images online: Snapshot Serengeti, Wildwatch Kenya, and AmazonCam Tambopata. We hypothesized that habitat type and camera settings would influence accuracy. To evaluate these factors, each photograph was circulated to multiple volunteers. All volunteer classifications were aggregated to a single best answer for each photograph using a plurality algorithm. Subsequently, a subset of these images underwent expert review and were compared to the citizen scientist results. Classification errors were categorized by the nature of the error (e.g., false species or false empty), and reason for the false classification (e.g., misidentification). Our results show that Snapshot Serengeti had the highest accuracy (97.9%), followed by AmazonCam Tambopata (93.5%), then Wildwatch Kenya (83.4%). Error type was influenced by habitat, with false empty images more prevalent in open-grassy habitat (27%) compared to woodlands (10%). For medium to large animal surveys across all habitat types, our results suggest that to significantly improve accuracy in crowdsourced projects, researchers should use a trail camera set up protocol with a burst of three consecutive photographs, a short field of view, and determine camera sensitivity settings based on in situ testing. Accuracy level comparisons such as this study can improve reliability of future citizen science projects, and subsequently encourage the increased use of such data.",
keywords = "amazon, crowdsource, image processing, kenya, serengeti, trail camera, volunteer",
author = "N. Egna and D. O'Connor and J. Stacy-Dawes and M.W. Tobler and N. Pilfold and K. Neilson and B. Simmons and E.O. Davis and M. Bowler and J. Fennessy and J.A. Glikman and L. Larpei and J. Lekalgitele and R. Lekupanai and J. Lekushan and L. Lemingani and J. Lemirgishan and D. Lenaipa and J. Lenyakopiro and R.L. Lesipiti and M. Lororua and A. Muneza and S. Rabhayo and {Ole Ranah}, S.M. and K. Ruppert and M. Owen",
year = "2020",
month = nov,
day = "13",
doi = "10.1002/ece3.6722",
language = "English",
volume = "10",
pages = "11954--11965",
journal = "Ecology and Evolution",
issn = "2045-7758",
publisher = "John Wiley and Sons Ltd",
number = "21",
}

RIS

TY - JOUR

T1 - Camera settings and biome influence the accuracy of citizen science approaches to camera trap image classification

AU - Egna, N.

AU - O'Connor, D.

AU - Stacy-Dawes, J.

AU - Tobler, M.W.

AU - Pilfold, N.

AU - Neilson, K.

AU - Simmons, B.

AU - Davis, E.O.

AU - Bowler, M.

AU - Fennessy, J.

AU - Glikman, J.A.

AU - Larpei, L.

AU - Lekalgitele, J.

AU - Lekupanai, R.

AU - Lekushan, J.

AU - Lemingani, L.

AU - Lemirgishan, J.

AU - Lenaipa, D.

AU - Lenyakopiro, J.

AU - Lesipiti, R.L.

AU - Lororua, M.

AU - Muneza, A.

AU - Rabhayo, S.

AU - Ole Ranah, S.M.

AU - Ruppert, K.

AU - Owen, M.

PY - 2020/11/13

Y1 - 2020/11/13

N2 - Scientists are increasingly using volunteer efforts of citizen scientists to classify images captured by motion-activated trail cameras. The rising popularity of citizen science reflects its potential to engage the public in conservation science and accelerate processing of the large volume of images generated by trail cameras. While image classification accuracy by citizen scientists can vary across species, the influence of other factors on accuracy is poorly understood. Inaccuracy diminishes the value of citizen science derived data and prompts the need for specific best-practice protocols to decrease error. We compare the accuracy between three programs that use crowdsourced citizen scientists to process images online: Snapshot Serengeti, Wildwatch Kenya, and AmazonCam Tambopata. We hypothesized that habitat type and camera settings would influence accuracy. To evaluate these factors, each photograph was circulated to multiple volunteers. All volunteer classifications were aggregated to a single best answer for each photograph using a plurality algorithm. Subsequently, a subset of these images underwent expert review and were compared to the citizen scientist results. Classification errors were categorized by the nature of the error (e.g., false species or false empty), and reason for the false classification (e.g., misidentification). Our results show that Snapshot Serengeti had the highest accuracy (97.9%), followed by AmazonCam Tambopata (93.5%), then Wildwatch Kenya (83.4%). Error type was influenced by habitat, with false empty images more prevalent in open-grassy habitat (27%) compared to woodlands (10%). For medium to large animal surveys across all habitat types, our results suggest that to significantly improve accuracy in crowdsourced projects, researchers should use a trail camera set up protocol with a burst of three consecutive photographs, a short field of view, and determine camera sensitivity settings based on in situ testing. Accuracy level comparisons such as this study can improve reliability of future citizen science projects, and subsequently encourage the increased use of such data.

AB - Scientists are increasingly using volunteer efforts of citizen scientists to classify images captured by motion-activated trail cameras. The rising popularity of citizen science reflects its potential to engage the public in conservation science and accelerate processing of the large volume of images generated by trail cameras. While image classification accuracy by citizen scientists can vary across species, the influence of other factors on accuracy is poorly understood. Inaccuracy diminishes the value of citizen science derived data and prompts the need for specific best-practice protocols to decrease error. We compare the accuracy between three programs that use crowdsourced citizen scientists to process images online: Snapshot Serengeti, Wildwatch Kenya, and AmazonCam Tambopata. We hypothesized that habitat type and camera settings would influence accuracy. To evaluate these factors, each photograph was circulated to multiple volunteers. All volunteer classifications were aggregated to a single best answer for each photograph using a plurality algorithm. Subsequently, a subset of these images underwent expert review and were compared to the citizen scientist results. Classification errors were categorized by the nature of the error (e.g., false species or false empty), and reason for the false classification (e.g., misidentification). Our results show that Snapshot Serengeti had the highest accuracy (97.9%), followed by AmazonCam Tambopata (93.5%), then Wildwatch Kenya (83.4%). Error type was influenced by habitat, with false empty images more prevalent in open-grassy habitat (27%) compared to woodlands (10%). For medium to large animal surveys across all habitat types, our results suggest that to significantly improve accuracy in crowdsourced projects, researchers should use a trail camera set up protocol with a burst of three consecutive photographs, a short field of view, and determine camera sensitivity settings based on in situ testing. Accuracy level comparisons such as this study can improve reliability of future citizen science projects, and subsequently encourage the increased use of such data.

KW - amazon

KW - crowdsource

KW - image processing

KW - kenya

KW - serengeti

KW - trail camera

KW - volunteer

U2 - 10.1002/ece3.6722

DO - 10.1002/ece3.6722

M3 - Journal article

VL - 10

SP - 11954

EP - 11965

JO - Ecology and Evolution

JF - Ecology and Evolution

SN - 2045-7758

IS - 21

ER -