Virginia Tech
Files (partial listing, 58 files total):

  • nightly detection 2012.csv (504.24 kB) [dataset]
  • Fort Drum, NY Acoustic Data and R code Text File.txt (1.7 kB) [text]
  • Acoustic Data Compiled.csv (1.2 MB) [dataset]
  • 2009 KscopevsHandID.csv (3.51 kB) [dataset]
  • 2003 KscopevsHandID.csv (1.01 kB) [dataset]
  • 2006 KscopevsHandID.csv (5.89 kB) [dataset]
  • 2007 Echo-Kscope.csv (399.57 kB) [dataset]
  • 2004 EKT Comparison.csv (3.5 kB) [dataset]
  • Kscope and Echoclass CM.R (19.72 kB) [text]
  • Modelling across programs.R (15.92 kB) [text]
  • nightly detection 2015.csv (317.22 kB) [dataset]
  • 2003 EKT Comparison.csv (0.89 kB) [dataset]
  • 2011 Echo-Kscope.csv (11.11 MB) [dataset]
  • 2005 KscopevsHandID.csv (5.2 kB) [dataset]
  • 2009 EKT Comparison.csv (2.93 kB) [dataset]
  • 2014 Echo-Kscope.csv (16.31 MB) [dataset]
  • nightly detection 2013.csv (353.09 kB) [dataset]
  • All Comparison nightyear.csv (25.68 kB) [dataset]
  • nightly detection 2007.csv (5.11 kB) [dataset]
  • nightly detection 2004.csv (9.21 kB) [dataset]

Fort Drum, NY Acoustic Data and R code

Dataset posted on 2021-02-25, 13:58, authored by Tomas Nocera and W. Mark Ford
With the declines in abundance and changing distributions of bat species affected by white-nose syndrome, increased reliance on acoustic monitoring has become the new "normal." As such, the ability to accurately identify individual bat species with acoustic identification programs has become increasingly important. We assessed rates of disagreement between the three U.S. Fish and Wildlife Service–approved acoustic identification software programs (Kaleidoscope Pro 4.2.0, Echoclass 3.1, and Bat Call Identification 2.7d) and manual visual identification, using acoustic data collected during summers from 2003 to 2017 at Fort Drum, New York.

We assessed the percentage of agreement between programs through pairwise comparisons at the total nightly count level, the individual file level (i.e., individual echolocation pass call files), and the grouped maximum likelihood estimate level (i.e., probability values that a species is misclassified as present when in fact it is absent), using preplanned contrasts, Akaike Information Criterion, and annual confusion matrices. Interprogram agreement at the individual file level was low, as measured by Cohen's kappa (0.2–0.6). However, pairwise comparisons at the site-night level indicated higher program agreement (40–90%) using single-season occupancy metrics.

In comparing analytical outcomes across our different datasets (i.e., how comparable the programs and visual identification are regarding the relationship between environmental conditions and bat activity), we found high congruency in both the relative rankings of the models and the relative level of support for each individual model. This indicates that, when analyzing bat calls, individual software packages yield consistent ecological inference beyond the file-by-file level at the scales used by managers.
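The file-level agreement statistic referenced above is Cohen's kappa, which corrects observed agreement between two classifiers for the agreement expected by chance. As a rough illustration of the computation (a minimal sketch; the species codes and values below are hypothetical, not taken from the dataset's files, whose analysis code is provided in the included .R scripts):

```python
# Sketch of Cohen's kappa for inter-program agreement on species IDs.
# Species labels here are made-up four-letter codes for illustration only.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two raters/classifiers over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance-expected agreement from each rater's marginal label frequencies.
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Toy example: two programs classifying the same eight echolocation files.
program_1 = ["MYLU", "MYLU", "EPFU", "MYSE", "MYLU", "EPFU", "MYSE", "MYLU"]
program_2 = ["MYLU", "EPFU", "EPFU", "MYSE", "MYLU", "MYLU", "MYSE", "MYLU"]
print(round(cohens_kappa(program_1, program_2), 3))  # -> 0.6
```

Kappa near 0 indicates agreement no better than chance; values in the 0.2–0.6 range reported here are conventionally read as fair to moderate agreement.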
Depending on objectives, we believe our results can help users choose automated software and maximum likelihood estimate thresholds appropriate for their needs, and allow better cross-comparison of studies that use different automated acoustic software. A list of the files in the dataset, including a brief description of each file type, can be found in 'Fort Drum, NY Acoustic Data and R code Text File.txt'.

History

Publisher

University Libraries, Virginia Tech

Language

  • English

Location

Fort Drum, New York