In an increasingly technocratic society, science is largely compartmentalized as a specialized, high-skilled industry. There is even debate within the scientific community about who can aptly hold the “scientist” calling card, predicated on the perceived significance of academic degrees and fluency in technical jargon. Can you call yourself a scientist with only a Bachelor’s? A Master’s? Or is the descriptor reserved exclusively for those with a PhD?
But science, policy, and social wellbeing all benefit from communal contributions to research. Community science supports the wellness of integrated social-ecological systems, bolsters community empowerment through participation in science, and works to break down the exclusionary barriers of academia. Increasingly, community science is being recognized by federal and regulatory bodies: the 2017 Crowdsourcing and Citizen Science Act publicly acknowledged both the potential of community science and the benefit of public participation in research (McLaughlin et al., 2019). However, significant concerns remain about using community science (CS) data in academic research and environmental decision-making. In an exploratory study by The Wilson Center on perceived barriers to integrating CS data in federal agencies, participants (anonymous federal employees) cited perceived issues with data quality, management, and privacy, as well as concerns about regulatory barriers, a lack of institutional recognition and funding, and a general absence of successful examples of integrating CS into policy (Lab TWCC, 2014). This sentiment is shared by many in both political and scientific spaces and hinders further integration of CS into environmental decision-making.
In this article, I present a case study to show how community science data may be used successfully in federal environmental decision-making, focusing on the use of eBird abundance data in determining raptor conservation measures. In this example, I will illustrate how the technical strengths of eBird as a community science platform address key data quality concerns, as well as considerations for future pathways of integration in a demonstrated federal capacity.
eBird, a free, open-access community science platform documenting ornithological trends, has the potential to mitigate several key integration concerns and has a successful track record of assisting environmental decision-making at the federal level. eBird users submit observations by recording location, date, time spent birding, and the bird species observed with their counts. Observations that are unusual for the region, fall outside the typical seasonal range, or report an atypically high count are flagged by an algorithm and reviewed by regional editors for data validation. Media reviewers, who can include eBird users with more than 100 submissions, flag incorrect photo and sound-recording submissions for final review by a volunteer regional editor (“eBird data quality”, n.d.).
Because of consistent funding, eBird has developed a powerful data management and storage system as well as a well-maintained reviewer pool: in turn, it benefits from nearly two million observation submissions per month during peak migration seasons (usually May-June for spring migration and August-October for fall migration). eBird addresses data quality issues through an algorithmic data flagging process that is mediated by a supplemental round of human review. This human review is conducted by volunteers primarily recruited from a pool of high-volume, high-quality submitters on eBird, as well as local birding listservs in more remote areas (Kelling, 2018).
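The flagging logic described above can be sketched in miniature. The following Python is a simplified, hypothetical illustration: the rule structure, month ranges, thresholds, and species values are invented for this example and are not eBird’s actual regional filters.

```python
from dataclasses import dataclass


@dataclass
class FilterRule:
    """A hypothetical regional checklist filter: the months a species is
    expected and the maximum count that passes automatic validation."""
    species: str
    months: range    # months (1-12) in which the species is expected
    max_count: int   # counts above this are flagged for human review


def needs_review(species: str, count: int, month: int,
                 rules: dict[str, FilterRule]) -> bool:
    """Return True if an observation should be routed to a volunteer
    regional reviewer rather than accepted automatically."""
    rule = rules.get(species)
    if rule is None:
        return True          # species unusual for the region
    if month not in rule.months:
        return True          # outside the typical seasonal range
    if count > rule.max_count:
        return True          # atypically high count
    return False


# Invented example filter for one region:
rules = {"Bald Eagle": FilterRule("Bald Eagle", range(1, 13), 15)}

print(needs_review("Bald Eagle", 4, 6, rules))    # False: typical record
print(needs_review("Bald Eagle", 80, 6, rules))   # True: high count flagged
print(needs_review("Harpy Eagle", 1, 6, rules))   # True: unexpected species
```

In the real pipeline, records that fail these automated checks are not discarded; they are queued for the human reviewers described above, which is what lets eBird keep both data volume and data quality high.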
Due to eBird’s high volume of quality bird observation submissions, breadth of scope, and accessible open-source data management structure, data from the platform has been successfully used for technical environmental assessments at the federal level. A study conducted by the Cornell Lab of Ornithology and the US Fish and Wildlife Service (USFWS) found that, compared to similar platforms, eBird provides the most accurate and comprehensive data and statistical modeling for bald eagle distribution across the United States.
Case Study: Wind Turbine Collision Risk Modeling for Bald Eagles using eBird
Under the Bald and Golden Eagle Protection Act, it is illegal to directly or indirectly cause the death of a bald or golden eagle without an FWS permit. Research ornithologists and wind energy companies therefore need highly comprehensive eagle abundance data to accurately assess the risk of eagle-turbine collisions, both to avoid civil penalties and to conserve these birds of prey. To prevent accidental bald eagle mortality from wind turbine collisions, the USFWS initially generated a collision risk model for proposed wind energy sites based on “pre-construction estimates” drawn from existing observational eagle abundance data (New et al., 2015). This model performed well in areas with high observed eagle abundance; however, the USFWS was forced to evaluate additional data sources, including eBird, to predict collision risks in areas of lower observed abundance.
Methodology and Results
To validate eBird’s potential for eagle-turbine collision risk modeling, researchers evaluated the platform’s ability to accurately assess eagle abundance and distribution. Eagle data from eBird were validated against existing bird population datasets, such as the Midwinter Bald Eagle Survey, to assess the accuracy of the generated models. eBird was found to be the only platform with data comprehensive enough, even compared to observational studies, to accurately estimate eagle populations in both low- and high-abundance areas. The scope and level of detail of eBird submissions thus provide data sufficiently comprehensive to predict wind turbine collision risks to eagles across the United States. The collision exposure risk generated from eBird’s 50th-percentile eagle abundance accurately predicted 91-100% of high-use areas relative to existing datasets (Figure 1). “Low-use” regions were categorized as areas in which the eagle population index was definitively below the 80th quantile and are indicated by the green areas of maps a, b, and c.
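As a rough sketch of the quantile cutoff described above, the snippet below splits hypothetical grid-cell abundance values into high- and low-use classes at an 80th-percentile threshold. The data here are synthetic random draws, not modeled eagle abundance, and the code illustrates only the thresholding step, not the full risk model.

```python
import numpy as np

# Synthetic relative-abundance values for 1,000 grid cells in a study area
# (stand-ins for modeled eagle abundance, invented for this illustration).
rng = np.random.default_rng(0)
abundance = rng.gamma(shape=2.0, scale=1.5, size=1000)

# Cells at or above the 80th percentile are treated as "high-use";
# the rest are "low-use", mirroring the quantile cutoff described above.
threshold = np.quantile(abundance, 0.80)
high_use = abundance >= threshold

print(high_use.mean())  # 0.2: by construction, the top fifth of cells
```

By construction, a percentile cutoff like this always labels a fixed fraction of cells high-use; the substantive validation step is checking, as the researchers did, that those cells line up with areas of high eagle use in independent survey data.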
Due to the highly accurate abundance modeling generated from eBird data, the USFWS made the technical recommendation to use eBird as the sole data source for assessing eagle abundance in wind turbine collision risk modeling. This example is a rousing success story of integrating CS data into environmental decision-making and illustrates how perceived issues of CS integration can be mitigated through multi-faceted data validation.
As access to technology expands, science is rapidly leaving its esoteric backdrop and barreling toward community-based, data-driven narratives. With a simple swipe on a smartphone app, anyone with an interest in conservation can actively participate in, and add value to, research of ecological importance. USFWS and Cornell Lab researchers provide a framework for successfully incorporating semi-structured community science data into environmental decision-making. Moving forward, federal agencies may consider inclusion pathways that treat CS data as a complement to existing surveys rather than a primary source. Future integration of community science data into environmental decision-making may be facilitated by increased coordination between federal agencies and CS actors, a more streamlined data pipeline from observation to regulatory review, and continued research on the validity of CS data for ecological decision-making.
Image provided by Bill Moses, Hawk Mountain
Bald and Golden Eagle Protection Act: U.S. Fish & Wildlife Service (no date) FWS.gov. Available at: https://www.fws.gov/law/bald-and-golden-eagle-protection-act.
Cooke, W.W. and Merriam, C.H. (1888) “Report on bird migration in the Mississippi Valley in the years 1884 and 1885.” Available at: https://doi.org/10.5962/bhl.title.54982.
eBird data quality (no date) Help Center. Available at: https://support.ebird.org/en/support/solutions/articles/48000795278-the-ebird-review-process.
Kelling, S. (2018) “Improving data quality in eBird: the expert reviewer network,” Biodiversity Information Science and Standards, 2. Available at: https://doi.org/10.3897/biss.2.25394.
Lab, T.W.C.C. (2014) An exploratory study on barriers, Wilson Center: Commons Lab. Available at: https://stipcommunia.wordpress.com/2014/09/07/an-exploratory-study-on-barriers/.
McLaughlin, J., Benforado, J. and Liu, S. (2019) Report to Congress describes the breadth and scope of federal crowdsourcing and citizen science, CitizenScience.gov. Available at: https://www.citizenscience.gov/2019/06/18/report-to-congress-2019/#.
New, L. et al. (2015) “A collision risk model to predict avian fatalities at wind facilities: An example using golden eagles, Aquila chrysaetos,” PLOS ONE, 10(7). Available at: https://doi.org/10.1371/journal.pone.0130978.