Findings and implications for practitioners

In the context of a global biodiversity crisis, the most common type of biodiversity citizen science program, often called “contributory” (Bonney et al., 2009), invites volunteers to participate in knowledge production with a single task: data collection. Increasingly, data is collected through digital surveys submitted using smartphone apps and desktop portals. The scale of volunteers’ contributions is exemplified by the Australian biodiversity database (the Atlas of Living Australia), where citizen scientists have submitted 50% of the 115 million biodiversity records (Roger et al., 2023).

Some literature about contributory citizen science has maintained a realist view of participation (Chilvers & Kearnes, 2020), according to which participation happens in a prescribed way (e.g., individuals only collect data). This view has framed technologies, such as biodiversity monitoring apps, as tools for making participation easy, thus supporting program goals of more participants collecting more data. In this research project, drawing on science and technology studies and human geography, I challenged the assumptions of such a view and proposed instead a relational view that considers how participation is co-created by participants. I aimed to further open the black box of contributory biodiversity citizen science by interrogating the work of volunteers and digital technologies through three focal points:

  1. If not just collecting data, what are volunteers doing?
  2. If not just making data collection easier, how do digital technologies shape participation in knowledge production?
  3. If volunteers do not just collect, submit, and forget about data, how do they practice data care?

SUMMARY OF FINDINGS

  1. Rather than being limited to data collection, volunteers in contributory biodiversity citizen science engaged in diverse and unexpected knowledge practices, both individually and in groups, including varied ways of producing, sharing, and applying knowledge. These “additional” tasks were central to volunteers’ experience (not marginal) and often contributed to programs’ objectives.
  2. Digital participation environments made data collection easier than previous analog options (such as pen-and-paper surveys), yet offered little to no capability for volunteers to analyze and use the data. These technologies enforced the contributory logic by performing the participant as an individual data collector whose main task was to fill out and submit standardized digital surveys.
  3. Despite the limitations of digital technologies and data infrastructures, volunteers still cared about, for, and with data. Some went to great lengths to keep their data, as it was meaningful to them in multiple ways: data served as evidence of endangered animals and of the importance of citizen science programs, it held personal and affective value for participants, and it allowed them to care (for species and places) with data.

In summary, volunteers carried out varied knowledge and data care practices, and in doing so, they co-created participation, imprinting their own logics onto citizen science. Contributing to theory and practice, this research demonstrated that, alongside the dominant logic of digitalization “making participation easy” in biodiversity monitoring, volunteers had alternative logics for their practices, ones more in line with making participation meaningful.

MAKING PARTICIPATION MEANINGFUL: IMPLICATIONS FOR PRACTITIONERS

‘Good’ citizen science and ‘good’ public participation in research and conservation cannot be reduced to a universal formula or defined in advance. Recommendations for citizen science practice can only be useful if considered as situated, co-created, evolving, balancing practices in which the ‘goods’ desired as outcomes (e.g., caring for species, for data, for communities) are never guaranteed (Heuts & Mol, 2013) and require work.

Taking a relational view of contributory citizen science means that we (scholars and practitioners) should consider a) how to be responsive to different participants’ practices and modes of involvement and b) how to acknowledge, frame, and support (Marres, 2015a, p. 140) the contributions of participants. This brings us to the last (and for me personally, the most fundamental) aspect of my work, the implications for practice.

Citizen science program organizers would do well to consider the diverse practices and meaning-making within their volunteer cohorts. For a start, it may be beneficial to support those “additional” volunteer activities that clearly align with the program’s aims. Organizers could provide more clarity about “what is good data” from the perspective of the citizen science program, as well as guidance regarding data gaps and pressing research questions (M. M. Thompson et al., 2023). Learning about volunteers would require going beyond generic accounts of volunteer motivations and avoiding assumptions about volunteer practices and levels of expertise. Organizers could promote ongoing conversations with volunteers to learn about them and how they engage with the program. This would mean programs carving out time not just to teach participants (e.g., about monitoring species) but also to learn from and about them (a point also raised by Naquin et al. (2025)). However, there is a risk that further digitalization of citizen science activities, such as moving all training and debriefing sessions online, reduces the opportunities for casual exchanges in which additional or emergent practices are brought up and discussed between organizers and volunteers. If programs are further digitalized, organizers may have to become more intentional about learning what volunteers are actually doing.

A related point is that, given all the work that volunteers do, programs could find alternative ways to account for their success and impact. Though it may remain essential for programs to increase the number of participants and/or the amount of data contributed, it is also crucial to consider the depth and extent of participants’ involvement. There is an opportunity to engage in more qualitative assessments of involvement, such as sharing volunteers’ impact stories (Wehn et al., 2021) and conservation outcomes (Skarlatidou et al., 2019), as well as tracking a program’s retention rate over time and its engagement with other environmental collectives.

One ongoing challenge regarding digital technologies used in biodiversity citizen science relates to data life cycles and access to data. Because participants cared about and for data and went to great lengths to keep copies of meaningful data, losing access to data could deter some participants from continuing their participation. Since keeping data may be crucial to volunteer retention, programs would do well to consider how to address the issues around data life cycles, which abound amid constant technological transformation. This would mean, for example, keeping historical data accessible to participants and taking extra measures to preserve participants’ access to data when releasing new versions of apps and portals. It also ties to the previously mentioned point about data access and participants’ ability to download the data they create. Maintaining access to data can be considered both a normative and an instrumental issue for citizen science programs.


I learn through conversations. Feel free to contact me about this research project or future collaborations via email or LinkedIn.

debbie.gonzalezcanada [at] unimelb.edu.au

linkedin.com/in/debcanada

orcid.org/0000-0001-8879-8034