Relaying Volunteers’ Input on Machine Learning to Researchers

Part of Samantha Blickhan’s job at Chicago’s Adler Planetarium involves keeping tabs on a lively message board. The digital space hosts conversations among staffers, scholars, and some of the roughly 2.8 million volunteers in Zooniverse, the world’s largest platform for crowdsourced research online.
Starting around late 2023, staffers noticed a flurry of uncertainty and discomfort on the boards and in emails from volunteers. The hubbub centered on machine learning—a branch of artificial intelligence that enables computers to detect patterns in data and develop models that can assess new information. “It ranged from ‘hey, I’m a little concerned about this’ to ‘I can’t believe you’re letting this happen on your platform’,” Blickhan says.

Blickhan co-directs Zooniverse and acts as humanities research lead. The Zooniverse ethos holds that anyone can make meaningful contributions to research. The platform empowers volunteers to sort through scholars’ data troves in ways that don’t require specialized training. The site has hosted over 450 projects as diverse as identifying meteors in radio data and transcribing 15th-century wills from Spain.
The digital ruckus wasn’t sparked because machine learning was new to Zooniverse. Projects from over a decade ago used it, and volunteers have been commenting on machine learning in the forum and discussing it with Zooniverse team members ever since. But late 2023 marked a shift in the tenor of discussion around machine learning that coincided with two big events.

One was a wider cultural awareness about generative artificial intelligence (AI) tools like ChatGPT, which launched in late 2022. The other was a spike in popularity of machine learning methods in Zooniverse’s humanities projects, such as The Lives of Literary Characters, which used volunteer work to train a machine learning model to understand who characters are and what they do in texts.
While Zooniverse’s team could have simply continued to respond individually to volunteers as issues emerged, Blickhan says, “we knew that there was a bigger conversation to be had.”
A grant from The Kavli Foundation will enable that conversation. Blickhan is principal investigator on the grant. She and the Zooniverse team will explore the ethics of machine learning in citizen science in partnership with the University of Chicago’s Kavli Institute for Cosmological Physics (KICP); the Kavli Center for Ethics, Science, and the Public at the University of California, Berkeley; and the National Science Foundation-Simons SkAI Institute, a newly established effort to leverage AI for interpreting astronomical datasets, funded by the National Science Foundation (NSF) and Simons Foundation.
The effort will culminate in recommendations to guide the use of machine learning in citizen science projects as well as public communication about that work. Those guidelines will be hashed out in a series of small workshops and working sessions, with extensive input from both the broader scientific community and the millions-strong Zooniverse community. The recommendations will initially be implemented on Zooniverse’s platform and in SkAI projects. They’ll be available to any researcher who seeks guidelines for ethically incorporating machine learning into their work or talking about it with audiences.
“We’re really excited to strengthen our long-standing partnership with the Adler Planetarium in this new way, and forge new connections with the Kavli Center at Berkeley,” says Abigail Vieregg, director of KICP and member of the SkAI Institute.

“Zooniverse is a highly successful program, and an extremely interesting case study to explore ethics questions surrounding the use of AI in citizen science efforts, and in basic science research and communication more broadly. For KICP and the SkAI Institute, I hope that we can use new insights that come from the planned workshops to strengthen our own research program.”
“This project is exciting for a number of reasons,” says Brooke Smith, Director of Science and Society at The Kavli Foundation. “The topic of the ethics of using machine learning in science was raised by volunteers; it is unique to have topics in these early stages raised by publics instead of speculated by scientists. While raised by volunteers, exploration will include university scientists, and deliberations will be connected back to scientific research happening in labs.”
Blickhan adds that Zooniverse is eminently suited to be a nexus for dialogue about ethical implications of science and technology. The platform hosts scholars performing curiosity-driven and applied research in physical and biological sciences, as well as researchers in social sciences and humanities.

Zooniverse leaders are already thinking about the intersection of citizen science and AI; the platform’s co-founder, University of Minnesota astronomer Lucy Fortson, co-edited a December 2024 special issue of the journal Citizen Science: Theory and Practice on the topic. Furthermore, Zooniverse’s team sees its dedicated corps of volunteers as essential partners in their mission. “They’re not our audience; they’re our collaborators,” Blickhan says.
Transparent communication with the volunteers fosters trust, as does offering multiple ways for them to participate in developing new policies and recommendations. Not every volunteer will choose to participate in conversations, but regular communications and check-ins about this work will ensure that all volunteers understand Zooniverse’s commitment to incorporating cutting-edge technologies in an intentional, ethical way, Blickhan says.
Scientists, too, will have multiple opportunities to share their thoughts about the ethical considerations of leveraging machine learning. Blickhan and her colleagues will host discussions about their work at major conferences such as the American Association for the Advancement of Science (AAAS) annual meeting.
Typically, funding calls are tied to developing a new product or project, Blickhan says. The Kavli Foundation’s funding, in contrast, will allow her and her colleagues to fully pursue engrossing threads of conversation about machine learning that have never quite made it to the top of the priority list. “It can be difficult to find opportunities to fund work that gives us the time and space to engage in a dialogue amongst our audiences,” she says.
Blickhan envisions the recommendations around machine learning as a starting point rather than a conclusion. She hopes they will encourage researchers to think deeply about why they are incorporating machine learning into their projects. Ideally, they’ll also help volunteers understand what kinds of questions they should be asking researchers, so that they can weigh the benefits and risks of a citizen science project and make informed decisions about whether to join.