Machine learning, misinformation, and citizen science

Adrian K. YEE*

*Corresponding author for this work

Research output: Journal Article (refereed), peer-reviewed


Current methods of operationalizing concepts of misinformation in machine learning are often problematic, given idiosyncrasies in their success conditions compared to other models employed in the natural and social sciences. The intrinsic value-ladenness of misinformation and the dynamic relationship between citizens' and social scientists' concepts of misinformation jointly suggest that both the construct legitimacy and the construct validity of these models need to be assessed via more democratic criteria than has previously been recognized.
Original language: English
Article number: 56
Number of pages: 24
Journal: European Journal for Philosophy of Science
Issue number: 4
Early online date: 22 Nov 2023
Publication status: Published - Dec 2023

Bibliographical note

I thank the following for constructive feedback on ideas in this paper: Brian Baigrie, Franz Huber, Michael Miller, Regina Rini, Denis Walsh, the Pittsburgh HPS fringe theory group, the York University moral psychology lab, four anonymous reviewers, the Hong Kong Catastrophic Risk Centre for funding, and the Philosophy of Contemporary and Future Science research group at Lingnan University, Department of Philosophy. All errors and infelicities are mine alone.

Publisher Copyright:
© 2023, Springer Nature B.V.


  • Citizen science
  • Construct validity
  • Machine learning
  • Measurement
  • Misinformation
  • Social epistemology

