Abstract
Judgments of misinformation are made relative to the informational preferences of the communities making them. These informational standards change over time, however, inducing distribution shifts that threaten the adequacy of machine learning models of misinformation. After articulating five kinds of distribution shift, I discuss three solutions for enhancing model success: larger static training sets, social engineering, and dynamic sampling. I argue that, given the idiosyncratic ontology of misinformation, the first option is inadequate and the second is unethical, so the third is superior. I conclude, however, that the prospects for machine learning models of misinformation are far weaker than most have presupposed: because both epistemic and non-epistemic values are difficult to operationalize dynamically in machine code, such models turn out to be, perhaps surprisingly, at most a species of recommender system rather than literal truth detectors.
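The contrast between a static training set and dynamic sampling under distribution shift can be made concrete. The following is a minimal sketch, not drawn from the paper: the data stream, the drift model, the window size, and all names are hypothetical, and scikit-learn's `LogisticRegression` merely stands in for an arbitrary misinformation classifier.

```python
"""Sketch: a statically trained model vs. dynamic (sliding-window) resampling
under gradual concept drift. Everything here is illustrative and hypothetical."""
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def draw_batch(t, n=200):
    """Hypothetical stream: the decision boundary rotates over time t,
    standing in for community informational standards that shift."""
    X = rng.normal(size=(n, 2))
    angle = 0.03 * t                      # gradual drift in P(y | x)
    w = np.array([np.cos(angle), np.sin(angle)])
    y = (X @ w > 0).astype(int)
    return X, y

# Static option: train once on an initial sample and never update.
X0, y0 = draw_batch(t=0, n=1000)
static = LogisticRegression().fit(X0, y0)

# Dynamic sampling option: refit on a sliding window of recent batches.
window_X, window_y = [], []
for t in range(1, 101):
    X, y = draw_batch(t)
    window_X.append(X); window_y.append(y)
    window_X, window_y = window_X[-5:], window_y[-5:]   # keep recent data only
    dynamic = LogisticRegression().fit(np.vstack(window_X), np.hstack(window_y))

# Evaluate both against the community's *current* standard.
X_test, y_test = draw_batch(t=100)
print("static accuracy :", static.score(X_test, y_test))
print("dynamic accuracy:", dynamic.score(X_test, y_test))
```

Under these assumptions the statically trained model degrades as the boundary rotates away from its training distribution, while the dynamically resampled model tracks the moving standard; note that the dynamic model only ever approximates the current standard, which is the sense in which it behaves like a recommender system rather than a truth detector.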
| Original language | English |
| --- | --- |
| Journal | AI and Society |
| DOIs | |
| Publication status | Published - 9 May 2025 |
Bibliographical note
Publisher Copyright: © The Author(s) 2025.
Funding
Open Access Publishing Support Fund provided by Lingnan University.
Keywords
- Misinformation
- Social epistemology
- Philosophy of science
- Machine learning
- Philosophy of social science