Abstract
Governments and social scientists are increasingly developing machine learning methods to automate the identification of terrorists in real time and to predict future attacks. However, current operationalizations of ‘terrorist’ in artificial intelligence are difficult to justify given three issues that remain neglected: insufficient construct legitimacy, insufficient criterion validity, and insufficient construct validity. I conclude that machine learning methods should at most be used to identify singular individuals deemed terrorists, not to identify possible terrorists from some more general class, nor to predict terrorist attacks more broadly, given the intolerably high risks that result from such approaches.
Original language | English
---|---
Journal | Philosophy of Science
Early online date | 27 Nov 2024
DOIs |
Publication status | E-pub ahead of print, 27 Nov 2024
Keywords
- philosophy of science
- terrorism
- construct validity
- machine learning
- philosophy of artificial intelligence
- political philosophy