Machine Learning, Interpretability, & Drone Strikes

Activity: Talks or Presentations › Other Invited Talks or Presentations

Description

While automated weapons systems have become a mainstream topic in military ethics, the increasing use of machine learning methods in their development remains underanalyzed from an epistemological perspective. I argue that problems of algorithmic interpretability present uniquely difficult issues for contemporary drone strike methodology, especially in the context of recent US foreign policy, extant definitions of 'terrorist', and the opacity of many algorithms that either have been or may eventually be used in military contexts. I then defend an account of algorithmic interpretability best suited to enhancing our ability to assign proportionate moral responsibility to the numerous actors involved in preparing, approving, and conducting drone strikes.
Period: 11 Apr 2024
Held at: Department of Philosophy