Artificial Intelligence, Personal Ontology, and Existential Risks

Research output: Journal Article (refereed), peer-reviewed

Abstract

Some future forms of AI may kill us all, or at least irremediably maim our hopes to enjoy the benefits of great technological advances in the future – or so some philosophers working on existential risk have recently claimed. But what does ‘our’ stand for in this context – and why could the extension of that term not include, say, superintelligent AIs as well? This paper explores several foundational issues in the recent debates on existential risks related to AI, with a particular focus on the conceptual connections between ontological theories of our nature – answers to the question, ‘What am I?’ – and recent formulations of the notion of an existential risk.
Original language: English
Pages (from–to): 59–70
Number of pages: 12
Journal: Philosophy of AI
Volume: 1 (2025)
Publication status: Published - 22 Nov 2025

Funding

This work was supported by Hong Kong RGC/GRF Project No: 13607023.

Keywords

  • Existential Risk
  • AI
  • Superintelligence
  • Personal Identity
  • Personal Ontology
