On Learning Causal Structures from Non-Experimental Data without Any Faithfulness Assumption

Hanti LIN, Jiji ZHANG

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › Research › peer-review

7 Citations (Scopus)

Abstract

Consider the problem of learning, from non-experimental data, the causal (Markov equivalence) structure of the true, unknown causal Bayesian network (CBN) on a given, fixed set of (categorical) variables. This learning problem is known to be very hard, so much so that there is no learning algorithm that converges to the truth for all possible CBNs (on the given set of variables). So the convergence property has to be sacrificed for some CBNs—but for which? In response, the standard practice has been to design and employ learning algorithms that secure the convergence property for at least all the CBNs that satisfy the famous faithfulness condition, which implies sacrificing the convergence property for some CBNs that violate the faithfulness condition (Spirtes, Glymour, and Scheines, 2000). This standard design practice can be justified by assuming—that is, accepting on faith—that the true, unknown CBN satisfies the faithfulness condition. But the real question is this: Is it possible to explain, without assuming the faithfulness condition or any of its weaker variants, why it is mandatory rather than optional to follow the standard design practice? This paper aims to answer the above question in the affirmative. We first define an array of modes of convergence to the truth as desiderata that might or might not be achieved by a causal learning algorithm. Those modes of convergence concern (i) how pervasive the domain of convergence is on the space of all possible CBNs and (ii) how uniformly the convergence happens. Then we prove a result to the following effect: for any learning algorithm that tackles the causal learning problem in question, if it achieves the best achievable mode of convergence (considered in this paper), then it must follow the standard design practice of converging to the truth for at least all CBNs that satisfy the faithfulness condition—it is a requirement, not an option.
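To make the faithfulness condition concrete, the following is a minimal, hypothetical sketch (not taken from the paper, which concerns categorical variables): a linear-Gaussian causal model in which the direct effect of X on Z exactly cancels the indirect effect through Y, so the joint distribution exhibits a marginal independence that the true graph does not entail. This is the kind of CBN on which a learner designed around faithfulness gives up convergence.

    # Illustrative sketch, NOT from the paper: a causal model whose distribution
    # violates faithfulness through path cancellation. The paper studies
    # categorical variables; a linear-Gaussian system is used here only because
    # the cancellation is easy to exhibit and check empirically.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # True causal graph: X -> Y, Y -> Z, and a direct edge X -> Z.
    x = rng.normal(size=n)
    y = 1.0 * x + rng.normal(size=n)
    # The direct effect (-1.0) exactly cancels the indirect effect (1.0 * 1.0),
    # so X and Z come out marginally independent despite the edge X -> Z.
    z = 1.0 * y - 1.0 * x + rng.normal(size=n)

    def residual(a, b):
        # Residual of a after least-squares regression on b.
        slope, intercept = np.polyfit(b, a, 1)
        return a - (slope * b + intercept)

    print("corr(X, Z)     =", round(float(np.corrcoef(x, z)[0, 1]), 3))  # ~ 0.0
    print("corr(X, Z | Y) =", round(float(np.corrcoef(residual(x, y),
                                                      residual(z, y))[0, 1]), 3))  # clearly nonzero

Because corr(X, Z) is approximately zero, a learner that relies on faithfulness (for instance, one that drops the edge between X and Z upon finding the marginal independence) recovers the wrong structure on this model; this illustrates the sense in which convergence has to be sacrificed for some faithfulness-violating CBNs, as discussed in the abstract.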
Original language: English
Title of host publication: Proceedings of Machine Learning Research
Editors: Aryeh KONTOROVICH, Gergely NEU
Pages: 554-582
Number of pages: 29
Volume: 117
Publication status: Published - Feb 2020
Event: The 31st International Conference on Algorithmic Learning Theory - San Diego, United States
Duration: 8 Feb 2020 – 11 Feb 2020
http://alt2020.algorithmiclearningtheory.org/

Publication series

Name: Proceedings of Machine Learning Research
Volume: 117
ISSN (Print): 2640-3498

Conference

Conference: The 31st International Conference on Algorithmic Learning Theory
Abbreviated title: ALT 2020
Country/Territory: United States
City: San Diego
Period: 8/02/20 – 11/02/20
Internet address: http://alt2020.algorithmiclearningtheory.org/

Bibliographical note

We are indebted to Kevin Kelly, Clark Glymour, Frederick Eberhardt, Christopher Hitchcock, Peter Spirtes, Kun Zhang, Konstantin Genin, and three anonymous referees for their very helpful comments on earlier drafts of this paper. Lin’s research was supported by the University of California at Davis Startup Funds. Zhang’s research was supported in part by the Research Grants Council of Hong Kong under the General Research Fund LU13600715, and by a Faculty Research Grant from Lingnan University.

Keywords

  • Causal Bayesian Network
  • Causal Discovery
  • Faithfulness Condition
  • Learning Theory
  • Almost Everywhere Convergence
  • Locally Uniform Convergence
