Abstract
Consider the problem of learning, from non-experimental data, the causal (Markov equivalence) structure of the true, unknown causal Bayesian network (CBN) on a given, fixed set of (categorical) variables. This learning problem is known to be very hard, so much so that there is no learning algorithm that converges to the truth for all possible CBNs (on the given set of variables). So the convergence property has to be sacrificed for some CBNs—but for which? In response, the standard practice has been to design and employ learning algorithms that secure the convergence property for at least all the CBNs that satisfy the famous faithfulness condition, which implies sacrificing the convergence property for some CBNs that violate the faithfulness condition (Spirtes, Glymour, and Scheines, 2000). This standard design practice can be justified by assuming—that is, accepting on faith—that the true, unknown CBN satisfies the faithfulness condition. But the real question is this: Is it possible to explain, without assuming the faithfulness condition or any of its weaker variants, why it is mandatory rather than optional to follow the standard design practice? This paper aims to answer the above question in the affirmative. We first define an array of modes of convergence to the truth as desiderata that might or might not be achieved by a causal learning algorithm. Those modes of convergence concern (i) how pervasive the domain of convergence is on the space of all possible CBNs and (ii) how uniformly the convergence happens. Then we prove a result to the following effect: for any learning algorithm that tackles the causal learning problem in question, if it achieves the best achievable mode of convergence (considered in this paper), then it must follow the standard design practice of converging to the truth for at least all CBNs that satisfy the faithfulness condition—it is a requirement, not an option.
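The following worked example is an illustrative sketch of ours, not taken from the paper: it shows how a causal structure can violate the faithfulness condition when two causal paths cancel. For simplicity it uses a linear-Gaussian model, whereas the paper concerns categorical variables; the same phenomenon arises there with suitably tuned conditional probability tables.

```latex
% Illustrative sketch (our example, not from the paper): a faithfulness
% violation via path cancellation in the DAG  X -> Y, Y -> Z, X -> Z.
\begin{align*}
  Y &= a X + \varepsilon_Y, \\
  Z &= b Y + c X + \varepsilon_Z,
\end{align*}
% with X, \varepsilon_Y, \varepsilon_Z independent standard normals.
% Substituting Y into Z gives Z = (ab + c) X + b \varepsilon_Y + \varepsilon_Z, so
\begin{align*}
  \operatorname{Cov}(X, Z) &= ab + c .
\end{align*}
% Setting c = -ab makes Cov(X, Z) = 0, hence X and Z are independent (by
% joint Gaussianity). This independence is not entailed by d-separation,
% since X and Z are adjacent in the DAG: the distribution is Markov to the
% DAG but unfaithful to it.
```

Such canceling parameterizations are exactly the CBNs on which faithfulness-based algorithms such as PC (Spirtes, Glymour, and Scheines, 2000) may fail to converge; the paper's result concerns why converging on all faithful CBNs is nonetheless mandatory for any algorithm achieving the best achievable mode of convergence.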
Original language | English |
---|---|
Title of host publication | Proceedings of Machine Learning Research |
Editors | Aryeh Kontorovich, Gergely Neu |
Pages | 554-582 |
Number of pages | 29 |
Volume | 117 |
Publication status | Published - Feb 2020 |
Event | The 31st International Conference on Algorithmic Learning Theory (ALT 2020), San Diego, United States. Duration: 8 Feb 2020 → 11 Feb 2020. http://alt2020.algorithmiclearningtheory.org/ |
Publication series
Name | Proceedings of Machine Learning Research |
---|---|
Volume | 117 |
ISSN (Print) | 2640-3498 |
Conference
Conference | The 31st International Conference on Algorithmic Learning Theory |
---|---|
Abbreviated title | ALT 2020 |
Country/Territory | United States |
City | San Diego |
Period | 8/02/20 → 11/02/20 |
Internet address | http://alt2020.algorithmiclearningtheory.org/ |
Bibliographical note
We are indebted to Kevin Kelly, Clark Glymour, Frederick Eberhardt, Christopher Hitchcock, Peter Spirtes, Kun Zhang, Konstantin Genin, and three anonymous referees for their very helpful comments on earlier drafts of this paper. Lin’s research was supported by the University of California at Davis Startup Funds. Zhang’s research was supported in part by the Research Grants Council of Hong Kong under the General Research Fund LU13600715, and by a Faculty Research Grant from Lingnan University.

Keywords
- Causal Bayesian Network
- Causal Discovery
- Faithfulness Condition
- Learning Theory
- Almost Everywhere Convergence
- Locally Uniform Convergence
Projects
- Causation, Decision, and Imprecise Probabilities (Finished)
  ZHANG, J. (PI) & SEIDENFELD, T. (CoI)
  Research Grants Council (HKSAR)
  1/01/16 → 31/12/17
  Project: Grant Research