2018 | OriginalPaper | Chapter

Counterfactual Inference with Hidden Confounders Using Implicit Generative Models

Authors: Fujin Zhu, Adi Lin, Guangquan Zhang, Jie Lu

Published in: AI 2018: Advances in Artificial Intelligence

Publisher: Springer International Publishing

Abstract

In observational studies, a key problem is estimating the causal effect of a treatment on an outcome. Counterfactual inference addresses this by directly learning the treatment exposure surfaces. One of the biggest challenges in counterfactual inference is the presence of unobserved confounders: latent variables that affect both the treatment and the outcome. Building on recent advances in latent variable modelling and efficient Bayesian inference, deep latent variable models such as variational auto-encoders (VAEs) have been used to ease this challenge by learning the latent confounders from observations. For the sake of tractability, however, existing methods assume the posterior over the latent variables is Gaussian with a diagonal covariance matrix. This specification is restrictive, and may even contradict the underlying truth, limiting the quality of both the resulting generative model and the causal effect estimates. In this paper, we propose to circumvent this limitation with implicit generative models and black-box inference models. To perform inference in the implicit generative model, whose likelihood is intractable, we adopt recent implicit variational inference based on adversarial training to obtain a close approximation to the true posterior. Experiments on simulated and real data show that the proposed method matches the state of the art.
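
To make the adversarial implicit variational inference idea concrete, the sketch below shows one possible training step in the style of Adversarial Variational Bayes: an implicit encoder defines q(z | x) only through samples, and a discriminator T(x, z) is trained to estimate the intractable log density ratio log q(z | x) − log p(z) that the KL term of the ELBO requires. This is a minimal illustration rather than the authors' implementation; the PyTorch framing, the module names (ImplicitEncoder, Decoder, Discriminator), the standard Gaussian prior, the squared-error reconstruction term, and all dimensions are assumptions made here for readability.

```python
# Minimal sketch (not the authors' code) of adversarial implicit variational
# inference: an implicit encoder q(z | x) with no tractable density, plus a
# discriminator that estimates log q(z | x) - log p(z) for the ELBO.
# All names, dimensions, and hyper-parameters below are illustrative.
import torch
import torch.nn as nn

D_X, D_Z, D_EPS, HIDDEN = 25, 20, 20, 64  # assumed dimensions

class ImplicitEncoder(nn.Module):
    """Maps (x, noise) -> z; defines q(z | x) only through samples."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(D_X + D_EPS, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, D_Z))
    def forward(self, x):
        eps = torch.randn(x.size(0), D_EPS)
        return self.net(torch.cat([x, eps], dim=1))

class Decoder(nn.Module):
    """Mean network for the reconstruction term p(x | z)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(D_Z, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, D_X))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """T(x, z): trained so its output approximates log q(z | x) - log p(z)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(D_X + D_Z, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 1))
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1)).squeeze(1)

enc, dec, disc = ImplicitEncoder(), Decoder(), Discriminator()
opt_model = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x):
    # 1) Discriminator: distinguish (x, z ~ q(z|x)) from (x, z ~ p(z)).
    z_q = enc(x).detach()
    z_p = torch.randn_like(z_q)              # prior p(z) = N(0, I), an assumption
    d_loss = bce(disc(x, z_q), torch.ones(x.size(0))) + \
             bce(disc(x, z_p), torch.zeros(x.size(0)))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Encoder/decoder: maximise the ELBO, with T(x, z) standing in for
    #    the intractable term log q(z|x) - log p(z).
    z = enc(x)
    recon = ((dec(z) - x) ** 2).sum(dim=1).mean()   # -log p(x|z) up to a constant
    elbo_loss = recon + disc(x, z).mean()
    opt_model.zero_grad(); elbo_loss.backward(); opt_model.step()
    return d_loss.item(), elbo_loss.item()

x_batch = torch.randn(128, D_X)   # synthetic stand-in for observed data
print(train_step(x_batch))
```

In a counterfactual inference setting, the observed covariates, treatment, and outcome would jointly play the role of x here, and the recovered latent z would act as a substitute for the hidden confounder when estimating treatment effects.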

Metadata
Title
Counterfactual Inference with Hidden Confounders Using Implicit Generative Models
Authors
Fujin Zhu
Adi Lin
Guangquan Zhang
Jie Lu
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-030-03991-2_47
