
2011 | Book

Targeted Learning

Causal Inference for Observational and Experimental Data

Authors: Mark J. van der Laan, Sherri Rose

Publisher: Springer New York

Book series: Springer Series in Statistics


About this book

The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often containing hundreds of thousands of measurements on a single subject. The field is ready to move towards clear objective benchmarks under which tools can be evaluated. Targeted learning allows (1) the full generalization and utilization of cross-validation as an estimator selection tool so that the subjective choices made by humans are now made by the machine, and (2) targeting the fitting of the probability distribution of the data toward the target parameter representing the scientific question of interest.

This book is aimed at both statisticians and applied researchers interested in causal inference and general effect estimation for observational and experimental data. Part I is an accessible introduction to super learning and the targeted maximum likelihood estimator, including related concepts necessary to understand and apply these methods. Parts II-IX handle complex data structures and topics applied researchers will immediately recognize from their own research, including time-to-event outcomes, direct and indirect effects, positivity violations, case-control studies, censored data, longitudinal data, and genomic studies.

Table of contents

Frontmatter

Targeted Learning: The Basics

Frontmatter
Chapter 1. The Open Problem

The debate over hormone replacement therapy (HRT) has been one of the biggest health discussions in recent history. Professional groups and nonprofits, such as the American College of Physicians and the American Heart Association, gave HRT their stamp of approval 15 years ago. Studies indicated that HRT was protective against osteoporosis and heart disease. HRT became big business, with millions upon millions of prescriptions filled each year. However, in 1998, the Heart and Estrogen-Progestin Replacement Study demonstrated increased risk of heart attack among women with heart disease taking HRT, and in 2002 the Women’s Health Initiative showed increased risk for breast cancer, heart disease, and stroke, among other ailments, for women on HRT. Why were there inconsistencies in the study results?

Sherri Rose, Mark J. van der Laan
Chapter 2. Defining the Model and Parameter

Targeted statistical learning from data is often concerned with the estimation of causal effects and an assessment of uncertainty for the estimator. In Chap. 1, we identified the road map we will follow to solve this estimation problem. Now, we formalize the concepts of the model and target parameter. We will introduce additional topics that may seem abstract. While we attempt to elucidate these abstractions with tangible examples, depending on your background, the material may be quite dense compared to other textbooks you have read. Do not get discouraged. Sometimes a second reading and careful notes are helpful and sufficient to illuminate these concepts. Researchers and students at UC Berkeley have also had great success discussing these topics in groups. If this is your assigned text for a course or workshop, meet outside of class with your fellow classmates. We guarantee you that the effort is worth it so you can move on to the next step in the targeted learning road map. Once you have a firm understanding of the core material in Chap. 2, you can begin the estimation steps.

Sherri Rose, Mark J. van der Laan
Chapter 3. Super Learning

This is the first chapter in our text focused on estimation within the road map for targeted learning. Now that we’ve defined the research question, including our data, the model, and the target parameter, we are ready to begin. For the estimation of a target parameter of the probability distribution of the data, such as target parameters that can be interpreted as causal effects, we implement TMLE. The first step in this estimation procedure is an initial estimate of the data-generating distribution P_0, or the relevant part Q_0 of P_0 that is needed to evaluate the target parameter. This is the step presented in Chap. 3, and TMLE will be presented in Chaps. 4 and 5.

Eric C. Polley, Sherri Rose, Mark J. van der Laan
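
To make the super learning step concrete, here is a minimal sketch of fitting an initial estimate of E_0(Y | A, W) with the SuperLearner R package; the simulated data and the particular candidate learners are illustrative assumptions (each SL wrapper requires its underlying package), not the book's analyses.

```r
library(SuperLearner)

set.seed(27)
n  <- 500
W1 <- rnorm(n)
W2 <- rbinom(n, 1, 0.5)
A  <- rbinom(n, 1, plogis(0.4 * W1))
Y  <- rbinom(n, 1, plogis(A + W1 - 0.5 * W2))

# Cross-validated ensemble ("super learner") over a small library of candidate learners
sl.fit <- SuperLearner(Y = Y,
                       X = data.frame(A, W1, W2),
                       family = binomial(),
                       SL.library = c("SL.glm", "SL.mean", "SL.glmnet"))
sl.fit   # prints the cross-validated risks and the ensemble weights
```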
Chapter 4. Introduction to TMLE

This is the second chapter in our text to deal with estimation. We started by defining the research question. This included our data, the model for the probability distribution that generated the data, and the target parameter of the probability distribution of the data. We then presented the estimation of prediction functions using super learning. This leads us to the estimation of causal effects using the TMLE. This chapter introduces TMLE, and a deeper understanding of this methodology is provided in Chap. 5. Note that we use the abbreviation TMLE for both targeted maximum likelihood estimation and the targeted maximum likelihood estimator. Later in this text, we discuss targeted minimum loss-based estimation, which can also be abbreviated TMLE.

Sherri Rose, Mark J. van der Laan
Chapter 5. Understanding TMLE

This chapter focuses on understanding TMLE. We go into more detail than the previous chapter to demonstrate how this estimator is derived. Recall that TMLE is a two-step procedure where one first obtains an estimate of the data-generating distribution P_0 or the relevant portion Q_0 of P_0. The second stage updates this initial fit in a step targeted toward making an optimal bias–variance tradeoff for the parameter of interest ψ(Q_0), instead of the overall density P_0. The procedure is double robust and can incorporate data-adaptive, likelihood-based estimation procedures to estimate Q_0 and the treatment mechanism.

Sherri Rose, Mark J. van der Laan
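
As a rough illustration of the two steps described above, the following hand-coded sketch computes a TMLE of the additive treatment effect for a binary outcome; the simulated data and the simple glm initial fits are assumptions made only for brevity (the book recommends super learning for the initial step).

```r
set.seed(1)
n <- 1000
W <- rnorm(n)
A <- rbinom(n, 1, plogis(0.5 * W))
Y <- rbinom(n, 1, plogis(A + W))

# Step 1: initial estimates of Qbar_0(A, W) = E_0(Y | A, W) and of g_0(1 | W) = P_0(A = 1 | W)
Qfit <- glm(Y ~ A + W, family = binomial())
gfit <- glm(A ~ W, family = binomial())
QAW  <- predict(Qfit, type = "response")
Q1W  <- predict(Qfit, newdata = data.frame(A = 1, W = W), type = "response")
Q0W  <- predict(Qfit, newdata = data.frame(A = 0, W = W), type = "response")
g1W  <- predict(gfit, type = "response")

# Step 2: fluctuate the initial fit along the "clever covariate" H(A, W)
H   <- A / g1W - (1 - A) / (1 - g1W)
eps <- coef(glm(Y ~ -1 + H + offset(qlogis(QAW)), family = binomial()))
Q1W.star <- plogis(qlogis(Q1W) + eps / g1W)
Q0W.star <- plogis(qlogis(Q0W) - eps / (1 - g1W))

# Substitution (plug-in) estimate of the additive treatment effect
mean(Q1W.star - Q0W.star)
```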
Chapter 6. Why TMLE?

In the previous five chapters, we covered the targeted learning road map. This included presentation of the tools necessary to estimate causal effect parameters of a data-generating distribution. We illustrated these methods with a simple data structure: O = (W, A, Y) ∼ P_0. Our target parameter for this example was ψ(P_0) = E_{W,0}[E_0(Y | A = 1, W) - E_0(Y | A = 0, W)].

Sherri Rose, Mark J. van der Laan

Additional Core Topics

Frontmatter
Chapter 7. Bounded Continuous Outcomes

This chapter presents a TMLE of the additive treatment effect on a bounded continuous outcome. A TMLE is based on a choice of loss function and a corresponding parametric submodel through an initial estimator, chosen so that the loss-function-specific score of this parametric submodel at zero fluctuation equals or spans the efficient influence curve of the target parameter. Two such TMLEs are considered: one based on the squared error loss function with a linear regression model, and one based on a quasi-log-likelihood loss function with a logistic regression submodel. The problem with the first TMLE is highlighted: the linear regression model is not a submodel and thus does not respect global constraints implied by the statistical model. It is theoretically and practically demonstrated that the TMLE with the logistic regression submodel is more robust than a TMLE based on least squares linear regression. Some parts of this chapter assume familiarity with the core concepts, as presented in Chap. 5. The less theoretically trained reader should aim to navigate through these parts and focus on the practical implementation and importance of the presented TMLE procedure. This chapter is adapted from Gruber and van der Laan (2010b).

Susan Gruber, Mark J. van der Laan
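
A small sketch of the bounded-outcome idea: rescale Y from its known bounds onto [0, 1] and fit a logistic-type (quasi-log-likelihood) regression so that predictions respect the bounds. This shows only the regression step, not the targeting step of Chap. 5; the bounds, covariates, and simulated data are illustrative assumptions.

```r
set.seed(7)
n <- 500
W <- rnorm(n)
A <- rbinom(n, 1, 0.5)
Y <- pmin(pmax(10 + 5 * A + 3 * W + rnorm(n), 0), 30)   # outcome known to lie in [0, 30]

a <- 0; b <- 30
Ystar <- (Y - a) / (b - a)                               # rescaled to [0, 1]

fit   <- glm(Ystar ~ A + W, family = quasibinomial())    # predictions stay inside (0, 1)
pred1 <- predict(fit, newdata = data.frame(A = 1, W = W), type = "response")
pred0 <- predict(fit, newdata = data.frame(A = 0, W = W), type = "response")

# Additive effect estimate mapped back to the original outcome scale
(b - a) * mean(pred1 - pred0)
```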
Chapter 8. Direct Effects and Effect Among the Treated

Researchers are frequently interested in assessing the direct effect of one variable on an outcome of interest, where this effect is not mediated through a set of intermediate variables. In this chapter, we will examine direct effects in a gender salary equity study example. Such studies provide one measure of the equity of procedures used to set salaries and of decisions in promoting and advancing faculty based on performance measures. The goal is to assess whether gender, as determined at birth, has a direct effect on the salary at which a faculty member is hired, not mediated through intermediate performance of the subject up until the time the subject gets hired. If such a direct effect exists, then that means the salary was set in response not only to merit but also to the gender of the person, indicating a gender inequality issue.

Alan E. Hubbard, Nicholas P. Jewell, Mark J. van der Laan
Chapter 9. Marginal Structural Models

In many applications, one would like to estimate the effect of a treatment or exposure on various subpopulations. For example, one may be interested in these questions: What is the effect of an antidepressant medication on Hamilton Depression Rating Scale (HAM-D) score for those who enter a study with severe depression, and for those who enter with moderate depression? What is the effect of a cancer therapy for those who test positive for overexpression of a particular gene and for those who test negative for overexpression of that gene? What is the impact of low adherence to antiretroviral therapy on viral load for HIV-positive individuals who have just achieved viral suppression and for those who have maintained continuous viral suppression for 1 year?

Michael Rosenblum
Chapter 10. Positivity

The identifiability of causal effects requires sufficient variability in treatment or exposure assignment within strata of confounders. The causal inference literature refers to the assumption of adequate exposure variability within confounder strata as the assumption of positivity or experimental treatment assignment. Positivity violations can arise for two reasons. First, it may be theoretically impossible for individuals with certain covariate values to receive a given exposure of interest. For example, certain patient characteristics may constitute an absolute contraindication to receipt of a particular treatment. The threat to causal inference posed by such structural or theoretical violations of positivity does not improve with increasing sample size. Second, violations or near violations of positivity can arise in finite samples due to chance. This is a particular problem in small samples but also occurs frequently in moderate to large samples when the treatment is continuous or can take multiple levels, or when the covariate adjustment set is large or contains continuous or multilevel covariates. Regardless of the cause, causal effects may be poorly or nonidentified when certain subgroups in a finite sample do not receive some of the treatment levels of interest. In this chapter we will use the term “sparsity” to refer to positivity violations and near-violations arising from either of these causes, recognizing that other types of sparsity can also threaten valid inference.

Maya L. Petersen, Kristin E. Porter, Susan Gruber, Yue Wang, Mark J. van der Laan
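
A simple diagnostic that accompanies the sparsity discussion above is to estimate the treatment mechanism g_0(1 | W) and inspect how close the fitted values come to 0 or 1; the simulated data, the propensity model, and the 0.025 cutoff below are illustrative assumptions.

```r
set.seed(10)
n  <- 1000
W1 <- rnorm(n)
W2 <- rbinom(n, 1, 0.3)
A  <- rbinom(n, 1, plogis(2.5 * W1 - 2 * W2))   # strong confounding -> sparse strata

gfit <- glm(A ~ W1 + W2, family = binomial())
g1W  <- predict(gfit, type = "response")

summary(g1W)                       # look at the tails of the estimated propensity score
mean(g1W < 0.025 | g1W > 0.975)    # share of observations near a practical positivity violation
```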

TMLE and Parametric Regression in RCTs

Frontmatter
Chapter 11. Robust Analysis of RCTs Using Generalized Linear Models

It is typical in RCTs for extensive information to be collected on subjects prior to randomization. For example, age, ethnicity, socioeconomic status, history of disease, and family history of disease may be recorded. Baseline information can be leveraged to obtain more precise estimates of treatment effects than the standard unadjusted estimator. This is often done by carrying out model-based analyses at the end of the trial, where baseline variables predictive of the primary study outcome are included in the model. As shown in Moore and van der Laan (2007), such analyses have potential to improve precision and, if carried out in the appropriate manner, give asymptotically unbiased, locally efficient estimates of the marginal treatment effect even when the model used is arbitrarily misspecified.

Michael Rosenblum
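
A minimal sketch of the kind of model-based covariate adjustment described above: fit a GLM that includes baseline covariates, then average its predictions under each treatment arm over the empirical covariate distribution to obtain a marginal effect estimate. The data are simulated and the working model is deliberately simple; see the chapter for the conditions under which such estimators remain asymptotically unbiased when the model is misspecified.

```r
set.seed(11)
n <- 400
W <- rnorm(n)                          # baseline covariate measured before randomization
A <- rbinom(n, 1, 0.5)                 # randomized treatment
Y <- rbinom(n, 1, plogis(-0.5 + A + W))

fit <- glm(Y ~ A + W, family = binomial())   # working model, possibly misspecified
p1  <- predict(fit, newdata = data.frame(A = 1, W = W), type = "response")
p0  <- predict(fit, newdata = data.frame(A = 0, W = W), type = "response")
mean(p1 - p0)                          # covariate-adjusted estimate of the marginal risk difference
```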
Chapter 12. Targeted ANCOVA Estimator in RCTs

In many randomized experiments the primary goal is to estimate the average treatment effect, defined as the difference in expected responses between subjects assigned to a treatment group and subjects assigned to a control group. Linear regression is often recommended for use in RCTs as an attempt to increase precision when estimating an average treatment effect on a (nonbinary) outcome by exploiting baseline covariates. The coefficient in front of the treatment variable is then reported as the estimate of the average treatment effect, assuming that no interactions between treatment and covariates were included in the linear regression model.

Daniel B. Rubin, Mark J. van der Laan

Case-Control Studies

Frontmatter
Chapter 13. Independent Case-Control Studies

Case-control study designs are frequently used in public health and medical research to assess potential risk factors for disease. These study designs are particularly attractive to investigators researching rare diseases, as they can sample known cases of disease rather than follow a large number of subjects and wait for disease onset in a relatively small number of individuals.

Sherri Rose, Mark J. van der Laan
Chapter 14. Why Match? Matched Case-Control Studies

Individually matched case-control study designs are common in public health and medicine, and conditional logistic regression in a parametric statistical model is the tool most commonly used to analyze these studies. In an individually matched case-control study, the population of interest is identified, and cases are randomly sampled. Each of these cases is then matched to one or more controls based on a variable (or variables) believed to be a confounder. The main potential benefit of matching in case-control studies is a gain in efficiency, not the elimination of confounding. Therefore, when are these study designs truly beneficial?

Sherri Rose, Mark J. van der Laan
Chapter 15. Nested Case-Control Risk Score Prediction

Risk scores are calculated to identify those patients at the highest level of risk for an outcome. In some cases, interventions are implemented for patients at high risk. Standard practice for risk score prediction relies heavily on parametric regression. Generating a good estimator of the function of interest using parametric regression can be a significant challenge. As discussed in Chap. 3, high-dimensional data are increasingly common in epidemiology, and researchers may have dozens, hundreds, or thousands of potential predictors that are possibly related to the outcome.

Sherri Rose, Bruce Fireman, Mark J. van der Laan

RCTs with Survival Outcomes

Frontmatter
Chapter 16. Super Learning for Right-Censored Data

The super learner was introduced in Chap. 3 as a loss-based estimator of a parameter of the data-generating distribution, defined as the minimizer of the expectation (risk) of a loss function over all candidate parameter values in the parameter space. This chapter demonstrates how the super learner framework can also be applied to estimate parameters such as conditional hazards or survival functions of a failure time, given a vector of baseline covariates, based on right-censored data. We strongly advise readers to familiarize themselves with the concepts presented in Chap. 3 before reading this chapter, as we assume the reader has a firm grasp of that material.

Eric C. Polley, Mark J. van der Laan
Chapter 17. RCTs with Time-to-Event Outcomes

RCTs are often designed with the goal of investigating a causal effect of a new treatment drug vs. the standard of care on a time-to-event outcome. Possible outcomes are time to death, time to virologic failure, and time to recurrence of cancer. The data collected on a subject accumulate over time until the earliest of the time of analysis (end of study), the time the subject drops out of the study, and the time the event of interest is observed. Typically, for a large proportion of the subjects recruited into the trial, the subject is right censored before the event of interest is observed, i.e., the time of analysis or the time the subject drops out of the study occurs before the time until the event of interest. The dropout time of the subject can be related to the actual time to failure one would have observed if the person had not dropped out prematurely. In this case, the standard unadjusted estimator of a causal effect of treatment on a survival time, such as the difference of the treatment-specific Kaplan–Meier survival curves at a particular point in time, is not only inefficient by not utilizing the available covariate information, but it is also biased due to informative dropout.

Kelly L. Moore, Mark J. van der Laan
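
For reference, the standard unadjusted estimator mentioned above, the treatment-specific Kaplan–Meier curves, can be computed with the survival package; the simulated right-censored data below are an illustrative assumption.

```r
library(survival)

set.seed(17)
n     <- 300
A     <- rbinom(n, 1, 0.5)
Ttime <- rexp(n, rate = 0.10 * exp(-0.5 * A))   # event time, longer under treatment
Ctime <- rexp(n, rate = 0.05)                   # independent censoring time
time  <- pmin(Ttime, Ctime)
event <- as.numeric(Ttime <= Ctime)

km <- survfit(Surv(time, event) ~ A)            # treatment-specific Kaplan-Meier curves
summary(km, times = 12)                         # survival probabilities at t = 12 by arm
```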
Chapter 18. RCTs with Time-to-Event Outcomes and Effect Modification Parameters

Current methods used to evaluate effect modification in time-to-event data, such as the Cox proportional hazards model or its discrete time analog the logistic failure time model, posit highly restrictive parametric statistical models and attempt to estimate parameters that are specific to the model proposed. These methods, as a result of their parametric nature, tend to be biased and force practitioners to estimate parameters that are convenient rather than parameters they are actually interested in estimating. The TMLE improves on the currently implemented methods in both robustness, its ability to provide unbiased estimates, and flexibility, allowing practitioners to estimate parameters that directly answer their question of interest.

Ori M. Stitelman, Victor De Gruttola, C. William Wester, Mark J. van der Laan

C-TMLE

Frontmatter
Chapter 19. C-TMLE of an Additive Point Treatment Effect

C-TMLE is an extension of TMLE that pursues an optimal strategy for estimation of the nuisance parameter required in the targeting step. This latter step involves maximizing an empirical criterion over a parametric working model indexed by a nuisance parameter.

Susan Gruber, Mark J. van der Laan
Chapter 20. C-TMLE for Time-to-Event Outcomes

In this chapter, the C-TMLE for the treatment-specific survival curve based on right-censored data will be presented. It is common that one wishes to assess the effect of a treatment or exposure on the time it takes for an event to occur based on an observational database. Chapters 17 and 18 discussed the treatment-specific survival curve in RCTs. The TMLE presented there improves upon common methods for analyzing time-to-event data in robustness, efficiency, and interpretability of parameter estimates. Observational data differ from RCTs in that the exposure/treatment is not set according to a known mechanism. Moreover, in situations where there is dependent censoring, the censoring mechanism is also unknown. As a consequence, in observational studies, the TMLE needs to estimate the treatment and censoring mechanism, and this needs to be done in such a way that the resulting targeted bias reduction carried out by the TMLE is fully effective.

Ori M. Stitelman, Mark J. van der Laan
Chapter 21. Propensity-Score-Based Estimators and C-TMLE

In order to estimate the average treatment effect E_0[E_0(Y | A = 1, W) - E_0(Y | A = 0, W)] of a single time-point treatment A based on observing n i.i.d. copies of O = (W, A, Y), one might use inverse probability of treatment (i.e., propensity score) weighting of an estimator of the conditional mean of the outcome (i.e., response surface) as a function of the pretreatment covariates. Alternatively, one might use a TMLE defined by a choice of initial estimator, a parametric submodel that codes fluctuations of the initial estimator, and a loss function used to determine the amount of fluctuation, where either the choice of submodel or the loss function will involve inverse probability of treatment weighting. Asymptotically, such double robust estimators may have appealing properties. They can be constructed such that if either the model of the response surface or the model of the probability of treatment assignment is correct, the estimator will provide a consistent estimate of the average treatment effect. And if both models are correct, the weighted estimator will be asymptotically efficient. Such estimators are called double robust and locally efficient (Robins et al. 1994, 1995; Robins and Rotnitzky 1995; van der Laan and Robins 2003).

Jasjeet S. Sekhon, Susan Gruber, Kristin E. Porter, Mark J. van der Laan
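
As a minimal sketch of the propensity-score weighting idea above, in its simplest Horvitz–Thompson style IPTW form, one can estimate the average treatment effect as follows; the simulated data and the logistic regression propensity model are illustrative assumptions.

```r
set.seed(21)
n <- 1000
W <- rnorm(n)
A <- rbinom(n, 1, plogis(0.8 * W))
Y <- rbinom(n, 1, plogis(0.5 * A + W))

gfit <- glm(A ~ W, family = binomial())   # propensity score model
g1W  <- predict(gfit, type = "response")

# IPTW estimate of E_0[E_0(Y | A = 1, W) - E_0(Y | A = 0, W)]
mean(A * Y / g1W) - mean((1 - A) * Y / (1 - g1W))
```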

Genomics

Frontmatter
Chapter 22. Targeted Methods for Biomarker Discovery

The use of biomarkers in disease diagnosis and treatment has grown rapidly in recent years, as microarray and sequencing technologies capable of detecting biological signatures have become more effective research tools. In an attempt to create a level of quality assurance with respect to biological, and more specifically biomarker, research, the FDA has called for the development of a standard protocol for biomarker qualification (Food and Drug Administration 2006). Such a protocol would define “evidentiary” standards for biomarker usage in areas of drug development and disease treatment and provide a standardized assessment of a biomarker’s significance and biological interpretation. This is especially relevant for RCTs, where the protocol would prohibit the use of unauthenticated biomarkers to determine treatment regime, resulting in safer and more reliable treatment decisions (Food and Drug Administration 2006). Consequently, identifying accurate and flexible analysis tools to assess biomarker importance is essential. In this chapter, we present a measure of variable importance based on a flexible semiparametric model as a standardized measure for biomarker importance. We estimate this measure with the TMLE.

Catherine Tuglus, Mark J. van der Laan
Chapter 23. Finding Quantitative Trait Loci Genes

The goal of quantitative trait loci (QTL) mapping is to identify genes underlying an observed trait in the genome using genetic markers. In experimental organisms, the QTL mapping experiment usually involves crossing two inbred lines with substantial differences in a trait, and then scoring the trait in the segregating progeny. A series of markers along the genome is genotyped in the segregating progeny, and associations between the trait and the QTL can be evaluated using the marker information. Of primary interest are the positions and effect sizes of QTL genes.

Hui Wang, Sherri Rose, Mark J. van der Laan

Longitudinal Data Structures

Frontmatter
Chapter 24. Case Study: Longitudinal HIV Cohort Data

In this chapter, we introduce a case study based on the treatment of HIV infection. A series of scientific questions concerning how best to detect and manage antiretroviral treatment failure in resource-limited settings are used to illustrate the general road map for targeted learning. We emphasize the translation of background knowledge into a formal causal and statistical model and the translation of scientific questions into target causal parameters and corresponding statistical parameters of the distribution of the observed data. Readers may be interested in first reading the longitudinal sections of Appendix A for a rigorous treatment of longitudinal TMLE and related topics.

Maya L. Petersen, Mark J. van der Laan
Chapter 25. Probability of Success of an In Vitro Fertilization Program

About 9 to 15% of couples have difficulty in conceiving a child, i.e., do not conceive within 12 months of attempting pregnancy (Boivin et al. 2007). In response to subfertility, assisted reproductive technology has developed over the last 30 years, resulting in in vitro fertilization (IVF) techniques (the first “test-tube baby” was born in 1978). Nowadays, more than 40,000 IVF cycles are performed each year in France and more than 63,000 in the USA (Adamson et al. 2006). Yet, how to quantify the success in assisted reproduction still remains a matter of debate. One could, for instance, rely on the number of pregnancies or deliveries per IVF cycle. However, an IVF program often consists of several successive IVF cycles. So, instead of considering each IVF cycle separately, one could rather rely on an evaluation of the whole program. IVF programs are emotionally and physically burdensome. Providing the patients with the most adequate and accurate measure of success is therefore an important issue that we propose to address in this chapter.

Antoine Chambaz
Chapter 26. Individualized Antiretroviral Initiation Rules

In this chapter, TMLE is illustrated with a data analysis from a longitudinal observational study to investigate “when to start” antiretroviral therapy to reduce the incidence of AIDS-defining cancer (ADC), defined as Kaposi sarcoma, non-Hodgkin’s lymphoma, or invasive cervical cancer, in a population of HIV-infected patients. A key clinical question regarding the management of HIV/AIDS is when to start combination antiretroviral therapy (ART), defined in the Department of Health and Human Services (2004) guidelines as a regimen containing three or more individual antiretroviral medications. CD4+ T-cell count levels have been the primary marker used to determine treatment eligibility, although other factors have also been considered, such as HIV RNA levels, history of an AIDS-defining illness (Centers for Disease Control and Prevention 1992), and ability of the patient to adhere to therapy. The primary outcomes considered in ART treatment guidelines described above have always been reductions in HIV-related morbidity and mortality. Until recently, however, guidelines have not considered the effect of CD4 thresholds on the risk of specific comorbidities, such as ADC. In this analysis, we therefore evaluate how different CD4-based ART initiation strategies influence the burden of ADC. We are analyzing ADC here since it is well established that these malignancies are closely linked to immunodeficiency.

Romain Neugebauer, Michael J. Silverberg, Mark J. van der Laan

Advanced Topics

Frontmatter
Chapter 27. Cross-Validated Targeted Minimum-Loss-Based Estimation

In previous chapters, we introduced targeted maximum likelihood estimation in semiparametric models, which incorporates adaptive estimation (e.g., loss-based super learning) of the relevant part of the data-generating distribution and subsequently carries out a targeted bias reduction by maximizing the log-likelihood, or minimizing another loss-specific empirical risk, over a “clever” parametric working model through the initial estimator, treating the initial estimator as offset. This updating process may need to be iterated to convergence. The target parameter of the resulting updated estimator is then evaluated, and is called the targeted minimum-loss-based estimator (also TMLE) of the target parameter of the data-generating distribution. This estimator is, by definition, a substitution estimator, and, under regularity conditions, is a double robust semiparametric efficient estimator.

Wenjing Zheng, Mark J. van der Laan
Chapter 28. Targeted Bayesian Learning

TMLE is a loss-based semiparametric estimation method that yields a substitution estimator of a target parameter of the probability distribution of the data that solves the efficient influence curve estimating equation and thereby yields a double robust locally efficient estimator of the parameter of interest under regularity conditions. The Bayesian paradigm is concerned with including the researcher’s prior uncertainty about the probability distribution through a prior distribution on a statistical model for the probability distribution, which combined with the likelihood yields a posterior distribution of the probability distribution that reflects the researcher’s posterior uncertainty. Just like model-based maximum likelihood learning, Bayesian learning is intrinsically nontargeted by working with the prior and posterior distributions of the whole probability distribution of the observed data structure and is thereby very susceptible to bias due to model misspecification or nontargeted model selection.

Iván Díaz Muñoz, Alan E. Hubbard, Mark J. van der Laan
Chapter 29. TMLE in Adaptive Group Sequential Covariate-Adjusted RCTs

This chapter is devoted to group sequential covariate-adjusted RCTs analyzed through the prism of TMLE. By adaptive covariate-adjusted design we mean an RCT group sequential design that allows the investigator to dynamically modify its course through data-driven adjustment of the randomization probability based on data accrued so far, without negatively impacting the statistical integrity of the trial. Moreover, the patient’s baseline covariates may be taken into account for the random treatment assignment. This definition is slightly adapted from Golub (2006). In particular, we assume, following the definition of prespecified sampling plans given in Emerson (2006), that, prior to collection of the data, the trial protocol specifies the parameter of scientific interest, the inferential method, and the confidence level to be used when constructing a confidence interval for the latter parameter.

Antoine Chambaz, Mark J. van der Laan

Appendices

Frontmatter
Appendix A. Foundations of TMLE

An estimator of a parameter is a mapping from the data set to the parameter space. Estimators that are empirical means of a function of the unit data structure are asymptotically consistent and normally distributed due to the CLT. Such estimators are called linear in the empirical probability distribution. Most estimators are not linear, but many are approximately linear in the sense that they are linear up to a negligible (in probability) remainder term. One states that the estimator is asymptotically linear, and the relevant function of the unit data structure, centered to have mean zero, is called the influence curve of the estimator. How does one prove that an estimator is asymptotically linear? One key step is to realize that an estimator is a mapping from a possibly very large collection of empirical means of functions of the unit data structure into the parameter space. Such a collection of empirical means is called an empirical process whose behavior with respect to uniform consistency and the uniform CLT is established in empirical process theory. In this section we present succinctly that (1) a uniform central limit theorem for the vector of empirical means, combined with (2) differentiability of the estimator as a mapping from the vector of empirical means into the parameter space yields the desired asymptotic linearity. This method for establishing the asymptotic linearity and normality of the estimator is called the functional delta method (van der Vaart and Wellner 1996; Gill 1989).

Mark J. van der Laan, Sherri Rose
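
In display form, the asymptotic linearity property sketched above can be written as follows (a LaTeX sketch; the notation for the estimator, the true parameter value, and the influence curve IC follows the appendix):

```latex
% Asymptotic linearity (requires amsmath): the estimator behaves, up to o_P(n^{-1/2}),
% like an empirical mean of its influence curve IC, so the CLT gives its limit distribution.
\[
  \hat{\psi}_n - \psi_0
    = \frac{1}{n}\sum_{i=1}^{n} IC(O_i) + o_P\bigl(n^{-1/2}\bigr),
  \qquad
  \sqrt{n}\,\bigl(\hat{\psi}_n - \psi_0\bigr)
    \xrightarrow{d} N\bigl(0,\, \operatorname{Var}\, IC(O)\bigr).
\]
```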
Appendix B. Introduction to R Code Implementation

This appendix includes a brief introduction to the implementation of super learning and the TMLE in R. Packages and supplementary code are posted online at http://www.targetedlearningbook.com. We conclude with a few coding guides for data structures and research questions presented in Parts II–IX. The book’s Web site will be a continually updated resource for new code, demonstrations, and packages.

Mark J. van der Laan, Sherri Rose
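
As a flavor of what such an implementation looks like, here is a minimal end-to-end call, assuming the SuperLearner and tmle R packages from CRAN; the learner libraries and simulated data are illustrative, and the supplementary code posted at http://www.targetedlearningbook.com remains the authoritative reference for the book’s analyses.

```r
library(tmle)   # loads SuperLearner as a dependency

set.seed(2)
n <- 500
W <- data.frame(W1 = rnorm(n), W2 = rbinom(n, 1, 0.4))
A <- rbinom(n, 1, plogis(0.3 * W$W1))
Y <- rbinom(n, 1, plogis(A + W$W1 - W$W2))

# TMLE of the additive treatment effect with super learning for Qbar_0 and g_0
fit <- tmle(Y = Y, A = A, W = W,
            family       = "binomial",
            Q.SL.library = c("SL.glm", "SL.mean"),
            g.SL.library = c("SL.glm", "SL.mean"))
fit   # prints the point estimate with its confidence interval
```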
Backmatter
Metadata
Title
Targeted Learning
Authors
Mark J. van der Laan
Sherri Rose
Copyright year
2011
Publisher
Springer New York
Electronic ISBN
978-1-4419-9782-1
Print ISBN
978-1-4419-9781-4
DOI
https://doi.org/10.1007/978-1-4419-9782-1