Imbens, Guido
Overview
Works: 108 works in 473 publications in 1 language and 2,446 library holdings
Roles: Author, Thesis advisor, Honoree
Classifications: HB1, 519.54
Most widely held works by Guido Imbens
Causal inference for statistics, social, and biomedical sciences : an introduction by Guido Imbens (Book)
19 editions published between 2014 and 2015 in English and held by 200 WorldCat member libraries worldwide
Most questions in social and biomedical sciences are causal in nature: what would happen to individuals, or to groups, if part of their environment were changed? In this groundbreaking text, two world-renowned experts present statistical methods for studying such questions. This book starts with the notion of potential outcomes, each corresponding to the outcome that would be realized if a subject were exposed to a particular treatment or regime. In this approach, causal effects are comparisons of such potential outcomes. The fundamental problem of causal inference is that we can observe only one of the potential outcomes for a particular subject. The authors discuss how randomized experiments allow us to assess causal effects and then turn to observational studies. They lay out the assumptions needed for causal inference and describe the leading analysis methods, including matching, propensity-score methods, and instrumental variables. Many detailed applications are included, with special focus on practical aspects for the empirical researcher. Provided by publisher
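The potential-outcomes setup the abstract describes can be sketched in a few lines of Python; the data, effect size, and sample size here are all hypothetical, invented purely for illustration:

```python
import random

random.seed(0)

# Hypothetical units: each carries two potential outcomes, y0 (under control)
# and y1 (under treatment); the true average causal effect is about 2.
units = [{"y0": random.gauss(10, 1), "y1": random.gauss(12, 1)} for _ in range(1000)]

# Randomized experiment: assignment is independent of the potential outcomes.
for u in units:
    u["w"] = random.random() < 0.5               # treatment indicator
    u["y_obs"] = u["y1"] if u["w"] else u["y0"]  # only one outcome is ever observed

# Difference in observed group means estimates the average effect E[y1 - y0].
treated = [u["y_obs"] for u in units if u["w"]]
control = [u["y_obs"] for u in units if not u["w"]]
ate_hat = sum(treated) / len(treated) - sum(control) / len(control)
true_ate = sum(u["y1"] - u["y0"] for u in units) / len(units)
```

Because assignment is randomized, the simple difference in observed means recovers the average causal effect even though no unit ever reveals both of its potential outcomes.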
The long-term gains from GAIN : a reanalysis of the impacts of the California GAIN program by V. Joseph Hotz (Book)
24 editions published between 2000 and 2006 in English and held by 85 WorldCat member libraries worldwide
In this paper, we explore ways of combining experimental data and nonexperimental methods to estimate the differential effects of components of training programs. We show how data from a multi-site experimental evaluation, in which subjects are randomly assigned to any treatment versus a control group that receives no treatment, can be combined with nonexperimental regression-adjustment methods to estimate the differential effects of particular types of treatments. We also devise tests of the validity of using the latter methods. We use these methods and tests to reanalyze data from the MDRC evaluation of California's Greater Avenues to Independence (GAIN) program. While not designed to estimate the differential effects of the Labor Force Attachment (LFA) and Human Capital Development (HCD) training components used in this program, we show how data from this experimental evaluation can be used in conjunction with nonexperimental methods to estimate such effects. We present estimates of both the short- and long-term differential effects of these two training components on employment and earnings. We find that while there are short-term positive differential effects of LFA versus HCD, the latter training component is relatively more beneficial in the longer term
Estimating the effect of unearned income on labor supply, earnings, savings, and consumption : evidence from a survey of lottery players by Guido Imbens (Book)
19 editions published in 1999 in English and held by 68 WorldCat member libraries worldwide
Knowledge of the effect of unearned income on economic behavior of individuals in general, and on labor supply in particular, is of great importance to policy makers. Estimation of income effects, however, is a difficult problem because income is not randomly assigned and exogenous changes in income are difficult to identify. Here we exploit the randomized assignment of large amounts of money over long periods of time through lotteries. We carried out a survey of people who played the lottery in the mid-eighties and estimate the effect of lottery winnings on their subsequent earnings, labor supply, consumption, and savings. We find that winning a modest prize ($15,000 per year for twenty years) does not affect labor supply or earnings substantially. Winning such a prize does not considerably reduce savings. Winning a much larger prize ($80,000 rather than $15,000 per year) reduces labor supply as measured by hours, as well as participation and social security earnings; elasticities for hours and earnings are around 0.20 and for participation around 0.14. Winning a large versus modest amount also leads to increased expenditures on cars and larger home values, although mortgage values appear to increase by approximately the same amount. Winning $80,000 increases overall savings, although savings in retirement accounts are not significantly affected. The results do not vary much by gender, age, or prior employment status. There is some evidence that for those with zero earnings prior to winning the lottery there is a positive effect of winning a small prize on subsequent labor market participation
Regression discontinuity designs : a guide to practice by Guido Imbens (Book)
25 editions published between 2007 and 2010 in English and held by 66 WorldCat member libraries worldwide
In Regression Discontinuity (RD) designs for evaluating causal effects of interventions, assignment to a treatment is determined at least partly by the value of an observed covariate lying on either side of a fixed threshold. These designs were first introduced in the evaluation literature by Thistlethwaite and Campbell (1960). With the exception of a few unpublished theoretical papers, these methods did not attract much attention in the economics literature until recently. Starting in the late 1990s, a large number of studies in economics have applied and extended RD methods. In this paper we review some of the practical and theoretical issues involved in the implementation of RD methods
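As an illustration of the basic sharp-RD recipe the paper surveys (not the authors' own implementation), the sketch below fits local linear regressions on each side of a hypothetical threshold; the jump size, bandwidth, and data-generating process are invented:

```python
import random

random.seed(1)

def ols_fit(xs, ys):
    # Simple least-squares fit y = a + b*x; returns (intercept, slope).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical sharp RD: treatment switches on when the running variable x
# crosses the cutoff, shifting the outcome by a true jump of 2.0.
cutoff, jump, h = 0.0, 2.0, 0.5
data = []
for _ in range(5000):
    x = random.uniform(-1, 1)
    treated = x >= cutoff
    y = 1.0 + 0.8 * x + (jump if treated else 0.0) + random.gauss(0, 0.3)
    data.append((x, y))

# Local linear fits within bandwidth h on each side; the causal effect at the
# threshold is the difference between the two boundary predictions.
left = [(x, y) for x, y in data if cutoff - h <= x < cutoff]
right = [(x, y) for x, y in data if cutoff <= x < cutoff + h]
a_l, b_l = ols_fit(*zip(*left))
a_r, b_r = ols_fit(*zip(*right))
rd_estimate = (a_r + b_r * cutoff) - (a_l + b_l * cutoff)
```

A real application would also choose the bandwidth in a data-driven way and probe the design with the specification checks the paper discusses.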
Reemployment probabilities over the business cycle by Guido Imbens (Book)
13 editions published between 1993 and 2006 in English and held by 59 WorldCat member libraries worldwide
Using a Cox proportional hazard model that allows for a flexible time dependence that can incorporate both seasonal and business cycle effects, we analyze the determinants of reemployment probabilities of young workers from 1978 to 1989. We find considerable changes in the chances of young workers finding jobs over the business cycle; however, the characteristics of those starting jobless spells do not vary much over time. Therefore, government programs that target specific demographic groups may change individuals' positions within the queue of job seekers but will probably have a more limited impact on the overall reemployment probability. Living in an area with high local unemployment reduces reemployment chances, as does being in a long spell of nonemployment. However, when we allow for an interaction between the length of a jobless spell and the local unemployment rate, we find the interaction term is positive. In other words, while workers appear to be scarred by a long spell of unemployment, the damage seems to be reduced if they are unemployed in an area with high overall unemployment
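A stripped-down illustration of the Cox partial-likelihood idea behind the paper's model, with hypothetical spell data and a single binary covariate; a real analysis would handle censoring, ties, and many covariates, and would maximize with Newton's method rather than a grid:

```python
import math
import random

random.seed(2)

# Hypothetical jobless spells: a binary covariate (say, high local unemployment)
# scales the reemployment hazard by exp(true_beta).
true_beta = -0.5
spells = []
for _ in range(800):
    z = 1 if random.random() < 0.5 else 0
    rate = math.exp(true_beta * z)       # baseline hazard 1, proportional shift
    spells.append((random.expovariate(rate), z))
spells.sort()                            # ascending by spell length

def log_partial_likelihood(beta):
    # At each event, compare the failing unit's risk score exp(beta*z) to the
    # sum over everyone still at risk (all later spells, since data are sorted).
    zs = [z for _, z in spells]
    denom = [0.0] * (len(zs) + 1)        # suffix sums give risk-set denominators
    for i in range(len(zs) - 1, -1, -1):
        denom[i] = denom[i + 1] + math.exp(beta * zs[i])
    return sum(beta * zs[i] - math.log(denom[i]) for i in range(len(zs)))

# Crude maximization over a grid of candidate coefficients.
beta_hat = max((b / 100 for b in range(-150, 151)), key=log_partial_likelihood)
```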
Nonparametric applications of Bayesian inference by Gary Chamberlain (Book)
12 editions published between 1995 and 1996 in English and held by 45 WorldCat member libraries worldwide
The paper evaluates the usefulness of a nonparametric approach to Bayesian inference by presenting two applications. The approach is due to Ferguson (1973, 1974) and Rubin (1981). Our first application considers an educational choice problem. We focus on obtaining a predictive distribution for earnings corresponding to various levels of schooling. This predictive distribution incorporates the parameter uncertainty, so that it is relevant for decision making under uncertainty in the expected utility framework of microeconomics. The second application is to quantile regression. Our point here is to examine the potential of the nonparametric framework to provide inferences without making asymptotic approximations. Unlike in the first application, the standard asymptotic normal approximation turns out not to be a good guide. We also consider a comparison with a bootstrap approach
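Rubin's (1981) Bayesian bootstrap, one of the two nonparametric approaches the paper builds on, can be sketched as follows; the earnings data here are simulated, not the paper's:

```python
import random

random.seed(3)

# Hypothetical earnings sample. The Bayesian bootstrap draws posterior
# simulations of a functional (here, the mean) by reweighting the observed
# data with flat-Dirichlet weights, with no parametric model for the data.
data = [random.lognormvariate(10, 0.5) for _ in range(500)]

def bayesian_bootstrap_mean(xs):
    # Dirichlet(1, ..., 1) weights via normalized exponential draws.
    g = [random.expovariate(1.0) for _ in xs]
    s = sum(g)
    return sum(w / s * x for w, x in zip(g, xs))

draws = sorted(bayesian_bootstrap_mean(data) for _ in range(2000))
post_mean = sum(draws) / len(draws)
lo, hi = draws[50], draws[1949]          # central 95% posterior interval
```

The same reweighting trick yields posterior draws for any functional of the distribution, which is how it supports predictive distributions and quantile regression without asymptotic approximations.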
Efficient estimation of average treatment effects using the estimated propensity score by Keisuke Hirano (Book)
13 editions published between 2000 and 2003 in English and held by 41 WorldCat member libraries worldwide
We are interested in estimating the average effect of a binary treatment on a scalar outcome. If assignment to the treatment is independent of the potential outcomes given pretreatment variables, biases associated with simple treatment-control average comparisons can be removed by adjusting for differences in the pretreatment variables. Rosenbaum and Rubin (1983, 1984) show that adjusting solely for differences between treated and control units in a scalar function of the pretreatment variables, the propensity score, also removes the entire bias associated with differences in pretreatment variables. Thus it is possible to obtain unbiased estimates of the treatment effect without conditioning on a possibly high-dimensional vector of pretreatment variables. Although adjusting for the propensity score removes all the bias, this can come at the expense of efficiency. We show that weighting with the inverse of a nonparametric estimate of the propensity score, rather than the true propensity score, leads to efficient estimates of the various average treatment effects. This result holds whether the pretreatment variables have discrete or continuous distributions. We provide intuition for this result in a number of ways. First, we show that with discrete covariates, exact adjustment for the estimated propensity score is identical to adjustment for the pretreatment variables. Second, we show that weighting by the inverse of the estimated propensity score can be interpreted as an empirical likelihood estimator that efficiently incorporates the information about the propensity score. Finally, we make a connection to other results on efficient estimation through weighting in the context of variable probability sampling
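The discrete-covariate case the authors use for intuition can be illustrated directly: estimate the propensity score as the treated share within each covariate cell, then weight observations by its inverse. Everything below (data-generating process, effect size, cell probabilities) is hypothetical:

```python
import random

random.seed(4)

# Hypothetical observational data: treatment probability depends on a discrete
# covariate x, and the outcome depends on both x and treatment (true effect 1.0).
n, true_effect = 20000, 1.0
rows = []
for _ in range(n):
    x = random.randint(0, 2)
    w = random.random() < (0.2 + 0.2 * x)       # true propensity score by cell
    y = x + (true_effect if w else 0.0) + random.gauss(0, 1)
    rows.append((x, w, y))

# Nonparametric propensity-score estimate: treated share within each cell.
e_hat = {}
for x in (0, 1, 2):
    cell = [w for cx, w, _ in rows if cx == x]
    e_hat[x] = sum(cell) / len(cell)

# Inverse-probability weighting with the *estimated* score (Horvitz-Thompson form).
ate_hat = sum(
    y * w / e_hat[x] - y * (1 - w) / (1 - e_hat[x]) for x, w, y in rows
) / n
```

With discrete covariates this weighting reproduces exact cell-by-cell adjustment, which is the first piece of intuition offered in the abstract.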
Hierarchical Bayes models with many instrumental variables by Gary Chamberlain (Book)
9 editions published in 1996 in English and held by 38 WorldCat member libraries worldwide
In this paper, we explore Bayesian inference in models with many instrumental variables that are potentially weakly correlated with the endogenous regressor. The prior distribution has a hierarchical (nested) structure. We apply the methods to the Angrist-Krueger (AK, 1991) analysis of returns to schooling using instrumental variables formed by interacting quarter of birth with state/year dummy variables. Bound, Jaeger, and Baker (1995) show that randomly generated instrumental variables, designed to match the AK data set, give two-stage least squares results that look similar to the results based on the actual instrumental variables. Using a hierarchical model with the AK data, we find a posterior distribution for the parameter of interest that is tight and plausible. Using data with randomly generated instruments, the posterior distribution is diffuse. Most of the information in the AK data can in fact be extracted with quarter of birth as the single instrumental variable. Using artificial data patterned on the AK data, we find that if all the information had been in the interactions between quarter of birth and state/year dummies, then the hierarchical model would still have led to precise inferences, whereas the single-instrument model would have suggested that there was no information in the data. We conclude that hierarchical modeling is a conceptually straightforward way of efficiently combining many weak instrumental variables
Information theoretic approaches to inference in moment condition models by Guido Imbens (Book)
9 editions published in 1995 in English and held by 36 WorldCat member libraries worldwide
One-step efficient GMM estimation has been developed in the recent papers of Back and Brown (1990), Imbens (1993) and Qin and Lawless (1994). These papers emphasized methods that correspond to using Owen's (1988) method of empirical likelihood to reweight the data so that the reweighted sample obeys all the moment restrictions at the parameter estimates. In this paper we consider an alternative KLIC-motivated weighting and show how it and similar discrete reweightings define a class of unconstrained optimization problems which includes GMM as a special case. Such KLIC-motivated reweightings introduce M auxiliary 'tilting' parameters, where M is the number of moments; parameter and overidentification hypotheses can be recast in terms of these tilting parameters. Such tests, when appropriately conditioned on the estimates of the original parameters, are often startlingly more effective than their conventional counterparts. This is apparently due to the local ancillarity of the original parameters for the tilting parameters
Imposing moment restrictions from auxiliary data by weighting by Guido Imbens (Book)
5 editions published in 1996 in English and held by 31 WorldCat member libraries worldwide
In this paper we analyze estimation of coefficients in regression models under moment restrictions, where the moment restrictions are derived from auxiliary data. Our approach is similar to those that have been used in statistics for analyzing contingency tables with known marginals. These methods are useful in cases where data from a small, potentially nonrepresentative data set can be supplemented with auxiliary information from another data set which may be larger and/or more representative of the target population. The moment restrictions yield weights for each observation that can subsequently be used in weighted regression analysis. We discuss the interpretation of these weights both under the assumption that the target population and the sampled population are the same, as well as under the assumption that these populations differ. We present an application based on omitted ability bias in estimation of wage regressions. The National Longitudinal Survey Young Men's Cohort (NLS), in addition to containing information for each observation on earnings, education, and experience, records data on two test scores that may be considered proxies for ability. The NLS is a small data set, however, with a high attrition rate. We investigate how to mitigate these problems in the NLS by forming moments from the joint distribution of education, experience, and earnings in the 1% sample of the 1980 U.S. Census and using these moments to construct weights for weighted regression analysis of the NLS. We analyze the impacts of our weighted regression techniques on the estimated coefficients and standard errors on returns to education and experience in the NLS, controlling for ability, with and without assuming that the NLS and the Census samples are random samples from the same population
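One simple way to realize the abstract's idea of observation weights that impose an auxiliary moment restriction is exponential tilting, sketched below with invented data; this illustrates the general technique, not the paper's exact estimator:

```python
import math
import random

random.seed(5)

# Hypothetical small sample whose mean years of schooling falls short of a
# moment known from a larger auxiliary data set; we reweight the sample so the
# weighted mean matches the auxiliary moment, then reuse the weights downstream.
sample = [random.gauss(11.0, 2.0) for _ in range(400)]
target_mean = 12.0                     # moment taken from the auxiliary source

def tilted_weights(xs, t):
    # Weights proportional to exp(t * x_i), normalized to sum to one.
    raw = [math.exp(t * x) for x in xs]
    s = sum(raw)
    return [r / s for r in raw]

def weighted_mean_gap(t):
    w = tilted_weights(sample, t)
    return sum(wi * xi for wi, xi in zip(w, sample)) - target_mean

# The weighted mean is increasing in t (its derivative is the weighted
# variance), so bisection finds the tilt satisfying the moment restriction.
lo, hi = -2.0, 2.0
for _ in range(80):
    mid = (lo + hi) / 2
    if weighted_mean_gap(mid) < 0:
        lo = mid
    else:
        hi = mid
weights = tilted_weights(sample, (lo + hi) / 2)
weighted_mean = sum(w * x for w, x in zip(weights, sample))
```

The resulting weights would then enter a weighted regression, exactly as the abstract describes for the NLS-plus-Census application.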
Recent developments in the econometrics of program evaluation by Guido Imbens
11 editions published in 2008 in English and held by 31 WorldCat member libraries worldwide
Many empirical questions in economics and other social sciences depend on causal effects of programs or policy interventions. In the last two decades much research has been done on the econometric and statistical analysis of the effects of such programs or treatments. This recent theoretical literature has built on, and combined features of, earlier work in both the statistics and econometrics literatures. It has by now reached a level of maturity that makes it an important tool in many areas of empirical research in economics, including labor economics, public finance, development economics, industrial organization and other areas of empirical microeconomics. In this review we discuss some of the recent developments. We focus primarily on practical issues for empirical researchers and also provide a historical overview of the area, with references to more technical research
Nonparametric demand analysis with an application to the demand for fish by Joshua David Angrist (Book)
4 editions published in 1995 in English and held by 27 WorldCat member libraries worldwide
Instrumental variables (IV) estimation of a demand equation using time series data is shown to produce a weighted average derivative of heterogeneous potential demand functions. This result adapts recent work on the causal interpretation of two-stage least squares estimates to the simultaneous equations context and generalizes earlier research on average derivative estimation to models with endogenous regressors. The paper also shows how to compute the weights underlying IV estimates of average derivatives in a simultaneous equations model. These ideas are illustrated using data from the Fulton Fish Market in New York City to estimate an average elasticity of wholesale demand for fresh fish. The weighting function underlying IV estimates of the demand equation is graphed and interpreted. The empirical example illustrates the essentially local and context-specific nature of instrumental variables estimates of structural parameters in simultaneous equations models
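The contrast between OLS and IV that motivates this kind of analysis can be sketched with a simulated supply-and-demand system; the instrument, coefficients, and noise levels below are all invented:

```python
import random

random.seed(6)

# Hypothetical demand setting: price is endogenous (it responds to the demand
# shock u), but a supply shifter z (say, weather at sea) moves price exogenously.
n, true_elasticity = 10000, -0.8
data = []
for _ in range(n):
    z = random.gauss(0, 1)                               # supply-side cost shock
    u = random.gauss(0, 1)                               # demand shock
    price = 0.5 * z + 0.5 * u + random.gauss(0, 0.5)     # endogenous: depends on u
    qty = true_elasticity * price + u + random.gauss(0, 0.5)
    data.append((z, price, qty))

def cov(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

zs, ps, qs = zip(*data)
ols = cov(ps, qs) / cov(ps, ps)   # biased upward: price co-moves with the demand shock
iv = cov(zs, qs) / cov(zs, ps)    # IV: uses only instrument-driven price variation
```

With a single instrument this is the classic Wald ratio; with heterogeneous demand curves, the paper shows, such IV estimates are weighted averages of the underlying derivatives.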
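The core IV logic sketched in this abstract can be illustrated on simulated data (this is not the Fulton Fish data, and the variable names and coefficients below are assumptions for the demonstration): a supply-side instrument shifts price but not demand, so the ratio of covariances recovers the demand elasticity that ordinary least squares gets wrong.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Simulated market data (illustrative only -- not the Fulton Fish data).
# "stormy" is a supply-side instrument: it shifts price but not demand.
stormy = rng.binomial(1, 0.3, n)
demand_shock = rng.normal(0, 1, n)
log_price = 0.5 * stormy + 0.4 * demand_shock + rng.normal(0, 0.3, n)
log_qty = -1.0 * log_price + demand_shock          # true elasticity: -1

# OLS is biased because price is correlated with the demand shock.
ols = np.cov(log_qty, log_price)[0, 1] / np.var(log_price)

# Wald/IV estimator: cov(y, z) / cov(x, z).
iv = np.cov(log_qty, stormy)[0, 1] / np.cov(log_price, stormy)[0, 1]

print(f"OLS: {ols:.2f}, IV: {iv:.2f} (true elasticity: -1.00)")
```

As the abstract emphasizes, the IV estimate is a weighted average over the demand responses that the instrument actually moves, so it is local to the variation the instrument induces.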
A martingale representation for matching estimators by
Alberto Abadie(
)
10 editions published in 2009 in English and held by 27 WorldCat member libraries worldwide
Matching estimators are widely used in statistical data analysis. However, the distribution of matching estimators has been derived only for particular cases (Abadie and Imbens, 2006). This article establishes a martingale representation for matching estimators. This representation allows the use of martingale limit theorems to derive the asymptotic distribution of matching estimators. As an illustration of the applicability of the theory, we derive the asymptotic distribution of a matching estimator when matching is carried out without replacement, a result previously unavailable in the literature
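A minimal sketch of the kind of matching estimator the article studies, on simulated data (the covariate, treatment rule, and effect size are assumptions for the demo; for simplicity this matches with replacement, whereas the article's new result concerns matching without replacement, and it omits the article's distribution theory entirely):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000

# Illustrative data: one covariate, constant treatment effect tau = 2.
x = rng.normal(0, 1, n)
w = rng.binomial(1, 1 / (1 + np.exp(-x)), n)   # treated more likely at high x
y = x + 2.0 * w + rng.normal(0, 1, n)

treated, control = np.where(w == 1)[0], np.where(w == 0)[0]

# One-to-one nearest-neighbor matching on x:
# each treated unit is paired with the closest control unit.
matches = control[np.abs(x[treated][:, None] - x[control][None, :]).argmin(axis=1)]
att = np.mean(y[treated] - y[matches])

print(f"matching estimate of the ATT: {att:.2f} (true: 2.00)")
```

The martingale representation is what lets one attach valid standard errors to estimates like `att`; the point estimate itself is straightforward, the distribution theory is not.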
Bias from classical and other forms of measurement error by
Dean Robert Hyslop(
Book
)
6 editions published in 2000 in English and held by 26 WorldCat member libraries worldwide
Abstract: We consider the implications of a specific alternative to the classical measurement error model, in which the data are optimal predictions based on some information set. One motivation for this model is that if respondents are aware of their ignorance they may interpret the question 'what is the value of this variable?' as 'what is your best estimate of this variable?', and provide optimal predictions of the variable of interest given their information set. In contrast to the classical measurement error model, this model implies that the measurement error is uncorrelated with the reported value and, by necessity, correlated with the true value of the variable. In the context of the linear regression framework, we show that measurement error can lead to over- as well as under-estimation of the coefficients of interest. Critical for determining the bias are the model for the individual reporting the mismeasured variables, the individual's information set, and the correlation structure of the errors. We also investigate the implications of instrumental variables methods in the presence of measurement error of the optimal prediction error form and show that such methods may in fact introduce bias. Finally, we present some calculations indicating the range of estimates of the returns to education consistent with the amounts of measurement error found in previous studies. This range can be quite wide, especially if one allows for correlation between the measurement errors.
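The contrast between the two error models can be shown in a few lines of simulation. This is a sketch under Gaussian assumptions of my own choosing, not the paper's analysis: classical error (report = truth + noise) attenuates the regression slope, while optimal-prediction error (report = conditional expectation of the truth given a noisy signal) leaves the slope unbiased because the error is uncorrelated with the report.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
beta = 1.0                       # true coefficient (assumed for the demo)

x_star = rng.normal(0, 1, n)     # true regressor
noise = rng.normal(0, 1, n)      # noise variance 1, so reliability = 0.5
y = beta * x_star + rng.normal(0, 1, n)

# Classical measurement error: report = truth + independent noise.
x_classical = x_star + noise

# Optimal-prediction error: the respondent reports E[x* | x* + noise],
# which for normals is a shrunken version of the noisy signal.
shrink = 1.0 / (1.0 + 1.0)       # signal variance / total variance
x_predict = shrink * (x_star + noise)

def slope(y, x):
    return np.cov(y, x)[0, 1] / np.var(x)

print(slope(y, x_classical))     # attenuated toward beta * 0.5
print(slope(y, x_predict))       # approximately beta: no attenuation
```

In richer settings with multiple regressors and correlated errors, the paper shows the bias can go in either direction, which is why the returns-to-education bounds it reports are wide.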
Identification and estimation of triangular simultaneous equations models without additivity by
Guido Imbens(
Book
)
6 editions published in 2002 in English and held by 23 WorldCat member libraries worldwide
This paper investigates identification and inference in a nonparametric structural model with instrumental variables and nonadditive errors. We allow for nonadditive errors because the unobserved heterogeneity in marginal returns that often motivates concerns about endogeneity of choices requires objective functions that are nonadditive in observed and unobserved components. We formulate several independence and monotonicity conditions that are sufficient for identification of a number of objects of interest, including the average conditional response, the average structural function, as well as the full structural response function. For inference we propose a two-step series estimator. The first step consists of estimating the conditional distribution of the endogenous regressor given the instrument. In the second step the estimated conditional distribution function is used as a regressor in a nonlinear control function approach. We establish rates of convergence, asymptotic normality, and give a consistent asymptotic variance estimator.
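The two-step control-function idea is easiest to see in its linear special case, sketched below on simulated data (the paper's contribution is the nonparametric, nonadditive generalization; the coefficients here are assumptions for illustration): estimate the first stage, recover the first-stage residual, and include it as an extra regressor so that the coefficient on the endogenous variable is no longer contaminated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Triangular system (linear special case of the paper's setup):
#   x = pi * z + v         first stage, z is the instrument
#   y = beta * x + u       with u correlated with v (endogeneity)
z = rng.normal(0, 1, n)
v = rng.normal(0, 1, n)
u = 0.8 * v + rng.normal(0, 0.5, n)
x = 1.0 * z + v
y = 2.0 * x + u                  # true beta = 2

# Step 1: estimate the first stage, recover the control variable v-hat.
pi_hat = np.linalg.lstsq(z[:, None], x, rcond=None)[0]
v_hat = x - z * pi_hat

# Step 2: include v-hat as a regressor; it absorbs the endogenous
# part of u, so the coefficient on x recovers beta.
X = np.column_stack([x, v_hat])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(f"control-function estimate: {beta_hat:.2f} (true: 2.00)")
```

The paper replaces both steps with series estimators and uses the estimated conditional distribution function, rather than a linear residual, as the control.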
The role of the propensity score in estimating dose-response functions by
Guido Imbens(
Book
)
9 editions published in 1999 in English and held by 22 WorldCat member libraries worldwide
Estimation of average treatment effects in observational, or non-experimental, studies often requires adjustment for differences in pre-treatment variables. If the number of pre-treatment variables is large, and their distribution varies substantially with treatment status, standard adjustment methods such as covariance adjustment are often inadequate. Rosenbaum and Rubin (1983) propose an alternative method for adjusting for pre-treatment variables based on the propensity score, the conditional probability of receiving the treatment given pre-treatment variables. They demonstrate that adjusting solely for the propensity score removes all the bias associated with differences in pre-treatment variables between treatment and control groups. The Rosenbaum-Rubin proposals deal exclusively with the case where treatment takes on only two values. In this paper an extension of this methodology is proposed that allows for estimation of average causal effects with multivalued treatments while maintaining the advantages of the propensity score approach.
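One way to see how a propensity-type score handles a multivalued treatment is inverse-probability weighting with the generalized score P(T = t | X). The sketch below is an illustration on simulated data with a three-valued treatment, not the paper's specific estimator, and all names and coefficients are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30_000

# Binary covariate and a three-valued treatment whose assignment
# probabilities depend on the covariate (illustrative data).
x = rng.integers(0, 2, n)
probs = np.where(x[:, None] == 1, [0.2, 0.3, 0.5], [0.5, 0.3, 0.2])
t = (rng.random(n)[:, None] > np.cumsum(probs, axis=1)).sum(axis=1)
y = x + t + rng.normal(0, 1, n)          # E[Y(t)] = E[x] + t = 0.5 + t

# Generalized propensity score: P(T = t | X), here estimated by
# empirical frequencies within covariate cells.
gps = np.empty(n)
for xv in (0, 1):
    for tv in (0, 1, 2):
        cell = x == xv
        gps[cell & (t == tv)] = np.mean(t[cell] == tv)

# Weighting by the inverse score recovers each dose's mean outcome,
# even though treatment assignment depends on x.
dose_means = [np.mean((t == tv) * y / gps) for tv in (0, 1, 2)]
print(dose_means)                        # close to [0.5, 1.5, 2.5]
```

Comparing the three recovered means traces out a simple dose-response function; the paper develops this idea with the score playing the same bias-removal role it plays in the binary case.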
Matching on the estimated propensity score by
Alberto Abadie(
Book
)
8 editions published between 2009 and 2010 in English and held by 15 WorldCat member libraries worldwide
Propensity score matching estimators (Rosenbaum and Rubin, 1983) are widely used in evaluation research to estimate average treatment effects. In this article, we derive the large sample distribution of propensity score matching estimators. Our derivations take into account that the propensity score is itself estimated in a first step, prior to matching. We prove that first step estimation of the propensity score affects the large sample distribution of propensity score matching estimators. Moreover, we derive an adjustment to the large sample variance of propensity score matching estimators that corrects for first step estimation of the propensity score. In spite of the great popularity of propensity score matching estimators, these results were previously unavailable in the literature
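The two-step procedure the article analyzes, first estimate the propensity score, then match on the estimate, can be sketched as follows. This is simulated data with assumed coefficients, and it computes only the point estimate; the article's contribution is the large-sample variance adjustment for the first step, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4_000

# Two covariates driving both treatment and outcome; true ATT = 1.
X = rng.normal(0, 1, (n, 2))
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]
w = rng.binomial(1, 1 / (1 + np.exp(-logit)), n)
y = X @ np.array([1.0, 1.0]) + 1.0 * w + rng.normal(0, 1, n)

# Step 1: estimate the propensity score by logistic regression
# (a few Newton-Raphson iterations; no intercept needed here).
theta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ theta))
    grad = X.T @ (w - p)
    hess = (X * (p * (1 - p))[:, None]).T @ X
    theta += np.linalg.solve(hess, grad)
pscore = 1 / (1 + np.exp(-X @ theta))

# Step 2: nearest-neighbor matching on the *estimated* score.
t, c = np.where(w == 1)[0], np.where(w == 0)[0]
m = c[np.abs(pscore[t][:, None] - pscore[c][None, :]).argmin(axis=1)]
att = np.mean(y[t] - y[m])

print(f"propensity-score matching ATT: {att:.2f} (true: 1.00)")
```

Naive standard errors for `att` that treat `pscore` as known are generally wrong; accounting for the estimation in Step 1 is precisely what the article works out.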
Associated Subjects
Aid to families with dependent children programs--Evaluation  Analysis of covariance  Bayesian statistical decision theory  Business cycles  California  Causation  Consumer behavior  Econometric models  Econometrics  Economic policy--Mathematical models  Economics  Economics--Statistical methods  Employment (Economic theory)  Error analysis (Mathematics)  Estimation theory  Evaluation  Human services--Evaluation  Inference  Information theory in economics  Instrumental variables (Statistics)  Labor supply  Lottery winners  Martingales (Mathematics)  Mathematical statistics  Medicine--Research--Methodology  Moments method (Statistics)  Nonparametric statistics  Occupational training--Economic aspects  Public welfare  Public welfare--Econometric models  Regression analysis  Saving and investment  Social sciences--Research  Social sciences--Research--Methodology  Therapeutics--Mathematical models  Unemployment  Unemployment--Econometric models  United States  Welfare recipients--Education  Welfare recipients--Employment  Youth--Employment  Youth--Employment--Econometric models
Alternative Names
Guido Imbens Dutch American econometrician
Imbens, G. W.
Imbens, G. W., 1963-
Imbens, G. W. (Guido Wilhelmus)
Imbens, Guido.
Imbens, Guido M., 1963-
Imbens, Guido W.
Imbens, Guido W., 1963-
Imbens, Guido Wilhelmus, 1963-
Languages