WorldCat Identities

Imbens, Guido

Overview
Works: 85 works in 225 publications in 1 language and 1,640 library holdings
Roles: Author, Creator, Editor, Honoree
Classifications: HB1, 330.072
Most widely held works by Guido Imbens
Re-employment probabilities over the business cycle by Guido Imbens( Book )
7 editions published in 1993 in English and held by 68 WorldCat member libraries worldwide
Abstract: Using a Cox proportional hazard model that allows for a flexible time dependence that can incorporate both seasonal and business cycle effects, we analyze the determinants of re-employment probabilities of young workers from 1978 to 1989. We find considerable changes in the chances of young workers finding jobs over the business cycle; however, the characteristics of those starting jobless spells do not vary much over time. Therefore, government programs that target specific demographic groups may change individuals' positions within the queue of job seekers but will probably have a more limited impact on the overall re-employment probability. Living in an area with high local unemployment reduces re-employment chances, as does being in a long spell of non-employment. However, when we allow for an interaction between the length of a jobless spell and the local unemployment rate, we find the interaction term is positive. In other words, while workers appear to be scarred by a long spell of unemployment, the damage seems to be reduced if they are unemployed in an area with high overall unemployment
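The specification described lends itself to a short sketch. Below is a minimal, illustrative version using the lifelines library on simulated long-format spell data; the column names, the 12-month "long spell" cutoff, and the simulated hazard are assumptions for illustration, not the authors' data or code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Simulated long-format data: one row per person-month at risk
# (column names and the 12-month "long spell" cutoff are illustrative).
rng = np.random.default_rng(0)
rows = []
for person in range(300):
    unemp = rng.uniform(4.0, 10.0)              # local unemployment rate
    for month in range(24):
        long_spell = int(month >= 12)
        # hazard falls with local unemployment; scarring is softened when unemp is high
        hazard = 0.10 * np.exp(-0.10 * unemp - 0.8 * long_spell
                               + 0.05 * long_spell * unemp)
        event = rng.random() < hazard
        rows.append((person, month, month + 1, int(event), unemp,
                     long_spell, long_spell * unemp))
        if event:
            break
panel = pd.DataFrame(rows, columns=["person", "start", "stop", "reemployed",
                                    "local_unemp", "long_spell", "long_x_unemp"])

ctv = CoxTimeVaryingFitter()
ctv.fit(panel, id_col="person", event_col="reemployed",
        start_col="start", stop_col="stop")
ctv.print_summary()   # expect negative local_unemp, positive long_x_unemp
```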
The long-term gains from GAIN : a re-analysis of the impacts of the California GAIN program by V. Joseph Hotz( Book )
5 editions published in 2000 in English and held by 65 WorldCat member libraries worldwide
This paper discusses California's Greater Avenues to Independence (GAIN) programs of the early 1990s in Riverside County. These programs emphasized skill accumulation and had positive effects on employment, earnings, and welfare receipt
Regression discontinuity designs : a guide to practice by Guido Imbens( )
9 editions published in 2007 in English and held by 63 WorldCat member libraries worldwide
In Regression Discontinuity (RD) designs for evaluating causal effects of interventions, assignment to a treatment is determined at least partly by the value of an observed covariate lying on either side of a fixed threshold. These designs were first introduced in the evaluation literature by Thistlethwaite and Campbell (1960). With the exception of a few unpublished theoretical papers, these methods did not attract much attention in the economics literature until recently. Starting in the late 1990s, a large number of studies in economics have applied and extended RD methods. In this paper we review some of the practical and theoretical issues involved in the implementation of RD methods
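A sharp RD estimate of this kind is simple to sketch: fit local linear regressions on each side of the threshold and take the difference of the two boundary values. The following is a textbook-style illustration on simulated data, not the authors' implementation.

```python
import numpy as np

def rd_local_linear(y, x, cutoff, h):
    """Sharp RD estimate by local linear regression with a triangular
    kernel and bandwidth h on each side of the cutoff (an illustrative
    textbook implementation)."""
    def side_fit(mask):
        xs, ys = x[mask] - cutoff, y[mask]
        w = np.maximum(0.0, 1.0 - np.abs(xs) / h)        # triangular kernel
        X = np.column_stack([np.ones_like(xs), xs])      # intercept + slope
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ ys)      # weighted least squares
        return beta[0]                                   # value at the cutoff
    above = (x >= cutoff) & (x <= cutoff + h)
    below = (x < cutoff) & (x >= cutoff - h)
    return side_fit(above) - side_fit(below)             # jump = treatment effect

# Simulated example: true discontinuity of 0.5 at cutoff 0
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 5000)
y = 0.5 * (x >= 0) + x + 0.2 * rng.standard_normal(5000)
print(rd_local_linear(y, x, cutoff=0.0, h=0.3))          # close to 0.5
```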
Optimal bandwidth choice for the regression discontinuity estimator by Guido Imbens( )
12 editions published between 2009 and 2010 in English and held by 61 WorldCat member libraries worldwide
We investigate the problem of optimal choice of the smoothing parameter (bandwidth) for the regression discontinuity estimator. We focus on estimation by local linear regression, which was shown to be rate optimal (Porter, 2003). Investigation of an expected-squared-error-loss criterion reveals the need for regularization. We propose an optimal, data-dependent bandwidth choice rule. We illustrate the proposed bandwidth choice using data previously analyzed by Lee (2008), as well as in a simulation study based on this data set. The simulations suggest that the proposed rule performs well
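The paper's rule is a plug-in formula derived from the asymptotic expected squared error. As a simpler illustration of what a data-dependent bandwidth choice looks like, here is a leave-one-out cross-validation rule over a grid (an assumption-laden stand-in, not the proposed rule):

```python
import numpy as np

def cv_bandwidth(y, x, cutoff, grid):
    """Choose h by leave-one-out prediction error of one-sided local
    linear fits. A simple data-dependent rule for illustration; the paper
    instead derives a plug-in rule minimizing asymptotic MSE."""
    def loo_error(h):
        errs = []
        for i in range(len(x)):
            same_side = (x >= cutoff) == (x[i] >= cutoff)
            near = same_side & (np.abs(x - x[i]) <= h)
            near[i] = False                      # leave observation i out
            if near.sum() < 5:
                continue
            X = np.column_stack([np.ones(near.sum()), x[near] - x[i]])
            beta, *_ = np.linalg.lstsq(X, y[near], rcond=None)
            errs.append((y[i] - beta[0]) ** 2)   # predict y_i at its own x
        return np.mean(errs)
    return min(grid, key=loo_error)

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 400)
y = 0.5 * (x >= 0) + x - 0.5 * x**2 + 0.2 * rng.standard_normal(400)
print(cv_bandwidth(y, x, 0.0, grid=[0.1, 0.2, 0.3, 0.5]))
```

The resulting h could then be passed to a local linear RD estimator such as the sketch above.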
Estimating the effect of unearned income on labor supply, earnings, savings, and consumption : evidence from a survey of lottery players by Guido Imbens( Book )
7 editions published in 1999 in English and held by 60 WorldCat member libraries worldwide
Knowledge of the effect of unearned income on economic behavior of individuals in general, and on labor supply in particular, is of great importance to policy makers. Estimation of income effects, however, is a difficult problem because income is not randomly assigned and exogenous changes in income are difficult to identify. Here we exploit the randomized assignment of large amounts of money over long periods of time through lotteries. We carried out a survey of people who played the lottery in the mid-eighties and estimated the effect of lottery winnings on their subsequent earnings, labor supply, consumption, and savings. We find that winning a modest prize ($15,000 per year for twenty years) does not affect labor supply or earnings substantially, nor does it considerably reduce savings. Winning a much larger prize ($80,000 rather than $15,000 per year) reduces labor supply as measured by hours, as well as participation and social security earnings; elasticities for hours and earnings are around -0.20 and for participation around -0.14. Winning a large versus modest amount also leads to increased expenditures on cars and larger home values, although mortgage values appear to increase by approximately the same amount. Winning $80,000 increases overall savings, although savings in retirement accounts are not significantly affected. The results do not vary much by gender, age, or prior employment status. There is some evidence that for those with zero earnings prior to winning the lottery there is a positive effect of winning a small prize on subsequent labor market participation
Evaluating the differential effects of alternative welfare-to-work training components : a re-analysis of the California GAIN program by V. Joseph Hotz( )
5 editions published in 2006 in English and held by 52 WorldCat member libraries worldwide
"In this paper, we explore ways of combining experimental data and non-experimental methods to estimate the differential effects of components of training programs. We show how data from a multi-site experimental evaluation in which subjects are randomly assigned to any treatment versus a control group who receives no treatment can be combined with non-experimental regression-adjustment methods to estimate the differential effects of particular types of treatments. We also devise tests of the validity of using the latter methods. We use these methods and tests to re-analyze data from the MDRC Evaluation of California's Greater Avenues to Independence (GAIN) program. While not designed to estimate the differential effects of the Labor Force Attachment (LFA) training and Human Capital Development (HCD) training components used in this program, we show how data from this experimental evaluation can be used in conjunction with non-experimental methods to estimate such effects. We present estimates of both the short- and long-term differential effects of these two training components on employment and earnings. We find that while there are short-term positive differential effects of LFA versus HCD, the latter training component is relatively more beneficial in the longer-term"--National Bureau of Economic Research web site
Hierarchical Bayes models with many instrumental variables by Gary Chamberlain( Book )
5 editions published in 1996 in English and held by 51 WorldCat member libraries worldwide
Abstract: In this paper, we explore Bayesian inference in models with many instrumental variables that are potentially weakly correlated with the endogenous regressor. The prior distribution has a hierarchical (nested) structure. We apply the methods to the Angrist-Krueger (AK, 1991) analysis of returns to schooling using instrumental variables formed by interacting quarter of birth with state/year dummy variables. Bound, Jaeger, and Baker (1995) show that randomly generated instrumental variables, designed to match the AK data set, give two-stage least squares results that look similar to the results based on the actual instrumental variables. Using a hierarchical model with the AK data, we find a posterior distribution for the parameter of interest that is tight and plausible. Using data with randomly generated instruments, the posterior distribution is diffuse. Most of the information in the AK data can in fact be extracted with quarter of birth as the single instrumental variable. Using artificial data patterned on the AK data, we find that if all the information had been in the interactions between quarter of birth and state/year dummies, then the hierarchical model would still have led to precise inferences, whereas the single instrument model would have suggested that there was no information in the data. We conclude that hierarchical modeling is a conceptually straightforward way of efficiently combining many weak instrumental variables
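The key mechanism is that the hierarchical normal prior shrinks the many weak first-stage coefficients toward zero. A rough numpy sketch of that shrinkage intuition, with a ridge-penalized first stage standing in for the full Bayesian model (the function name and the fixed variance parameters are illustrative assumptions):

```python
import numpy as np

def shrunken_iv(y, x, Z, tau2, sigma2=1.0):
    """IV with a normal-prior (ridge) first stage: lambda ~ N(0, tau2*I)
    shrinks the coefficients on many weak instruments toward zero, and the
    posterior-mean fitted values serve as the constructed instrument.
    A rough illustration of the shrinkage intuition only; the paper
    estimates a full hierarchical Bayes model."""
    k = Z.shape[1]
    # Posterior mean of first-stage coefficients under the normal prior
    lam = np.linalg.solve(Z.T @ Z + (sigma2 / tau2) * np.eye(k), Z.T @ x)
    x_hat = Z @ lam                      # shrunken first-stage fitted values
    return (x_hat @ y) / (x_hat @ x)     # IV slope using x_hat as instrument
```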
Nonparametric applications of Bayesian inference by Gary Chamberlain( Book )
5 editions published in 1996 in English and held by 50 WorldCat member libraries worldwide
Abstract: The paper evaluates the usefulness of a nonparametric approach to Bayesian inference by presenting two applications. The approach is due to Ferguson (1973, 1974) and Rubin (1981). Our first application considers an educational choice problem. We focus on obtaining a predictive distribution for earnings corresponding to various levels of schooling. This predictive distribution incorporates the parameter uncertainty, so that it is relevant for decision making under uncertainty in the expected utility framework of microeconomics. The second application is to quantile regression. Our point here is to examine the potential of the nonparametric framework to provide inferences without making asymptotic approximations. Unlike in the first application, the standard asymptotic normal approximation turns out not to be a good guide. We also consider a comparison with a bootstrap approach
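The Rubin (1981) ingredient is easy to illustrate: draw Dirichlet(1, ..., 1) weights over the observed data points and recompute a weighted statistic. A minimal sketch, with simulated earnings data standing in for the schooling application:

```python
import numpy as np

def bayesian_bootstrap(data, stat, draws=2000, seed=0):
    """Rubin's (1981) Bayesian bootstrap: draw Dirichlet(1,...,1) weights
    over the observed data points and recompute a weighted statistic.
    The spread of the draws serves as a posterior for the statistic under
    the nonparametric prior."""
    rng = np.random.default_rng(seed)
    n = len(data)
    out = []
    for _ in range(draws):
        w = rng.dirichlet(np.ones(n))    # posterior weights on data points
        out.append(stat(data, w))
    return np.array(out)

# Posterior for mean log earnings (illustrative simulated data)
rng = np.random.default_rng(1)
log_earn = rng.normal(10.0, 0.5, size=300)
post = bayesian_bootstrap(log_earn, lambda d, w: np.sum(w * d))
print(post.mean(), np.percentile(post, [2.5, 97.5]))
```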
Recent developments in the econometrics of program evaluation by Guido Imbens( )
7 editions published in 2008 in English and held by 46 WorldCat member libraries worldwide
Many empirical questions in economics and other social sciences depend on causal effects of programs or policy interventions. In the last two decades much research has been done on the econometric and statistical analysis of the effects of such programs or treatments. This recent theoretical literature has built on, and combined features of, earlier work in both the statistics and econometrics literatures. It has by now reached a level of maturity that makes it an important tool in many areas of empirical research in economics, including labor economics, public finance, development economics, industrial organization and other areas of empirical micro-economics. In this review we discuss some of the recent developments. We focus primarily on practical issues for empirical researchers, and also provide a historical overview of the area and references to more technical research
Imposing moment restrictions from auxiliary data by weighting by Guido Imbens( Book )
2 editions published in 1996 in English and held by 44 WorldCat member libraries worldwide
Abstract: In this paper we analyze estimation of coefficients in regression models under moment restrictions where the moment restrictions are derived from auxiliary data. Our approach is similar to those that have been used in statistics for analyzing contingency tables with known marginals. These methods are useful in cases where data from a small, potentially non-representative data set can be supplemented with auxiliary information from another data set which may be larger and/or more representative of the target population. The moment restrictions yield weights for each observation that can subsequently be used in weighted regression analysis. We discuss the interpretation of these weights both under the assumption that the target population and the sampled population are the same, as well as under the assumption that these populations differ. We present an application based on omitted ability bias in estimation of wage regressions. The National Longitudinal Survey Young Men's Cohort (NLS), as well as containing information for each observation on earnings, education and experience, records data on two test scores that may be considered proxies for ability. The NLS is a small data set, however, with a high attrition rate. We investigate how to mitigate these problems in the NLS by forming moments from the joint distribution of education, experience and earnings in the 1% sample of the 1980 U.S. Census and using these moments to construct weights for weighted regression analysis of the NLS. We analyze the impacts of our weighted regression techniques on the estimated coefficients and standard errors on returns to education and experience in the NLS controlling for ability, with and without assuming that the NLS and the Census samples are random samples from the same population
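One simple way to implement such reweighting is exponential tilting: choose weights proportional to exp(g_i·λ), with λ set so the weighted sample moments hit the auxiliary targets. A sketch on simulated data (the paper develops the general theory and the weighted-regression inference):

```python
import numpy as np
from scipy.optimize import minimize

def moment_weights(G, target):
    """Exponential-tilting weights w_i proportional to exp(g_i.lam), with
    lam chosen so the weighted sample moments equal the auxiliary targets
    (e.g., moments taken from a larger Census sample)."""
    Gc = G - target                     # center so the dual is well scaled
    def dual(lam):                      # convex; minimized where E_w[Gc] = 0
        return np.log(np.mean(np.exp(Gc @ lam)))
    lam = minimize(dual, np.zeros(G.shape[1]), method="BFGS").x
    w = np.exp(Gc @ lam)
    return w / w.sum()

# Simulated small sample; targets play the role of auxiliary-data moments
rng = np.random.default_rng(2)
G = rng.normal([12.0, 8.0], [2.0, 3.0], size=(500, 2))
w = moment_weights(G, target=np.array([13.0, 9.0]))
print(w @ G)    # approximately [13.0, 9.0]; w can then feed a weighted regression
```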
Information theoretic approaches to inference in moment condition models by Guido Imbens( Book )
4 editions published in 1995 in English and held by 43 WorldCat member libraries worldwide
One-step efficient GMM estimation has been developed in the recent papers of Back and Brown (1990), Imbens (1993) and Qin and Lawless (1994). These papers emphasized methods that correspond to using Owen's (1988) method of empirical likelihood to reweight the data so that the reweighted sample obeys all the moment restrictions at the parameter estimates. In this paper we consider an alternative KLIC-motivated weighting and show how it and similar discrete reweightings define a class of unconstrained optimization problems which includes GMM as a special case. Such KLIC-motivated reweightings introduce M auxiliary 'tilting' parameters, where M is the number of moments; parameter and overidentification hypotheses can be recast in terms of these tilting parameters. Such tests, when appropriately conditioned on the estimates of the original parameters, are often startlingly more effective than their conventional counterparts. This is apparently due to the local ancillarity of the original parameters for the tilting parameters
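A minimal sketch of the reweighting step, for given parameter estimates: solve the convex dual to recover the tilting parameters λ and the implied weights. The toy moment conditions below are illustrative; the paper's tests build on the estimated λ rather than on this bare computation.

```python
import numpy as np
from scipy.optimize import minimize

def tilting_params(g):
    """Given an n x M matrix of moment contributions g_i(theta) evaluated
    at a candidate parameter value, find the KLIC / exponential-tilting
    parameters lambda and the implied weights w_i prop. to exp(g_i.lambda)
    that make the reweighted sample satisfy the moment conditions."""
    def dual(lam):
        return np.mean(np.exp(g @ lam))     # convex; minimized where E_w[g] = 0
    lam = minimize(dual, np.zeros(g.shape[1]), method="BFGS").x
    w = np.exp(g @ lam)
    return lam, w / w.sum()

# Overidentified toy example: two moment conditions for a mean
rng = np.random.default_rng(3)
x = rng.normal(1.0, 1.0, 400)
theta = x.mean()
g = np.column_stack([x - theta, (x - theta) ** 2 - 1.0])  # E[g] = 0 if model true
lam, w = tilting_params(g)
print(lam, w @ g[:, 0], w @ g[:, 1])   # weighted moments near 0; lam near 0 here
```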
Non-parametric demand analysis with an application to the demand for fish by Joshua David Angrist( Book )
2 editions published in 1995 in English and held by 43 WorldCat member libraries worldwide
Instrumental variables (IV) estimation of a demand equation using time series data is shown to produce a weighted average derivative of heterogeneous potential demand functions. This result adapts recent work on the causal interpretation of two-stage least squares estimates to the simultaneous equations context and generalizes earlier research on average derivative estimation to models with endogenous regressors. The paper also shows how to compute the weights underlying IV estimates of average derivatives in a simultaneous equations model. These ideas are illustrated using data from the Fulton Fish Market in New York City to estimate an average elasticity of wholesale demand for fresh fish. The weighting function underlying IV estimates of the demand equation is graphed and interpreted. The empirical example illustrates the essentially local and context-specific nature of instrumental variables estimates of structural parameters in simultaneous equations models
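The underlying IV computation is ordinary 2SLS; the paper's contribution is its interpretation as a weighted average derivative. A hand-rolled sketch on simulated data in the spirit of the fish-market application (the "stormy weather" instrument and all parameter values are invented for illustration):

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: project X on the instruments Z, then
    regress y on the fitted values."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]

# Simulated log-log demand with a weather cost-shifter as instrument
rng = np.random.default_rng(4)
n = 500
stormy = rng.binomial(1, 0.3, n).astype(float)   # supply shifter, excludable
supply_shock = 0.5 * stormy + 0.2 * rng.standard_normal(n)
demand_shock = 0.2 * rng.standard_normal(n)
log_p = supply_shock - demand_shock              # equilibrium price (endogenous)
log_q = -1.0 * log_p + demand_shock              # true average elasticity = -1
Z = np.column_stack([np.ones(n), stormy])
X = np.column_stack([np.ones(n), log_p])
print(tsls(log_q, X, Z)[1])                      # close to -1
```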
A martingale representation for matching estimators by Alberto Abadie( )
9 editions published in 2009 in English and held by 42 WorldCat member libraries worldwide
Matching estimators are widely used in statistical data analysis. However, the distribution of matching estimators has been derived only for particular cases (Abadie and Imbens, 2006). This article establishes a martingale representation for matching estimators. This representation allows the use of martingale limit theorems to derive the asymptotic distribution of matching estimators. As an illustration of the applicability of the theory, we derive the asymptotic distribution of a matching estimator when matching is carried out without replacement, a result previously unavailable in the literature
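Matching without replacement, the case highlighted above, can be sketched with a greedy nearest-neighbor rule on a scalar covariate (an illustrative implementation, not the estimator studied in full generality):

```python
import numpy as np

def match_without_replacement(x_treat, x_ctrl):
    """Greedy nearest-neighbor matching on a scalar covariate, without
    replacement: each control is used at most once (requires at least as
    many controls as treated units)."""
    available = list(range(len(x_ctrl)))
    pairs = []
    for i, xt in enumerate(x_treat):
        j = min(available, key=lambda k: abs(x_ctrl[k] - xt))
        pairs.append((i, j))
        available.remove(j)              # control j is now used up
    return pairs

def att(y_treat, y_ctrl, pairs):
    """Average treatment effect on the treated from the matched pairs."""
    return float(np.mean([y_treat[i] - y_ctrl[j] for i, j in pairs]))
```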
Identification and estimation of triangular simultaneous equations models without additivity by Guido Imbens( )
4 editions published in 2002 in English and held by 42 WorldCat member libraries worldwide
Abstract: This paper investigates identification and inference in a nonparametric structural model with instrumental variables and non-additive errors. We allow for non-additive errors because the unobserved heterogeneity in marginal returns that often motivates concerns about endogeneity of choices requires objective functions that are non-additive in observed and unobserved components. We formulate several independence and monotonicity conditions that are sufficient for identification of a number of objects of interest, including the average conditional response, the average structural function, as well as the full structural response function. For inference we propose a two-step series estimator. The first step consists of estimating the conditional distribution of the endogenous regressor given the instrument. In the second step the estimated conditional distribution function is used as a regressor in a nonlinear control function approach. We establish rates of convergence, asymptotic normality, and give a consistent asymptotic variance estimator
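Under strong simplifying assumptions (a linear first stage, the control variable approximated by residual ranks, and a low-order polynomial series in the second step), the two-step procedure can be sketched as follows; the paper's estimator is nonparametric in both steps.

```python
import numpy as np

def control_function_fit(y, x, z, degree=2):
    """Two-step control function sketch for a triangular nonseparable
    model. Step 1 estimates the control variable V = F_{X|Z}(X|Z) crudely,
    as the empirical rank of a linear first-stage residual (valid if the
    first-stage error is independent of Z); step 2 runs a polynomial
    series regression of Y on (X, V)."""
    gamma = np.polyfit(z, x, 1)                          # linear first stage
    resid = x - np.polyval(gamma, z)
    v = resid.argsort().argsort() / (len(resid) - 1.0)   # ranks in [0, 1]
    terms = [x**a * v**b for a in range(degree + 1) for b in range(degree + 1)]
    B = np.column_stack(terms)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return coef, v
```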
Instrumental variables estimation of quantile treatment effects by Alberto Abadie( )
3 editions published between 1997 and 1998 in English and held by 42 WorldCat member libraries worldwide
Abstract: This paper introduces an instrumental variables estimator for the effect of a binary treatment on the quantiles of potential outcomes. The quantile treatment effects (QTE) estimator accommodates exogenous covariates and reduces to quantile regression as a special case when treatment status is exogenous. Asymptotic distribution theory and computational methods are derived. QTE minimizes a piecewise linear objective function for which a local minimum can be obtained using a modified Barrodale-Roberts algorithm. The QTE estimator is illustrated by estimating the effect of childbearing on the distribution of family income
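When treatment is exogenous the estimator reduces to ordinary quantile regression, which is easy to sketch with statsmodels (the full QTE estimator reweights this objective using the instrument; the simulated design below is illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Exogenous-treatment special case: quantile regression of the outcome
# on treatment and a covariate (variable names are illustrative)
rng = np.random.default_rng(5)
n = 1000
d = rng.binomial(1, 0.5, n)
x = rng.standard_normal(n)
y = 1.0 * d + 0.5 * x + (1.0 + 0.5 * d) * rng.standard_normal(n)

df = pd.DataFrame({"y": y, "d": d, "x": x})
for q in (0.25, 0.5, 0.75):
    fit = smf.quantreg("y ~ d + x", df).fit(q=q)
    print(q, fit.params["d"])    # treatment effect varies across quantiles
```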
Jackknife instrumental variables estimation by Joshua David Angrist( Book )
2 editions published in 1995 in English and held by 41 WorldCat member libraries worldwide
Two-stage-least-squares (2SLS) estimates are biased towards OLS estimates. This bias grows with the degree of over-identification and can generate highly misleading results. In this paper we propose two simple alternatives to 2SLS and limited-information-maximum-likelihood (LIML) estimators for models with more instruments than endogenous regressors. These estimators can be interpreted as instrumental variables procedures using an instrument that is independent of disturbances even in finite samples. Independence is achieved by using a 'leave-one-out' jackknife-type fitted value in place of the usual first-stage equation. The new estimators are first-order equivalent to 2SLS but with finite-sample properties superior to those of 2SLS and similar to LIML when there are many instruments. Moreover, the jackknife estimators appear to be less sensitive than LIML to deviations from the linear reduced form used in classical simultaneous equations models
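The leave-one-out construction does not require n separate first-stage regressions: the standard hat-matrix identity delivers all leave-one-out fitted values at once. A compact numpy sketch of this (the JIVE1 variant, written as an illustration):

```python
import numpy as np

def jive(y, X, Z):
    """Jackknife IV: replace each observation's first-stage fitted value
    with a leave-one-out fitted value, so the constructed instrument is
    independent of that observation's disturbance. Uses the hat-matrix
    identity instead of n separate regressions."""
    pi, *_ = np.linalg.lstsq(Z, X, rcond=None)
    fitted = Z @ pi
    # Leverages h_i = z_i' (Z'Z)^{-1} z_i
    h = np.einsum("ij,ij->i", Z @ np.linalg.inv(Z.T @ Z), Z)
    # Leave-one-out fitted values: (fitted_i - h_i * x_i) / (1 - h_i)
    X_loo = (fitted - h[:, None] * X) / (1.0 - h[:, None])
    return np.linalg.solve(X_loo.T @ X, X_loo.T @ y)
```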
Bias from classical and other forms of measurement error by Dean Robert Hyslop( Book )
3 editions published in 2000 in English and held by 41 WorldCat member libraries worldwide
Abstract: We consider the implications of a specific alternative to the classical measurement error model, in which the data are optimal predictions based on some information set. One motivation for this model is that if respondents are aware of their ignorance they may interpret the question 'what is the value of this variable?' as 'what is your best estimate of this variable?', and provide optimal predictions of the variable of interest given their information set. In contrast to the classical measurement error model, this model implies that the measurement error is uncorrelated with the reported value and, by necessity, correlated with the true value of the variable. In the context of the linear regression framework, we show that measurement error can lead to over- as well as under-estimation of the coefficients of interest. Critical for determining the bias are the model for the individual reporting the mismeasured variables, the individual's information set, and the correlation structure of the errors. We also investigate the implications of instrumental variables methods in the presence of measurement error of the optimal prediction error form and show that such methods may in fact introduce bias. Finally, we present some calculations indicating the range of estimates of the returns to education consistent with amounts of measurement error found in previous studies. This range can be quite wide, especially if one allows for correlation between the measurement errors
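A small simulation makes the contrast concrete: classical error attenuates the OLS slope, while optimal-prediction error, being uncorrelated with the report, need not (all parameter values are invented for illustration):

```python
import numpy as np

# Classical vs. optimal-prediction measurement error in a regression of y
# on a mismeasured regressor (illustrative simulation)
rng = np.random.default_rng(6)
n = 100_000
x_true = rng.standard_normal(n)
y = 1.0 * x_true + 0.5 * rng.standard_normal(n)

# Classical: report = truth + independent noise -> attenuation toward 0
x_classical = x_true + rng.standard_normal(n)

# Optimal prediction: report = E[x | signal]; the error is uncorrelated
# with the report (but correlated with the truth)
signal = x_true + rng.standard_normal(n)
x_predict = 0.5 * signal          # E[x_true | signal] under these variances

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print(ols_slope(x_classical, y))  # about 0.5 (attenuated)
print(ols_slope(x_predict, y))    # about 1.0 (no attenuation here)
```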
Complementarity and aggregate implications of assortative matching : a nonparametric analysis by Bryan S Graham( )
5 editions published in 2009 in English and held by 40 WorldCat member libraries worldwide
This paper presents methods for evaluating the effects of reallocating an indivisible input across production units, taking into account resource constraints by keeping the marginal distribution of the input fixed. When the production technology is nonseparable, such reallocations, although leaving the marginal distribution of the reallocated input unchanged by construction, may nonetheless alter average output. Examples include reallocations of teachers across classrooms composed of students of varying mean ability. We focus on the effects of reallocating one input, while holding the assignment of another, potentially complementary, input fixed. We introduce a class of such reallocations -- correlated matching rules -- that includes the status quo allocation, a random allocation, and both the perfect positive and negative assortative matching allocations as special cases. We also characterize the effects of local (relative to the status quo) reallocations. For estimation we use a two-step approach. In the first step we nonparametrically estimate the production function. In the second step we average the estimated production function over the distribution of inputs induced by the new assignment rule. These methods build upon the partial mean literature, but require extensions involving boundary issues. We derive the large sample properties of our proposed estimators and assess their small sample properties via a limited set of Monte Carlo experiments
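The two-step logic can be sketched on simulated data: fit the production function (here a bilinear regression stands in for the nonparametric first step), then average it under a counterfactual assignment such as perfect positive assortative matching, which by construction preserves each input's marginal distribution:

```python
import numpy as np

# Sketch of the two-step reallocation calculation on simulated data
rng = np.random.default_rng(7)
n = 2000
teacher = rng.uniform(0, 1, n)                   # reallocatable input
ability = rng.uniform(0, 1, n)                   # fixed input
output = teacher * ability + 0.05 * rng.standard_normal(n)  # complementary tech

# Step 1: fit f(teacher, ability); a bilinear regression stands in for the
# nonparametric estimator used in the paper
B = np.column_stack([np.ones(n), teacher, ability, teacher * ability])
coef, *_ = np.linalg.lstsq(B, output, rcond=None)
f = lambda t, a: coef @ np.array([1.0, t, a, t * a])

# Step 2: average f under the status quo vs. positive assortative matching;
# sorting both inputs leaves each marginal distribution unchanged
status_quo = np.mean([f(t, a) for t, a in zip(teacher, ability)])
pam = np.mean([f(t, a) for t, a in zip(np.sort(teacher), np.sort(ability))])
print(status_quo, pam)   # PAM raises average output under complementarity
```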
Efficient estimation of average treatment effects using the estimated propensity score by Keisuke Hirano( )
3 editions published in 2000 in English and held by 40 WorldCat member libraries worldwide
Abstract: We are interested in estimating the average effect of a binary treatment on a scalar outcome. If assignment to the treatment is independent of the potential outcomes given pretreatment variables, biases associated with simple treatment-control average comparisons can be removed by adjusting for differences in the pre-treatment variables. Rosenbaum and Rubin (1983, 1984) show that adjusting solely for differences between treated and control units in a scalar function of the pre-treatment variables, the propensity score, also removes the entire bias associated with differences in pre-treatment variables. Thus it is possible to obtain unbiased estimates of the treatment effect without conditioning on a possibly high-dimensional vector of pre-treatment variables. Although adjusting for the propensity score removes all the bias, this can come at the expense of efficiency. We show that weighting with the inverse of a nonparametric estimate of the propensity score, rather than the true propensity score, leads to efficient estimates of the various average treatment effects. This result holds whether the pre-treatment variables have discrete or continuous distributions. We provide intuition for this result in a number of ways. First, we show that with discrete covariates, exact adjustment for the estimated propensity score is identical to adjustment for the pre-treatment variables. Second, we show that weighting by the inverse of the estimated propensity score can be interpreted as an empirical likelihood estimator that efficiently incorporates the information about the propensity score. Finally, we make a connection to other results on efficient estimation through weighting in the context of variable probability sampling
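The weighting estimator itself is short to write down. In this sketch the propensity score is estimated by a logit rather than nonparametrically (the paper's efficiency result concerns the nonparametric case), using the normalized form of the weights:

```python
import numpy as np
import statsmodels.api as sm

def ipw_ate(y, d, X):
    """Average treatment effect by weighting with the inverse of an
    estimated propensity score (logit first step here, as a stand-in for
    the nonparametric estimate), in normalized (Hajek) form."""
    e = sm.Logit(d, sm.add_constant(X)).fit(disp=0).predict()
    w1, w0 = d / e, (1 - d) / (1 - e)
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

# Simulated example with confounding: x raises both treatment and outcome
rng = np.random.default_rng(8)
n = 5000
x = rng.standard_normal(n)
d = rng.binomial(1, 1 / (1 + np.exp(-x)))
y = 1.0 * d + 2.0 * x + rng.standard_normal(n)
print(ipw_ate(y, d, x.reshape(-1, 1)))   # close to 1.0 after reweighting
```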
Matching on the estimated propensity score by Alberto Abadie( )
5 editions published in 2009 in English and held by 40 WorldCat member libraries worldwide
Propensity score matching estimators (Rosenbaum and Rubin, 1983) are widely used in evaluation research to estimate average treatment effects. In this article, we derive the large sample distribution of propensity score matching estimators. Our derivations take into account that the propensity score is itself estimated in a first step, prior to matching. We prove that first step estimation of the propensity score affects the large sample distribution of propensity score matching estimators. Moreover, we derive an adjustment to the large sample variance of propensity score matching estimators that corrects for first step estimation of the propensity score. In spite of the great popularity of propensity score matching estimators, these results were previously unavailable in the literature
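A bare-bones version of the estimator (logit first step, one nearest neighbor with replacement) looks as follows; note that naive standard errors from such a procedure ignore exactly the first-step estimation the article shows matters:

```python
import numpy as np
import statsmodels.api as sm

def pscore_match_att(y, d, X, m=1):
    """ATT by nearest-neighbor matching (with replacement) on a propensity
    score estimated in a first step by logit. Illustrative only: valid
    standard errors require the first-step adjustment derived in the
    article."""
    e = sm.Logit(d, sm.add_constant(X)).fit(disp=0).predict()
    e_t, e_c = e[d == 1], e[d == 0]
    y_t, y_c = y[d == 1], y[d == 0]
    effects = []
    for et, yt in zip(e_t, y_t):
        nn = np.argsort(np.abs(e_c - et))[:m]     # m closest controls
        effects.append(yt - y_c[nn].mean())
    return float(np.mean(effects))
```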
 
Audience Level
Audience level: 0.92 (from 0.80 for Optimal ba ... to 0.96 for Instrument ...)
Alternative Names
Imbens, G. W.
Imbens, G. W. 1963-
Imbens, G. W. (Guido Wilhelmus)
Imbens, Guido M. 1963-
Imbens, Guido W. 1963-
Imbens, Guido Wilhelmus 1963-
Languages
English (104)