
Chernozhukov, Victor

Overview
Works: 127 works in 210 publications in 1 language and 398 library holdings
Roles: Author, Editor
Classifications: HB1, 330.072
Most widely held works by Victor Chernozhukov
Quantile regression under misspecification with an application to the U.S. wage structure by Joshua David Angrist ( Book )
11 editions published in 2004 in English and held by 44 libraries worldwide
Quantile regression (QR) fits a linear model for conditional quantiles, just as ordinary least squares (OLS) fits a linear model for conditional means. An attractive feature of OLS is that it gives the minimum mean square error linear approximation to the conditional expectation function even when the linear model is misspecified. Empirical research using quantile regression with discrete covariates suggests that QR may have a similar property, but the exact nature of the linear approximation has remained elusive. In this paper, we show that QR can be interpreted as minimizing a weighted mean-squared error loss function for specification error. The weighting function is an average density of the dependent variable near the true conditional quantile. The weighted least squares interpretation of QR is used to derive an omitted variables bias formula and a partial quantile correlation concept, similar to the relationship between partial correlation and OLS. We also derive general asymptotic results for QR processes allowing for misspecification of the conditional quantile function, extending earlier results from a single quantile to the entire process. The approximation properties of QR are illustrated through an analysis of the wage structure and residual inequality in US Census data for 1980, 1990, and 2000. The results suggest continued residual inequality growth in the 1990s, primarily in the upper half of the wage distribution and for college graduates
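As a rough illustration of the OLS/QR analogy described above, the sketch below fits both estimators to simulated heteroscedastic wage-style data; the data-generating process and variable names are hypothetical, not the paper's.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(0, 4, n)                                       # hypothetical regressor
y = 1.0 + 0.5 * x + (0.3 + 0.2 * x) * rng.standard_normal(n)   # heteroscedastic "wages"
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                 # best linear approximation to the conditional mean
for tau in (0.1, 0.5, 0.9):
    qr = QuantReg(y, X).fit(q=tau)       # weighted best linear approx. to the tau-quantile
    print(f"tau={tau}: slope={qr.params[1]:.2f}")
print(f"OLS:     slope={ols.params[1]:.2f}")
# Because the noise scale rises with x, the QR slopes fan out across tau -- the kind of
# pattern the paper reads as residual inequality growing in the upper tail.
```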
Learning and disagreement in an uncertain world by Daron Acemoglu ( Book )
10 editions published in 2006 in English and held by 26 libraries worldwide
Most economic analyses presume that there are limited differences in the prior beliefs of individuals, an assumption most often justified by the argument that sufficient common experiences and observations will eliminate disagreements. We investigate this claim using a simple model of Bayesian learning. Two individuals with different priors observe the same infinite sequence of signals about some underlying parameter. Existing results in the literature establish that when individuals are certain about the interpretation of signals, under very mild conditions there will be asymptotic agreement -- their assessments will eventually agree. In contrast, we look at an environment in which individuals are uncertain about the interpretation of signals, meaning that they have non-degenerate probability distributions over the conditional distribution of signals given the underlying parameter. When priors on the parameter and the conditional distribution of signals have full support, we prove the following results: (1) Individuals will never agree, even after observing the same infinite sequence of signals. (2) Before observing the signals, they believe with probability 1 that their posteriors about the underlying parameter will fail to converge
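The non-convergence result can be illustrated numerically. In the sketch below, two Bayesian agents observe the same signal sequence but hold different Beta priors over the signal distribution given each state (a toy parameterization chosen here, not the paper's); their posteriors settle at different limits.

```python
import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(1)
signals = rng.random(10_000) < 0.7            # one shared signal sequence, P(s=1) = 0.7

def posterior_theta1(k, n, prior_p, a1, b1, a0, b0):
    # Marginal log-likelihood of k ones in n signals under each state theta,
    # integrating out the unknown signal probability q ~ Beta(a, b) (full support).
    ll1 = betaln(a1 + k, b1 + n - k) - betaln(a1, b1)
    ll0 = betaln(a0 + k, b0 + n - k) - betaln(a0, b0)
    log_odds = np.log(prior_p / (1 - prior_p)) + ll1 - ll0
    return 1 / (1 + np.exp(-log_odds))

k = np.cumsum(signals)
n = np.arange(1, signals.size + 1)
# Same data, same prior on theta, different priors on how signals map to states:
pA = posterior_theta1(k, n, 0.5, a1=8, b1=2, a0=2, b0=8)
pB = posterior_theta1(k, n, 0.5, a1=4, b1=2, a0=2, b0=4)
print(pA[-1], pB[-1])   # both settle down, but at different values: permanent disagreement
```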
Quantile regression with censoring and endogeneity by Victor Chernozhukov ( Book )
9 editions published in 2011 in English and held by 12 libraries worldwide
In this paper, we develop a new censored quantile instrumental variable (CQIV) estimator and describe its properties and computation. The CQIV estimator combines Powell (1986) censored quantile regression (CQR) to deal semiparametrically with censoring, with a control variable approach to incorporate endogenous regressors. The CQIV estimator is obtained in two stages that are nonadditive in the unobservables. The first stage estimates a nonadditive model with infinite dimensional parameters for the control variable, such as a quantile or distribution regression model. The second stage estimates a nonadditive censored quantile regression model for the response variable of interest, including the estimated control variable to deal with endogeneity. For computation, we extend the algorithm for CQR developed by Chernozhukov and Hong (2002) to incorporate the estimation of the control variable. We give generic regularity conditions for asymptotic normality of the CQIV estimator and for the validity of resampling methods to approximate its asymptotic distribution. We verify these conditions for quantile and distribution regression estimation of the control variable. We illustrate the computation and applicability of the CQIV estimator with numerical examples and an empirical application on estimation of Engel curves for alcohol
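A deliberately simplified control-function sketch of the two-stage idea is below. It omits the censoring correction that defines CQIV proper and uses a quantile-regression grid as a crude first stage; all names and the simulated design are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(2)
n = 2000
z = rng.standard_normal(n)                    # instrument
v = rng.random(n)                             # structural rank: the true control variable
d = 0.8 * z + 2.0 * (v - 0.5)                 # endogenous regressor, monotone in v
y = 1.0 + d + 1.5 * (v - 0.5) + 0.2 * rng.standard_normal(n)   # structural slope = 1.0

# Stage 1: estimate the conditional rank of d given z on a grid of quantiles
# (a crude stand-in for the paper's quantile/distribution regression first stage).
Z = sm.add_constant(z)
taus = np.linspace(0.05, 0.95, 19)
fitted = np.column_stack([QuantReg(d, Z).fit(q=t).predict(Z) for t in taus])
v_hat = (fitted <= d[:, None]).mean(axis=1)   # estimated control variable

# Stage 2: quantile regression of y on d plus the estimated control variable.
naive = QuantReg(y, sm.add_constant(d)).fit(q=0.5)
cf = QuantReg(y, sm.add_constant(np.column_stack([d, v_hat]))).fit(q=0.5)
print("naive QR slope:", naive.params[1], "  control-function slope:", cf.params[1])
```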
Inference on counterfactual distributions by Victor Chernozhukov ( Computer File )
6 editions published between 2008 and 2013 in English and held by 6 libraries worldwide
In this paper we develop procedures for performing inference in regression models about how potential policy interventions affect the entire marginal distribution of an outcome of interest. These policy interventions consist of either changes in the distribution of covariates related to the outcome holding the conditional distribution of the outcome given covariates fixed, or changes in the conditional distribution of the outcome given covariates holding the marginal distribution of the covariates fixed. Under either of these assumptions, we obtain uniformly consistent estimates and functional central limit theorems for the counterfactual and status quo marginal distributions of the outcome as well as other function-valued effects of the policy, including, for example, the effects of the policy on the marginal distribution function, quantile function, and other related functionals. We construct simultaneous confidence sets for these functions; these sets take into account the sampling variation in the estimation of the relationship between the outcome and covariates. Our procedures rely on, and our theory covers, all main regression approaches for modeling and estimating conditional distributions, focusing especially on classical, quantile, duration, and distribution regressions. Our procedures are general and accommodate both simple unitary changes in the values of a given covariate as well as changes in the distribution of the covariates or the conditional distribution of the outcome given covariates of general form. We apply the procedures to examine the effects of labor market institutions on the U.S. wage distribution. Keywords: Policy effects, counterfactual distribution, quantile regression, duration regression, distribution regression. JEL Classifications: C14, C21, C41, J31, J71
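As a minimal illustration of the counterfactual construction, the sketch below estimates the conditional distribution by distribution regression (a logit at each threshold) and integrates it against either the status quo or a shifted covariate distribution; the design is a toy, not the paper's application.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 4000
x0 = rng.normal(12, 2, size=(n, 1))            # status quo covariate (say, schooling)
x1 = x0 + 1.0                                  # counterfactual: one extra year for all
y = 0.8 * x0[:, 0] + rng.standard_normal(n)    # outcomes observed under the status quo

ys = np.quantile(y, np.linspace(0.05, 0.95, 19))       # threshold grid
F0, F1 = [], []
for t in ys:
    m = LogisticRegression(C=1e6).fit(x0, (y <= t).astype(int))  # distribution regression
    F0.append(m.predict_proba(x0)[:, 1].mean())        # status quo marginal CDF
    F1.append(m.predict_proba(x1)[:, 1].mean())        # counterfactual: same F_{Y|X}, new X
F0, F1 = np.array(F0), np.array(F1)

# Quantile effect of the policy at the median: horizontal gap between the two CDFs.
print(np.interp(0.5, F1, ys) - np.interp(0.5, F0, ys))  # close to the true shift 0.8
```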
Program evaluation with high-dimensional data ( File )
4 editions published between 2013 and 2015 in English and held by 4 libraries worldwide
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment condition framework, where we work with possibly a continuum of moments. We provide results on honest inference for (function-valued) parameters within this general framework where modern machine learning methods are used to fit the nonparametric/high-dimensional components of the model. These include a number of supporting new results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes
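The role of orthogonal (doubly robust) moments can be sketched in the exogenous-treatment special case: the snippet below computes a cross-fitted AIPW estimate of the ATE with machine-learned first stages. It is a simplified illustration under an assumed design, not the paper's LATE/LQTE estimator.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
n, p = 2000, 20
X = rng.standard_normal((n, p))
e = 1 / (1 + np.exp(-X[:, 0]))                         # true propensity score
D = rng.random(n) < e
Y = 1.0 * D + X[:, 0] + rng.standard_normal(n)         # true ATE = 1.0

psi = np.zeros(n)
for train, test in KFold(5, shuffle=True, random_state=0).split(X):
    m1 = RandomForestRegressor(random_state=0).fit(X[train][D[train]], Y[train][D[train]])
    m0 = RandomForestRegressor(random_state=0).fit(X[train][~D[train]], Y[train][~D[train]])
    g = RandomForestClassifier(random_state=0).fit(X[train], D[train])
    eh = np.clip(g.predict_proba(X[test])[:, 1], 0.01, 0.99)
    mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
    # Orthogonal (doubly robust) score: first-stage errors enter only at second order.
    psi[test] = (mu1 - mu0 + D[test] * (Y[test] - mu1) / eh
                 - (~D[test]) * (Y[test] - mu0) / (1 - eh))

ate, se = psi.mean(), psi.std(ddof=1) / np.sqrt(n)
print(f"ATE = {ate:.2f} +/- {1.96 * se:.2f}")
```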
Improving estimates of monotone functions by rearrangement by Victor Chernozhukov ( Book )
2 editions published between 2007 and 2008 in English and held by 4 libraries worldwide
Suppose that a target function is monotonic, namely, weakly increasing, and an original estimate of the target function is available, which is not weakly increasing. Many common estimation methods used in statistics produce such estimates. We show that these estimates can always be improved with no harm using rearrangement techniques: The rearrangement methods, univariate and multivariate, transform the original estimate to a monotonic estimate, and the resulting estimate is closer to the true curve in common metrics than the original estimate. We illustrate the results with a computational example and an empirical example dealing with age-height growth charts. Keywords: Monotone function, improved approximation, multivariate rearrangement, univariate rearrangement, growth chart, quantile regression, mean regression, series, locally linear, kernel methods. AMS Classifications: Primary 62G08; secondary 46F10, 62F35, 62P10
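The univariate rearrangement itself is just sorting the estimate's values over a grid, as the toy sketch below shows; the target function and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 201)
target = np.sqrt(x)                                   # true monotone function
estimate = target + 0.1 * np.sin(12 * x) + 0.05 * rng.standard_normal(x.size)
rearranged = np.sort(estimate)                        # increasing rearrangement on the grid

for name, f in [("original", estimate), ("rearranged", rearranged)]:
    print(name, "L2 error:", np.sqrt(np.mean((f - target) ** 2)))
# The rearranged estimate is monotone by construction and weakly closer to the target.
```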
Conditional quantile processes based on series or many regressors by Alexandre Belloni ( Computer File )
4 editions published between 2011 and 2016 in English and held by 4 libraries worldwide
Quantile regression (QR) is a principal regression method for analyzing the impact of covariates on outcomes. The impact is described by the conditional quantile function and its functionals. In this paper we develop the nonparametric QR-series framework, covering many regressors as a special case, for performing inference on the entire conditional quantile function and its linear functionals. In this framework, we approximate the entire conditional quantile function by a linear combination of series terms with quantile-specific coefficients and estimate the function-valued coefficients from the data. We develop large sample theory for the QR-series coefficient process, namely we obtain uniform strong approximations to the QR-series coefficient process by conditionally pivotal and Gaussian processes. Based on these two strong approximations, or couplings, we develop four resampling methods (pivotal, gradient bootstrap, Gaussian, and weighted bootstrap) that can be used for inference on the entire QR-series coefficient function. We apply these results to obtain estimation and inference methods for linear functionals of the conditional quantile function, such as the conditional quantile function itself, its partial derivatives, average partial derivatives, and conditional average partial derivatives. Specifically, we obtain uniform rates of convergence and show how to use the four resampling methods mentioned above for inference on the functionals. All of the above results are for function-valued parameters, holding uniformly in both the quantile index and the covariate value, and covering the pointwise case as a by-product. We demonstrate the practical utility of these results with an empirical example, where we estimate the price elasticity function and test the Slutsky condition of the individual demand for gasoline, as indexed by the individual unobserved propensity for gasoline consumption
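A minimal version of the QR-series construction: estimate quantile-specific coefficients on a polynomial basis over a grid of quantiles, then assemble the conditional quantile function. The basis, grid, and design below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(6)
n = 3000
x = rng.uniform(-1, 1, n)
y = np.sin(2 * x) + (0.5 + 0.4 * x) * rng.standard_normal(n)

def basis(x, k=4):                            # polynomial series terms (includes constant)
    return np.column_stack([x ** j for j in range(k + 1)])

Z = basis(x)
taus = np.linspace(0.1, 0.9, 9)
beta = np.column_stack([QuantReg(y, Z).fit(q=t).params for t in taus])  # coefficient process

xg = np.linspace(-1, 1, 5)
Qhat = basis(xg) @ beta                       # estimated conditional quantile function
Qtrue = np.sin(2 * xg)[:, None] + (0.5 + 0.4 * xg)[:, None] * norm.ppf(taus)[None, :]
print(np.abs(Qhat - Qtrue).max())             # uniform error over the (x, tau) grid points
```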
Conditional extremes and near-extremes : concepts, asymptotic theory, and economic applications by Victor Chernozhukov ( Archival Material )
3 editions published between 2000 and 2001 in English and held by 4 libraries worldwide
This paper develops a theory of high and low (extremal) quantile regression: the linear models, estimation, and inference. In particular, the models coherently combine the convenient, flexible linearity with the extreme-value-theoretic restrictions on tails and the general heteroscedasticity forms. Within these models, the limit laws for extremal quantile regression statistics are obtained under the rank conditions (experiments) constructed to reflect the extremal or rare nature of tail events. An inference framework is discussed. The results apply to cross-section (and possibly dependent) data. The applications, ranging from the analysis of babies' very low birth weights to (S, s) models, tail analysis in heteroscedastic regression models, outlier-robust inference in auction models, and decision-making under extreme uncertainty, motivate and illustrate this theory. Keywords: Quantile regression, extreme value theory, tail analysis, (S, s) models, auctions, price search, extreme risk. JEL Classifications: C13, C14, C21, C41, C51, C53, D21, D44, D81
Inference on treatment effects after selection amongst high-dimensional controls by Alexandre Belloni ( Computer File )
4 editions published between 2012 and 2013 in English and held by 4 libraries worldwide
We propose robust methods for inference on the effect of a treatment variable on a scalar outcome in the presence of very many controls. Our setting is a partially linear model with possibly non-Gaussian and heteroscedastic disturbances where the number of controls may be much larger than the sample size. To make informative inference feasible, we require the model to be approximately sparse; that is, we require that the effect of confounding factors can be controlled for up to a small approximation error by conditioning on a relatively small number of controls whose identities are unknown. The latter condition makes it possible to estimate the treatment effect by selecting approximately the right set of controls. We develop a novel estimation and uniformly valid inference method for the treatment effect in this setting, called the 'post-double-selection' method. Our results apply to Lasso-type methods used for covariate selection as well as to any other model selection method that is able to find a sparse model with good approximation properties. The main attractive feature of our method is that it allows for imperfect selection of the controls and provides confidence intervals that are valid uniformly across a large class of models. In contrast, standard post-model selection estimators fail to provide uniform inference even in simple cases with a small, fixed number of controls. Thus our method resolves the problem of uniform inference after model selection for a large, interesting class of models. We also present a simple generalisation of our method to a fully heterogeneous model with a binary treatment variable. We illustrate the use of the developed methods with numerical simulations and an application that considers the effect of abortion on crime rates
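The post-double-selection recipe is short enough to sketch directly: select controls with a Lasso for the outcome, again with a Lasso for the treatment, then run OLS on the union. The snippet below uses cross-validated Lasso as a stand-in for the paper's plug-in penalty choice; the design is simulated.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(7)
n, p = 500, 200                                # many more controls than naive OLS handles well
X = rng.standard_normal((n, p))
d = X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n)         # treatment, confounded
y = 0.5 * d + X[:, 0] - X[:, 2] + rng.standard_normal(n)     # true effect = 0.5

sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)        # controls that predict y
sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)        # controls that predict d
union = np.union1d(sel_y, sel_d)                             # the double-selection step

W = sm.add_constant(np.column_stack([d, X[:, union]]))
fit = sm.OLS(y, W).fit(cov_type="HC3")
print(f"effect = {fit.params[1]:.2f} (se = {fit.bse[1]:.2f})")
```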
Posterior inference in curved exponential families under increasing dimensions by Alexandre Belloni ( Book )
3 editions published between 2007 and 2013 in English and held by 4 libraries worldwide
In this work we study the large sample properties of the posterior-based inference in the curved exponential family under increasing dimension. The curved structure arises from the imposition of various restrictions, such as moment restrictions, on the model, and plays a fundamental role in various branches of data analysis. We establish conditions under which the posterior distribution is approximately normal, which in turn implies various good properties of estimation and inference procedures based on the posterior. We also discuss the multinomial model with moment restrictions, which arises in a variety of econometric applications. In our analysis, both the parameter dimension and the number of moments are increasing with the sample size. Keywords: Bayesian inference, frequentist properties. JEL Classifications: C13, C51, C53, D11, D21, D44
Inference on parameter sets in econometric models by Victor Chernozhukov ( Book )
2 editions published in 2006 in English and held by 4 libraries worldwide
This paper provides confidence regions for minima of an econometric criterion function Q(θ). The minima form a set of parameters, Θ_I, called the identified set. In economic applications, Θ_I represents a class of economic models that are consistent with the data. Our inference procedures are criterion-function based, and so our confidence regions, which cover Θ_I with a prespecified probability, are appropriate level sets of Q_n(θ), the sample analog of Q(θ). When Θ_I is a singleton, our confidence sets reduce to the conventional confidence regions based on inverting the likelihood or other criterion functions. We show that our procedure is valid under general yet simple conditions, and we provide a feasible resampling procedure for implementing the approach in practice. We then show that these general conditions hold in a wide class of parametric econometric models. In order to verify the conditions, we develop methods of analyzing the asymptotic behavior of econometric criterion functions under set identification and also characterize the rates of convergence of the confidence regions to the identified set. We apply our methods to regressions with interval data and set-identified method of moments problems. We illustrate our methods in an empirical Monte Carlo study based on Current Population Survey data. Keywords: Set estimator, level sets, interval regression, subsampling bootstrap. JEL Classifications: C13, C14, C21, C41, C51, C53
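A toy version of the level-set construction for an interval-identified mean (the simplest case of interval data) is sketched below; the bootstrap cutoff is a crude stand-in for the paper's resampling procedure.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1000
y = rng.standard_normal(n)
y_lo, y_hi = y - 0.5, y + 0.5                 # only the bracket [y_lo, y_hi] is observed

def Qn(mu, lo, hi):                           # criterion: zero on [mean(lo), mean(hi)]
    return np.maximum(lo.mean() - mu, 0) ** 2 + np.maximum(mu - hi.mean(), 0) ** 2

edge = np.array([y_lo.mean(), y_hi.mean()])   # boundary of the estimated identified set
stats = []
for _ in range(500):                          # bootstrap the scaled criterion at the boundary
    idx = rng.integers(0, n, n)
    stats.append(n * max(Qn(mu, y_lo[idx], y_hi[idx]) for mu in edge))
c = np.quantile(stats, 0.95)

grid = np.linspace(-1.5, 1.5, 601)
region = grid[n * Qn(grid, y_lo, y_hi) <= c]  # level-set confidence region
print(region.min(), region.max())             # slightly wider than the set [-0.5, 0.5]
```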
Inference for extremal conditional quantile models, with an application to market and birthweight risks by Victor Chernozhukov ( Computer File )
3 editions published in 2011 in English and held by 3 libraries worldwide
Quantile regression is an increasingly important empirical tool in economics and other sciences for analyzing the impact of a set of regressors on the conditional distribution of an outcome. Extremal quantile regression, or quantile regression applied to the tails, is of interest in many economic and financial applications, such as conditional value-at-risk, production efficiency, and adjustment bands in (S,s) models. In this paper we provide feasible inference tools for extremal conditional quantile models that rely upon extreme value approximations to the distribution of self-normalized quantile regression statistics. The methods are simple to implement and can be of independent interest even in the non-regression case. We illustrate the results with two empirical examples analyzing extreme fluctuations of a stock return and extremely low percentiles of live infants' birthweights in the range between 250 and 1500 grams. Keywords: Quantile regression, feasible inference, extreme value theory
Gaussian approximation of suprema of empirical processes by Victor Chernozhukov ( File )
3 editions published between 2012 and 2016 in English and held by 3 libraries worldwide
We develop a new direct approach to approximating suprema of general empirical processes by a sequence of suprema of Gaussian processes, without taking the route of approximating empirical processes themselves in the sup-norm. We prove an abstract approximation theorem that is applicable to a wide variety of problems, primarily in statistics. In particular, the bound in the main approximation theorem is non-asymptotic and the theorem does not require uniform boundedness of the class of functions. The proof of the approximation theorem builds on a new coupling inequality for maxima of sums of random vectors, the proof of which depends on an effective use of Stein's method for normal approximation, and some new empirical process techniques. We study applications of this approximation theorem to local empirical processes and series estimation in nonparametric regression where the classes of functions change with the sample size and are not Donsker-type. Importantly, our new technique is able to prove the Gaussian approximation for supremum-type statistics under considerably weaker regularity conditions, especially concerning the bandwidth and the number of series functions, in those examples
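In practice the approximation licenses multiplier-bootstrap critical values for supremum statistics over classes that change with n. The sketch below computes such a critical value for a kernel-indexed class; the bandwidth and grid are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)
n, h = 1000, 0.1
X = rng.random(n)
grid = np.linspace(0.1, 0.9, 81)
# A class of functions that changes with n: Gaussian kernels centered on the grid.
F = np.exp(-0.5 * ((X[:, None] - grid[None, :]) / h) ** 2)
Fc = F - F.mean(axis=0, keepdims=True)                 # centered at sample means

# Multiplier process: sup_t | n^{-1/2} sum_i e_i f_t(X_i) | with e_i iid N(0, 1).
B = 2000
e = rng.standard_normal((B, n))
sup_stats = np.abs(e @ Fc / np.sqrt(n)).max(axis=1)
print("95% critical value for the supremum:", np.quantile(sup_stats, 0.95))
```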
Identification and Efficient Semiparametric Estimation of a Dynamic Discrete Game by Patrick L Bajari ( Book )
5 editions published in 2015 in English and held by 3 libraries worldwide
In this paper, we study the identification and estimation of a dynamic discrete game allowing for discrete or continuous state variables. We first provide a general nonparametric identification result under the imposition of an exclusion restriction on agent payoffs. Next we analyze large sample statistical properties of nonparametric and semiparametric estimators for the econometric dynamic game model. We also show how to achieve semiparametric efficiency of dynamic discrete choice models using a sieve based conditional moment framework. Numerical simulations are used to demonstrate the finite sample properties of the dynamic game estimators. An empirical application to dynamic demand in the potato chip market shows that this technique can provide a useful tool for distinguishing long-term demand from short-term demand by heterogeneous consumers
Testing many moment inequalities by Victor Chernozhukov ( File )
3 editions published between 2013 and 2016 in English and held by 3 libraries worldwide
This paper considers the problem of testing many moment inequalities where the number of moment inequalities, denoted by p, is possibly much larger than the sample size n. There are a variety of economic applications where the problem of testing many moment inequalities appears; a notable example is a market structure model of Ciliberto and Tamer (2009) where p = 2^(m+1) with m being the number of firms. We consider the test statistic given by the maximum of p Studentized (or t-type) statistics, and analyze various ways to compute critical values for the test statistic. Specifically, we consider critical values based upon (i) the union bound combined with a moderate deviation inequality for self-normalized sums, (ii) the multiplier and empirical bootstraps, and (iii) two-step and three-step variants of (i) and (ii) by incorporating selection of uninformative inequalities that are far from being binding and novel selection of weakly informative inequalities that are potentially binding but do not provide first order information. We prove validity of these methods, showing that under mild conditions, they lead to tests with error in size decreasing polynomially in n while allowing for p being much larger than n; indeed p can be of order exp(n^c) for some c > 0. Importantly, all these results hold without any restriction on the correlation structure among the p Studentized statistics, and also hold uniformly with respect to suitably large classes of underlying distributions. Moreover, when p grows with n, we show that all of our tests are (minimax) optimal in the sense that they are uniformly consistent against alternatives whose "distance" from the null is larger than the threshold (2(log p)/n)^(1/2), while any test can only have trivial power in the worst case when the distance is smaller than the threshold. Finally, we show validity of a test based on block multiplier bootstrap in the case of dependent data under some general mixing conditions
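The one-step multiplier-bootstrap version of the test is easy to sketch: take the max of p studentized sample means and compare it with a bootstrap quantile. The snippet below is a toy with the null imposed; it omits the moderate-deviation bound and the two-/three-step selection refinements.

```python
import numpy as np

rng = np.random.default_rng(10)
n, p = 200, 5000                               # p much larger than n is allowed
M = rng.standard_normal((n, p))                # moment data; all null means E[m_j] = 0
mbar, s = M.mean(axis=0), M.std(axis=0, ddof=1)
T = np.sqrt(n) * (mbar / s).max()              # max of p studentized statistics

B = 1000
e = rng.standard_normal((B, n))                # Gaussian multipliers
boot = (e @ (M - mbar) / (np.sqrt(n) * s)).max(axis=1)
c = np.quantile(boot, 0.95)
print("reject H0 (all E[m_j] <= 0)?", T > c)   # false rejection in ~5% of simulations
```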
Intersection bounds : estimation and inference by Victor Chernozhukov ( Computer File )
3 editions published between 2009 and 2012 in English and held by 3 libraries worldwide
We develop a practical and novel method for inference on intersection bounds, namely bounds defined by either the infimum or supremum of a parametric or nonparametric function, or equivalently, the value of a linear programming problem with a potentially infinite constraint set. Our approach is especially convenient in models comprised of a continuum of inequalities that are separable in parameters, and also applies to models with inequalities that are non-separable in parameters. Since analog estimators for intersection bounds can be severely biased in finite samples, routinely underestimating the length of the identified set, we also offer a (downward/upward) median unbiased estimator of these (upper/lower) bounds as a natural by-product of our inferential procedure. Furthermore, our method appears to be the first and currently only method for inference in nonparametric models with a continuum of inequalities. We develop asymptotic theory for our method based on the strong approximation of a sequence of studentized empirical processes by a sequence of Gaussian or other pivotal processes. We provide conditions for the use of nonparametric kernel and series estimators, including a novel result that establishes strong approximation for general series estimators, which may be of independent interest. We illustrate the usefulness of our method with Monte Carlo experiments and an empirical example. Keywords: Bound analysis, conditional moments, partial identification, strong approximation, infinite dimensional constraints, linear programming, concentration inequalities, anti-concentration inequalities
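The downward bias of the analog min estimator, and the flavor of the precision-corrected alternative, can be seen in a toy discrete design; this is the crude non-adaptive version, with the critical value taken over all cells rather than the paper's data-driven set.

```python
import numpy as np

rng = np.random.default_rng(12)
cells, m = 50, 100
mu = 1.0 + 0.5 * np.linspace(0, 1, cells) ** 2      # upper bound is min over cells = 1.0
Y = mu + rng.standard_normal((m, cells))
muhat = Y.mean(axis=0)
se = Y.std(axis=0, ddof=1) / np.sqrt(m)

print("analog estimate (biased downward):", muhat.min())
# Precision correction: shift each cell estimate up by its se times the median of
# the max of the studentized Gaussian draws, then take the minimum.
k = np.quantile(rng.standard_normal((2000, cells)).max(axis=1), 0.5)
print("half-median-unbiased estimate:", (muhat + k * se).min())
```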
Local identification of nonparametric and semiparametric models ( Computer File )
3 editions published between 2011 and 2012 in English and held by 3 libraries worldwide
In parametric models a sufficient condition for local identification is that the vector of moment conditions is differentiable at the true parameter with full rank derivative matrix. We show that additional conditions are often needed in nonlinear, nonparametric models to avoid nonlinearities overwhelming linear effects. We give restrictions on a neighborhood of the true value that are sufficient for local identification. We apply these results to obtain new, primitive identification conditions in several important models, including nonseparable quantile instrumental variable (IV) models, single-index IV models, and semiparametric consumption-based asset pricing models
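For contrast, the classical parametric sufficient condition mentioned first in the abstract can be checked numerically: differentiate the moment vector and test the Jacobian's rank, as in the toy sketch below (the moment functions are hypothetical; the paper's point is that nonparametric models need more than this).

```python
import numpy as np

def g(theta):                                  # two hypothetical moment conditions
    a, b = theta
    return np.array([a + b ** 2, a * b - 1.0])

def jacobian(f, theta, eps=1e-6):              # central finite differences
    theta = np.asarray(theta, dtype=float)
    cols = []
    for j in range(theta.size):
        d = np.zeros_like(theta)
        d[j] = eps
        cols.append((f(theta + d) - f(theta - d)) / (2 * eps))
    return np.column_stack(cols)

theta0 = np.array([1.0, 1.0])
J = jacobian(g, theta0)
print("Jacobian rank:", np.linalg.matrix_rank(J), "of", theta0.size)  # full rank here
```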
Symposium on transportation methods ( Book )
1 edition published in 2010 in English and held by 3 libraries worldwide
Conditional value-at-risk : aspects of modeling and estimation by Victor Chernozhukov ( Book )
2 editions published in 2000 in English and held by 3 libraries worldwide
This paper considers flexible conditional (regression) measures of market risk. Value-at-Risk modeling is cast in terms of the quantile regression function - the inverse of the conditional distribution function. A basic specification analysis relates its functional forms to the benchmark models of returns and asset pricing. We stress important aspects of measuring very high and intermediate conditional risk. An empirical application illustrates the approach. Keywords: Conditional quantiles, quantile regression, extreme quantiles, extreme value theory, extreme risk. JEL Classifications: C14, C13, C21, C51, C53, G12, G19
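Casting conditional Value-at-Risk as a quantile regression is direct, as the sketch below shows on simulated returns with state-dependent volatility; the risk factor and scale are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(11)
n = 3000
vol = np.abs(rng.standard_normal(n))                   # lagged volatility proxy
ret = (0.5 + vol) * 0.01 * rng.standard_normal(n)      # returns with state-dependent scale

fit = QuantReg(ret, sm.add_constant(vol)).fit(q=0.05)  # conditional 5% quantile = -VaR
for v in (0.2, 2.0):
    print(f"VaR(5%) at vol={v}:", -fit.predict([[1.0, v]])[0])
```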
Symposium on computation on nash equilibria in finite games ( Book )
1 edition published in 2010 in English and held by 2 libraries worldwide
 
Alternative Names
Chernozhukov, V.
Chernozhukov, Victor Victorovich
Languages
English (82)