
RESURRECTING WEIGHTED LEAST SQUARES By Joseph P. Romano Michael Wolf

Technical Report No. 2014-11 September 2014

Department of Statistics STANFORD UNIVERSITY Stanford, California 94305-4065


Resurrecting Weighted Least Squares Joseph P. Romano ∗ Departments of Statistics and Economics Stanford University [email protected]

Michael Wolf Department of Economics University of Zurich [email protected]

September 2014

Abstract

Linear regression models form the cornerstone of applied research in economics and other scientific disciplines. When conditional heteroskedasticity is present, or at least suspected, the practice of reweighting the data has long been abandoned in favor of estimating model parameters by ordinary least squares (OLS), in conjunction with using heteroskedasticity consistent (HC) standard errors. However, we argue for reintroducing the practice of reweighting the data, since doing so can lead to large efficiency gains of the resulting weighted least squares (WLS) estimator over OLS, even when the model for reweighting the data is misspecified. Efficiency gains manifest in a first-order asymptotic sense and thus should be considered in current empirical practice. Crucially, we also derive how asymptotically valid inference based on the WLS estimator can be obtained even when the model for reweighting the data is misspecified. The idea is that, just like the OLS estimator, the WLS estimator can also be accompanied by HC standard errors without knowledge of the functional form of conditional heteroskedasticity. A Monte Carlo study demonstrates attractive finite-sample properties of our proposals compared to the status quo, both in terms of estimation and inference.

KEY WORDS: Conditional heteroskedasticity, HC standard errors, weighted least squares. JEL classification codes: C12, C13, C21.

∗ Research supported by NSF Grant DMS-0707085.


1 Introduction

Despite constant additions to the toolbox of applied researchers, linear regression models remain the cornerstone of empirical work in economics and other scientific disciplines. Any introductory course in econometrics starts with an assumption of conditional homoskedasticity: the conditional variance of the error terms does not depend on the regressors. In such an idyllic situation, one should estimate the model parameters by ordinary least squares (OLS) and use the conventional inference produced by any of the multitude of software packages.

Unfortunately, in many applications, applied researchers are plagued by conditional heteroskedasticity: the conditional variance of the error term is a function of the regressors. A simple example is a wage regression where wages (or perhaps log wages) are regressed on experience plus a constant. In most professions, there is a larger variation in wages for workers with many years of experience compared to workers with few years of experience. Therefore, in such a case, the conditional variance of the error term is an increasing function of experience.

In the presence of conditional heteroskedasticity, the OLS estimator still has attractive properties, such as being unbiased and being consistent (under mild regularity conditions). However, it is no longer the best linear unbiased estimator (BLUE). Even more problematic, conventional inference generally is no longer valid: confidence intervals do not have the correct coverage probabilities and hypothesis tests do not have the correct null rejection probabilities, even asymptotically.

In the early days, econometricians prescribed the cure of weighted least squares (WLS). It consisted of modeling the functional form of conditional heteroskedasticity, reweighting the data (both the response variable and the regressors), and running OLS combined with conventional inference on the weighted data.
The rationale was that ‘correctly’ weighting the data (based on the true conditional variance model) results in efficiency gains over the OLS estimator. Furthermore, conventional inference based on the ‘correctly’ weighted data is valid, at least asymptotically.

Then came White (1980), who changed the game with one of the most influential and widely-cited papers in the field. He promoted heteroskedasticity consistent (HC) standard errors for the OLS estimator. His alternative cure consists of retaining the OLS estimator (that is, not weighting the data) but using HC standard errors instead of the conventional standard errors. The resulting inference is (asymptotically) valid in the presence of conditional heteroskedasticity of unknown form, which has been a major selling point. Indeed, the earlier cure had the nasty side effect of invalid inference if the applied researcher did not model the conditional heteroskedasticity correctly (arguably, a common occurrence).

As the years have passed, weighting the data has fallen out of fashion and applied researchers have instead largely favored the cure prescribed by White (1980) and his followers. The bad publicity for WLS is still ongoing. As an example, consider Angrist and Pischke (2010, Section 3.4.1), who discourage applied researchers from weighting the data with the following arguments, among others.


1. “If the conditional variance model is a poor approximation or if the estimates of it are very noisy, WLS estimators may have worse finite-sample properties than unweighted estimators.”

2. “The inferences you draw [. . . ] may therefore be misleading, and the hoped-for efficiency gain may not materialize.”

3. “Any efficiency gain from weighting is likely to be modest, and incorrectly or poorly estimated weights can do more harm than good.”

Alas, not everyone has converted and a few lone warriors defending WLS remain. At the forefront is Leamer (2010, p. 43), who calls the current practice “White-washing” and argues that “. . . we should be doing the hard work of modeling the heteroskedasticity [. . . ] to determine if sensible reweighting of the observations materially changes the locations of the estimates of interest as well as the widths of the confidence intervals.”

In this paper, we offer a new, third cure, which is a simple combination of the two previous cures: use WLS combined with HC standard errors. The aim of this cure is to offer the best of both worlds. First, sensibly weighting the data can lead to noticeable efficiency gains over OLS, even if the conditional variance model is misspecified. Second, combining WLS with HC standard errors allows for valid inference, even if the conditional variance model is misspecified. The cure we offer is a simple and natural one. But, to the best of our knowledge, it has not been offered before. For example, Hayashi (2000, Section 2.8) describes the approach of estimating a parametric specification of conditional heteroskedasticity, but the corresponding inference for the parameter vector assumes the parametric specification is correctly specified; otherwise, it may not be valid.
Our approach is similar in that it also uses a parametric specification for the skedastic function, but it is different in that it produces valid inference under general forms of conditional heteroskedasticity even when the parametric specification does not include the true skedastic function.

As a bonus, we also propose a new estimator: adaptive least squares (ALS). Our motivation is as follows. Under conditional homoskedasticity, OLS is the optimal estimator and one should not weight the data at all. Using WLS in such a setting will lead to an efficiency loss, at least in small and moderate samples, because of the noise in the estimated conditional variance model. As a remedy, we propose to first carry out a test of conditional heteroskedasticity, based on the very conditional variance model intended to be used in weighting the data. If the test rejects, use WLS; otherwise, stick with OLS. In this way, one will only use WLS when it is worthwhile doing so, that is, when there is sufficient evidence in the data supporting the conditional variance model. Crucially, independent of the outcome of the test, always use HC standard errors.¹

The remainder of the paper is organized as follows. Section 2 introduces the model.

¹ Tests for conditional heteroskedasticity had come with a different prescription in the past. Namely, if the test rejects, use OLS with HC standard errors; otherwise, use OLS with the conventional standard errors; for example, see Hayashi (2000, p. 132). But such a practice is not recommended, since it has poor finite-sample properties under conditional heteroskedasticity in small and moderate samples; for example, see Long and Ervin (2000, Section 4.3). The reason is that when the test has low power, an invalid inference method will be chosen with non-negligible probability. Instead, we use tests for conditional heteroskedasticity for an honorable purpose and thereby restore some of their lost appeal.


Section 3 describes the various estimators and derives the asymptotic distribution of the WLS estimator when the weighting of the data is possibly incorrect. Section 4 establishes validity of our proposed inference based on the WLS estimator when the weighting of the data is possibly incorrect. Section 5 examines finite-sample performance via a Monte Carlo study. Section 6 briefly discusses possible variations and extensions. Finally, Section 7 concludes. An Appendix contains details on various inference methods and all mathematical proofs.
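In code, the proposed third cure (WLS point estimates combined with HC standard errors) can be sketched as follows. This is a minimal numpy illustration with a hypothetical data-generating process; the weight model v̂(x) ∝ x² is deliberately misspecified relative to the true skedastic function, yet the sandwich standard errors still account for the remaining heteroskedasticity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(1.0, 5.0, n)
X = np.column_stack([np.ones(n), x])       # regressors including a constant
beta = np.array([1.0, 2.0])
eps = np.sqrt(np.exp(0.5 * x)) * rng.standard_normal(n)  # true v(x) = exp(x/2)
y = X @ beta + eps

# Step 1: weight the data with a (misspecified) skedastic model v_hat(x) = x^2
w = x ** 2
Xw, yw = X / np.sqrt(w)[:, None], y / np.sqrt(w)
beta_wls = np.linalg.solve(Xw.T @ Xw, Xw.T @ yw)

# Step 2: HC0 sandwich standard errors computed on the *weighted* regression
res = yw - Xw @ beta_wls
bread = np.linalg.inv(Xw.T @ Xw)
meat = (Xw * res[:, None] ** 2).T @ Xw
se_hc = np.sqrt(np.diag(bread @ meat @ bread))
print(beta_wls, se_hc)
```

Despite the misspecified weights, the point estimates remain close to the true β and the reported standard errors are heteroskedasticity-robust.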

2 The Model

We maintain the following set of assumptions throughout the paper.

(A1) The linear model is of the form

    y_i = x_i'β + ε_i   (i = 1, . . . , n),   (2.1)

where x_i ∈ R^K is a vector of explanatory variables (regressors), β ∈ R^K is a coefficient vector, and ε_i is the unobservable error term with certain properties to be specified below.

(A2) The sample {(y_i, x_i')}_{i=1}^n is independent and identically distributed (i.i.d.).

(A3) All the regressors are predetermined in the sense that they are orthogonal to the contemporaneous error term:

    E(ε_i | x_i) = 0.   (2.2)

Of course, under the i.i.d. assumption (A2) it then also holds that E(ε_i | x_1, . . . , x_n) = 0, that is, the regressors are strictly exogenous.

(A4) The K × K matrix Σ_xx := E(x_i x_i') is nonsingular (and hence finite). Furthermore, ∑_{i=1}^n x_i x_i' is invertible with probability one.

(A5) The K × K matrix Ω := E(ε_i² x_i x_i') is nonsingular (and hence finite).

(A6) There exists a nonrandom function v: R^K → R⁺ such that

    E(ε_i² | x_i) = v(x_i).   (2.3)

Therefore, the skedastic function v(·) determines the functional form of the conditional heteroskedasticity. Note that under (A6), Ω = E[v(x_i) · x_i x_i'].


It is useful to introduce the customary vector-matrix notation

    y := (y_1, . . . , y_n)',   ε := (ε_1, . . . , ε_n)',   X := (x_1, . . . , x_n)'   (an n × K matrix with (i, k) entry x_{ik}),

so that equation (2.1) can be written more compactly as

    y = Xβ + ε.   (2.4)

Furthermore, assumptions (A2), (A3), and (A6) imply that

    Var(ε|X) = diag(v(x_1), . . . , v(x_n)).

Remark 2.1 (Justifying the I.I.D. Assumption). The application of WLS relies upon Var(ε|X) being a diagonal matrix. For the sake of theory, it is possible to generalize the set of assumptions (A2)–(A5) such that this condition is still satisfied. For the sake of simplicity, however, we prefer to maintain the set of assumptions (A2)–(A5), which are based on the key assumption (A2) of observing a random sample. Our reasoning here is that virtually all applications of WLS are restricted to such a setting, a leading example being cross-sectional studies. Therefore, allowing for more general settings would mainly serve to impress theoreticians as opposed to keeping it simple for our target audience, namely applied researchers.
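For concreteness, a data-generating process satisfying (A1)–(A6) can be simulated as follows (a numpy sketch; the particular skedastic function v(x) = 1 + x² is a hypothetical choice for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                      # i.i.d. regressor draws (A2)
X = np.column_stack([np.ones(n), x])        # include a constant, so K = 2
beta = np.array([0.5, -1.0])

v = 1.0 + x ** 2                            # skedastic function v(x) = E(eps^2 | x), cf. (A6)
eps = np.sqrt(v) * rng.standard_normal(n)   # E(eps | x) = 0, cf. (A3)
y = X @ beta + eps                          # linear model (2.1), cf. (A1)

# Var(eps | X) is then the diagonal matrix diag(v(x_1), ..., v(x_n))
print(np.mean(eps), np.mean(eps ** 2 / v))  # roughly 0 and roughly 1
```

Here the errors have conditional mean zero and conditional variance v(x_i), so the sample is exactly of the form assumed throughout the paper.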

3 Estimators: OLS, WLS, and ALS

3.1 Description of the Estimators

The ubiquitous estimator of β is the ordinary least squares (OLS) estimator

    β̂_OLS := (X'X)⁻¹X'y.

Under the maintained assumptions, the OLS estimator is unbiased and consistent. This is the good news. The bad news is that it is not efficient under conditional heteroskedasticity (that is, when the skedastic function v(·) is not constant). A more efficient estimator can be obtained by reweighting the data (y_i, x_i') and then applying OLS in the transformed model

    y_i/√v(x_i) = (x_i/√v(x_i))'β + ε_i/√v(x_i).   (3.1)

Letting

    V := diag(v(x_1), . . . , v(x_n)),

the resulting estimator can be written as

    β̂_BLUE := (X'V⁻¹X)⁻¹X'V⁻¹y.   (3.2)

It is the best linear unbiased estimator (BLUE) and is consistent; in particular, it is more efficient than the OLS estimator. But outside of textbooks, this ‘oracle’ estimator mainly exists in utopia, since the skedastic function v(·) is typically unknown. A feasible approach is to estimate the skedastic function v(·) from the data in some way and to then apply OLS in the model

    y_i/√v̂(x_i) = (x_i/√v̂(x_i))'β + ε_i/√v̂(x_i),   (3.3)

where v̂(·) denotes the estimator of v(·). The resulting estimator is the weighted least squares (WLS) estimator.² Letting

    V̂ := diag(v̂(x_1), . . . , v̂(x_n)),

the WLS estimator can be written as

    β̂_WLS := (X'V̂⁻¹X)⁻¹X'V̂⁻¹y.

It is not necessarily unbiased. If v̂(·) is a consistent estimator of v(·), then WLS is asymptotically more efficient than OLS. But even if v̂(·) is an inconsistent estimator of v(·), WLS can result in large efficiency gains over OLS in the presence of noticeable conditional heteroskedasticity; see Section 5. Using OLS is straightforward and has become the status quo in applied economic research. But foregoing potentially large efficiency gains ‘on principle’ would seem an approach to data analysis that is hard to justify.

Remark 3.1 (Adaptive Least Squares). Under conditional homoskedasticity — that is, when the skedastic function v(·) is constant — OLS is generally more efficient than WLS in finite samples. But, under certain assumptions on the scheme to estimate the skedastic function, OLS and WLS are asymptotically equivalent in this case. On the other hand, under (noticeable) conditional heteroskedasticity, WLS is generally more efficient, both in finite samples and even in a first-order asymptotic sense. (Such claims will be justified mathematically later.)

² Another convention is to call ‘weighted least squares estimator’ what we call the best linear unbiased estimator, and to call ‘feasible weighted least squares estimator’ what we call the weighted least squares estimator.


Therefore, it is tempting to decide based on the data which route to take: OLS or WLS. Specifically, we suggest applying a test for conditional heteroskedasticity. Several such tests exist, the most popular ones being the tests of Breusch and Pagan (1979) and White (1980); also see Koenker (1981) and Koenker and Bassett (1982). If the null hypothesis of conditional homoskedasticity is not rejected by such a test, use the OLS estimator; otherwise, use the WLS estimator. We call the resulting estimator the adaptive least squares (ALS) estimator.

The motivation is as follows. Under conditional homoskedasticity, the ALS estimator will be equal to the WLS estimator with a small probability only (roughly equal to the nominal size of the test). Therefore, in this case, ALS is expected to be more efficient than WLS in finite samples, though still less efficient than OLS. Under conditional heteroskedasticity, the ALS estimator will be equal to the WLS estimator with probability tending to one (assuming that the chosen test is consistent against the existing nature of conditional heteroskedasticity). So for large sample sizes, ALS should be almost as efficient as WLS. For small sample sizes, when the power of the test is not near one, the efficiency is expected to be somewhere between OLS and WLS. (In fact, one could apply the same strategy, but letting the significance level α_n of the “pretest” tend to zero as the sample size tends to infinity; one just needs to ensure α_n tends to zero slowly enough so that the test still has power tending to one.) Consequently, ALS sacrifices some efficiency gains of WLS under conditional heteroskedasticity in favor of being closer to the performance of OLS under conditional homoskedasticity. These heuristics are confirmed by Monte Carlo simulations in Section 5.

Remark 3.2 (Best Linear Predictor). Consider a new observation (y, x').
It is well known that even in the absence of a linear model (that is, with Assumption (A1) not holding), the best linear predictor of y in the mean squared error sense is given by x'β*, with

    β* := [E(xx')]⁻¹ E(x·y);

for example, see Hayashi (2000, Proposition 2.8). Under assumptions (A2) and (A4), the OLS estimator consistently estimates β*, that is,

    β̂_OLS →_P β*,   (3.4)

where →_P denotes convergence in probability. Moreover, only the condition E(x_i·ε_i) = 0 is needed, which is ensured when β = β*, rather than the stronger assumption (A3). Consistency is not necessarily shared by the WLS and ALS estimators. (Note, however, that the weighted least squares estimator can similarly be viewed as a best linear predictor based on a mean weighted squared error criterion.)

The consistency result (3.4) is sometimes viewed as an attractive robustness property of the OLS estimator. But we feel it is not of great practical importance. The best predictor in the mean squared error sense is the conditional expectation E(y|x). If the linearity assumption (A1) does not hold, the conditional expectation E(y|x) can be arbitrarily far from the best linear predictor x'β*. Therefore, if it is suspected that the linearity assumption (A1) may not hold, rather than settling for

the best linear predictor, it may be more fruitful to include more covariates or to use a nonparametric approach to estimate the conditional expectation. Needless to say, if the goal is to interpret the estimator of β (in the sense of quantifying the ‘effect’ of the various entries of the vector of covariates x on the response variable y) or to make inference for β, then all three estimators — OLS, WLS, and ALS — rely on the validity of the linearity assumption (A1). So for such purposes, it is just as (un)safe to use WLS or ALS instead of OLS.
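The relative efficiency of the weighting schemes discussed in this subsection can be illustrated by simulation. The numpy sketch below (with a hypothetical data-generating process) compares the Monte Carlo spread of the slope estimates from OLS, from the infeasible β̂_BLUE that uses the true weights, and from a WLS estimator whose weight model is deliberately misspecified:

```python
import numpy as np

def lstsq(X, y, w=None):
    """(Weighted) least squares: if w is given, rescale rows by 1/sqrt(w)."""
    if w is not None:
        X, y = X / np.sqrt(w)[:, None], y / np.sqrt(w)
    return np.linalg.solve(X.T @ X, X.T @ y)

rng = np.random.default_rng(2)
n, reps = 200, 500
slopes = {"OLS": [], "BLUE": [], "WLS": []}
for _ in range(reps):
    x = rng.uniform(1.0, 5.0, n)
    X = np.column_stack([np.ones(n), x])
    v = x ** 4                                      # true skedastic function
    y = X @ [1.0, 2.0] + np.sqrt(v) * rng.standard_normal(n)
    slopes["OLS"].append(lstsq(X, y)[1])            # no weighting
    slopes["BLUE"].append(lstsq(X, y, w=v)[1])      # oracle weights
    slopes["WLS"].append(lstsq(X, y, w=x ** 2)[1])  # misspecified weight model

sd = {k: np.std(a) for k, a in slopes.items()}
print(sd)
```

Even with the wrong weight model, WLS is noticeably less variable than OLS here, while the oracle BLUE is the most efficient of the three, in line with the discussion above.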

3.2 Parametric Model for Estimating the Skedastic Function

In order to estimate the skedastic function v(·), we suggest the use of a parametric model v_θ(·), where θ ∈ R^d is a finite-dimensional parameter. Such a model could be suggested by economic theory, by exploratory data analysis (that is, residual plots from an OLS regression), or by convenience. In any case, the model used should nest the case of conditional homoskedasticity. In particular, for every σ² > 0, we assume the existence of a unique θ := θ(σ²) such that v_θ(x) ≡ σ². A flexible parametric model we suggest is

    v_θ(x_i) := exp(ν + γ_2 log|x_{i,2}| + . . . + γ_K log|x_{i,K}|),   with θ := (ν, γ_2, . . . , γ_K)',   (3.5)

assuming that x_{i,1} ≡ 1 (that is, the original regression contains a constant). Otherwise, the model should be

    v_θ(x_i) := exp(ν + γ_1 log|x_{i,1}| + γ_2 log|x_{i,2}| + . . . + γ_K log|x_{i,K}|),   with θ := (ν, γ_1, . . . , γ_K)'.

Such a model is a special case of the form of multiplicative conditional heteroskedasticity previously proposed by Harvey (1976) and Judge et al. (1988, Section 9.3), among others. Another possibility is to not take exponents and use

    v_θ(x_i) := ν + γ_2|x_{i,2}| + . . . + γ_K|x_{i,K}|,   with θ := (ν, γ_2, . . . , γ_K)'.   (3.6)

The advantage of (3.5) over (3.6) is that it ensures variances are nonnegative, though the parameters in (3.6) can be restricted such that nonnegativity is satisfied. In all cases, the models obviously nest the case of conditional homoskedasticity. Furthermore, we recommend basing the test for conditional heteroskedasticity used in computing the ALS estimator of Remark 3.1 on the very parametric model of the skedastic function used in computing the WLS estimator. The motivation is that, in this fashion, the ALS estimator is set to the WLS estimator (as opposed to the OLS estimator) only if there is significant evidence for the type of conditional heteroskedasticity that forms the basis of the WLS estimator. In particular, we

do not recommend using a ‘generic’ test of conditional heteroskedasticity, such as the test of White (1980), unless the parametric specification v_θ(·) used by the test is also the parametric specification used by the WLS estimator.³

Having chosen a parametric specification v_θ(·), the test for conditional heteroskedasticity is then carried out by regressing the squared OLS residuals on the parametric specification, possibly after a suitable transformation to ensure linearity on the right-hand side, and by then comparing n times the R²-statistic of this regression against the quantile of a chi-squared distribution. For example, if the parametric model is given by (3.5), the test specifies

    H_0: γ_2 = . . . = γ_K = 0   vs.   H_1: at least one γ_k ≠ 0 (k = 2, . . . , K).

To carry out the test, fix a small constant δ > 0 and estimate the following regression by OLS:

    log max(δ², ε̂_i²) = ν + γ_2 log|x_{i,2}| + . . . + γ_K log|x_{i,K}| + u_i,   with ε̂_i := y_i − x_i'β̂_OLS,   (3.7)

and denote the resulting R²-statistic by R². Furthermore, denote by χ²_{K−1,1−α} the 1 − α quantile of the chi-squared distribution with K − 1 degrees of freedom. Then the test for conditional heteroskedasticity rejects at nominal level α if n·R² > χ²_{K−1,1−α}. (The reason for introducing the constant δ here is that, because we are taking logs, we need to avoid a residual of zero, or even very near zero. If instead we considered the specification (3.6), we would simply run a regression of ε̂_i² on the right-hand side of (3.7) and no constant δ would need to be introduced.) Finally, the estimate of the skedastic function is given by

    v̂(·) := v_θ̂(·),

where θ̂ is the estimator of θ obtained by an OLS regression of the type (3.7).
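The testing-and-weighting recipe above can be sketched as follows (numpy; the data-generating process and the choice δ = 0.1 are hypothetical). With K = 2 the auxiliary regression (3.7) has a single slope, so n·R² is compared against the 0.95 quantile of the chi-squared distribution with one degree of freedom, and the outcome is exactly the ALS decision rule of Remark 3.1:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
x = rng.uniform(1.0, 5.0, n)
X = np.column_stack([np.ones(n), x])
y = X @ [1.0, 2.0] + x * rng.standard_normal(n)   # true v(x) = x^2

# OLS residuals
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b_ols

# Auxiliary regression (3.7): log max(delta^2, e_i^2) on a constant and log|x_i|
delta = 0.1
z = np.log(np.maximum(delta ** 2, e ** 2))
Z = np.column_stack([np.ones(n), np.log(np.abs(x))])
g = np.linalg.solve(Z.T @ Z, Z.T @ z)
r2 = 1.0 - np.sum((z - Z @ g) ** 2) / np.sum((z - z.mean()) ** 2)

# Reject conditional homoskedasticity at the 5% level if n * R^2 > chi^2_{1, 0.95}
chi2_1_95 = 3.841
use_wls = n * r2 > chi2_1_95       # the ALS rule: WLS if rejected, OLS otherwise
print(n * r2, use_wls)
```

With this strongly heteroskedastic design the test rejects decisively, so ALS would proceed with the WLS estimator based on the fitted specification (3.5).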

3.3 Limiting Distribution of the WLS Estimator

The first goal is to consider the behavior of the weighted least squares estimator under a perhaps incorrectly specified skedastic function. The estimator β̂_BLUE assumes knowledge of the true skedastic function v(·). Instead, consider a generic WLS estimator that is based on the skedastic function w(·); this estimator is given by

    β̂_W := (X'W⁻¹X)⁻¹X'W⁻¹y,   (3.8)

where W is the diagonal matrix with (i, i) entry w(x_i). Given two real-valued functions a(·) and b(·) defined on R^K (the space where x_i lives), define Ω_{a/b} to be the matrix given by

    Ω_{a/b} := E[(a(x_i)/b(x_i)) · x_i x_i'].

³ For example, we would not recommend the parametric specification of White’s (1980) test, as it is of order K² and thus involves too many free parameters (unless the number of regressors, K, is very small compared to the sample size, n).


The first result deals with the case of a fixed employed choice of skedastic function w(·), though this choice may be misspecified, since the true skedastic function is v(·).

Lemma 3.1. Assume (A1)–(A3) and (A6). Given a possibly misspecified skedastic function w(·) and the true skedastic function v(·), assume the matrices Ω_{1/w} and Ω_{v/w²} are well-defined (in the sense that the corresponding expectations exist and are finite). Also, assume Ω_{1/w} is invertible. (These assumptions reduce to the usual assumptions (A4) and (A5) in case w(·) is constant.) Then, as n → ∞,

    √n(β̂_W − β) →_d N(0, Ω_{1/w}⁻¹ Ω_{v/w²} Ω_{1/w}⁻¹).

Corollary 3.1. Assume the assumptions of Lemma 3.1 and in addition that both w(·) and v(·) are constant (so that, in particular, conditional homoskedasticity holds true). Then

    √n(β̂_W − β) →_d N(0, Ω_{1/v}⁻¹).

It is well known that, under conditional homoskedasticity, Ω_{1/v}⁻¹ is the limiting variance of the OLS estimator. So as long as the skedastic function w(·) is constant, the limiting distribution of β̂_W is identical to the limiting distribution of β̂_OLS under conditional homoskedasticity.

Next, we consider the behavior of the WLS estimator based on an estimated skedastic function. Assume the parametric family of skedastic functions used to estimate v(·) is given by v_θ(·), where θ = (θ_1, . . . , θ_d)' varies in an open subset of R^d. Note the true v(·) need not be specified by any v_θ(·). However, we always specify a family v_θ(·) that includes constant values σ², so as to always allow for conditional homoskedasticity. It is further tacitly assumed that v_θ(x) > 0 on the support of x, so that 1/v_θ(x) is well-defined with probability one. Assume that 1/v_θ(·) is differentiable at some fixed θ_0 in the following sense: there exist a vector-valued function of dimension 1 × d

    r_{θ_0}(x) = (r_{θ_0,1}(x), . . . , r_{θ_0,d}(x))

and a real-valued function s_{θ_0}(·) such that

    |1/v_θ(x) − 1/v_{θ_0}(x) − r_{θ_0}(x)(θ − θ_0)| ≤ (1/2)|θ − θ_0|² s_{θ_0}(x),   (3.9)

for all θ in some small open ball around θ_0 and all x in the support of the covariates. Evidently, r_{θ_0}(x) is the gradient with respect to θ of 1/v_θ(x). Next, we assume we have a consistent estimator θ̂ of θ_0 in the sense that

    n^{1/4}|θ̂ − θ_0| →_P 0.   (3.10)

Of course, (3.10) holds if θ̂ is a √n-consistent estimator of θ_0. (The weaker condition may be useful if one lets the dimension d of the model increase with the sample size n.)

Theorem 3.1. Assume conditions (3.9) and (3.10). Further assume

    E[|x_i|² v(x_i)|r_{θ_0}(x_i)|²] < ∞   (3.11)

and

    E[|x_i| · |ε_i s_{θ_0}(x_i)|] < ∞.   (3.12)

(Note that in case the functions r_{θ_0}(·) and s_{θ_0}(·) can be taken to be uniformly bounded over the support of the covariates, these two added assumptions (3.11) and (3.12) already follow from (A5) and (A6).)

Consider the estimator β̂_WLS := β̂_V̂ given by (3.8) with W replaced by Ŵ, where Ŵ is the diagonal matrix with (i, i) entry v_θ̂(x_i). Then,

    √n(β̂_WLS − β) →_d N(0, Ω_{1/w}⁻¹ Ω_{v/w²} Ω_{1/w}⁻¹),   (3.13)

where v(·) is the true skedastic function and w(·) := v_{θ_0}(·) corresponds to the limiting estimated skedastic function.

Remark 3.3. Actually, the proof shows that

    √n(β̂_WLS − β̂_W) →_P 0,   (3.14)

where β̂_W is the WLS estimator based on the known skedastic function w(·) = v_{θ_0}(·).

Corollary 3.2. Assume the assumptions of Theorem 3.1 and in addition that both v_{θ_0}(·) and v(·) are constant (so that, in particular, conditional homoskedasticity holds true). Then

    √n(β̂_WLS − β) →_d N(0, Ω_{1/v}⁻¹).
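The limiting variance in Lemma 3.1 and Theorem 3.1 can be checked numerically. In the sketch below (numpy, a hypothetical one-regressor model without an intercept), the true skedastic function is v(x) = x⁴ while the fixed weighting function is the misspecified w(x) = x². For x ~ U(1, 2) one gets Ω_{1/w} = E(x²/x²) = 1 and Ω_{v/w²} = E((x⁴/x⁴)·x²) = E(x²) = 7/3, so the variance of √n(β̂_W − β) should be close to 7/3 ≈ 2.33:

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 500, 2000
draws = []
for _ in range(reps):
    x = rng.uniform(1.0, 2.0, n)
    y = 2.0 * x + x ** 2 * rng.standard_normal(n)   # beta = 2, true v(x) = x^4
    w = x ** 2                                      # fixed, misspecified weights
    b = np.sum(x * y / w) / np.sum(x ** 2 / w)      # beta_hat_W (no intercept)
    draws.append(np.sqrt(n) * (b - 2.0))

print(np.var(draws))   # should be close to E(x^2) = 7/3 for x ~ U(1, 2)
```

The Monte Carlo variance of the normalized estimation error matches the sandwich formula of Lemma 3.1 even though the weights are wrong.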

Remark 3.4 (Assumptions on the Parametric Specification v_θ(·)). We need to argue that the estimation scheme based on a parametric specification v_θ(·), as described in Subsection 3.2, satisfies the assumptions of Theorem 3.1. The specifications we apply in the numerical work, such as those given in Subsection 3.2, are clearly smooth, but it needs to be argued that (3.10) holds for some θ_0, even under conditional heteroskedasticity. The technical arguments are given in Appendix B.2. In particular, both under conditional homoskedasticity and under conditional heteroskedasticity, our proposed estimation scheme of the skedastic function leads to a nonrandom estimate v_{θ_0}(·) in the limit, as assumed by Theorem 3.1.

Remark 3.5 (Efficiency of WLS under Homoskedasticity and Limiting Value θ_0). It is well known that, under conditional homoskedasticity, Ω_{1/v}⁻¹ is the limiting variance of the OLS estimator. So as long as the skedastic function w(·) := v_{θ_0}(·) is constant, the limiting distribution of β̂_WLS is identical to the limiting distribution of β̂_OLS in this case. In Appendix B.2, it is argued that the estimator θ̂ tends in probability to some θ_0. However, v_{θ_0}(·) need not correspond to the true skedastic function v(·). Furthermore, even when v(·) is constant and the specification for v_θ(·) nests conditional homoskedasticity, it may or may not be the case that v_{θ_0}(·) is constant.

On the one hand, consider the specification (3.6). Then, using OLS when regressing ε² (or ε̂²) on the right-hand side of (3.6) gives a limiting value of θ_0 that corresponds to the best linear predictor of E(ε_i²|x_i). Hence, if E(ε_i²|x_i) is constant, then so is v_{θ_0}(·). On the other hand, consider the specification (3.5), where log(ε_i²) is modeled by a linear function of covariates. In such a case, OLS is consistent for θ_0, which corresponds to the best linear predictor of E[log(ε_i²)|x_i]. In the homoskedastic case where E(ε_i²|x_i) is constant, one does not necessarily have that

    E[log max(δ², ε_i²) | x_i] is constant.   (3.15)

Of course, (3.15) would hold in the more structured case where ε_i and x_i are independent under conditional homoskedasticity. For example, this would be the case if (A6) is strengthened to

(A6’) {x_i}_{i=1}^n is a K-variate i.i.d. sample and ε_i is given by

    ε_i = √v(x_i) · z_i,

where v(·): R^K → R⁺ is a nonrandom skedastic function and {z_i}_{i=1}^n is a univariate i.i.d. sample with mean zero and variance one, independent of {x_i}_{i=1}^n.

But in general (3.15) may fail. Therefore, to ensure in general that there is no asymptotic efficiency loss of using WLS instead of OLS under conditional homoskedasticity, one needs to use a specification of the form (3.6); otherwise, one must assume that when conditional homoskedasticity holds, so does (3.15). Finally, since whenever v_{θ_0}(·) is constant, OLS and WLS are asymptotically equivalent, in such a case OLS and ALS are asymptotically equivalent as well.

4 Inference: OLS, WLS, and ALS

4.1 Description of the Inference Methods

In most applications, it is of additional interest to conduct inference for β, by computing confidence intervals for (linear combinations of) β or by carrying out hypothesis tests for (linear combinations of) β. Unfortunately, when v̂(·) is not a consistent estimator of the skedastic function v(·), the textbook inference based on the WLS estimator can be misleading, in the sense that confidence intervals do not have the correct coverage probabilities and hypothesis tests do not have the correct null rejection probabilities, even asymptotically. This is an additional reason why applied researchers have shied away from WLS estimation.

The contribution of this section is to propose a method by which valid inference for β based on the WLS estimator can be obtained even if v̂(·) is an inconsistent estimator. Our proposal is simple and straightforward. The idea is rooted in inference for β based on the OLS estimator.


It is well known that under conditional heteroskedasticity (A6), the OLS standard errors are not consistent and the resulting inference is misleading (in the sense specified in the previous paragraph). As a remedy, theoreticians have proposed heteroskedasticity consistent (HC) standard errors. Such research dates back to Eicker (1963, 1967), Huber (1967), and White (1980). Further refinements have been provided by MacKinnon and White (1985) and Cribari-Neto (2004). As is well known (e.g., Hayashi 2000, Proposition 2.1), under assumptions (A1)–(A5),

    √n(β̂_OLS − β) →_d N(0, Avar(β̂_OLS))   with   Avar(β̂_OLS) = Σ_xx⁻¹ Ω Σ_xx⁻¹,   (4.1)

where →_d denotes convergence in distribution. By assumptions (A2) and (A4) and the continuous mapping theorem, n(X'X)⁻¹ is a consistent estimator of Σ_xx⁻¹. Therefore, the problem of consistently estimating Avar(β̂_OLS) is reduced to finding a consistent estimator Ω̂ of Ω. Inference for β can then be based in the standard fashion on

    Avar̂_HC(β̂_OLS) := n²(X'X)⁻¹Ω̂(X'X)⁻¹.   (4.2)

For now, we focus on the case where the parameter of interest is β_k, for some 1 ≤ k ≤ K. The OLS estimator of β_k is β̂_{k,OLS} and its HC standard error⁴ implied by (4.2) is

    SE_HC(β̂_{k,OLS}) := √((1/n)[Avar̂_HC(β̂_OLS)]_{k,k}).   (4.3)

Then, for example, a two-sided confidence interval for β_k with nominal level 1 − α is given by

    β̂_{k,OLS} ± t_{n−K,1−α/2} · SE_HC(β̂_{k,OLS}),   (4.4)

where tn−K,1−α/2 denotes the 1 − α/2 quantile of the t distribution with n − K degrees of freedom.5 Alternatively, hypothesis tests of the form H0 : βk = βk,0 can be based on the test statistic βˆk,OLS − βk,0 SEHC (βˆk,OLS ) in conjunction with suitable quantiles of the tn−K distribution as critical values. As stated before, finding a consistent estimator of Avar(βˆOLS ) reduces to finding a consistent estimator of Ω in (4.2). There exist five widely used such estimators in the literature, named HC0–HC4. They are all of the ‘sandwich’ form ˆ ..= 1 X 0 ΨX ˆ Ω n 4

ˆ ..= diag{ψˆ1 , . . . , ψˆn } . with Ψ

(4.5)

In our terminology, a standard error is an estimate of the standard deviation of an estimator rather than the

actual standard deviation of the estimator itself. 5 On asymptotic grounds, one could also use the 1 − α/2 quantile of the standard normal distribution instead. Taking the quantile from the tn−K distribution results in somewhat more conservative inference in finite samples and is the standard practice in statistical software packages.


Therefore, to completely define one of the HC estimators, it is sufficient to specify a typical element, $\hat\psi_i$, of the diagonal matrix $\hat\Psi$. In doing so, let $\hat\varepsilon_i$ denote the ith OLS residual given by

$$\hat\varepsilon_i := y_i - x_i'\hat\beta_{\text{OLS}}\,,$$

let $h_i$ denote the ith diagonal element of the ‘hat’ matrix $H := X(X'X)^{-1}X'$, and let $\bar h$ denote the grand mean of the $\{h_i\}_{i=1}^n$. The various HC estimators use the following specifications:

$$\begin{aligned}
\text{HC0}:&\quad \hat\psi_i := \hat\varepsilon_i^2\,,\\
\text{HC1}:&\quad \hat\psi_i := \frac{n}{n-K}\cdot\hat\varepsilon_i^2\,,\\
\text{HC2}:&\quad \hat\psi_i := \frac{\hat\varepsilon_i^2}{1-h_i}\,,\\
\text{HC3}:&\quad \hat\psi_i := \frac{\hat\varepsilon_i^2}{(1-h_i)^2}\,,\quad\text{and}\\
\text{HC4}:&\quad \hat\psi_i := \frac{\hat\varepsilon_i^2}{(1-h_i)^{\delta_i}} \quad\text{with}\quad \delta_i := \min\Bigl(4,\, \frac{h_i}{\bar h}\Bigr)\,.
\end{aligned} \tag{4.6}$$

HC0 dates back to White (1980) but results in inference that is generally liberal in small to moderate samples. HC1–HC3 are various improvements suggested by MacKinnon and White (1985): HC1 uses a global degrees-of-freedom adjustment, HC2 is based on influential analysis, and HC3 approximates a jackknife estimator. HC4 is the most recent proposal, by Cribari-Neto (2004), designed to also handle observations $x_i$ with strong leverage. Of the estimators HC0–HC3, the one that delivers the most reliable finite-sample inference is HC3; for example, see MacKinnon and White (1985), Long and Ervin (2000), and Angrist and Pischke (2009, Section 8.1). It is also the default option in several statistical software packages to carry out HC estimation, such as in the R function vcov(); for example, see Zeileis (2004). On the other hand, we are not aware of any simulation studies evaluating the performance of the HC4 estimator outside of Cribari-Neto (2004).⁶

It is a characteristic feature of a HC standard error of the form (4.2)–(4.3) that its variance is larger than the variance of the conventional standard error based on the assumption of conditional homoskedasticity:

$$\text{SE}_{\text{CO}}(\hat\beta_{k,\text{OLS}}) := \sqrt{s^2\,\bigl[(X'X)^{-1}\bigr]_{k,k}} \quad\text{with}\quad s^2 := \frac{1}{n-K}\sum_{i=1}^{n}\hat\varepsilon_i^2\,. \tag{4.7}$$
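To make these recipes concrete, the sketch below computes the HC0–HC4 standard errors of (4.2)–(4.6), as well as the conventional standard error (4.7), with NumPy. The function name and interface are our own illustration, not code from the paper or from any particular package:

```python
import numpy as np

def ols_standard_errors(X, y, kind="HC3"):
    """OLS fit plus a standard error for each coefficient.

    kind is one of "CO" (conventional, eq. 4.7) or "HC0".."HC4" (eq. 4.5-4.6).
    Illustrative sketch; names and interface are ours."""
    n, K = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    resid = y - X @ beta
    # Leverages h_i: the diagonal of the hat matrix X (X'X)^{-1} X'.
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)
    if kind == "CO":
        s2 = resid @ resid / (n - K)
        return beta, np.sqrt(s2 * np.diag(XtX_inv))
    psi = {
        "HC0": resid**2,
        "HC1": resid**2 * n / (n - K),
        "HC2": resid**2 / (1.0 - h),
        "HC3": resid**2 / (1.0 - h)**2,
        "HC4": resid**2 / (1.0 - h)**np.minimum(4.0, h / h.mean()),
    }[kind]
    Omega_hat = (X.T * psi) @ X / n                 # eq. (4.5)
    avar = n**2 * XtX_inv @ Omega_hat @ XtX_inv     # eq. (4.2)
    return beta, np.sqrt(np.diag(avar) / n)         # eq. (4.3)
```

Note that the factors of n cancel: the resulting variance is the familiar sandwich $(X'X)^{-1} X'\hat\Psi X (X'X)^{-1}$.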

(A HC standard error as well as the conventional standard error are functions of the data. They are therefore random variables and, in particular, have a variance.) As a result, inference based on a HC standard error tends to be liberal⁷ in small samples, especially when there is no or only little conditional heteroskedasticity. These facts have been demonstrated by Kauermann and Carroll (2001) analytically and by Long and Ervin (2000), Kauermann and Carroll (2001), Cribari-Neto (2004), and Angrist and Pischke (2009, Section 8.1), among others, via Monte Carlo studies.

⁶ His Monte Carlo study only considers a single parametric specification of the skedastic function v(·).

⁷ This means that confidence intervals tend to undercover and that hypothesis tests tend to overreject under the null.

As a rule-of-thumb remedy, Angrist and Pischke (2009, Section 8.1) propose to take the maximum of a HC standard error and the conventional standard error. Letting

$$\text{SE}_{\max}(\hat\beta_{k,\text{OLS}}) := \max\bigl\{\text{SE}_{\text{HC}}(\hat\beta_{k,\text{OLS}}),\, \text{SE}_{\text{CO}}(\hat\beta_{k,\text{OLS}})\bigr\}\,,$$

a more conservative confidence interval for $\beta_k$ is then given by

$$\hat\beta_{k,\text{OLS}} \pm t_{n-K,1-\alpha/2} \cdot \text{SE}_{\max}(\hat\beta_{k,\text{OLS}})\,. \tag{4.8}$$

(In particular, they recommend the use of the HC3 standard error.)

We next turn to inference on $\beta_k$ based on the WLS estimator. The textbook solution is to assume that v̂(·) is a consistent estimator of the skedastic function v(·) and to then compute a conventional standard error from the transformed data

$$\tilde y_i := \frac{y_i}{\sqrt{\hat v(x_i)}} \quad\text{and}\quad \tilde x_i := \frac{x_i}{\sqrt{\hat v(x_i)}}\,. \tag{4.9}$$

More specifically,

$$\text{SE}_{\text{CO}}(\hat\beta_{k,\text{WLS}}) := \sqrt{\tilde s^2\,\bigl[(\tilde X'\tilde X)^{-1}\bigr]_{k,k}} \quad\text{with}\quad \tilde s^2 := \frac{1}{n-K}\sum_{i=1}^{n}\tilde\varepsilon_i^2 \quad\text{and}\quad \tilde\varepsilon_i := \tilde y_i - \tilde x_i'\hat\beta_{\text{WLS}}\,. \tag{4.10}$$

The problem is that this standard error is incorrect when v̂(·) is not a consistent estimator and, as a result, a confidence interval for $\beta_k$ based on the WLS estimator combined with this standard error generally does not have correct coverage probability, even asymptotically. In the absence of some supernal information on the skedastic function v(·), applied researchers cannot be confident about having a consistent estimator v̂(·). Therefore, they have rightfully shied away from the textbook inference based on the WLS estimator. The safe ‘solution’ is to simply use the OLS estimator combined with a HC standard error. This status quo in applied economic research is succinctly summarized by Angrist and Pischke (2010, p. 10):

    Robust standard errors, automated clustering, and larger samples have also taken the steam out of issues like heteroskedasticity and serial correlation. A legacy of White's (1980[a]) paper on robust standard errors, one of the most highly cited from the period, is the near death of generalized least squares in cross-sectional applied work.⁸ In the interests of replicability, and to reduce the scope for errors, modern applied researchers often prefer simpler estimators though they might be giving up asymptotic efficiency.

⁸ For cross-sectional data, generalized least squares equates to weighted least squares.

In contrast, we side with Leamer (2010), who views conditional heteroskedasticity as an opportunity: namely, an opportunity to construct more efficient estimators and to obtain shorter confidence intervals by sensibly weighting the data. But such benefits should not come at the expense of valid inference when the model for the skedastic function is misspecified. To this end, ironically, the same tool that killed off the WLS estimator can be used to resurrect it. Our proposal is simple: applied researchers should use the WLS estimator combined with a HC standard error.⁹ Doing so allows for valid inference, under weak regularity conditions, even if the employed v̂(·) is not a consistent estimator of the skedastic function v(·). Specifically, the WLS estimator is the OLS estimator applied to the transformed data (4.9). And, analogously, a corresponding HC standard error is also obtained from these transformed data. In practice, the applied researcher only has to transform the data and then proceed as he would have done with the original data: run OLS and compute a HC standard error.

Denote the HC standard error computed from the transformed data by $\text{SE}_{\text{HC}}(\hat\beta_{k,\text{WLS}})$. Then a confidence interval for $\beta_k$ based on the WLS estimator is given by

$$\hat\beta_{k,\text{WLS}} \pm t_{n-K,1-\alpha/2} \cdot \text{SE}_{\text{HC}}(\hat\beta_{k,\text{WLS}})\,. \tag{4.11}$$

As before, one might prefer a more conservative approach for small sample sizes, using the maximum of a HC standard error and the conventional standard error:

$$\text{SE}_{\max}(\hat\beta_{k,\text{WLS}}) := \max\bigl\{\text{SE}_{\text{HC}}(\hat\beta_{k,\text{WLS}}),\, \text{SE}_{\text{CO}}(\hat\beta_{k,\text{WLS}})\bigr\}\,,$$

resulting in the confidence interval

$$\hat\beta_{k,\text{WLS}} \pm t_{n-K,1-\alpha/2} \cdot \text{SE}_{\max}(\hat\beta_{k,\text{WLS}})\,. \tag{4.12}$$
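The recipe above (transform the data as in (4.9), run OLS on the transformed data, and compute a HC standard error from it) can be sketched in a few lines. This is an illustrative sketch, not the authors' code; the function name and interface are ours, and SciPy is used only for the t quantile:

```python
import numpy as np
from scipy import stats

def wls_hc_interval(X, y, v_hat, alpha=0.05):
    """WLS point estimates with 'max'-rule confidence intervals (4.12).

    v_hat holds estimated skedastic values vhat(x_i) > 0 from any
    working model, consistent or not."""
    w = np.sqrt(v_hat)
    Xt, yt = X / w[:, None], y / w          # transformed data (4.9)
    n, K = Xt.shape
    XtX_inv = np.linalg.inv(Xt.T @ Xt)
    beta = XtX_inv @ (Xt.T @ yt)            # the WLS estimator
    resid = yt - Xt @ beta
    h = np.einsum("ij,jk,ik->i", Xt, XtX_inv, Xt)
    # HC3 sandwich computed from the transformed data:
    Omega = (Xt.T * (resid**2 / (1.0 - h)**2)) @ Xt
    se_hc = np.sqrt(np.diag(XtX_inv @ Omega @ XtX_inv))
    # Conventional standard error (4.10) and the conservative max rule:
    s2 = resid @ resid / (n - K)
    se_co = np.sqrt(s2 * np.diag(XtX_inv))
    se_max = np.maximum(se_hc, se_co)
    q = stats.t.ppf(1.0 - alpha / 2.0, n - K)
    return beta, beta - q * se_max, beta + q * se_max
```

With unit weights the point estimate reduces to plain OLS, which is a convenient sanity check.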

(In particular, we recommend the use of the HC3 standard error, or perhaps even the HC4 standard error.)

Remark 4.1 (Adaptive Least Squares; Remark 3.1 continued). Should a researcher prefer ALS for the estimation of β, he generally also needs a corresponding method for making inference on β. The method then is straightforward. If the ALS estimator is equal to the OLS estimator, use the confidence interval (4.4) or even the confidence interval (4.8). If the ALS estimator is equal to the WLS estimator, use the confidence interval (4.11) or even the confidence interval (4.12). Note that in this setting, the test for conditional heteroskedasticity ‘determines’ the inference method, but not in the way it has been generally promoted in the literature to date: namely, always use the OLS estimator and then base inference on a HC standard error (4.3) if the test rejects and on the conventional standard error (4.7) otherwise. This practice is not recommended since, under conditional heteroskedasticity, an invalid inference method (based on the conventional standard error) will be chosen with non-negligible probability in small to moderate samples, because the power of the test is not near one. As a result, the finite-sample properties of this practice, under conditional heteroskedasticity, are poor in small to moderate samples; for example, see Long and Ervin (2000, Section 4.3). In contrast, our proposal does not incur such a problem, since the pretest instead decides between two inference methods that are both valid under conditional heteroskedasticity.

⁹ In spite of its simplicity, we have not seen this proposal anywhere else so far.

So far, we have only discussed inference for a generic component, $\beta_k$, of β. The extension to more general inference problems is straightforward and detailed in Appendix A.

4.2 Consistent Estimation of the Limiting Covariance Matrix

We now consider estimating the unknown limiting covariance matrix of the WLS estimator, which, recalling (3.13), is given by

$$\Omega_{1/w}^{-1}\,\Omega_{v/w^2}\,\Omega_{1/w}^{-1}\,,$$

where, again, $w(\cdot) := v_{\theta_0}(\cdot)$ and v(·) is the true skedastic function. First, $\Omega_{1/w}$ is estimated by

$$\hat\Omega_{1/w} := \frac{X'\hat W X}{n} = \frac{X' V_{\hat\theta}^{-1} X}{n}\,. \tag{4.13}$$

Second, we are left to consistently estimate $\Omega_{v/w^2}$, which we recall is just

$$\Omega_{v/w^2} = E\left(\frac{v(x_i)}{v_{\theta_0}^2(x_i)}\cdot x_i x_i'\right) = E\left(\frac{\varepsilon_i^2}{v_{\theta_0}^2(x_i)}\cdot x_i x_i'\right)\,. \tag{4.14}$$

Of course, by the law of large numbers,

$$\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\varepsilon_i^2}{v_{\theta_0}^2(x_i)}\cdot x_i x_i'\right) \xrightarrow{P} \Omega_{v/w^2}\,.$$

We do not know $v_{\theta_0}(x_i)$, but it can be estimated by $v_{\hat\theta}(x_i)$. In addition, we do not observe the true errors, but they can be estimated by the residuals after some consistent model fit. So given some consistent estimator $\hat\beta$, such as the ordinary least squares estimator, define the ith residual by

$$\hat\varepsilon_i := y_i - x_i'\hat\beta = \varepsilon_i - x_i'(\hat\beta - \beta)\,. \tag{4.15}$$

The resulting estimator of (4.14) is then

$$\hat\Omega_{v/w^2} := \frac{1}{n}\sum_{i=1}^{n}\left(\frac{\hat\varepsilon_i^2}{v_{\hat\theta}^2(x_i)}\cdot x_i x_i'\right)\,. \tag{4.16}$$

Furthermore, note that (3.9) implies that there exists a real-valued function $R_{\theta_0}(\cdot)$ such that

$$\left|\frac{1}{v_\theta(x)} - \frac{1}{v_{\theta_0}(x)}\right| \le R_{\theta_0}(x)\,|\theta - \theta_0| \tag{4.17}$$

for all θ in some small open ball around $\theta_0$ and all x in the domain of the covariates.

Theorem 4.1. Assume the conditions of Theorem 3.1. Consider the estimator $\hat\Omega_{1/w}^{-1}\,\hat\Omega_{v/w^2}\,\hat\Omega_{1/w}^{-1}$, where $\hat\Omega_{1/w}$ is given in (4.13) and $\hat\Omega_{v/w^2}$ is given in (4.16). Then,

$$\hat\Omega_{1/w}^{-1}\,\hat\Omega_{v/w^2}\,\hat\Omega_{1/w}^{-1} \xrightarrow{P} \Omega_{1/w}^{-1}\,\Omega_{v/w^2}\,\Omega_{1/w}^{-1}\,, \tag{4.18}$$

provided the following moment conditions are satisfied:

$$E\,\bigl|x_{ij}\,x_{ik}\,x_{il}\,x_{im}\,/\,v_{\theta_0}^2(x_i)\bigr| < \infty\,, \tag{4.19}$$

$$E\,\bigl|x_{ij}\,x_{ik}\,x_{il}\,\varepsilon_i\,/\,v_{\theta_0}^2(x_i)\bigr| < \infty\,, \tag{4.20}$$

and

$$E\,\bigl[|x_i|^2\,\varepsilon_i^2\,R_{\theta_0}(x_i)\bigr] = E\,\bigl[|x_i|^2\,v(x_i)\,R_{\theta_0}(x_i)\bigr] < \infty\,. \tag{4.21}$$

4.3 Asymptotic Validity of the Inference Methods

It is easy to see that the estimator $\hat\Omega_{1/w}^{-1}\,\hat\Omega_{v/w^2}\,\hat\Omega_{1/w}^{-1}$ is none other than the HC0 estimator described in (4.5) and (4.6). Of course, having proven consistency of the HC0 estimator, consistency of the HC1 estimator follows immediately. For motivations to use, alternatively, the estimators HC2–HC4, see MacKinnon and White (1985) and Cribari-Neto (2004). Being able to consistently estimate the limiting covariance matrix of the WLS estimator results in validity of the corresponding inference methods detailed in Subsection 4.1.

As far as inference based on the ALS estimator is concerned, there are two cases to consider. In the first case, the limiting skedastic function $v_{\theta_0}(\cdot)$ is constant. In this case (see Remark 3.3),

$$\sqrt{n}\,\bigl(\hat\beta_{\text{OLS}} - \hat\beta_{\text{WLS}}\bigr) \xrightarrow{P} 0\,.$$

Therefore, there is a common limiting distribution for all three estimators — OLS, WLS, and ALS — and validity of the inference based on the ALS estimator follows from the validity of the inference based on the other two estimators. In the second case, the limiting skedastic function $v_{\theta_0}(\cdot)$ is not constant. In this case, the ALS estimator will be equal to the WLS estimator with probability tending to one, and the validity of the inference based on the ALS estimator follows from the validity of the inference based on the WLS estimator.
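The identification of the sandwich of Theorem 4.1 with HC0 applied to the transformed data can be checked numerically. The sketch below is our own construction (with an arbitrary, deliberately misspecified weight model, and with residuals taken from the WLS fit itself); it forms both versions and verifies that they coincide:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = rng.uniform(1, 4, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + np.sqrt(x) * rng.standard_normal(n)
v = x**1.5                      # an arbitrary (misspecified) weight model

# WLS = OLS on the transformed data.
Xt, yt = X / np.sqrt(v)[:, None], y / np.sqrt(v)
beta = np.linalg.solve(Xt.T @ Xt, Xt.T @ yt)
resid = y - X @ beta            # residuals on the original scale

# Sandwich of Theorem 4.1: Omega_{1/w}^{-1} Omega_{v/w^2} Omega_{1/w}^{-1}.
Omega_1w = Xt.T @ Xt / n                          # eq. (4.13)
Omega_vw2 = (X.T * (resid**2 / v**2)) @ X / n     # eq. (4.16)
inv = np.linalg.inv(Omega_1w)
sandwich = inv @ Omega_vw2 @ inv

# HC0 on the transformed data, written as in (4.2) with Psi = diag(resid_t^2).
resid_t = yt - Xt @ beta
XtX_inv = np.linalg.inv(Xt.T @ Xt)
hc0 = n**2 * XtX_inv @ ((Xt.T * resid_t**2) @ Xt / n) @ XtX_inv

assert np.allclose(sandwich, hc0)
```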

5 Monte Carlo Study

5.1 Basic Set-Up

We consider the simple regression model

$$y_i = \beta_1 + \beta_2 x_i + \varepsilon_i\,, \tag{5.1}$$

based on an i.i.d. sample $\{(y_i, x_i)\}_{i=1}^{n}$. In our design, $x_i \sim U[1,4]$ and

$$\varepsilon_i := \sqrt{v(x_i)}\cdot z_i\,, \tag{5.2}$$

where $z_i \sim N(0,1)$, and $z_i$ is independent of $x_i$. The sample size is $n \in \{20, 50, 100\}$. The parameter of interest is $\beta_2$.

When generating the data, we consider four parametric specifications for the skedastic function v(·). First, v(·) is a power function:

$$v(x) = x^\gamma\,, \quad\text{with } \gamma \in \{0, 1, 2, 4\}\,. \tag{5.3}$$

This specification includes conditional homoskedasticity for the choice γ = 0. Second, v(·) is a power of the log function:

$$v(x) = \bigl(\log(x)\bigr)^\gamma\,, \quad\text{with } \gamma \in \{2, 4\}\,. \tag{5.4}$$

Third, v(·) is the exponential of a second-degree polynomial:

$$v(x) = \exp(\gamma x + \gamma x^2)\,, \quad\text{with } \gamma \in \{0.1, 0.15\}\,. \tag{5.5}$$

Fourth, v(·) is a power of a step function:

$$v(x) = \begin{cases} 1^\gamma\,, & 1 \le x < 2\,,\\ 2^\gamma\,, & 2 \le x < 3\,,\\ 3^\gamma\,, & 3 \le x \le 4\,, \end{cases} \quad\text{with } \gamma \in \{1, 2\}\,. \tag{5.6}$$
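The four data-generating specifications, together with the sampling scheme (5.1)–(5.2), are straightforward to code directly. A sketch (the function names and the string labels for the four specifications are our own):

```python
import numpy as np

def skedastic(x, spec, gamma):
    """The four data-generating skedastic functions (5.3)-(5.6)."""
    if spec == "power":                   # eq. (5.3)
        return x**gamma
    if spec == "log-power":               # eq. (5.4)
        return np.log(x)**gamma
    if spec == "exp-poly":                # eq. (5.5)
        return np.exp(gamma * x + gamma * x**2)
    if spec == "step":                    # eq. (5.6)
        return np.where(x < 2, 1.0, np.where(x < 3, 2.0, 3.0))**gamma
    raise ValueError(spec)

def draw_sample(n, spec, gamma, rng):
    """One i.i.d. sample from model (5.1)-(5.2) with beta = (0, 0)."""
    x = rng.uniform(1, 4, n)
    eps = np.sqrt(skedastic(x, spec, gamma)) * rng.standard_normal(n)
    y = 0.0 + 0.0 * x + eps
    return x, y
```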

The four specifications are graphically displayed in Figures 1–4. Note that, for ease of interpretation, we actually plot $\sqrt{v(x)}$ as a function, since $\sqrt{v(x)}$ corresponds to the conditional standard deviation and thus lives on the same scale as x.

The parametric model used for estimating the skedastic function is

$$v_\theta(x) = \exp\bigl(\nu + \gamma\log|x|\bigr)\,, \quad\text{with } \theta := (\nu, \gamma)'\,. \tag{5.7}$$

The model assumed for the skedastic function is correctly specified in (5.3) (with ν = 0) and it is misspecified in (5.4)–(5.6). We estimate ν and γ from the data by the OLS regression

$$\log\bigl(\max(\delta^2, \hat\varepsilon_i^2)\bigr) = \nu + \gamma\log|x_i| + u_i\,, \tag{5.8}$$

where the $\hat\varepsilon_i$ are the OLS residuals of (5.1) and δ is chosen as δ = 0.1 throughout. The resulting estimator of (ν, γ) is denoted by $(\hat\nu, \hat\gamma)$. WLS is then based on

$$\hat v(x) := \exp\bigl(\hat\nu + \hat\gamma\log|x|\bigr)\,. \tag{5.9}$$
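Fitting the working model by the auxiliary regression (5.8) takes only a few lines. A sketch (function name and interface are ours):

```python
import numpy as np

def estimate_skedastic(x, resid, delta=0.1):
    """Fit the working model (5.7) by the OLS regression (5.8).

    Returns (nu_hat, gamma_hat, v_hat), where v_hat holds the fitted
    weights (5.9) evaluated at the sample points."""
    lhs = np.log(np.maximum(delta**2, resid**2))
    Z = np.column_stack([np.ones_like(x), np.log(np.abs(x))])
    nu_hat, gamma_hat = np.linalg.lstsq(Z, lhs, rcond=None)[0]
    v_hat = np.exp(nu_hat + gamma_hat * np.log(np.abs(x)))
    return nu_hat, gamma_hat, v_hat
```

The truncation at δ² guards against taking the log of residuals that are numerically close to zero.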

Remark 5.1 (Choice of the Parametric Specification $v_\theta(\cdot)$). As explained in Remark 3.5, under conditional homoskedasticity, WLS is asymptotically as efficient as OLS when using a specification of the form (3.6), but not necessarily when using the form (3.5) as chosen in this Monte Carlo study. The reasoning for preferring (3.5) in empirical work is two-fold.

First, the specification (5.7) is equivalent to

$$v_\theta(x) = \sigma^2\,|x|^\gamma\,, \quad\text{with } \nu = \log\sigma^2\,.$$

Therefore, estimating such a specification implicitly estimates the ‘best’ power of |x| for modeling the skedastic function.¹⁰ On the other hand, a specification of the form (3.6) would boil down to

$$v_\theta(x) = \nu + \gamma|x|\,.$$

This specification sets the power of |x| equal to one, which may be far from optimal. One approach would be to choose another power, but then which one? A reasonable solution here would be to choose the power based on some residual plots (where the residuals are obtained from a first-stage OLS fit); but, clearly, such a method is difficult to implement in a Monte Carlo study. Another approach would be to include more than one power on the right-hand side, such as both |x| and |x|². Again, it is not clear which powers are ‘best’, and including many powers will result in inflated estimation uncertainty.

Second, a specification of the form (3.6) does not guarantee positivity of the weights $v_{\hat\theta}(x_i)$ to be used for WLS. Of course, there are several solutions to this problem, such as restricting the estimate $\hat\theta$ in a suitable fashion. But, again, it is not necessarily clear which such solution should be used in practice.

As an alternative, we also experimented with the specification $v_\theta(x) = \nu + \gamma_1|x| + \gamma_2|x|^2$, which guarantees that WLS is asymptotically equivalent to OLS; see Remark 3.5.¹¹ Compared to the specification (5.7), the results for WLS and ALS were similar under conditional homoskedasticity but worse under conditional heteroskedasticity.

5.2 Estimation

We consider the following three estimators of $\beta_2$.

• OLS: The OLS estimator of $\beta_2$.

• WLS: The WLS estimator of $\beta_2$ based on v̂(·) given by (5.9).

• ALS: The ALS estimator of $\beta_2$ of Remark 3.1. The test for conditional heteroskedasticity used rejects the null of conditional homoskedasticity if $nR^2 > \chi^2_{1,0.9}$, where $R^2$ is the $R^2$-statistic from the OLS regression (5.8) and $\chi^2_{1,0.9}$ is the 0.9 quantile of the chi-squared distribution with one degree of freedom.

The performance measure is the empirical mean squared error (eMSE). For a generic estimator $\tilde\beta_2$, it is defined as

$$\text{eMSE}(\tilde\beta_2) := \frac{1}{B}\sum_{b=1}^{B}(\tilde\beta_{2,b} - \beta_2)^2\,,$$

¹⁰ In particular, this specification is the one proposed by Judge et al. (1988, Section 9.3) for the case of a single covariate (in addition to the constant, potentially).

¹¹ This specification might result in some non-positive weights $v_{\hat\theta}(x_i)$; such weights were then all set equal to a small, positive number.

where B denotes the number of Monte Carlo repetitions and $\tilde\beta_{2,b}$ denotes the outcome of $\tilde\beta_2$ in the bth repetition. The simulations are based on B = 50,000 Monte Carlo repetitions. Without loss of generality, we set $(\beta_1, \beta_2) = (0, 0)$ when generating the data.

The results are presented in Tables 1–2 and can be summarized as follows.

• As expected, in the case of conditional homoskedasticity (that is, in specification (5.3) with γ = 0), OLS is more efficient than WLS. But the differences are rather small and decreasing in n. In the worst case, n = 20, the ratio of the two eMSE's (WLS/OLS) is only 1.12.

• When there is conditional heteroskedasticity, WLS is generally more efficient than OLS. Only when the degree of conditional heteroskedasticity is low and the sample is small (n = 20) can OLS be more efficient, though the differences are always small.

• When the degree of conditional heteroskedasticity is high and the sample size is large, the differences between OLS and WLS can be vast; namely, the ratio (WLS/OLS) can be in the neighborhood of 0.3.

• ALS sacrifices some of the efficiency gains of WLS under conditional heteroskedasticity, especially when the sample size is small. On the other hand, it is closer to the performance of OLS under conditional homoskedasticity.

• The previous statements hold true even when the model used to estimate the skedastic function is misspecified.

In sum, using WLS offers possibilities of vast improvements over OLS in terms of mean squared error while incurring only modest downside risk. ALS constitutes an attractive compromise between WLS and OLS.

Remark 5.2 (Nonnormal Error Terms). To save space, we only report results when the distribution of the $z_i$ in (5.2) is standard normal. However, we carried out additional simulations changing this distribution to a t-distribution with five degrees of freedom (scaled to have variance one) and a chi-squared distribution with five degrees of freedom (centered and scaled to have variance one). In both cases, the numbers for the two eMSE's increase compared to the normal distribution, but their ratios remain virtually unchanged. Therefore, the preceding summary statements appear ‘robust’ to nonnormality of the error terms.
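The pretest that drives ALS (reject conditional homoskedasticity when n·R² from the auxiliary regression (5.8) exceeds the 0.9 quantile of the chi-squared distribution with one degree of freedom) can be sketched as follows. The function name and interface are ours, and SciPy supplies the quantile:

```python
import numpy as np
from scipy import stats

def heteroskedasticity_pretest(x, resid, delta=0.1, level=0.9):
    """Pretest behind ALS: True means 'reject homoskedasticity'.

    Computes n * R^2 from the auxiliary regression (5.8) and compares
    it to the chi-squared(1) quantile. ALS then uses WLS if the pretest
    rejects and OLS otherwise."""
    lhs = np.log(np.maximum(delta**2, resid**2))
    Z = np.column_stack([np.ones_like(x), np.log(np.abs(x))])
    coef = np.linalg.lstsq(Z, lhs, rcond=None)[0]
    fitted = Z @ coef
    r2 = 1.0 - np.sum((lhs - fitted)**2) / np.sum((lhs - lhs.mean())**2)
    return len(x) * r2 > stats.chi2.ppf(level, 1)
```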

Remark 5.3 (Failure of Assumption (A6')). The scheme (5.2) to generate the error terms $\varepsilon_i$ satisfies assumption (A6') of Remark 3.5. It is therefore mathematically guaranteed that, even with the specification (5.7), WLS and ALS are asymptotically as efficient as OLS under conditional homoskedasticity. To study the impact of the failure of (A6') on the finite-sample performance under conditional homoskedasticity, we also consider error terms of the following form in specification (5.3) with γ = 0:

$$\varepsilon_i := \begin{cases} z_{i,1} & \text{if } 1 \le x_i < 2\,, \text{ where } z_{i,1} \sim N(0,1)\,,\\ z_{i,2} & \text{if } 2 \le x_i < 3\,, \text{ where } z_{i,2} \sim t_5^{*}\,, \text{ and}\\ z_{i,3} & \text{if } 3 \le x_i \le 4\,, \text{ where } z_{i,3} \sim \chi_5^{2,*}\,. \end{cases} \tag{5.10}$$

Here, $t_5^{*}$ denotes a t-distribution with five degrees of freedom (scaled to have variance one) and $\chi_5^{2,*}$ denotes a chi-squared distribution with five degrees of freedom (centered and scaled to have variance one). The results are presented at the bottom of Table 1. It can be seen that even if assumption (A6') does not hold, the efficiency loss of WLS and ALS compared to OLS under conditional homoskedasticity may still tend to zero as the sample size tends to infinity.

5.3 Inference

We next study the finite-sample performance of the following six confidence intervals for $\beta_2$.

• OLS-HC: The interval (4.4).

• OLS-Max: The interval (4.8).

• WLS-HC: The interval (4.11).

• WLS-Max: The interval (4.12).

• ALS-HC: The ALS inference of Remark 4.1 based on either interval (4.4) or interval (4.11).

• ALS-Max: The ALS inference of Remark 4.1 based on either interval (4.8) or interval (4.12).

There are two performance measures: first, the empirical coverage probability of a confidence interval with nominal confidence level 1 − α = 95%; and second, the ratio of the average length of a confidence interval over the average length of OLS-HC. (By construction, this ratio is independent of the nominal level.) Again, the simulations are based on B = 50,000 Monte Carlo replications. Again, without loss of generality, we set $(\beta_1, \beta_2) = (0, 0)$ when generating the data.

The results are presented in Tables 3–4 and can be summarized as follows.

• The coverage properties of all six intervals are at least satisfactory. Nevertheless, the HC intervals can undercover somewhat for small sample sizes. This problem is mitigated by using the Max intervals instead, at the expense of increasing the average lengths somewhat. Overall, the three HC intervals have comparable coverage to each other, and the three Max intervals have comparable coverage to each other as well.

• Although there are only minor differences in terms of coverage (within the HC type and the Max type, respectively), there can be major differences in terms of average length. On average, WLS-HC and ALS-HC are never longer than OLS-HC, but they can be dramatically shorter in the presence of strong conditional heteroskedasticity; in extreme cases they are only about half as long. The findings are the same when comparing WLS-Max and ALS-Max to OLS-Max.

• The previous statements hold true even when the model used to estimate the skedastic function is misspecified.

In sum, confidence intervals based on WLS or ALS offer possibilities of vast improvements over OLS in terms of expected length. This benefit does not come at any noticeable expense in terms of inferior coverage properties.

Remark 5.4 (Nonnormal Error Terms). To save space, we only report results when the distribution of the $z_i$ in (5.2) is standard normal. However, we carried out additional simulations changing this distribution to a t-distribution with five degrees of freedom (scaled to have variance one) and a chi-squared distribution with five degrees of freedom (centered and scaled to have variance one). For the case of the t-distribution, empirical coverage probabilities generally slightly increase; for the case of the chi-squared distribution, they decrease and can fall below 92% for the HC confidence intervals and below 93% for the Max confidence intervals. Nevertheless, for both cases, OLS-HC continues to have comparable coverage performance to WLS-HC, and OLS-Max continues to have comparable coverage performance to WLS-Max. Furthermore, in both cases, the ratios of average lengths remain virtually unchanged compared to the normal distribution. Therefore, the preceding summary statements appear ‘robust’ to nonnormality of the error terms.

Remark 5.5 (Hypothesis Tests). By the well-understood duality between confidence intervals and hypothesis tests, we can gain the following insights. Hypothesis tests on $\beta_k$ based on WLS or ALS offer possibilities of vast improvements over hypothesis tests based on OLS in terms of power while incurring basically no downside risk. This benefit does not come at any noticeable expense in terms of elevated null rejection probabilities.

6 Variations and Extensions

We briefly discuss a few natural variations and extensions to the proposed methodology.

• In this paper, we have focused on standard inference based on asymptotic normality of an estimator coupled with an estimate of the limiting covariance matrix. An anticipated criticism is that, by trying to estimate the true skedastic function, increased error in finite samples may result. But increased efficiency results in shorter confidence intervals. If coverage error were too high in finite samples (though our simulations indicate adequate performance), the conclusion should not be to abandon weighted least squares, but to consider alternative inference methods that offer improved higher-order asymptotic accuracy (and thus translate to improved finite-sample performance). For example, one can consider bootstrap methods. In our setting, such inference would correspond to using the WLS estimator in combination with either the pairs bootstrap (e.g., see Efron and Tibshirani, 1993, Section 9.5) or the wild bootstrap (e.g., see Davison and Hinkley, 1997, Section 6.2), since these two bootstrap methods are appropriate for regression models that allow for conditional heteroskedasticity; a recent comparison for OLS estimation is provided in Flachaire (2005). As an alternative to bootstrapping, one can consider higher-order accuracy by using Edgeworth expansions, as studied in Hausman and Palmer (2012). It is beyond the scope of this paper to establish the asymptotic validity of such schemes applied to WLS and to study their finite-sample performance. Consequently, we leave such topics for future research.

• In this paper, we have focused on the case of a stochastic design matrix X, which is the relevant case for economic applications. Alternatively, it would be possible to handle the case of a nonstochastic design matrix X, assuming certain regularity conditions on the asymptotic behavior of X, such as in Amemiya (1985, Section 3.5).

• Our primary goal in the present work is to offer enough evidence to change the current practice by showing that the improvements offered by weighted least squares are nontrivial. A more ambitious goal would be to estimate the skedastic function v(·) in a nonparametric fashion. For example, one could use a sieve of parametric models by allowing the number of covariates used in the modeling of v(·) to increase with n. Of course, nonparametric smoothing techniques could be used as well. The hope would be further gains in efficiency, which ought to be possible.

• Finally, it would be of interest to extend the proposed methodology to the context of instrumental variables (IV) regression. HC inference of the HC0–HC1 type based on two-stage least squares (2SLS) estimation is already standard; for example, see Hayashi (2000, Section 3.5). On the other hand, improved HC inference of the HC2–HC3 type is still in its infancy; for example, see Steinhauer and Würgler (2010). To the best of our knowledge, weighted two-stage least squares (W2SLS) estimation has not been considered at all yet in the context of IV regressions. Therefore, this topic, too, is beyond the scope of the paper.
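As a minimal sketch of the pairs bootstrap mentioned in the first bullet, applied to WLS: resample (x_i, y_i) pairs with replacement, re-fit the weights on each resample, and recompute the WLS estimate. This is our own illustration; the interface, including the weight-fitting callback, is an assumption, not from the paper:

```python
import numpy as np

def pairs_bootstrap_se(X, y, v_hat_fn, n_boot=500, seed=0):
    """Pairs-bootstrap standard errors for the WLS estimator.

    v_hat_fn maps a resampled (X, y) to positive estimated weights,
    so the weight estimation is repeated inside each bootstrap draw."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                # resample pairs
        Xb, yb = X[idx], y[idx]
        w = np.sqrt(v_hat_fn(Xb, yb))
        Xt, yt = Xb / w[:, None], yb / w           # transform as in (4.9)
        draws.append(np.linalg.lstsq(Xt, yt, rcond=None)[0])
    return np.std(np.array(draws), axis=0, ddof=1)
```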


7 Conclusion

As the amount of data collected is ever growing, the statistical toolbox of applied researchers is ever expanding. Nevertheless, it can be safely assumed that linear models will remain a part of our toolbox for quite some time to come. A textbook analysis of linear models always starts with an assumption of conditional homoskedasticity, that is, an assumption that the conditional variance of the error term is constant. Under such an assumption, one should estimate model parameters by ordinary least squares (OLS), as doing so is efficient. Unfortunately, the real world is plagued by conditional heteroskedasticity, since the conditional variance often depends on the explanatory variables. In such a setting, OLS is no longer efficient. If the true functional form of the conditional variance (that is, the skedastic function) were known, efficient estimators of model parameters could be constructed by properly weighting the data (using the inverse of the square root of the skedastic function) and running OLS on the weighted data set. Of course, the true skedastic function is rarely known. In the olden days, applied researchers resorted to weighting the data based on an estimate of the skedastic function, resulting in the weighted least squares (WLS) estimator.

Under conditional heteroskedasticity, textbook inference based on the OLS estimator can be misleading. But the same is true for textbook inference based on the WLS estimator, unless the model for estimating the skedastic function is correctly specified. These shortcomings have motivated the development of heteroskedasticity consistent (HC) standard errors for the OLS estimator. Such standard errors ensure the (asymptotic) validity of inference based on the OLS estimator in the presence of conditional heteroskedasticity of unknown form. Over time, applied researchers have by and large adopted this practice, causing WLS to become extinct for all practical purposes.
In this paper, we promote the use of HC standard errors for the WLS estimator instead. This practice ensures (asymptotic) validity of inference based on the WLS estimator even when the model for estimating the skedastic function is misspecified. The benefits of our proposal in the presence of noticeable conditional heteroskedasticity are two-fold. First, using WLS generally results in more efficient estimation. Second, HC inference based on WLS has more attractive properties in the sense that confidence intervals for model parameters tend to be shorter and hypothesis tests tend to be more powerful. The price to pay is some efficiency loss compared to OLS in small samples in the textbook setting of conditional homoskedasticity.

As a bonus, we propose a new adaptive least squares (ALS) estimator, where a pretest for conditional homoskedasticity is used in order to decide whether to weight the data (that is, whether to use WLS) or not (that is, whether to use OLS). Crucially, in either case, one uses HC standard errors so that (asymptotically) valid inference is ensured. No longer having to live in fear of invalid inference, applied researchers should rediscover their long-lost friend, the WLS estimator, or its new companion, the ALS estimator: the benefits over their current company, the OLS estimator, can be substantial.

References

Amemiya, T. (1985). Advanced Econometrics. Harvard University Press, Cambridge, MA.

Angrist, J. D. and Pischke, J.-S. (2009). Mostly Harmless Econometrics. Princeton University Press, Princeton, New Jersey.

Angrist, J. D. and Pischke, J.-S. (2010). The credibility revolution in empirical economics: How better research design is taking the con out of econometrics. Discussion Paper Series No. 4800, IZA.

Breusch, T. and Pagan, A. (1979). A simple test for heteroscedasticity and random coefficient variation. Econometrica, 47:1287–1294.

Cribari-Neto, F. (2004). Asymptotic inference under heteroskedasticity of unknown form. Computational Statistics & Data Analysis, 45:215–233.

Davison, A. C. and Hinkley, D. V. (1997). Bootstrap Methods and their Application. Cambridge University Press, Cambridge.

Efron, B. and Tibshirani, R. J. (1993). An Introduction to the Bootstrap. Chapman & Hall, New York.

Eicker, F. (1963). Asymptotic normality and consistency of the least squares estimator for families of linear regressions. Annals of Mathematical Statistics, 34:447–456.

Eicker, F. (1967). Limit theorems for regressions with unequal and dependent errors. In LeCam, L. M. and Neyman, J., editors, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 59–82, Berkeley, CA. University of California Press.

Flachaire, E. (2005). Bootstrapping heteroskedastic regression models: wild bootstrap vs. pairs bootstrap. Computational Statistics & Data Analysis, 49:361–377.

Harvey, A. C. (1976). Estimating regression models with multiplicative heteroscedasticity. Econometrica, 44:461–465.

Hausman, J. and Palmer, C. (2012). Heteroskedasticity-robust inference in finite samples. Economics Letters, 116:232–235.

Hayashi, F. (2000). Econometrics. Princeton University Press, Princeton, New Jersey.

Huber, P. (1967). The behavior of maximum likelihood estimation under nonstandard conditions. In LeCam, L. M. and Neyman, J., editors, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 221–233, Berkeley, CA. University of California Press.

Judge, G. G., Hill, R. C., Griffiths, W. E., Lütkepohl, H., and Lee, T.-C. (1988). Introduction to the Theory and Practice of Econometrics. John Wiley & Sons, New York, second edition.

Kauermann, G. and Carroll, R. J. (2001). A note on the efficiency of sandwich covariance matrix estimation. Journal of the American Statistical Association, 96(456):1387–1396.

Koenker, R. (1981). A note on studentizing a test for heteroscedasticity. Journal of Econometrics, 17:107–112.

Koenker, R. and Bassett, G. (1982). Robust tests for heteroscedasticity based on regression quantiles. Econometrica, 50:43–61.

Leamer, E. E. (2010). Tantalus on the road to asymptopia. Journal of Economic Perspectives, 24(2):31–46.

Long, J. S. and Ervin, L. H. (2000). Using heteroskedasticity consistent standard errors in the linear regression model. The American Statistician, 54:217–224.

MacKinnon, J. G. and White, H. L. (1985). Some heteroskedasticity-consistent covariance matrix estimators with improved finite sample properties. Journal of Econometrics, 29:305–325.

Steinhauer, A. and Würgler, T. (2010). Leverage and covariance matrix estimation in finite-sample IV regressions. Working Paper 521, IEW, University of Zurich.

White, H. L. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48:817–838.

Zeileis, A. (2004). Econometric computing with HC and HAC covariance matrix estimators. Journal of Statistical Software, 11(10):1–17.

27

A  More General Inference Problems

A.1  Inference for a Linear Combination

Generalize the parameter of interest from a component β_k to a linear combination a'β, where a ∈ R^K is a vector specifying the linear combination of interest.

The OLS estimator of a'β is a'β̂_OLS. A HC standard error is given by

\[
\mathrm{SE}_{\mathrm{HC}}(a'\hat\beta_{\mathrm{OLS}}) := \sqrt{\tfrac{1}{n}\, a'\, \widehat{\mathrm{Avar}}_{\mathrm{HC}}(\hat\beta_{\mathrm{OLS}})\, a}\;,
\]

where \(\widehat{\mathrm{Avar}}_{\mathrm{HC}}(\hat\beta_{\mathrm{OLS}})\) is as described in Subsection 4.1. The conventional standard error is given by

\[
\mathrm{SE}_{\mathrm{CO}}(a'\hat\beta_{\mathrm{OLS}}) := \sqrt{s^2\, a'(X'X)^{-1}a}
\quad\text{with}\quad
s^2 := \frac{1}{n-K}\sum_{i=1}^n \hat\varepsilon_i^2
\quad\text{and}\quad
\hat\varepsilon_i := y_i - x_i'\hat\beta_{\mathrm{OLS}}\;.
\]

The WLS estimator of a'β is a'β̂_WLS. A HC standard error is given by

\[
\mathrm{SE}_{\mathrm{HC}}(a'\hat\beta_{\mathrm{WLS}}) := \sqrt{\tfrac{1}{n}\, a'\, \widehat{\mathrm{Avar}}_{\mathrm{HC}}(\hat\beta_{\mathrm{WLS}})\, a}\;,
\]

where \(\widehat{\mathrm{Avar}}_{\mathrm{HC}}(\hat\beta_{\mathrm{WLS}})\) is as described in Subsection 4.1. The conventional standard error is given by

\[
\mathrm{SE}_{\mathrm{CO}}(a'\hat\beta_{\mathrm{WLS}}) := \sqrt{\tilde s^2\, a'(\tilde X'\tilde X)^{-1}a}
\quad\text{with}\quad
\tilde s^2 := \frac{1}{n-K}\sum_{i=1}^n \tilde\varepsilon_i^2
\quad\text{and}\quad
\tilde\varepsilon_i := \tilde y_i - \tilde x_i'\hat\beta_{\mathrm{WLS}}\;.
\]

From here on, the extension of the inference methods for β_k discussed in Subsection 4.1 to inference methods for a'β is clear.
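The HC standard error above can be sketched in a few lines of code. This is a minimal illustration only: it uses the plain HC0 sandwich for \(\widehat{\mathrm{Avar}}_{\mathrm{HC}}\), whereas Subsection 4.1 may prescribe a different HC variant (e.g., HC3); the function name is our own.

```python
import numpy as np

def hc_se_linear_combination(X, y, a):
    """Sketch: HC standard error for a'beta_hat_OLS, alongside the
    conventional one. Uses the HC0 sandwich as a stand-in for the
    Avar_HC estimator described in Subsection 4.1."""
    n, K = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y                  # OLS estimate
    resid = y - X @ beta_hat                      # OLS residuals
    meat = (X * resid[:, None] ** 2).T @ X        # sum_i e_i^2 x_i x_i'
    avar = n * XtX_inv @ meat @ XtX_inv           # Avar_HC of sqrt(n)(beta_hat - beta)
    se_hc = np.sqrt(a @ avar @ a / n)             # SE_HC(a'beta_hat)
    s2 = resid @ resid / (n - K)                  # conventional s^2
    se_co = np.sqrt(s2 * a @ XtX_inv @ a)         # SE_CO(a'beta_hat)
    return beta_hat, se_hc, se_co
```

Under conditional homoskedasticity the two standard errors should be close; under heteroskedasticity they can differ markedly.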

A.2  Testing a Set of Linear Restrictions

Consider testing a set of linear restrictions on β of the form H_0: Rβ = r, where R ∈ R^{p×K} is a matrix of full row rank specifying p ≤ K linear combinations of interest and r ∈ R^p is a vector specifying their respective values under the null. A HC Wald statistic based on the OLS estimator is given by

\[
W_{\mathrm{HC}}(\hat\beta_{\mathrm{OLS}}) := \frac{n}{p}\,(R\hat\beta_{\mathrm{OLS}} - r)'\bigl[R\,\widehat{\mathrm{Avar}}_{\mathrm{HC}}(\hat\beta_{\mathrm{OLS}})\,R'\bigr]^{-1}(R\hat\beta_{\mathrm{OLS}} - r)
\]

and its conventional counterpart is given by

\[
W_{\mathrm{CO}}(\hat\beta_{\mathrm{OLS}}) := \frac{n}{p\,s^2}\,(R\hat\beta_{\mathrm{OLS}} - r)'\bigl[R(X'X)^{-1}R'\bigr]^{-1}(R\hat\beta_{\mathrm{OLS}} - r)\;.
\]

A HC Wald statistic based on the WLS estimator is given by

\[
W_{\mathrm{HC}}(\hat\beta_{\mathrm{WLS}}) := \frac{n}{p}\,(R\hat\beta_{\mathrm{WLS}} - r)'\bigl[R\,\widehat{\mathrm{Avar}}_{\mathrm{HC}}(\hat\beta_{\mathrm{WLS}})\,R'\bigr]^{-1}(R\hat\beta_{\mathrm{WLS}} - r)
\]

and its conventional counterpart is given by

\[
W_{\mathrm{CO}}(\hat\beta_{\mathrm{WLS}}) := \frac{n}{p\,\tilde s^2}\,(R\hat\beta_{\mathrm{WLS}} - r)'\bigl[R(\tilde X'\tilde X)^{-1}R'\bigr]^{-1}(R\hat\beta_{\mathrm{WLS}} - r)\;.
\]

For a generic Wald statistic W, the corresponding p-value is obtained as

\[
\mathrm{PV}(W) := \mathrm{Prob}\{F \ge W\}\,, \quad\text{where } F \sim F_{p,n}\,.
\]

Here, F_{p,n} denotes the F distribution with p and n degrees of freedom. HC inference based on the OLS estimator reports PV(W_HC(β̂_OLS)), while more conservative inference based on the OLS estimator reports

\[
\mathrm{PV}_{\max}(\hat\beta_{\mathrm{OLS}}) := \max\bigl\{\mathrm{PV}(W_{\mathrm{HC}}(\hat\beta_{\mathrm{OLS}})),\, \mathrm{PV}(W_{\mathrm{CO}}(\hat\beta_{\mathrm{OLS}}))\bigr\}\;.
\]

HC inference based on the WLS estimator reports PV(W_HC(β̂_WLS)), while more conservative inference based on the WLS estimator reports

\[
\mathrm{PV}_{\max}(\hat\beta_{\mathrm{WLS}}) := \max\bigl\{\mathrm{PV}(W_{\mathrm{HC}}(\hat\beta_{\mathrm{WLS}})),\, \mathrm{PV}(W_{\mathrm{CO}}(\hat\beta_{\mathrm{WLS}}))\bigr\}\;.
\]
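The HC Wald test just described can be sketched as follows. As in the previous sketch, the HC0 sandwich stands in for the \(\widehat{\mathrm{Avar}}_{\mathrm{HC}}\) of Subsection 4.1, and the function name is our own.

```python
import numpy as np
from scipy.stats import f as f_dist

def hc_wald_pvalue(X, y, R, r):
    """Sketch: HC Wald statistic for H0: R beta = r and the p-value
    PV(W) = Prob{F_{p,n} >= W} used above. HC0 sandwich assumed."""
    n, K = X.shape
    p = R.shape[0]
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    meat = (X * resid[:, None] ** 2).T @ X
    avar = n * XtX_inv @ meat @ XtX_inv           # Avar_HC of sqrt(n)(beta_hat - beta)
    diff = R @ beta_hat - r
    W = (n / p) * diff @ np.linalg.solve(R @ avar @ R.T, diff)
    return W, f_dist.sf(W, p, n)                  # survival function of F_{p,n}
```

For the conservative PV_max rule one would compute the conventional Wald p-value analogously and report the maximum of the two.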

B  Mathematical Results

B.1  Proofs

Proof of Lemma 3.1. Replacing y by Xβ + ε in the definition of β̂_W in (3.8) yields

\[
\sqrt{n}\,(\hat\beta_W - \beta) = \Bigl(\frac{X'W^{-1}X}{n}\Bigr)^{-1}\frac{X'W^{-1}\varepsilon}{\sqrt{n}}\;. \tag{B.1}
\]

By Slutsky's theorem, the proof consists in showing

\[
\frac{X'W^{-1}X}{n} \overset{P}{\longrightarrow} \Omega_{1/w} \tag{B.2}
\]

and

\[
\frac{X'W^{-1}\varepsilon}{n^{1/2}} \overset{d}{\longrightarrow} N(0,\, \Omega_{v/w^2})\;. \tag{B.3}
\]

To show (B.2), note that its left-hand side has (j, k) element given by

\[
\frac{1}{n}\sum_{i=1}^n \frac{x_{ij}x_{ik}}{w(x_i)} \overset{P}{\longrightarrow} E\Bigl(\frac{x_{1j}x_{1k}}{w(x_1)}\Bigr)
\]

by the law of large numbers. To show (B.3), first note that E(X'W^{−1}ε) = E[X'W^{−1}E(ε|X)] = 0 by Assumption (A3). Furthermore, X'W^{−1}ε is a sum of i.i.d. random vectors x_i · ε_i/w(x_i) with common covariance matrix having (j, k) element

\[
\mathrm{Cov}\Bigl(\frac{x_{1j}\varepsilon_1}{w(x_1)},\, \frac{x_{1k}\varepsilon_1}{w(x_1)}\Bigr)
= E\Bigl(\frac{x_{1j}x_{1k}\,\varepsilon_1^2}{w^2(x_1)}\Bigr)
= E\Bigl(\frac{x_{1j}x_{1k}}{w^2(x_1)}\,E(\varepsilon_1^2|x_1)\Bigr)
= E\Bigl(\frac{x_{1j}x_{1k}\,v(x_1)}{w^2(x_1)}\Bigr)\;.
\]

Thus, each vector x_i · ε_i/w(x_i) has covariance matrix Ω_{v/w²}. Therefore, by the multivariate central limit theorem, (B.3) holds.

Proof of Theorem 3.1. Let W be the diagonal matrix with (i, i) element v_{θ0}(x_i). Similarly to (B.1), we have

\[
\sqrt{n}\,(\hat\beta_{\mathrm{WLS}} - \beta) = \Bigl(\frac{X'\hat W^{-1}X}{n}\Bigr)^{-1}\frac{X'\hat W^{-1}\varepsilon}{\sqrt{n}}\;. \tag{B.4}
\]

First, we show that

\[
\frac{X'\hat W^{-1}\varepsilon}{\sqrt{n}} - \frac{X'W^{-1}\varepsilon}{\sqrt{n}} \overset{P}{\longrightarrow} 0\;. \tag{B.5}
\]

Even though the assumptions imply that Ŵ and W are close, one needs to exercise some care, as the dimension of these matrices increases with the sample size n. The left-hand side of (B.5) is

\[
\frac{X'(\hat W^{-1} - W^{-1})\varepsilon}{\sqrt{n}}
= n^{-1/2}\sum_{i=1}^n x_i\,\varepsilon_i\Bigl(\frac{1}{v_{\hat\theta}(x_i)} - \frac{1}{v_{\theta_0}(x_i)}\Bigr)
= A + B\;,
\]

where

\[
A := n^{-1/2}\sum_{i=1}^n x_i\,\varepsilon_i\, r_{\theta_0}(x_i)(\hat\theta - \theta_0)\,, \tag{B.6}
\]

and, with probability tending to one, B is a vector with jth component satisfying

\[
|B_j| \le \frac{1}{2}\, n^{-1/2}\,|\hat\theta - \theta_0|^2 \sum_{i=1}^n |x_{ij}\varepsilon_i\, s_{\theta_0}(x_i)|\;. \tag{B.7}
\]

The jth component of A is

\[
n^{-1/2}\sum_{i=1}^n x_{ij}\varepsilon_i \sum_{l=1}^K r_{\theta_0,l}(x_i)(\hat\theta_l - \theta_l)\;.
\]

So to show A = o_P(1), it suffices to show that, for each j and l,

\[
(\hat\theta_l - \theta_l)\, n^{-1/2}\sum_{i=1}^n x_{ij}\varepsilon_i\, r_{\theta_0,l}(x_i) \overset{P}{\longrightarrow} 0\;.
\]

The first factor (θ̂_l − θ_l) = o_P(1), and so it suffices to show that

\[
n^{-1/2}\sum_{i=1}^n x_{ij}\varepsilon_i\, r_{\theta_0,l}(x_i) = O_P(1)\;.
\]

The terms in this sum are i.i.d. random variables with mean zero and finite second moments, where the finite second moments follow from (3.11), and so this normalized sum converges in distribution to a normal distribution. Therefore, A = o_P(1).

To show |B| = o_P(1), write the right-hand side of (B.7) as

\[
\frac{1}{2}\,\sqrt{n}\,|\hat\theta - \theta_0|^2\cdot \frac{1}{n}\sum_{i=1}^n |x_{ij}\varepsilon_i\, s_{\theta_0}(x_i)|\;. \tag{B.8}
\]

The first factor √n|θ̂ − θ0|² = o_P(1) by assumption, while the average of the i.i.d. variables |x_{ij}ε_i s_{θ0}(x_i)| obeys the law of large numbers by the moment assumption (3.12). Thus, |B| = o_P(1) also, and (B.5) holds.

Next, we show that

\[
\frac{X'\hat W^{-1}X}{n} - \frac{X'W^{-1}X}{n} \overset{P}{\longrightarrow} 0\;. \tag{B.9}
\]

To this end, simply write the left-hand side of (B.9) as

\[
\frac{X'(\hat W^{-1} - W^{-1})X}{n} = \frac{1}{n}\sum_i x_i x_i'\Bigl(\frac{1}{v_{\hat\theta}(x_i)} - \frac{1}{v_{\theta_0}(x_i)}\Bigr)\,,
\]

and then use the differentiability assumption as above (which is even easier now, because one only needs to invoke the law of large numbers and not the central limit theorem). It now also follows from the limit (B.2), and the fact that the limiting matrix there is positive definite, that

\[
\Bigl(\frac{X'\hat W^{-1}X}{n}\Bigr)^{-1} - \Bigl(\frac{X'W^{-1}X}{n}\Bigr)^{-1} \overset{P}{\longrightarrow} 0\;. \tag{B.10}
\]

Then, the convergences (B.5) and (B.10) are enough to show that the right-hand side of (B.4) satisfies

\[
\Bigl(\frac{X'\hat W^{-1}X}{n}\Bigr)^{-1}\frac{X'\hat W^{-1}\varepsilon}{\sqrt{n}}
- \Bigl(\frac{X'W^{-1}X}{n}\Bigr)^{-1}\frac{X'W^{-1}\varepsilon}{\sqrt{n}} \overset{P}{\longrightarrow} 0\,,
\]

just by making simple use of the equality âb̂ − ab = â(b̂ − b) + (â − a)b. Finally, Slutsky's theorem yields the result.

Proof of Theorem 4.1. First, the estimator (4.13) is consistent because of (B.2) and (B.9). To analyze (4.16), we first consider the behavior of this estimator with v_{θ̂}(·) replaced by the fixed v_{θ0}(·), but retaining the residuals (instead of the true error terms). From (4.15) it follows that

\[
\hat\varepsilon_i^2 = \varepsilon_i^2 - 2(\hat\beta - \beta)'x_i\cdot\varepsilon_i + (\hat\beta - \beta)'x_i\, x_i'(\hat\beta - \beta)\;.
\]

Then, multiplying the last expression by x_i x_i'/v²_{θ0}(x_i) and averaging over i yields

\[
\frac{1}{n}\sum_{i=1}^n \frac{\hat\varepsilon_i^2}{v_{\theta_0}^2(x_i)}\, x_i x_i'
- \frac{1}{n}\sum_{i=1}^n \frac{\varepsilon_i^2}{v_{\theta_0}^2(x_i)}\, x_i x_i'
= C_n + D_n\,, \tag{B.11}
\]

where

\[
C_n := -\frac{2}{n}\sum_{i=1}^n x_i x_i'\cdot (\hat\beta - \beta)'x_i\cdot\varepsilon_i\,/\,v_{\theta_0}^2(x_i)
\]

and

\[
D_n := \frac{1}{n}\sum_{i=1}^n x_i x_i'\cdot (\hat\beta - \beta)'x_i\, x_i'(\hat\beta - \beta)\,/\,v_{\theta_0}^2(x_i)\;.
\]

The first goal is to show that both C_n and D_n tend to zero in probability. The (j, k) term of the matrix D_n is given by

\[
D_n(j,k) = \frac{1}{n}\sum_{i=1}^n x_{ij}x_{ik}\sum_{l=1}^K\sum_{m=1}^K (\hat\beta_l - \beta_l)x_{il}\,(\hat\beta_m - \beta_m)x_{im}\,/\,v_{\theta_0}^2(x_i)\;.
\]

Thus, it suffices to show that, for each j, k, l, and m,

\[
(\hat\beta_l - \beta_l)(\hat\beta_m - \beta_m)\,\frac{1}{n}\sum_{i=1}^n x_{ij}x_{ik}x_{il}x_{im}\,/\,v_{\theta_0}^2(x_i) \overset{P}{\longrightarrow} 0\;. \tag{B.12}
\]

But (β̂_l − β_l)(β̂_m − β_m) →^P 0, and the average on the right-hand side of (B.12) satisfies the law of large numbers under the assumption of the "fourth moment condition" (4.19) and thus tends to something finite in probability. Therefore, (B.12) holds and so D_n →^P 0.

Next, we show C_n →^P 0. Note that (−1/2) times the (j, k) term of C_n is given by

\[
\frac{1}{n}\sum_{i=1}^n x_{ij}x_{ik}\sum_{l=1}^K (\hat\beta_l - \beta_l)x_{il}\,\varepsilon_i\,/\,v_{\theta_0}^2(x_i)\;.
\]

So, it suffices to show that, for each j, k, and l,

\[
(\hat\beta_l - \beta_l)\,\frac{1}{n}\sum_{i=1}^n x_{ij}x_{ik}x_{il}\varepsilon_i\,/\,v_{\theta_0}^2(x_i) \overset{P}{\longrightarrow} 0\;. \tag{B.13}
\]

But (β̂_l − β_l) →^P 0, and the average on the right-hand side of (B.13) satisfies the law of large numbers under the assumption of the "fourth moment condition" (4.20) and thus tends to something finite in probability. Therefore, C_n →^P 0.

In summary, what we have shown so far is that (B.11) tends to zero in probability. Thus, the proof of consistency will be complete if we can show that also

\[
\frac{1}{n}\sum_{i=1}^n \frac{\varepsilon_i^2}{v_{\hat\theta}^2(x_i)}\, x_i x_i'
- \frac{1}{n}\sum_{i=1}^n \frac{\varepsilon_i^2}{v_{\theta_0}^2(x_i)}\, x_i x_i' \overset{P}{\longrightarrow} 0\;. \tag{B.14}
\]

By property (4.17) of the function R_{θ0}(·), the left-hand side of (B.14) has (j, k) component that can be bounded in absolute value by

\[
|\hat\theta - \theta_0|\,\frac{1}{n}\sum_{i=1}^n x_{ij}x_{ik}\,\varepsilon_i^2\, R_{\theta_0}(x_i)\;. \tag{B.15}
\]

But (θ̂ − θ0) →^P 0, and the average in (B.15) obeys the law of large numbers under the moment condition (4.21) and thus tends to something finite in probability. Therefore, (B.14) holds.
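Lemma 3.1 is easy to illustrate numerically: a WLS estimator computed with a deliberately wrong weight function remains consistent. The specific design, skedastic function v(x) = x², and misspecified weight w(x) = exp(x) below are our own illustrative choices, not those of the paper's Monte Carlo study.

```python
import numpy as np

rng = np.random.default_rng(42)
beta = np.array([1.0, 2.0])

def wls(X, y, w):
    # OLS on the data reweighted by 1/sqrt(w(x_i))
    Xw = X / np.sqrt(w)[:, None]
    yw = y / np.sqrt(w)
    return np.linalg.lstsq(Xw, yw, rcond=None)[0]

errors = {}
for n in (200, 2000, 20000):
    x = rng.uniform(1, 4, n)
    X = np.column_stack([np.ones(n), x])
    y = X @ beta + rng.normal(size=n) * x        # true v(x) = x^2
    w = np.exp(x)                                # misspecified weight function
    errors[n] = np.abs(wls(X, y, w) - beta).max()
print(errors)
```

The maximal coordinate-wise error shrinks as n grows, consistent with the √n-asymptotics of the lemma, even though w(·) ≠ v(·).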

B.2  Verification of Assumptions for the Parametric Specification v_θ(·)

The main theorems assume that the family v_θ(·) leads to a θ̂ satisfying (3.10). Assume the family v_θ(·) is of the form (which is slightly more general than (3.5))

\[
v_\theta(x) := \exp\Bigl(\sum_{j=1}^d \theta_j\, g_j(x)\Bigr)\,, \tag{B.16}
\]

where θ = (θ_1, . . . , θ_d)' and g(x) = (g_1(x), . . . , g_d(x))'. It is tacitly assumed that g_1(x) ≡ 1 to ensure that this specification nests the case of conditional homoskedasticity. Fix δ > 0 and let h_δ(ε) := log max(δ², ε²). The estimator θ̂ is obtained by regressing the residuals ε̂_i, or more precisely h_δ(ε̂_i), on g(x_i). Before analyzing the behavior of θ̂, we first consider θ̃, which is obtained by regressing h_δ(ε_i) on g(x_i). (Needless to say, we do not know the ε_i, but we can view θ̃ as an oracle 'estimator'.) As argued in Hayashi (2000, Section 2.9), θ̃ is a consistent estimator of

\[
\theta_0 := \bigl[E\bigl(g(x_i)g(x_i)'\bigr)\bigr]^{-1} E\bigl(g(x_i)\cdot h_\delta(\varepsilon_i)\bigr)\;.
\]

To show that θ̃ is moreover √n-consistent, note that θ̃ = L_n^{−1} m_n, where L_n is the d × d matrix

\[
L_n := \frac{1}{n}\sum_{i=1}^n g(x_i)g(x_i)'
\]

and m_n is the d × 1 vector

\[
m_n := \frac{1}{n}\sum_{i=1}^n g(x_i)\cdot h_\delta(\varepsilon_i)\;.
\]

Since L_n is an average of i.i.d. random matrices, it is a √n-consistent estimator of L := E[g(x_i)g(x_i)'] (under the assumption of finite second moments of products and invertibility of L), and in fact is asymptotically multivariate normal as well.¹² Similarly, √n(m_n − m) is asymptotically multivariate normal under moment conditions, where m := E[g(x_i) · h_δ(ε_i)]. But, if L_n and m_n are each √n-consistent estimators of L and m, respectively, it is easy to see that θ̃ = L_n^{−1} m_n is a √n-consistent estimator of L^{−1} m = θ_0.¹³

¹² Note that L is clearly invertible in the case g(x) := (1, log(x))' as used in the Monte Carlo study of Section 5.
¹³ Alternatively, by the usual arguments that show asymptotic normality of OLS, under moment assumptions, √n(θ̃ − θ_0) is asymptotically normal, and hence √n-consistent.

However, our algorithm uses the residuals ε̂_i after an OLS fit of y_i on x_i, rather than the true errors ε_i. So, we must argue that the difference between θ̃ above and the θ̂ obtained when using the residuals is of order o_P(n^{−1/4}), which would then verify (3.10). Note that θ̂ = L_n^{−1} m̂_n, where

\[
\hat m_n := \frac{1}{n}\sum_{i=1}^n g(x_i)\cdot h_\delta(\hat\varepsilon_i)\;.
\]

Hence, it suffices to show that

\[
\hat m_n - m_n = o_P\bigl(n^{-1/4}\bigr)\;. \tag{B.17}
\]

To do this, first note that

\[
\bigl|\max(\delta, |\hat\varepsilon_i|) - \max(\delta, |\varepsilon_i|)\bigr| \le |\hat\varepsilon_i - \varepsilon_i|\;.
\]

Then,

\[
\bigl|h_\delta(\hat\varepsilon_i) - h_\delta(\varepsilon_i)\bigr|
= \bigl|\log\max(\delta^2, \hat\varepsilon_i^2) - \log\max(\delta^2, \varepsilon_i^2)\bigr|
= 2\,\bigl|\log\max(\delta, |\hat\varepsilon_i|) - \log\max(\delta, |\varepsilon_i|)\bigr|
\le \frac{2}{\delta}\,\bigl|\max(\delta, |\hat\varepsilon_i|) - \max(\delta, |\varepsilon_i|)\bigr|
\le \frac{2}{\delta}\,|\hat\varepsilon_i - \varepsilon_i|
= \frac{2}{\delta}\,|x_i'(\hat\beta - \beta)|\,,
\]

where the first inequality follows from the mean value theorem of calculus. Therefore,

\[
|\hat m_n - m_n| \le \frac{2}{n\delta}\sum_{i=1}^n |g(x_i)|\cdot |x_i'(\hat\beta - \beta)|\;.
\]

But assuming E|g_k(x_i) · x_{ij}| < ∞ for all k and j, one can apply the law of large numbers to conclude that

\[
|\hat m_n - m_n| = O_P\bigl(|\hat\beta - \beta|/\delta\bigr) = O_P\bigl(n^{-1/2}\bigr)\,,
\]

which certainly implies (B.17). As an added bonus, the argument shows that one can let δ := δ_n → 0 as long as δ_n goes to zero slowly enough; in particular, as long as δ_n n^{1/4} → ∞.

The argument for the linear specification

\[
v_\theta(x) := \sum_{j=1}^d \theta_j\, g_j(x)
\]

is similar. Here, the estimator θ̂ is obtained by regressing the squared residuals ε̂_i² on g(x_i). As above, first consider θ̃ obtained by regressing the squared true errors ε_i² on g(x_i). Then, θ̃ is a consistent estimator of

\[
\theta_0 := \bigl[E\bigl(g(x_i)g(x_i)'\bigr)\bigr]^{-1} E\bigl(g(x_i)\cdot \varepsilon_i^2\bigr)\;.
\]

As before, it is √n-consistent, since θ̃ = L_n^{−1} m_n, with L_n defined exactly as before, but with m_n now defined as the d × 1 vector

\[
m_n := \frac{1}{n}\sum_{i=1}^n g(x_i)\cdot \varepsilon_i^2\;.
\]

Again, we must argue that the difference between θ̃ and θ̂ is of order o_P(n^{−1/4}), and it suffices to show (B.17), where now

\[
\hat m_n := \frac{1}{n}\sum_{i=1}^n g(x_i)\cdot \hat\varepsilon_i^2\;.
\]

But

\[
|\hat m_n - m_n|
= \Bigl|\frac{1}{n}\sum_{i=1}^n g(x_i)\cdot(\hat\varepsilon_i^2 - \varepsilon_i^2)\Bigr|
= \Bigl|\frac{1}{n}\sum_{i=1}^n g(x_i)\cdot\bigl[-2(\hat\beta - \beta)'x_i\cdot\varepsilon_i + (\hat\beta - \beta)'x_i\, x_i'(\hat\beta - \beta)\bigr]\Bigr|
\le 2\,\Bigl|\frac{1}{n}\sum_{i=1}^n g(x_i)\cdot(\hat\beta - \beta)'x_i\cdot\varepsilon_i\Bigr|
+ \Bigl|\frac{1}{n}\sum_{i=1}^n g(x_i)\cdot(\hat\beta - \beta)'x_i\, x_i'(\hat\beta - \beta)\Bigr|\;.
\]

Under moment assumptions, the first term is of order O_P(n^{−1}), because β̂ − β is of order O_P(n^{−1/2}) and an average of mean-zero i.i.d. random variables with finite variance is also of order O_P(n^{−1/2}). The second term does not have mean zero but, under moment assumptions, is of order O_P(|β̂ − β|²) = O_P(n^{−1}). Therefore, |m̂_n − m_n| is actually of order O_P(n^{−1}), which is clearly more than needed.
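The estimation step analyzed above for the exponential specification is short in code. The sketch below uses g(x) = (1, log x)', the choice of the Monte Carlo study of Section 5; the design with a constant plus one positive regressor, the value of δ, and the function name are our own illustrative assumptions.

```python
import numpy as np

def estimate_theta(X, y, delta=0.1):
    """Sketch of Appendix B.2: regress h_delta(e_hat) = log(max(delta^2,
    e_hat^2)) on g(x) = (1, log x)' to estimate theta in
    v_theta(x) = exp(theta_1 + theta_2 log x). Assumes X = [1, x] with x > 0."""
    n = X.shape[0]
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta_hat                             # OLS residuals
    h = np.log(np.maximum(delta ** 2, resid ** 2))       # h_delta(e_hat_i)
    G = np.column_stack([np.ones(n), np.log(X[:, 1])])   # g(x_i)
    theta_hat = np.linalg.lstsq(G, h, rcond=None)[0]
    v_hat = np.exp(G @ theta_hat)                        # fitted skedastic function
    return theta_hat, v_hat
```

Note that the intercept θ̂_1 absorbs the constant E[log(ε²/v(x))], so v̂(·) estimates the skedastic function only up to a proportionality constant; that is harmless for WLS, since rescaling all weights by a constant leaves the estimator unchanged.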

C  Figures and Tables

[Figure 1 here]

Figure 1: Graphical display of the parametric specification (5.3) for the skedastic function v(·), with one curve per γ ∈ {0, 1, 2, 4}. Note that for ease of interpretation, we actually plot √v(x) as a function of x.

[Figure 2 here]

Figure 2: Graphical display of the parametric specification (5.4) for the skedastic function v(·), with one curve per γ ∈ {2, 4}. Note that for ease of interpretation, we actually plot √v(x) as a function of x.

[Figure 3 here]

Figure 3: Graphical display of the parametric specification (5.5) for the skedastic function v(·), with one curve per γ ∈ {0.1, 0.15}. Note that for ease of interpretation, we actually plot √v(x) as a function of x.

[Figure 4 here]

Figure 4: Graphical display of the parametric specification (5.6) for the skedastic function v(·), with one curve per γ ∈ {1, 2}. Note that for ease of interpretation, we actually plot √v(x) as a function of x.

                 OLS       WLS            ALS

v(x) = x^γ
γ = 0
  n = 20        0.073     0.082 (1.12)   0.077 (1.04)
  n = 50        0.028     0.029 (1.05)   0.028 (1.02)
  n = 100       0.014     0.014 (1.02)   0.014 (1.01)
γ = 1
  n = 20        0.185     0.189 (1.03)   0.188 (1.02)
  n = 50        0.070     0.066 (0.95)   0.069 (0.99)
  n = 100       0.034     0.031 (0.92)   0.032 (0.95)
γ = 2
  n = 20        0.555     0.461 (0.83)   0.513 (0.93)
  n = 50        0.211     0.157 (0.74)   0.171 (0.81)
  n = 100       0.103     0.072 (0.70)   0.073 (0.71)
γ = 4
  n = 20        6.517     3.307 (0.51)   4.348 (0.67)
  n = 50        2.534     0.957 (0.38)   0.994 (0.39)
  n = 100       1.242     0.418 (0.34)   0.418 (0.34)
γ = 0, error terms εᵢ of form (5.10)
  n = 20        0.074     0.082 (1.12)   0.077 (1.04)
  n = 50        0.028     0.029 (1.06)   0.028 (1.02)
  n = 100       0.014     0.014 (1.03)   0.014 (1.01)

Table 1: Empirical mean squared errors (eMSE's) of estimators of β2 when the parametric model used to estimate the skedastic function v(·) is correctly specified. (In parentheses are the ratios of the eMSE of a given estimator over the eMSE of OLS.) All numbers are based on 50,000 Monte Carlo repetitions.

                 OLS       WLS            ALS

v(x) = log(x)^γ
γ = 2
  n = 20        0.066     0.045 (0.69)   0.053 (0.81)
  n = 50        0.025     0.014 (0.55)   0.015 (0.60)
  n = 100       0.012     0.006 (0.50)   0.006 (0.50)
γ = 4
  n = 20        0.101     0.047 (0.46)   0.058 (0.58)
  n = 50        0.039     0.013 (0.33)   0.013 (0.33)
  n = 100       0.019     0.005 (0.25)   0.005 (0.25)
v(x) = exp(γx + γx²)
γ = 0.1
  n = 20        0.250     0.236 (0.94)   0.246 (0.98)
  n = 50        0.096     0.083 (0.87)   0.089 (0.93)
  n = 100       0.047     0.039 (0.83)   0.041 (0.86)
γ = 0.15
  n = 20        0.530     0.413 (0.78)   0.473 (0.89)
  n = 50        0.206     0.143 (0.70)   0.155 (0.75)
  n = 100       0.101     0.067 (0.67)   0.068 (0.67)
v(x) of form (5.6)
γ = 1
  n = 20        0.148     0.151 (1.02)   0.150 (1.02)
  n = 50        0.056     0.054 (0.96)   0.056 (1.00)
  n = 100       0.027     0.025 (0.93)   0.026 (0.96)
γ = 2
  n = 20        0.365     0.303 (0.83)   0.337 (0.93)
  n = 50        0.138     0.108 (0.77)   0.112 (0.81)
  n = 100       0.067     0.051 (0.75)   0.051 (0.75)

Table 2: Empirical mean squared errors (eMSE's) of estimators of β2 when the parametric model used to estimate the skedastic function v(·) is misspecified. (In parentheses are the ratios of the eMSE of a given estimator over the eMSE of OLS.) All numbers are based on 50,000 Monte Carlo repetitions.

            OLS-HC   OLS-Max       WLS-HC        WLS-Max       ALS-HC        ALS-Max

v(x) = x^γ
γ = 0
  n = 20    95.4     96.5 (1.03)   93.5 (0.99)   95.0 (1.04)   94.5 (0.98)   95.8 (1.02)
  n = 50    95.1     95.9 (1.03)   94.3 (0.99)   95.2 (1.02)   94.7 (1.00)   95.5 (1.02)
  n = 100   95.1     95.7 (1.02)   94.8 (1.00)   95.4 (1.02)   95.0 (1.00)   95.5 (1.02)
γ = 1
  n = 20    95.3     96.5 (1.04)   93.8 (0.94)   95.2 (0.98)   94.4 (0.96)   95.8 (1.01)
  n = 50    95.1     95.9 (1.02)   94.5 (0.95)   95.4 (0.97)   94.5 (0.96)   95.3 (0.98)
  n = 100   95.0     95.7 (1.02)   94.8 (0.95)   95.5 (0.97)   94.7 (0.97)   95.4 (0.99)
γ = 2
  n = 20    94.8     96.0 (1.04)   94.0 (0.86)   95.2 (0.89)   93.9 (0.90)   95.1 (0.94)
  n = 50    94.8     95.5 (1.02)   94.5 (0.84)   95.3 (0.86)   94.2 (0.85)   95.1 (0.88)
  n = 100   94.8     95.3 (1.02)   94.8 (0.83)   95.4 (0.85)   94.8 (0.83)   95.3 (0.85)
γ = 4
  n = 20    93.9     94.8 (1.02)   94.0 (0.66)   94.6 (0.67)   93.1 (0.70)   93.9 (0.71)
  n = 50    94.4     94.7 (1.01)   94.3 (0.59)   94.7 (0.60)   94.2 (0.59)   94.6 (0.60)
  n = 100   94.6     94.7 (1.00)   94.6 (0.57)   95.0 (0.58)   94.6 (0.57)   95.0 (0.58)

Table 3: Empirical coverage probabilities in percent of nominal 95% confidence intervals for β2 when the parametric model used to estimate the skedastic function v(·) is correctly specified. (In parentheses are the ratios of the average length of a given confidence interval over the average length of OLS-HC.) All numbers are based on 50,000 Monte Carlo repetitions.

            OLS-HC   OLS-Max       WLS-HC        WLS-Max       ALS-HC        ALS-Max

v(x) = log(x)^γ
γ = 2
  n = 20    94.8     96.3 (1.05)   94.6 (0.77)   95.9 (0.81)   94.1 (0.80)   95.6 (0.85)
  n = 50    94.6     95.7 (1.04)   94.6 (0.72)   96.2 (0.78)   94.5 (0.72)   96.0 (0.78)
  n = 100   94.8     95.5 (1.03)   94.8 (0.70)   96.6 (0.77)   94.8 (0.70)   96.5 (0.77)
γ = 4
  n = 20    93.8     94.9 (1.02)   94.9 (0.61)   94.2 (0.62)   93.5 (0.63)   94.2 (0.64)
  n = 50    94.3     94.7 (1.01)   94.3 (0.54)   94.6 (0.55)   94.2 (0.54)   94.6 (0.55)
  n = 100   94.5     94.7 (1.00)   94.5 (0.52)   94.8 (0.53)   94.5 (0.52)   94.8 (0.53)
v(x) = exp(γx + γx²)
γ = 0.1
  n = 20    94.9     95.8 (1.03)   93.3 (0.90)   94.5 (0.93)   93.7 (0.94)   94.8 (0.97)
  n = 50    94.8     95.2 (1.01)   94.3 (0.91)   94.7 (0.92)   94.1 (0.93)   94.5 (0.94)
  n = 100   94.9     95.0 (1.01)   94.7 (0.91)   94.9 (0.91)   94.6 (0.92)   94.8 (0.92)
γ = 0.15
  n = 20    94.5     95.2 (1.02)   93.3 (0.83)   94.2 (0.85)   93.2 (0.88)   94.0 (0.90)
  n = 50    94.6     94.9 (1.01)   94.1 (0.82)   94.4 (0.81)   93.9 (0.83)   94.2 (0.84)
  n = 100   94.7     95.8 (1.00)   94.6 (0.81)   94.7 (0.81)   94.6 (0.81)   94.7 (0.81)
v(x) of form (5.6)
γ = 1
  n = 20    95.2     96.4 (1.04)   93.7 (0.94)   95.1 (0.98)   94.2 (0.96)   95.6 (1.00)
  n = 50    95.0     95.8 (1.03)   94.4 (0.95)   95.2 (0.98)   95.2 (0.97)   96.1 (1.00)
  n = 100   95.0     95.7 (1.02)   94.7 (0.96)   95.2 (0.97)   94.6 (0.96)   95.2 (0.98)
γ = 2
  n = 20    94.7     96.0 (1.04)   93.9 (0.86)   94.9 (0.89)   93.5 (0.89)   94.7 (0.93)
  n = 50    94.8     95.5 (1.03)   94.4 (0.86)   94.9 (0.87)   94.1 (0.87)   94.7 (0.88)
  n = 100   94.8     95.3 (1.02)   94.8 (0.86)   95.0 (0.87)   94.8 (0.86)   95.0 (0.87)

Table 4: Empirical coverage probabilities in percent of nominal 95% confidence intervals for β2 when the parametric model used to estimate the skedastic function v(·) is misspecified. (In parentheses are the ratios of the average length of a given confidence interval over the average length of OLS-HC.) All numbers are based on 50,000 Monte Carlo repetitions.


KEY WORDS: Conditional heteroskedasticity, HC standard errors, weighted least squares. JEL classification codes: C12, C13, C21.

∗ Research supported by NSF Grant DMS-0707085.

1  Introduction

Despite constant additions to the toolbox of applied researchers, linear regression models remain the cornerstone of empirical work in economics and other scientific disciplines. Any introductory course in econometrics starts with an assumption of conditional homoskedasticity: the conditional variance of the error terms does not depend on the regressors. In such an idyllic situation, one should estimate the model parameters by ordinary least squares (OLS) and use the conventional inference produced by any of the multitude of software packages.

Unfortunately, in many applications, applied researchers are plagued by conditional heteroskedasticity: the conditional variance of the error term is a function of the regressors. A simple example is a wage regression where wages (or perhaps log wages) are regressed on experience plus a constant. In most professions, there is larger variation in wages for workers with many years of experience than for workers with few years of experience. Therefore, in such a case, the conditional variance of the error term is an increasing function of experience.

In the presence of conditional heteroskedasticity, the OLS estimator still has attractive properties, such as being unbiased and consistent (under mild regularity conditions). However, it is no longer the best linear unbiased estimator (BLUE). Even more problematic, conventional inference is generally no longer valid: confidence intervals do not have the correct coverage probabilities and hypothesis tests do not have the correct null rejection probabilities, even asymptotically.

In earlier days, econometricians prescribed the cure of weighted least squares (WLS). It consisted of modeling the functional form of the conditional heteroskedasticity, reweighting the data (both the response variable and the regressors), and running OLS, combined with conventional inference, on the weighted data. The rationale was that 'correctly' weighting the data (based on the true conditional variance model) results in efficiency gains over the OLS estimator. Furthermore, conventional inference based on the 'correctly' weighted data is valid, at least asymptotically.

Then came White (1980), who changed the game with one of the most influential and widely cited papers in the field. He promoted heteroskedasticity consistent (HC) standard errors for the OLS estimator. His alternative cure consists of retaining the OLS estimator (that is, not weighting the data) but using HC standard errors instead of the conventional standard errors. The resulting inference is (asymptotically) valid in the presence of conditional heteroskedasticity of unknown form, which has been a major selling point. Indeed, the earlier cure had the nasty side effect of invalid inference if the applied researcher did not model the conditional heteroskedasticity correctly (arguably, a common occurrence). As the years have passed, weighting the data has gone out of fashion and applied researchers have instead largely favored the cure prescribed by White (1980) and his followers.

The bad publicity for WLS is still ongoing. As an example, consider Angrist and Pischke (2010, Section 3.4.1), who discourage applied researchers from weighting the data with the following arguments, among others.

2

1. “If the conditional variance model is a poor approximation or if the estimates of it are very noisy, WLS estimators may have worse finite-sample properties than unweighted estimators.” 2. “The inferences you draw [. . . ] may therefore be misleading, and the hoped-for efficiency gain may not materialize.” 3. “Any efficiency gain from weighting is likely to be modest, and incorrectly or poorly estimated weights can do more harm than good.” Alas, not everyone has converted and a few lone warriors defending WLS remain. At the forefront is Leamer (2010, p.43) who calls the current practice “White-washing” and argues that “. . . we should be doing the hard work of modeling the heteroskedasticity [. . . ] to determine if sensible reweighting of the observations materially changes the locations of the estimates of interest as well as the widths of the confidence intervals.” In this paper, we offer a new, third cure, which is a simple combination of the two previous cures: use WLS combined with HC standard errors. The aim of this cure is to offer the best of both worlds. First, sensibly weighting the data can lead to noticeable efficiency gains over OLS, even if the conditional variance model is misspecified. Second, combining WLS with HC standard errors allows for valid inference, even if the conditional variance model is misspecified. The cure we offer is a simple and natural one. But, to the best of our knowledge, it has not been offered before. For example, Hayashi (2000, Section 2.8) describes the approach of estimating a parametric specification of conditional heteroskedasticity, but the corresponding inference for the parameter vector assumes the parametric specification is correctly specified; otherwise, it may not be valid. 
Our approach is similar in that it also specifies a parametric specification for the skedastic function, but it is different in that it produces valid inference under general forms of conditional heteroskedasticity even when the parametric specification does not include the true skedastic function. As a bonus, we also propose a new estimator: adaptive least squares (ALS). Our motivation is a follows. Under conditional homoskedasticity, OLS is the optimal estimator and one should not weight the data at all. Using WLS in such a setting will lead to an efficiency loss, at least in small and moderate samples, because of the noise in the estimated conditional variance model. As a remedy, we propose to first carry out a test of conditional heteroskedasticity, based on the very conditional variance model intended to be used in weighting the data. If the test rejects, use WLS; otherwise, stick with OLS. In this way, one will only use WLS when it is worthwhile doing so, that is, when there is sufficient evidence in the data supporting the conditional variance model. Crucially, independent of the outcome of the test, always use HC standard errors.1 The remainder of the paper is organized as follows. Section 2 introduces the model. Section 3 1

Tests for conditional heteroskedasticity had come with a different prescription in the past. Namely, if the test

rejects, use OLS with HC standard errors, otherwise, use OLS with the conventional standard errors; for example, see Hayashi (2000, p.132). But such a practice is not recommended, since it has poor finite-sample properties under conditional heteroskedasticity in small and moderate samples; for example, see Long and Ervin (2000, Section 4.3). The reason is that when the test has low power, an invalid inference method will be chosen with non-negligible

3

describes the various estimators and derives the asymptotic distribution of the WLS estimator when the weighting of the data is possibly incorrect. Section 4 establishes validity of our proposed inference based on the WLS estimator when the weighting of the data is possibly incorrect. Section 5 examines finite-sample performance via a Monte Carlo study. Section 6 briefly discusses possible variations and extensions. Finally, Section 7 concludes. An Appendix contains details on various inference methods and all mathematical proofs.
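The ALS recipe just described can be sketched in a few lines. This is only a sketch: the Breusch–Pagan-style pretest on log squared residuals and the working model v(x) = exp(a + b log x) are illustrative stand-ins for the paper's precise choices, which are given in Sections 3-5, and the design (a constant plus one positive regressor) is our own assumption.

```python
import numpy as np
from scipy import stats

def adaptive_least_squares(X, y, alpha=0.05):
    """Sketch of ALS: pretest for conditional heteroskedasticity based on
    the working variance model, then choose OLS or WLS accordingly.
    In either case, HC standard errors would be used for inference."""
    n = X.shape[0]
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta_ols
    # auxiliary regression of log(resid^2) on g(x) = (1, log x)
    G = np.column_stack([np.ones(n), np.log(X[:, 1])])
    h = np.log(resid ** 2)
    coef = np.linalg.lstsq(G, h, rcond=None)[0]
    fitted = G @ coef
    r2 = 1 - np.sum((h - fitted) ** 2) / np.sum((h - h.mean()) ** 2)
    lm_stat = n * r2                       # LM-type statistic
    p_val = stats.chi2.sf(lm_stat, df=G.shape[1] - 1)
    if p_val >= alpha:                     # no evidence of heteroskedasticity
        return beta_ols, "OLS"
    w = np.exp(G @ coef)                   # estimated skedastic function
    Xw, yw = X / np.sqrt(w)[:, None], y / np.sqrt(w)
    return np.linalg.lstsq(Xw, yw, rcond=None)[0], "WLS"
```

Crucially, whichever branch is taken, one would report HC standard errors for the returned estimate, as stressed above.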

2

The Model

We maintain the following set of assumptions throughout the paper. (A1) The linear model is of the form yi = x0i β + εi

(i = 1, . . . , n) ,

(2.1)

where xi ∈ RK is a vector of explanatory variables (regressors), β ∈ RK is a coefficient vector, and εi is the unobservable error term with certain properties to be specified below. n (A2) The sample (yi , x0i ) i=1 is independent and identically distributed (i.i.d.). (A3) All the regressors are predetermined in the sense that they are orthogonal to the contemporaneous error term: E(εi |xi ) = 0 .

(2.2)

Of course, under the i.i.d. assumption (A2) it then also holds that E(εi |x1 , . . . , xn ) = 0 , that is, the regressors are strictly exogenous. (A4) The K × K matrix Σxx ..= E(xi x0i ) is nonsingular (and hence finite). Furthermore,

Pn

0 i=1 xi xi

is invertible with probability one. (A5) The K × K matrix Ω ..= E(ε2i xi x0i ) is nonsingular (and hence) finite. (A6) There exists a nonrandom function v : RK → R+ such that E(ε2i |xi ) = v(xi ) .

(2.3)

Therefore, the skedastic function v(·) determines the functional form of the conditional hetero-skedasticity. Note that under (A6), Ω = E v(xi ) · xi x0i . probability. Instead, we use tests for conditional heteroskedasticity for an honorable purpose and thereby restore some of their lost appeal.


It is useful to introduce the customary vector-matrix notation

y := (y1, . . . , yn)', ε := (ε1, . . . , εn)', X := (x1, . . . , xn)' ∈ R^{n×K} with entries x_{ik},

so that equation (2.1) can be written more compactly as

y = Xβ + ε. (2.4)

Furthermore, assumptions (A2), (A3), and (A6) imply that

Var(ε|X) = diag(v(x1), . . . , v(xn)).

Remark 2.1 (Justifying the I.I.D. Assumption). The application of WLS relies upon Var(ε|X) being a diagonal matrix. For the sake of theory, it is possible to generalize the set of assumptions (A2)–(A5) such that this condition is still satisfied. For the sake of simplicity, however, we prefer to maintain the set of assumptions (A2)–(A5), which are based on the key assumption (A2) of observing a random sample. Our reasoning here is that virtually all applications of WLS are restricted to such a setting, a leading example being cross-sectional studies. Therefore, allowing for more general settings would mainly serve to impress theoreticians as opposed to keeping it simple for our target audience, namely applied researchers.

3 Estimators: OLS, WLS, and ALS

3.1 Description of the Estimators

The ubiquitous estimator of β is the ordinary least squares (OLS) estimator

β̂_OLS := (X'X)^{-1} X'y.

Under the maintained assumptions, the OLS estimator is unbiased and consistent. This is the good news. The bad news is that it is not efficient under conditional heteroskedasticity (that is, when the skedastic function v(·) is not constant). A more efficient estimator can be obtained by reweighting the data (yi, xi') and then applying OLS in the transformed model

yi/√v(xi) = (xi/√v(xi))'β + εi/√v(xi). (3.1)

Letting V := diag(v(x1), . . . , v(xn)), the resulting estimator can be written as

β̂_BLUE := (X'V^{-1}X)^{-1} X'V^{-1}y. (3.2)

It is the best linear unbiased estimator (BLUE) and is consistent; in particular, it is more efficient than the OLS estimator. But outside of textbooks, this 'oracle' estimator mainly exists in utopia, since the skedastic function v(·) is typically unknown.

A feasible approach is to estimate the skedastic function v(·) from the data in some way and to then apply OLS in the model

yi/√v̂(xi) = (xi/√v̂(xi))'β + εi/√v̂(xi), (3.3)

where v̂(·) denotes the estimator of v(·). The resulting estimator is the weighted least squares (WLS) estimator.² Letting V̂ := diag(v̂(x1), . . . , v̂(xn)), the WLS estimator can be written as

β̂_WLS := (X'V̂^{-1}X)^{-1} X'V̂^{-1}y.

It is not necessarily unbiased. If v̂(·) is a consistent estimator of v(·), then WLS is asymptotically more efficient than OLS. But even if v̂(·) is an inconsistent estimator of v(·), WLS can result in large efficiency gains over OLS in the presence of noticeable conditional heteroskedasticity; see Section 5. Using OLS is straightforward and has become the status quo in applied economic research. But foregoing potentially large efficiency gains 'on principle' would seem an approach to data analysis that is hard to justify.

Remark 3.1 (Adaptive Least Squares). Under conditional homoskedasticity — that is, when the skedastic function v(·) is constant — OLS is generally more efficient than WLS in finite samples. But, under certain assumptions on the scheme to estimate the skedastic function, OLS and WLS are asymptotically equivalent in this case. On the other hand, under (noticeable) conditional heteroskedasticity, WLS is generally more efficient, both in finite samples and even in a first-order asymptotic sense. (Such claims will be justified mathematically later.)

²Another convention is to call 'weighted least squares estimator' what we call 'best linear unbiased estimator' and to call 'feasible weighted least squares estimator' what we call 'weighted least squares estimator'.


Therefore, it is tempting to decide based on the data which route to take: OLS or WLS. Specifically, we suggest applying a test for conditional heteroskedasticity. Several such tests exist, the most popular ones being the tests of Breusch and Pagan (1979) and White (1980); also see Koenker (1981) and Koenker and Bassett (1982). If the null hypothesis of conditional homoskedasticity is not rejected by such a test, use the OLS estimator; otherwise, use the WLS estimator. We call the resulting estimator the adaptive least squares (ALS) estimator. The motivation is as follows. Under conditional homoskedasticity, the ALS estimator will be equal to the WLS estimator with a small probability only (roughly equal to the nominal size of the test). Therefore, in this case, ALS is expected to be more efficient than WLS in finite samples, though still less efficient than OLS. Under conditional heteroskedasticity, the ALS estimator will be equal to the WLS estimator with probability tending to one (assuming that the chosen test is consistent against the existing nature of conditional heteroskedasticity). So for large sample sizes, ALS should be almost as efficient as WLS. For small sample sizes, when the power of the test is not near one, the efficiency is expected to be somewhere between OLS and WLS. (In fact, one could apply the same strategy, but letting the significance level αn of the "pretest" tend to zero as the sample size tends to infinity; one just needs to ensure αn tends to zero slowly enough so that the test still has power tending to one.) Consequently, ALS sacrifices some efficiency gains of WLS under conditional heteroskedasticity in favor of being closer to the performance of OLS under conditional homoskedasticity. These heuristics are confirmed by Monte Carlo simulations in Section 5.

Remark 3.2 (Best Linear Predictor). Consider a new observation (y, x').
It is well known that, even in the absence of a linear model (that is, with assumption (A1) not holding), the best linear predictor of y in the mean-squared-error sense is given by x'β*, with

β* := [E(xx')]^{-1} E(x · y);

for example, see Hayashi (2000, Proposition 2.8). Under assumptions (A2) and (A4), the OLS estimator consistently estimates β*, that is,

β̂_OLS →P β*, (3.4)

where →P denotes convergence in probability. Moreover, the condition E(xi · εi) = 0 is ensured when β = β*, rather than the stronger assumption (A3). Consistency is not necessarily shared by the WLS and ALS estimators. (Note, however, that the weighted least squares estimator can similarly be viewed as a best linear predictor based on a mean weighted squared error criterion.) The consistency result (3.4) is sometimes viewed as an attractive robustness property of the OLS estimator. But we feel that it is not of great practical importance. The best predictor in the mean-squared-error sense is the conditional expectation E(y|x). If the linearity assumption (A1) does not hold, the conditional expectation E(y|x) can be arbitrarily far from the best linear predictor x'β*. Therefore, if it is suspected that the linearity assumption (A1) may not hold, rather than settling for

the best linear predictor, it may be more fruitful to include more covariates or to use a nonparametric approach to estimate the conditional expectation. Needless to say, if the goal is to interpret the estimator of β (in the sense of quantifying the ‘effect’ of the various entries of the vector of covariates x on the response variable y) or to make inference for β, then all three estimators — OLS, WLS, and ALS — rely on the validity of the linearity assumption (A1). So for such purposes, it is just as (un)safe to use WLS or ALS instead of OLS.
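To fix ideas, the point estimators of this subsection can be sketched in a few lines. This is a minimal illustration of our own (the function names and the toy data-generating process are not from the paper), not the authors' code:

```python
import numpy as np

def ols(X, y):
    """OLS estimator: (X'X)^{-1} X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def wls(X, y, v):
    """WLS estimator: OLS applied to the data reweighted by 1/sqrt(v),
    where v holds (estimated or known) skedastic-function values v(x_i).
    With the true v(x_i), this is the oracle BLUE of (3.2)."""
    w = 1.0 / np.sqrt(v)
    return ols(X * w[:, None], y * w)

# Toy illustration with known skedastic function v(x) = x^2.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1.0, 4.0, n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 2.0]) + x * rng.standard_normal(n)

b_ols = ols(X, y)
b_blue = wls(X, y, x**2)   # oracle weights
```

With constant weights, `wls` reduces exactly to `ols`, matching the observation that a constant fitted skedastic function makes WLS coincide with OLS.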

3.2 Parametric Model for Estimating the Skedastic Function

In order to estimate the skedastic function v(·), we suggest the use of a parametric model vθ(·), where θ ∈ R^d is a finite-dimensional parameter. Such a model could be suggested by economic theory, by exploratory data analysis (that is, residual plots from an OLS regression), or by convenience. In any case, the model used should nest the case of conditional homoskedasticity. In particular, for every σ² > 0, we assume the existence of a unique θ := θ(σ²) such that vθ(x) ≡ σ². A flexible parametric model we suggest is

vθ(xi) := exp(ν + γ2 log|x_{i,2}| + . . . + γK log|x_{i,K}|), with θ := (ν, γ2, . . . , γK)', (3.5)

assuming that x_{i,1} ≡ 1 (that is, the original regression contains a constant). Otherwise, the model should be

vθ(xi) := exp(ν + γ1 log|x_{i,1}| + γ2 log|x_{i,2}| + . . . + γK log|x_{i,K}|), with θ := (ν, γ1, . . . , γK)'.

Such a model is a special case of the form of multiplicative conditional heteroskedasticity previously proposed by Harvey (1976) and Judge et al. (1988, Section 9.3), among others. Another possibility is to not take exponents and use

vθ(xi) := ν + γ2|x_{i,2}| + . . . + γK|x_{i,K}|, with θ := (ν, γ2, . . . , γK)'. (3.6)

The advantage of (3.5) over (3.6) is that it ensures variances are nonnegative, though the parameters in (3.6) can be restricted such that nonnegativity is satisfied. In all cases, the models obviously nest the case of conditional homoskedasticity. Furthermore, we recommend basing the test for conditional heteroskedasticity used in computing the ALS estimator of Remark 3.1 on the very parametric model of the skedastic function used in computing the WLS estimator. The motivation is that, in this fashion, the ALS estimator is set to the WLS estimator (as opposed to the OLS estimator) only if there is significant evidence for the type of conditional heteroskedasticity that forms the basis of the WLS estimator. In particular, we

do not recommend using a 'generic' test of conditional heteroskedasticity, such as the test of White (1980), unless the parametric specification vθ(·) used by the test is also the parametric specification used by the WLS estimator.³

Having chosen a parametric specification vθ(·), the test for conditional heteroskedasticity is then carried out by regressing the squared OLS residuals on the parametric specification, possibly after a suitable transformation to ensure linearity on the right-hand side, and by then comparing n times the R²-statistic of this regression against a quantile of a chi-squared distribution. For example, if the parametric model is given by (3.5), the test specifies

H0 : γ2 = . . . = γK = 0 vs. H1 : at least one γk ≠ 0 (k = 2, . . . , K).

To carry out the test, fix a small constant δ > 0 and estimate the following regression by OLS:

log max(δ², ε̂ᵢ²) = ν + γ2 log|x_{i,2}| + . . . + γK log|x_{i,K}| + ui, with ε̂i := yi − xi'β̂_OLS, (3.7)

and denote the resulting R²-statistic by R². Furthermore, denote by χ²_{K−1,1−α} the 1 − α quantile of the chi-squared distribution with K − 1 degrees of freedom. Then the test for conditional heteroskedasticity rejects at nominal level α if n · R² > χ²_{K−1,1−α}. (The reason for introducing the constant δ here is that, because we are taking logs, we need to avoid a residual of zero, or even very near zero. If instead we considered the specification (3.6), we would simply run a regression of ε̂ᵢ² on the right-hand side of (3.7), and no constant δ needs to be introduced.) Finally, the estimate of the skedastic function is given by v̂(·) := vθ̂(·), where θ̂ is an estimator of θ obtained by an OLS regression of the type (3.7).
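As an illustration, the pretest and the fitted skedastic function based on specification (3.5) and regression (3.7) might be coded as follows. This is a sketch under our own naming conventions; the chi-squared critical value is supplied by the caller (e.g. χ²_{1,0.95} ≈ 3.841 when K = 2 and α = 0.05):

```python
import numpy as np

def het_test(X, y, crit, delta=0.1):
    """Pretest for conditional heteroskedasticity via regression (3.7).

    Regresses log(max(delta^2, ehat_i^2)) on a constant and on
    log|x_{i,k}|, k = 2, ..., K (the first column of X is assumed to be
    a constant), and rejects when n * R^2 exceeds `crit`, the 1 - alpha
    quantile of chi^2_{K-1}. Returns (reject, theta_hat), where
    theta_hat parameterizes the fitted model (3.5).
    """
    n, K = X.shape
    b = np.linalg.solve(X.T @ X, X.T @ y)
    ehat = y - X @ b                              # OLS residuals
    z = np.log(np.maximum(delta**2, ehat**2))     # left-hand side of (3.7)
    Z = np.column_stack([np.ones(n), np.log(np.abs(X[:, 1:]))])
    theta = np.linalg.solve(Z.T @ Z, Z.T @ z)
    u = z - Z @ theta
    r2 = 1.0 - (u @ u) / ((z - z.mean()) @ (z - z.mean()))
    return n * r2 > crit, theta

def v_hat(X, theta):
    """Fitted skedastic function (3.5): exp(nu + sum_k gamma_k log|x_k|)."""
    Z = np.column_stack([np.ones(len(X)), np.log(np.abs(X[:, 1:]))])
    return np.exp(Z @ theta)

# Example: data with v(x) = x^4 should trigger a rejection.
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1.0, 4.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + x**2 * rng.standard_normal(n)
reject, theta = het_test(X, y, crit=3.841)
```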

3.3 Limiting Distribution of the WLS Estimator

The first goal is to consider the behavior of the weighted least squares estimator under a perhaps incorrectly specified skedastic function. The estimator β̂_BLUE assumes knowledge of the true skedastic function v(·). Instead, consider a generic WLS estimator that is based on the skedastic function w(·); this estimator is given by

β̂_W := (X'W^{-1}X)^{-1} X'W^{-1}y, (3.8)

where W is the diagonal matrix with (i, i) entry w(xi). Given two real-valued functions a(·) and b(·) defined on R^K (the space where xi lives), define Ω_{a/b} to be the matrix given by

Ω_{a/b} := E[(a(xi)/b(xi)) · xi xi'].

³For example, we would not recommend the parametric specification of White's (1980) test, as it is of order K² and thus involves too many free parameters (unless the number of regressors, K, is very small compared to the sample size, n).


The first result deals with the case of a fixed employed choice of skedastic function w(·), though this choice may be misspecified, since the true skedastic function is v(·).

Lemma 3.1. Assume (A1)–(A3) and (A6). Given a possibly misspecified skedastic function w(·) and the true skedastic function v(·), assume the matrices Ω_{1/w} and Ω_{v/w²} are well-defined (in the sense that the corresponding expectations exist and are finite). Also, assume Ω_{1/w} is invertible. (These assumptions reduce to the usual assumptions (A4) and (A5) in case w(·) is constant.) Then, as n → ∞,

√n(β̂_W − β) →d N(0, Ω_{1/w}^{-1} Ω_{v/w²} Ω_{1/w}^{-1}).

Corollary 3.1. Assume the assumptions of Lemma 3.1 and in addition that both w(·) and v(·) are constant (so that, in particular, conditional homoskedasticity holds true). Then

√n(β̂_W − β) →d N(0, Ω_{1/v}^{-1}).

It is well known that, under conditional homoskedasticity, Ω_{1/v}^{-1} is the limiting variance of the OLS estimator. So as long as the skedastic function w(·) is constant, the limiting distribution of β̂_W is identical to the limiting distribution of β̂_OLS under conditional homoskedasticity.

Next, we consider the behavior of the WLS estimator based on an estimated skedastic function. Assume the parametric family of skedastic functions used to estimate v(·) is given by vθ(·), where θ = (θ1, . . . , θd)' varies in an open subset of R^d. Note the true v(·) need not be specified by any vθ(·). However, we always specify a family vθ(·) that includes constant values σ², so as to always allow for conditional homoskedasticity. It is further tacitly assumed that vθ(x) > 0 on the support of x, so that 1/vθ(x) is well-defined with probability one. Assume that 1/vθ(·) is differentiable at some fixed θ0 in the following sense: there exists a vector-valued function of dimension 1 × d

rθ0(x) = (rθ0,1(x), . . . , rθ0,d(x))

and a real-valued function sθ0(·) such that

|1/vθ(x) − 1/vθ0(x) − rθ0(x)(θ − θ0)| ≤ (1/2)|θ − θ0|² sθ0(x), (3.9)

for all θ in some small open ball around θ0 and all x in the support of the covariates. Evidently, rθ0(x) is the gradient with respect to θ of 1/vθ(x), evaluated at θ0. Next, we assume we have a consistent estimator θ̂ of θ0 in the sense that

n^{1/4}|θ̂ − θ0| →P 0. (3.10)

Of course, (3.10) holds if θ̂ is a √n-consistent estimator of θ0. (The weaker condition may be useful if one lets the dimension d of the model increase with the sample size n.)

Theorem 3.1. Assume conditions (3.9) and (3.10). Further assume

E[|xi|² v(xi)|rθ0(xi)|²] < ∞ (3.11)

and

E[|xi| · |εi sθ0(xi)|] < ∞. (3.12)

(Note that in the case where the functions rθ0(·) and sθ0(·) can be taken to be uniformly bounded over the support of the covariates, these two added assumptions (3.11) and (3.12) already follow from (A5) and (A6).)

Consider the estimator β̂_WLS := β̂_V̂ given by (3.8) with W replaced by Ŵ, where Ŵ is the diagonal matrix with (i, i) entry vθ̂(xi). Then,

√n(β̂_WLS − β) →d N(0, Ω_{1/w}^{-1} Ω_{v/w²} Ω_{1/w}^{-1}), (3.13)

where v(·) is the true skedastic function and w(·) := vθ0(·) corresponds to the limiting estimated skedastic function.

Remark 3.3. Actually, the proof shows that

√n(β̂_WLS − β̂_W) →P 0, (3.14)

where β̂_W is the WLS estimator based on the known skedastic function w(·) = vθ0(·).

Corollary 3.2. Assume the assumptions of Theorem 3.1 and in addition that both vθ0(·) and v(·) are constant (so that, in particular, conditional homoskedasticity holds true). Then

√n(β̂_WLS − β) →d N(0, Ω_{1/v}^{-1}).

Remark 3.4 (Assumptions on the Parametric Specification vθ(·)). We need to argue that the estimation scheme based on a parametric specification vθ(·), as described in Subsection 3.2, satisfies the assumptions of Theorem 3.1. The specifications we apply in the numerical work, such as those given in Subsection 3.2, are clearly smooth, but it needs to be argued that (3.10) holds for some θ0, even under conditional heteroskedasticity. The technical arguments are given in Appendix B.2. In particular, both under conditional homoskedasticity and under conditional heteroskedasticity, our proposed estimation scheme of the skedastic function leads to a nonrandom estimate vθ0(·) in the limit, as assumed by Theorem 3.1.

Remark 3.5 (Efficiency of WLS under Homoskedasticity and Limiting Value θ0). It is well known that, under conditional homoskedasticity, Ω_{1/v}^{-1} is the limiting variance of the OLS estimator. So as long as the skedastic function w(·) := vθ0(·) is constant, the limiting distribution of β̂_WLS is identical to the limiting distribution of β̂_OLS in this case. In Appendix B.2, it is argued that the estimator θ̂ tends in probability to some θ0. However, vθ0(·) need not correspond to the true skedastic function v(·). Furthermore, even when v(·) is constant and the specification for vθ(·) nests conditional homoskedasticity, it may or may not be the case that vθ0(·) is constant.

On the one hand, consider the specification (3.6). Then, using OLS when regressing ε² (or ε̂²) on the right-hand side of (3.6) gives a limiting value of θ0 that corresponds to the best linear predictor of E(εᵢ²|xi). Hence, if E(εᵢ²|xi) is constant, then so is vθ0(·). On the other hand, consider the specification (3.5), where log(εᵢ²) is modeled by a linear function of covariates. In such a case, OLS is consistent for θ0, which corresponds to the best linear predictor of E[log(εᵢ²)|xi]. In the homoskedastic case where E(εᵢ²|xi) is constant, one does not necessarily have that

E[log max(δ², εᵢ²) | xi] is constant. (3.15)

Of course, (3.15) would hold in the more structured case where εi and xi are independent under conditional homoskedasticity. For example, this would be the case if (A6) is strengthened to

(A6') {xi}ⁿᵢ₌₁ is a K-variate i.i.d. sample and εi is given by

εi = √v(xi) · zi,

where v(·) : R^K → R⁺ is a nonrandom skedastic function and {zi}ⁿᵢ₌₁ is a univariate i.i.d. sample with mean zero and variance one, independent of {xi}ⁿᵢ₌₁.

But in general (3.15) may fail. Therefore, to ensure in general that there is no asymptotic efficiency loss of using WLS instead of OLS under conditional homoskedasticity, one needs to use a specification of the form (3.6); otherwise, one must assume that when conditional homoskedasticity holds, so does (3.15). Finally, since whenever vθ0(·) is constant, OLS and WLS are asymptotically equivalent, then in such a case OLS and ALS are asymptotically equivalent as well.

4 Inference: OLS, WLS, and ALS

4.1 Description of the Inference Methods

In most applications, it is of additional interest to conduct inference for β, by computing confidence intervals for (linear combinations of) β or by carrying out hypothesis tests for (linear combinations of) β. Unfortunately, when vˆ(·) is not a consistent estimator of the skedastic function v(·), then the textbook inference based on the WLS estimator can be misleading, in the sense that confidence intervals do not have the correct coverage probabilities and hypothesis tests do not have the correct null rejection probabilities, even asymptotically. This is an additional reason why applied researchers have shied away from WLS estimation. The contribution of this section is to propose a method by which consistent inference for β based on the WLS estimator can be obtained even if vˆ(·) is an inconsistent estimator. Our proposal is simple and straightforward. The idea is rooted in inference for β based on the OLS estimator.


It is well known that under conditional heteroskedasticity (A6), the OLS standard errors are not consistent and the resulting inference is misleading (in the sense specified in the previous paragraph). As a remedy, theoreticians have proposed heteroskedasticity consistent (HC) standard errors. Such research dates back to Eicker (1963, 1967), Huber (1967), and White (1980). Further refinements have been provided by MacKinnon and White (1985) and Cribari-Neto (2004). As is well known (e.g., Hayashi 2000, Proposition 2.1), under assumptions (A1)–(A5),

√n(β̂_OLS − β) →d N(0, Avar(β̂_OLS)) with Avar(β̂_OLS) = Σ_xx^{-1} Ω Σ_xx^{-1}, (4.1)

where →d denotes convergence in distribution. By assumptions (A2) and (A4) and the continuous mapping theorem, n(X'X)^{-1} is a consistent estimator of Σ_xx^{-1}. Therefore, the problem of consistently estimating Avar(β̂_OLS) is reduced to finding a consistent estimator Ω̂ of Ω. Inference for β can then be based in the standard fashion on

Âvar_HC(β̂_OLS) := n²(X'X)^{-1} Ω̂ (X'X)^{-1}. (4.2)

For now, we focus on the case where the parameter of interest is βk, for some 1 ≤ k ≤ K. The OLS estimator of βk is β̂_{k,OLS} and its HC standard error⁴ implied by (4.2) is

SE_HC(β̂_{k,OLS}) := √([Âvar_HC(β̂_OLS)]_{k,k} / n). (4.3)

Then, for example, a two-sided confidence interval for βk with nominal level 1 − α is given by

β̂_{k,OLS} ± t_{n−K,1−α/2} · SE_HC(β̂_{k,OLS}), (4.4)

where t_{n−K,1−α/2} denotes the 1 − α/2 quantile of the t distribution with n − K degrees of freedom.⁵ Alternatively, hypothesis tests of the form H0 : βk = β_{k,0} can be based on the test statistic

(β̂_{k,OLS} − β_{k,0}) / SE_HC(β̂_{k,OLS})

in conjunction with suitable quantiles of the t_{n−K} distribution as critical values.

As stated before, finding a consistent estimator of Avar(β̂_OLS) reduces to finding a consistent estimator of Ω in (4.2). There exist five widely used such estimators in the literature, named HC0–HC4. They are all of the 'sandwich' form

Ω̂ := (1/n) X'Ψ̂X with Ψ̂ := diag{ψ̂1, . . . , ψ̂n}. (4.5)

⁴In our terminology, a standard error is an estimate of the standard deviation of an estimator rather than the actual standard deviation of the estimator itself.
⁵On asymptotic grounds, one could also use the 1 − α/2 quantile of the standard normal distribution instead. Taking the quantile from the t_{n−K} distribution results in somewhat more conservative inference in finite samples and is the standard practice in statistical software packages.


Therefore, to completely define one of the HC estimators, it is sufficient to specify a typical element, ψ̂i, of the diagonal matrix Ψ̂. In doing so, let ε̂i denote the ith OLS residual, given by ε̂i := yi − xi'β̂_OLS, let hi denote the ith diagonal element of the 'hat' matrix H := X(X'X)^{-1}X', and let h̄ denote the grand mean of the {hi}ⁿᵢ₌₁. The various HC estimators use the following specifications:

HC0 : ψ̂i := ε̂ᵢ²,
HC1 : ψ̂i := [n/(n−K)] · ε̂ᵢ²,
HC2 : ψ̂i := ε̂ᵢ²/(1 − hi),
HC3 : ψ̂i := ε̂ᵢ²/(1 − hi)², and
HC4 : ψ̂i := ε̂ᵢ²/(1 − hi)^{δi} with δi := min{4, hi/h̄}. (4.6)
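In code, the five variants amount to different rescalings of the squared residuals before forming the sandwich (4.2). The sketch below (our own, with hypothetical function names) returns the implied standard errors (4.3):

```python
import numpy as np

def hc_se(X, y, variant="HC3"):
    """HC standard errors (4.3) for OLS, with psi_i as specified in (4.6)."""
    n, K = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    e = y - X @ (XtX_inv @ (X.T @ y))                    # OLS residuals
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)          # leverages h_i
    psi = {
        "HC0": e**2,
        "HC1": n / (n - K) * e**2,
        "HC2": e**2 / (1 - h),
        "HC3": e**2 / (1 - h)**2,
        "HC4": e**2 / (1 - h)**np.minimum(4.0, h / h.mean()),
    }[variant]
    omega = (X * psi[:, None]).T @ X / n                 # Omega_hat, (4.5)
    avar = n**2 * XtX_inv @ omega @ XtX_inv              # (4.2)
    return np.sqrt(np.diag(avar) / n)                    # (4.3)

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(1.0, 4.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + x * rng.standard_normal(n)
se0, se3 = hc_se(X, y, "HC0"), hc_se(X, y, "HC3")
```

Since 1/(1 − hᵢ)² ≥ 1/(1 − hᵢ) ≥ 1, the variants are ordered: HC3 standard errors are never smaller than HC2, which in turn are never smaller than HC0.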

HC0 dates back to White (1980) but results in inference that is generally liberal in small to moderate samples. HC1–HC3 are various improvements suggested by MacKinnon and White (1985): HC1 uses a global degrees-of-freedom adjustment, HC2 is based on influence analysis, and HC3 approximates a jackknife estimator. HC4 is the most recent proposal, by Cribari-Neto (2004), designed to also handle observations xi with strong leverage. Of the estimators HC0–HC3, the one that delivers the most reliable finite-sample inference is HC3; for example, see MacKinnon and White (1985), Long and Ervin (2000), and Angrist and Pischke (2009, Section 8.1). It is also the default option in several statistical software packages to carry out HC estimation, such as in the R function vcovHC(); for example, see Zeileis (2004). On the other hand, we are not aware of any simulation studies evaluating the performance of the HC4 estimator outside of Cribari-Neto (2004).⁶

It is a characteristic feature of a HC standard error of the form (4.2)–(4.3) that its variance is larger than the variance of the conventional standard error based on the assumption of conditional homoskedasticity:

SE_CO(β̂_{k,OLS}) := √(s² [(X'X)^{-1}]_{k,k}) with s² := [1/(n−K)] Σᵢ₌₁ⁿ ε̂ᵢ². (4.7)

(A HC standard error as well as the conventional standard error are functions of the data. They are therefore random variables and, in particular, have a variance.) As a result, inference based on a HC standard error tends to be liberal⁷ in small samples, especially when there is no or only little conditional heteroskedasticity. These facts have been demonstrated by Kauermann and Carroll (2001) analytically and by Long and Ervin (2000), Kauermann and Carroll (2001), Cribari-Neto (2004), and Angrist and Pischke (2009, Section 8.1), among others, via Monte Carlo studies.

As a rule-of-thumb remedy, Angrist and Pischke (2009, Section 8.1) propose to take the maximum of a HC standard error and the conventional standard error. Letting

SE_max(β̂_{k,OLS}) := max{SE_HC(β̂_{k,OLS}), SE_CO(β̂_{k,OLS})},

a more conservative confidence interval for βk is then given by

β̂_{k,OLS} ± t_{n−K,1−α/2} · SE_max(β̂_{k,OLS}). (4.8)

⁶His Monte Carlo study only considers a single parametric specification of the skedastic function v(·).
⁷This means that confidence intervals tend to undercover and that hypothesis tests tend to overreject under the null.

(In particular, they recommend the use of the HC3 standard error.)

We next turn to inference on βk based on the WLS estimator. The textbook solution is to assume that v̂(·) is a consistent estimator for the skedastic function v(·) and to then compute a conventional standard error from the transformed data

ỹi := yi/√v̂(xi) and x̃i := xi/√v̂(xi). (4.9)

More specifically,

SE_CO(β̂_{k,WLS}) := √(s̃² [(X̃'X̃)^{-1}]_{k,k}) with s̃² := [1/(n−K)] Σᵢ₌₁ⁿ ε̃ᵢ² and ε̃i := ỹi − x̃i'β̂_WLS. (4.10)

The problem is that this standard error is incorrect when v̂(·) is not a consistent estimator and, as a result, a confidence interval for βk based on the WLS estimator combined with this standard error generally does not have correct coverage probability, even asymptotically. In the absence of some supernal information on the skedastic function v(·), applied researchers cannot be confident about having a consistent estimator v̂(·). Therefore, they have rightfully shied away from the textbook inference based on the WLS estimator. The safe 'solution' is to simply use the OLS estimator combined with a HC standard error. This status quo in applied economic research is succinctly summarized by Angrist and Pischke (2010, p.10):

Robust standard errors, automated clustering, and larger samples have also taken the steam out of issues like heteroskedasticity and serial correlation. A legacy of White's (1980[a]) paper on robust standard errors, one of the most highly cited from the period, is the near death of generalized least squares in cross-sectional applied work.⁸ In the interests of replicability, and to reduce the scope for errors, modern applied researchers often prefer simpler estimators though they might be giving up asymptotic efficiency.

⁸For cross-sectional data, generalized least squares equates to weighted least squares.


In contrast, we side with Leamer (2010), who views conditional heteroskedasticity as an opportunity, namely an opportunity to construct more efficient estimators and to obtain shorter confidence intervals by sensibly weighting the data. But such benefits should not come at the expense of valid inference when the model for the skedastic function is misspecified. To this end, ironically, the same tool that killed off the WLS estimator can be used to resurrect it. Our proposal is simple: applied researchers should use the WLS estimator combined with a HC standard error.⁹ Doing so allows for valid inference, under weak regularity conditions, even if the employed v̂(·) is not a consistent estimator of the skedastic function v(·). Specifically, the WLS estimator is the OLS estimator applied to the transformed data (4.9). And, analogously, a corresponding HC standard error is also obtained from these transformed data. In practice, the applied researcher only has to transform the data and then proceed as he would have done with the original data: run OLS and compute a HC standard error. Denote the HC standard error computed from the transformed data by SE_HC(β̂_{k,WLS}). Then a confidence interval for βk based on the WLS estimator is given by

β̂_{k,WLS} ± t_{n−K,1−α/2} · SE_HC(β̂_{k,WLS}). (4.11)

As before, one might prefer a more conservative approach for small sample sizes using the maximum of a HC standard error and the conventional standard error:

SE_max(β̂_{k,WLS}) := max{SE_HC(β̂_{k,WLS}), SE_CO(β̂_{k,WLS})},

resulting in the confidence interval

β̂_{k,WLS} ± t_{n−K,1−α/2} · SE_max(β̂_{k,WLS}). (4.12)

(In particular, we recommend the use of the HC3 standard error, or perhaps even the HC4 standard error.)

Remark 4.1 (Adaptive Least Squares; Remark 3.1 continued). Should a researcher prefer ALS for the estimation of β, he generally also needs a corresponding method for making inference on β. The method then is straightforward. If the ALS estimator is equal to the OLS estimator, use the confidence interval (4.4) or even the confidence interval (4.8). If the ALS estimator is equal to the WLS estimator, use the confidence interval (4.11) or even the confidence interval (4.12). Note that in this setting, the test for conditional heteroskedasticity 'determines' the inference method, but not in the way it has been generally promoted in the literature to date: namely, always use the OLS estimator and then base inference on a HC standard error (4.3) if the test rejects and on the conventional standard error (4.7) otherwise. This practice is not recommended since, under conditional heteroskedasticity, an invalid inference method (based on the conventional standard error) will be chosen with non-negligible probability in small to moderate samples because the power of the test is not near one. As a result, the finite-sample properties of this practice, under conditional heteroskedasticity, are poor in small to moderate samples; for example, see Long and Ervin (2000, Section 4.3). In contrast, our proposal does not incur such a problem, since the pretest instead decides between two inference methods that are both valid under conditional heteroskedasticity.

So far, we have only discussed inference for a generic component, βk, of β. The extension to more general inference problems is straightforward and detailed in Appendix A.

⁹In spite of its simplicity, we have not seen this proposal anywhere else so far.
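Operationally, the proposal of this subsection is a few lines of code: transform the data as in (4.9), run OLS on the transformed data, and compute a HC standard error from the same transformed data. A sketch under our own naming conventions, with the t quantile supplied by the caller (1.984 ≈ t_{98,0.975} below):

```python
import numpy as np

def wls_hc_ci(X, y, v, k, t_quant):
    """Confidence interval (4.11) for beta_k: WLS point estimate plus an
    HC3 standard error, both computed from the transformed data (4.9).

    v: fitted skedastic-function values vhat(x_i); these may come from a
    misspecified model, in which case the interval remains asymptotically
    valid. t_quant: the 1 - alpha/2 quantile of t_{n-K}.
    """
    w = 1.0 / np.sqrt(v)
    Xt, yt = X * w[:, None], y * w                   # transformed data (4.9)
    XtX_inv = np.linalg.inv(Xt.T @ Xt)
    b = XtX_inv @ (Xt.T @ yt)                        # WLS estimate
    e = yt - Xt @ b
    h = np.einsum('ij,jk,ik->i', Xt, XtX_inv, Xt)
    psi = e**2 / (1 - h)**2                          # HC3 weights, (4.6)
    avar = XtX_inv @ ((Xt * psi[:, None]).T @ Xt) @ XtX_inv
    se = np.sqrt(avar[k, k])
    return b[k], (b[k] - t_quant * se, b[k] + t_quant * se)

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(1.0, 4.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + x * rng.standard_normal(n)
bk, (lo, hi) = wls_hc_ci(X, y, v=x**2, k=1, t_quant=1.984)
```

With constant weights the routine reproduces OLS with an HC3 standard error, so the same code covers both branches of the ALS procedure.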

4.2 Consistent Estimation of the Limiting Covariance Matrix

We now consider estimating the unknown limiting covariance matrix of the WLS estimator, which, recalling (3.13), is given by Ω_{1/w}^{-1} Ω_{v/w²} Ω_{1/w}^{-1}, where, again, w(·) := vθ0(·) and v(·) is the true skedastic function. First, Ω_{1/w} is estimated by

Ω̂_{1/w} := X'Ŵ^{-1}X/n = X'V̂_θ̂^{-1}X/n. (4.13)

Second, we are left to consistently estimate Ω_{v/w²}, which we recall is just

Ω_{v/w²} = E[(v(xi)/vθ0(xi)²) · xi xi'] = E[(εᵢ²/vθ0(xi)²) · xi xi']. (4.14)

Of course, by the law of large numbers,

(1/n) Σᵢ₌₁ⁿ (εᵢ²/vθ0(xi)²) · xi xi' →P Ω_{v/w²}.

We do not know vθ0(xi), but it can be estimated by vθ̂(xi). In addition, we do not observe the true errors, but they can be estimated by the residuals after some consistent model fit. So given some consistent estimator β̂, such as the ordinary least squares estimator, define the ith residual by

ε̂i := yi − xi'β̂ = εi − xi'(β̂ − β). (4.15)

The resulting estimator of (4.14) is then

Ω̂_{v/w²} := (1/n) Σᵢ₌₁ⁿ (ε̂ᵢ²/vθ̂(xi)²) · xi xi'. (4.16)

Furthermore, note that (3.9) implies that there exists a real-valued function Rθ0(·) such that

|1/vθ(x)² − 1/vθ0(x)²| ≤ Rθ0(x)|θ − θ0| (4.17)

for all θ in some small open ball around θ0 and all x in the support of the covariates.

Theorem 4.1. Assume the conditions of Theorem 3.1. Consider the estimator Ω̂_{1/w}^{-1} Ω̂_{v/w²} Ω̂_{1/w}^{-1}, where Ω̂_{1/w} is given in (4.13) and Ω̂_{v/w²} is given in (4.16). Then,

Ω̂_{1/w}^{-1} Ω̂_{v/w²} Ω̂_{1/w}^{-1} →P Ω_{1/w}^{-1} Ω_{v/w²} Ω_{1/w}^{-1}, (4.18)

provided the following moment conditions are satisfied:

E|x_{ij} x_{ik} x_{il} x_{im}/vθ0(xi)²| < ∞, (4.19)

E|x_{ij} x_{ik} x_{il} εi/vθ0(xi)²| < ∞, (4.20)

and

E[|xi|² εᵢ² Rθ0(xi)] = E[|xi|² v(xi) Rθ0(xi)] < ∞. (4.21)

4.3 Asymptotic Validity of the Inference Methods

It is easy to see that the estimator $\hat\Omega_{1/w}^{-1}\hat\Omega_{v/w^2}\hat\Omega_{1/w}^{-1}$ is none other than the HC0 estimator described in (4.5) and (4.6). Of course, having proven consistency of the HC0 estimator, consistency of the HC1 estimator follows immediately. For motivations to use, alternatively, the estimators HC2–HC4, see MacKinnon and White (1985) and Cribari-Neto (2004). Being able to consistently estimate the limiting covariance matrix of the WLS estimator results in validity of the corresponding inference methods detailed in Subsection 4.1.
As far as inference based on the ALS estimator is concerned, there are two cases to consider. In the first case, the limiting skedastic function $v_{\theta_0}(\cdot)$ is constant. In this case (see Remark 3.3),
$$\sqrt{n}\,\bigl(\hat\beta_{\mathrm{OLS}} - \hat\beta_{\mathrm{WLS}}\bigr) \overset{P}{\longrightarrow} 0\;.$$
Therefore, there is a common limiting distribution for all three estimators (OLS, WLS, and ALS), and validity of the inference based on the ALS estimator follows from the validity of the inference based on the other two estimators. In the second case, the limiting skedastic function $v_{\theta_0}(\cdot)$ is not constant. In this case, the ALS estimator will be equal to the WLS estimator with probability tending to one, and the validity of the inference based on the ALS estimator follows from the validity of the inference based on the WLS estimator.

5 Monte Carlo Study

5.1 Basic Set-Up

We consider the simple regression model
$$y_i = \beta_1 + \beta_2 x_i + \varepsilon_i\;, \tag{5.1}$$

based on an i.i.d. sample $\{(y_i, x_i)\}_{i=1}^n$. In our design, $x_i \sim U[1,4]$ and
$$\varepsilon_i := \sqrt{v(x_i)}\cdot z_i\;, \tag{5.2}$$
where $z_i \sim N(0,1)$, and $z_i$ is independent of $x_i$. The sample size is $n \in \{20, 50, 100\}$. The parameter of interest is $\beta_2$.
When generating the data, we consider four parametric specifications for the skedastic function $v(\cdot)$. First, $v(\cdot)$ is a power function:
$$v(x) = x^\gamma\;, \quad\text{with } \gamma \in \{0, 1, 2, 4\}\;. \tag{5.3}$$
This specification includes conditional homoskedasticity for the choice $\gamma = 0$. Second, $v(\cdot)$ is a power of the log function:
$$v(x) = \bigl(\log(x)\bigr)^\gamma\;, \quad\text{with } \gamma \in \{2, 4\}\;. \tag{5.4}$$
Third, $v(\cdot)$ is the exponential of a second-degree polynomial:
$$v(x) = \exp(\gamma x + \gamma x^2)\;, \quad\text{with } \gamma \in \{0.1, 0.15\}\;. \tag{5.5}$$
Fourth, $v(\cdot)$ is a power of a step function:
$$v(x) = \begin{cases} 1^\gamma\;, & 1 \le x < 2\;, \\ 2^\gamma\;, & 2 \le x < 3\;, \\ 3^\gamma\;, & 3 \le x \le 4\;, \end{cases} \quad\text{with } \gamma \in \{1, 2\}\;. \tag{5.6}$$

The four specifications are graphically displayed in Figures 1–4. Note that, for ease of interpretation, we actually plot $\sqrt{v(x)}$ as a function of $x$, since $\sqrt{v(x)}$ corresponds to the conditional standard deviation and thus lives on the same scale as $x$.
The parametric model used for estimating the skedastic function is
$$v_\theta(x) = \exp\bigl(\nu + \gamma \log |x|\bigr)\;, \quad\text{with } \theta := (\nu, \gamma)'\;. \tag{5.7}$$
The model assumed for the skedastic function is correctly specified in (5.3) (with $\nu = 0$) and it is misspecified in (5.4)–(5.6). We estimate $\nu$ and $\gamma$ from the data by the OLS regression
$$\log\bigl(\max(\delta^2, \hat\varepsilon_i^2)\bigr) = \nu + \gamma \log |x_i| + u_i\;, \tag{5.8}$$
where the $\hat\varepsilon_i$ are the OLS residuals of (5.1) and $\delta$ is chosen as $\delta = 0.1$ throughout. The resulting estimator of $(\nu, \gamma)$ is denoted by $(\hat\nu, \hat\gamma)$. WLS is then based on
$$\hat v(x) := \exp(\hat\nu + \hat\gamma \log x)\;. \tag{5.9}$$
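The two-step procedure around (5.7)–(5.9) can be sketched as follows for the simple regression (5.1). The function name `fit_skedastic_and_wls` is hypothetical, and `delta` plays the role of $\delta = 0.1$ in (5.8).

```python
import numpy as np

def fit_skedastic_and_wls(x, y, delta=0.1):
    """Two-step WLS for the simple regression (5.1) with specification (5.7).

    Step 1: OLS of y on (1, x) to get residuals.
    Step 2: regression (5.8) of log(max(delta^2, resid^2)) on (1, log|x|),
            giving (nu_hat, gamma_hat).
    Step 3: WLS with weights 1 / v_hat(x_i), v_hat as in (5.9)."""
    X = np.column_stack([np.ones_like(x), x])
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    eps_hat = y - X @ beta_ols
    # Regression (5.8) for the skedastic-function parameters
    G = np.column_stack([np.ones_like(x), np.log(np.abs(x))])
    h = np.log(np.maximum(delta**2, eps_hat**2))
    nu_hat, gamma_hat = np.linalg.solve(G.T @ G, G.T @ h)
    v_hat = np.exp(nu_hat + gamma_hat * np.log(np.abs(x)))
    # WLS: solve (X' V^{-1} X) b = X' V^{-1} y
    Xv = X / v_hat[:, None]
    beta_wls = np.linalg.solve(Xv.T @ X, Xv.T @ y)
    return beta_ols, beta_wls, nu_hat, gamma_hat
```

Estimating the slope $\hat\gamma$ of (5.8) implicitly picks the power of $|x|$, which is the point made in Remark 5.1 below.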

Remark 5.1 (Choice of the Parametric Specification $v_\theta(\cdot)$). As explained in Remark 3.5, under conditional homoskedasticity, WLS is asymptotically as efficient as OLS when using a specification of the form (3.6), but not necessarily when using the form (3.5) as chosen in this Monte Carlo study. The reasoning for preferring (3.5) in empirical work is two-fold.
First, the specification (5.7) is equivalent to
$$v_\theta(x) = \sigma^2 |x|^\gamma\;, \quad\text{with } \nu = \log \sigma^2\;.$$
Therefore, estimating such a specification implicitly estimates the 'best' power of $|x|$ for modeling the skedastic function.10 On the other hand, a specification of the form (3.6) would boil down to
$$v_\theta(x) = \nu + \gamma |x|\;.$$
This specification sets the power of $|x|$ equal to one, which may be far from optimal. One approach would be to choose another power, but then which one? A reasonable solution here would be to choose the power based on some residual plots (where the residuals are obtained from a first-stage OLS fit); but, clearly, such a method is difficult to implement in a Monte Carlo study. Another approach would be to include more than one power on the right-hand side, such as both $|x|$ and $|x|^2$. Again, it is not clear which powers are 'best', and including many powers will result in inflated estimation uncertainty.
Second, a specification of the form (3.6) does not guarantee positivity of the weights $v_{\hat\theta}(x_i)$ to be used for WLS. Of course, there are several solutions to this problem, such as restricting the estimate $\hat\theta$ in a suitable fashion. But, again, it is not necessarily clear which such solution should be used in practice.
As an alternative, we also experimented with the specification
$$v_\theta(x) = \nu + \gamma_1 |x| + \gamma_2 |x|^2\;,$$
which guarantees that WLS is asymptotically equivalent to OLS; see Remark 3.5.11 Compared to the specification (5.7), the results for WLS and ALS were similar under conditional homoskedasticity but worse under conditional heteroskedasticity.

5.2 Estimation

We consider the following three estimators of $\beta_2$.

• OLS: The OLS estimator of $\beta_2$.

• WLS: The WLS estimator of $\beta_2$ based on $\hat v(\cdot)$ given by (5.9).

10 In particular, this specification is the one proposed by Judge et al. (1988, Section 9.3) for the case of a single covariate (in addition to the constant, potentially).
11 This specification might result in some non-positive weights $v_{\hat\theta}(x_i)$; such weights were then all set equal to a small, positive number.


• ALS: The ALS estimator of $\beta_2$ of Remark 3.1. The test for conditional heteroskedasticity used rejects the null of conditional homoskedasticity if $nR^2 > \chi^2_{1,0.9}$, where $R^2$ is the $R^2$-statistic from the OLS regression (5.8) and $\chi^2_{1,0.9}$ is the 0.9 quantile of the chi-squared distribution with one degree of freedom.

The performance measure is the empirical mean squared error (eMSE). For a generic estimator $\tilde\beta_2$, it is defined as
$$\mathrm{eMSE}(\tilde\beta_2) := \frac{1}{B}\sum_{b=1}^{B}\bigl(\tilde\beta_{2,b} - \beta_2\bigr)^2\;,$$
where $B$ denotes the number of Monte Carlo repetitions and $\tilde\beta_{2,b}$ denotes the outcome of $\tilde\beta_2$ in the $b$th repetition. The simulations are based on $B = 50{,}000$ Monte Carlo repetitions. Without loss of generality, we set $(\beta_1, \beta_2) = (0, 0)$ when generating the data.

The results are presented in Tables 1–2 and can be summarized as follows.

• As expected, in the case of conditional homoskedasticity (that is, in specification (5.3) with $\gamma = 0$), OLS is more efficient than WLS. But the differences are rather small and decreasing in $n$. In the worst case, $n = 20$, the ratio of the two eMSEs (WLS/OLS) is only 1.12.

• When there is conditional heteroskedasticity, WLS is generally more efficient than OLS. Only when the degree of conditional heteroskedasticity is low and the sample is small ($n = 20$) can OLS be more efficient, though the differences are always small.

• When the degree of conditional heteroskedasticity is high and the sample size is large, the differences between OLS and WLS can be vast; namely, the ratio (WLS/OLS) can be in the neighborhood of 0.3.

• ALS sacrifices some of the efficiency gains of WLS under conditional heteroskedasticity, especially when the sample size is small. On the other hand, it is closer to the performance of OLS under conditional homoskedasticity.

• The previous statements hold true even when the model used to estimate the skedastic function is misspecified.

In sum, using WLS offers possibilities of vast improvements over OLS in terms of mean squared error while incurring only modest downside risk. ALS constitutes an attractive compromise between WLS and OLS.

Remark 5.2 (Nonnormal Error Terms). To save space, we only report results when the distribution of the $z_i$ in (5.2) is standard normal. However, we carried out additional simulations changing this distribution to a t-distribution with five degrees of freedom (scaled to have variance one) and a chi-squared distribution with five degrees of freedom (centered and scaled to have variance one).
In both cases, the numbers for the two eMSEs increase compared to the normal distribution but their ratios remain virtually unchanged. Therefore, the preceding summary statements appear 'robust' to nonnormality of the error terms.
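A scaled-down version of this simulation can be sketched as below. The helper name `emse_comparison` is our own, the number of repetitions is far below the paper's $B = 50{,}000$ to keep the run fast (so the numbers are rougher than those in Tables 1–2), and the ALS pretest uses the $nR^2$ rule described above.

```python
import numpy as np
from scipy.stats import chi2

def emse_comparison(n=100, B=1000, gamma=2.0, seed=0):
    """Scaled-down Section 5 comparison: eMSE of the OLS, WLS, and ALS
    estimators of beta_2 under v(x) = x^gamma and beta = (0, 0)."""
    rng = np.random.default_rng(seed)
    crit = chi2.ppf(0.9, df=1)               # pretest critical value
    emse = {"OLS": 0.0, "WLS": 0.0, "ALS": 0.0}
    for _ in range(B):
        x = rng.uniform(1, 4, n)
        y = np.sqrt(x**gamma) * rng.standard_normal(n)
        X = np.column_stack([np.ones(n), x])
        b_ols = np.linalg.solve(X.T @ X, X.T @ y)
        e = y - X @ b_ols
        G = np.column_stack([np.ones(n), np.log(x)])   # regression (5.8)
        h = np.log(np.maximum(0.01, e**2))             # delta = 0.1
        coef = np.linalg.solve(G.T @ G, G.T @ h)
        v_hat = np.exp(G @ coef)
        Xv = X / v_hat[:, None]
        b_wls = np.linalg.solve(Xv.T @ X, Xv.T @ y)
        # Pretest: n * R^2 from (5.8) decides between OLS and WLS
        r2 = 1.0 - np.sum((h - G @ coef)**2) / np.sum((h - h.mean())**2)
        b_als = b_wls if n * r2 > crit else b_ols
        for name, b in (("OLS", b_ols), ("WLS", b_wls), ("ALS", b_als)):
            emse[name] += b[1]**2 / B
    return emse
```

With strong heteroskedasticity (e.g., `gamma=2.0`) and moderate `n`, the WLS entry should come out below the OLS entry, in line with the summary above.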

Remark 5.3 (Failure of Assumption (A6')). The scheme (5.2) to generate the error terms $\varepsilon_i$ satisfies assumption (A6') of Remark 3.5. It is therefore mathematically guaranteed that even the specification (5.7) guarantees that WLS and ALS are asymptotically as efficient as OLS under conditional homoskedasticity. To study the impact of the failure of (A6') on the finite-sample performance under conditional homoskedasticity, we also consider error terms of the following form in specification (5.4) with $\gamma = 0$:
$$\varepsilon_i := \begin{cases} z_{i,1} & \text{if } x_i < 2\;, \text{ where } z_{i,1} \sim N(0,1)\;, \\ z_{i,2} & \text{if } 2 \le x_i < 3\;, \text{ where } z_{i,2} \sim t_5^*\;, \\ z_{i,3} & \text{if } 3 \le x_i \le 4\;, \text{ where } z_{i,3} \sim \chi_5^{2,*}\;. \end{cases} \tag{5.10}$$
Here, $t_5^*$ denotes a t-distribution with five degrees of freedom (scaled to have variance one) and $\chi_5^{2,*}$ denotes a chi-squared distribution with five degrees of freedom (centered and scaled to have variance one). The results are presented at the bottom of Table 1. It can be seen that even if assumption (A6') does not hold, the efficiency loss of WLS and ALS compared to OLS under conditional homoskedasticity may still tend to zero as the sample size tends to infinity.
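Under our reading of the display (5.10), the region-dependent error terms can be generated as follows; `mixed_errors` is a hypothetical helper name.

```python
import numpy as np

def mixed_errors(x, rng):
    """Error terms as in (5.10): the distribution of eps_i depends on the
    region of x_i, so (A6') fails, yet each piece has mean 0 and variance 1."""
    z = rng.standard_normal(x.shape)                   # N(0,1) piece
    t = rng.standard_t(5, x.shape) / np.sqrt(5 / 3)    # t_5 scaled to var 1
    c = (rng.chisquare(5, x.shape) - 5) / np.sqrt(10)  # chi2_5 centered/scaled
    return np.where(x < 2, z, np.where(x < 3, t, c))
```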

5.3 Inference

We next study the finite-sample performance of the following six confidence intervals for $\beta_2$.

• OLS-HC: The interval (4.4).

• OLS-Max: The interval (4.8).

• WLS-HC: The interval (4.11).

• WLS-Max: The interval (4.12).

• ALS-HC: The ALS inference of Remark 4.1 based on either interval (4.4) or interval (4.11).

• ALS-Max: The ALS inference of Remark 4.1 based on either interval (4.8) or interval (4.12).

There are two performance measures: first, the empirical coverage probability of a confidence interval with nominal confidence level $1 - \alpha = 95\%$; and second, the ratio of the average length of a confidence interval to the average length of OLS-HC. (By construction, this ratio is independent of the nominal level.) Again, the simulations are based on $B = 50{,}000$ Monte Carlo repetitions. Again, without loss of generality, we set $(\beta_1, \beta_2) = (0, 0)$ when generating the data.

The results are presented in Tables 3–4 and can be summarized as follows.

• The coverage properties of all six intervals are at least satisfactory. Nevertheless, the HC intervals can undercover somewhat for small sample sizes. This problem is mitigated by using the Max intervals instead, at the expense of somewhat increased average lengths. Overall, the three HC intervals have comparable coverage to each other, and the three Max intervals have comparable coverage to each other as well.

• Although there are only minor differences in terms of coverage (within the HC type and the Max type, respectively), there can be major differences in terms of average length. On average, WLS-HC and ALS-HC are never longer than OLS-HC, but they can be dramatically shorter in the presence of strong conditional heteroskedasticity; in extreme cases they are only about half as long. The findings are the same when comparing WLS-Max and ALS-Max to OLS-Max.

• The previous statements hold true even when the model used to estimate the skedastic function is misspecified.

In sum, confidence intervals based on WLS or ALS offer possibilities of vast improvements over OLS in terms of expected length. This benefit does not come at any noticeable expense in terms of inferior coverage properties.

Remark 5.4 (Nonnormal Error Terms). To save space, we only report results when the distribution of the $z_i$ in (5.2) is standard normal. However, we carried out additional simulations changing this distribution to a t-distribution with five degrees of freedom (scaled to have variance one) and a chi-squared distribution with five degrees of freedom (centered and scaled to have variance one). For the case of the t-distribution, empirical coverage probabilities generally slightly increase; for the case of the chi-squared distribution, they decrease and can fall below 92% for the HC confidence intervals and below 93% for the Max confidence intervals. Nevertheless, for both cases, OLS-HC continues to have comparable coverage performance to WLS-HC, and OLS-Max continues to have comparable coverage performance to WLS-Max. Furthermore, in both cases, the ratios of average lengths remain virtually unchanged compared to the normal distribution. Therefore, the preceding summary statements appear 'robust' to nonnormality of the error terms.

Remark 5.5 (Hypothesis Tests). By the well-understood duality between confidence intervals and hypothesis tests, we can gain the following insights.
Hypothesis tests on βk based on WLS or ALS offer possibilities of vast improvements over hypothesis tests based on OLS in terms of power while incurring basically no downside risk. This benefit does not come at any noticeable expense in terms of elevated null rejection probabilities.

6 Variations and Extensions

We briefly discuss a few natural variations and extensions to the proposed methodology.

• In this paper, we have focused on standard inference based on asymptotic normality of an estimator coupled with an estimate of the limiting covariance matrix. An anticipated criticism is that, by trying to estimate the true skedastic function, increased error in finite samples may result. But increased efficiency results in shorter confidence intervals. If coverage error were too high in finite samples (though our simulations indicate adequate performance), the conclusion should not be to abandon weighted least squares, but to consider alternative inference methods that offer improved higher-order asymptotic accuracy (and thus translate into improved finite-sample performance). For example, one can consider bootstrap methods. In our setting, such inference would correspond to using the WLS estimator in combination with either the pairs bootstrap (e.g., see Efron and Tibshirani, 1993, Section 9.5) or the wild bootstrap (e.g., see Davison and Hinkley, 1997, Section 6.2), since these two bootstrap methods are appropriate for regression models that allow for conditional heteroskedasticity; a recent comparison for OLS estimation is provided in Flachaire (2005). As an alternative to bootstrapping, one can obtain higher-order accuracy by using Edgeworth expansions, as studied in Hausman and Palmer (2012). It is beyond the scope of this paper to establish the asymptotic validity of such schemes applied to WLS and to study their finite-sample performance. Consequently, we leave such topics for future research.

• In this paper, we have focused on the case of a stochastic design matrix X, which is the relevant case for economic applications. Alternatively, it would be possible to handle the case of a nonstochastic design matrix X, assuming certain regularity conditions on the asymptotic behavior of X, such as in Amemiya (1985, Section 3.5).

• Our goal in the present work is primarily to offer enough evidence to change the current practice by showing that the improvements offered by weighted least squares are nontrivial. A more ambitious goal would be to estimate the skedastic function v(·) in a nonparametric fashion. For example, one could use a sieve of parametric models by allowing the number of covariates used in the modeling of v(·) to increase with n. Of course, nonparametric smoothing techniques could be used as well. The hope would be further gains in efficiency, which ought to be possible.

• Finally, it would be of interest to extend the proposed methodology to the context of instrumental variables (IV) regression. HC inference of the HC0–HC1 type based on two-stage least squares (2SLS) estimation is already standard; for example, see Hayashi (2000, Section 3.5). On the other hand, improved HC inference of the HC2–HC3 type is still in its infancy; for example, see Steinhauer and Würgler (2010). To the best of our knowledge, weighted two-stage least squares (W2SLS) estimation has not been considered at all in the context of IV regressions. Therefore, this topic, too, is beyond the scope of the paper.


7 Conclusion

As the amount of data collected is ever growing, the statistical toolbox of applied researchers is ever expanding. Nevertheless, it can be safely assumed that linear models will remain a part of our toolbox for quite some time to come. A textbook analysis of linear models always starts with an assumption of conditional homoskedasticity, that is, an assumption that the conditional variance of the error term is constant. Under such an assumption, one should estimate model parameters by ordinary least squares (OLS), as doing so is efficient. Unfortunately, the real world is plagued by conditional heteroskedasticity, since the conditional variance often depends on the explanatory variables. In such a setting, OLS is no longer efficient. If the true functional form of the conditional variance (that is, the skedastic function) were known, efficient estimators of model parameters could be constructed by properly weighting the data (using the inverse of the square root of the skedastic function) and running OLS on the weighted data set. Of course, the true skedastic function is rarely known. In the olden days, applied researchers resorted to weighting the data based on an estimate of the skedastic function, resulting in the weighted least squares (WLS) estimator.
Under conditional heteroskedasticity, textbook inference based on the OLS estimator can be misleading. But the same is true for textbook inference based on the WLS estimator, unless the model for estimating the skedastic function is correctly specified. These shortcomings have motivated the development of heteroskedasticity consistent (HC) standard errors for the OLS estimator. Such standard errors ensure the (asymptotic) validity of inference based on the OLS estimator in the presence of conditional heteroskedasticity of unknown form. Over time, applied researchers have by and large adopted this practice, causing WLS to become extinct for all practical purposes.

In this paper, we promote the use of HC standard errors for the WLS estimator instead. This practice ensures (asymptotic) validity of inference based on the WLS estimator even when the model for estimating the skedastic function is misspecified. The benefits of our proposal in the presence of noticeable conditional heteroskedasticity are two-fold. First, using WLS generally results in more efficient estimation. Second, HC inference based on WLS has more attractive properties in the sense that confidence intervals for model parameters tend to be shorter and hypothesis tests tend to be more powerful. The price to pay is some efficiency loss relative to OLS in small samples in the textbook setting of conditional homoskedasticity. As a bonus, we propose a new adaptive least squares (ALS) estimator, where a pretest for conditional homoskedasticity is used in order to decide whether to weight the data (that is, to use WLS) or not (that is, to use OLS). Crucially, in either case, one uses HC standard errors so that (asymptotically) valid inference is ensured. No longer having to live in fear of invalid inference, applied researchers should rediscover their long-lost friend, the WLS estimator, or its new companion, the ALS estimator: the benefits over their current company, the OLS estimator, can be substantial.

References

Amemiya, T. (1985). Advanced Econometrics. Harvard University Press, Cambridge, MA.

Angrist, J. D. and Pischke, J.-S. (2009). Mostly Harmless Econometrics. Princeton University Press, Princeton, New Jersey.

Angrist, J. D. and Pischke, J.-S. (2010). The credibility revolution in empirical economics: How better research design is taking the con out of econometrics. Discussion Paper Series No. 4800, IZA.

Breusch, T. and Pagan, A. (1979). A simple test for heteroscedasticity and random coefficient variation. Econometrica, 47:1287–1294.

Cribari-Neto, F. (2004). Asymptotic inference under heteroskedasticity of unknown form. Computational Statistics & Data Analysis, 45:215–233.

Davison, A. C. and Hinkley, D. V. (1997). Bootstrap Methods and their Application. Cambridge University Press, Cambridge.

Efron, B. and Tibshirani, R. J. (1993). An Introduction to the Bootstrap. Chapman & Hall, New York.

Eicker, F. (1963). Asymptotic normality and consistency of the least squares estimator for families of linear regressions. Annals of Mathematical Statistics, 34:447–456.

Eicker, F. (1967). Limit theorems for regressions with unequal and dependent errors. In LeCam, L. M. and Neyman, J., editors, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 59–82, Berkeley, CA. University of California Press.

Flachaire, E. (2005). Bootstrapping heteroskedastic regression models: wild bootstrap vs. pairs bootstrap. Computational Statistics & Data Analysis, 49:361–377.

Harvey, A. C. (1976). Estimating regression models with multiplicative heteroscedasticity. Econometrica, 44:461–465.

Hausman, J. and Palmer, C. (2012). Heteroskedasticity-robust inference in finite samples. Economics Letters, 116:232–235.

Hayashi, F. (2000). Econometrics. Princeton University Press, Princeton, New Jersey.

Huber, P. (1967). The behavior of maximum likelihood estimation under nonstandard conditions. In LeCam, L. M. and Neyman, J., editors, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 221–233, Berkeley, CA. University of California Press.

Judge, G. G., Hill, R. C., Griffiths, W. E., Lütkepohl, H., and Lee, T.-C. (1988). Introduction to the Theory and Practice of Econometrics. John Wiley & Sons, New York, second edition.

Kauermann, G. and Carroll, R. J. (2001). A note on the efficiency of sandwich covariance matrix estimation. Journal of the American Statistical Association, 96(456):1387–1396.

Koenker, R. (1981). A note on studentizing a test for heteroscedasticity. Journal of Econometrics, 17:107–112.

Koenker, R. and Bassett, G. (1982). Robust tests for heteroscedasticity based on regression quantiles. Econometrica, 50:43–61.

Leamer, E. E. (2010). Tantalus on the road to asymptotia. Journal of Economic Perspectives, 24(2):31–46.

Long, J. S. and Ervin, L. H. (2000). Using heteroskedasticity consistent standard errors in the linear regression model. The American Statistician, 54:217–224.

MacKinnon, J. G. and White, H. L. (1985). Some heteroskedasticity-consistent covariance matrix estimators with improved finite-sample properties. Journal of Econometrics, 29:53–57.

Steinhauer, A. and Würgler, T. (2010). Leverage and covariance matrix estimation in finite-sample IV regressions. Working Paper 521, IEW, University of Zurich.

White, H. L. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48:817–838.

Zeileis, A. (2004). Econometric computing with HC and HAC covariance matrix estimators. Journal of Statistical Software, 11(10):1–17.


A More General Inference Problems

A.1 Inference for a Linear Combination

Generalize the parameter of interest from a component $\beta_k$ to a linear combination $a'\beta$, where $a \in \mathbb{R}^K$ is a vector specifying the linear combination of interest.
The OLS estimator of $a'\beta$ is $a'\hat\beta_{\mathrm{OLS}}$. A HC standard error is given by
$$\mathrm{SE}_{\mathrm{HC}}(a'\hat\beta_{\mathrm{OLS}}) := \sqrt{\frac{1}{n}\, a'\,\widehat{\mathrm{Avar}}_{\mathrm{HC}}(\hat\beta_{\mathrm{OLS}})\, a}\;,$$
where $\widehat{\mathrm{Avar}}_{\mathrm{HC}}(\hat\beta_{\mathrm{OLS}})$ is as described in Subsection 4.1. The conventional standard error is given by
$$\mathrm{SE}_{\mathrm{CO}}(a'\hat\beta_{\mathrm{OLS}}) := \sqrt{s^2\, a'(X'X)^{-1}a} \quad\text{with}\quad s^2 := \frac{1}{n-K}\sum_{i=1}^n \hat\varepsilon_i^2 \quad\text{and}\quad \hat\varepsilon_i := y_i - x_i'\hat\beta_{\mathrm{OLS}}\;.$$
The WLS estimator of $a'\beta$ is $a'\hat\beta_{\mathrm{WLS}}$. A HC standard error is given by
$$\mathrm{SE}_{\mathrm{HC}}(a'\hat\beta_{\mathrm{WLS}}) := \sqrt{\frac{1}{n}\, a'\,\widehat{\mathrm{Avar}}_{\mathrm{HC}}(\hat\beta_{\mathrm{WLS}})\, a}\;,$$
where $\widehat{\mathrm{Avar}}_{\mathrm{HC}}(\hat\beta_{\mathrm{WLS}})$ is as described in Subsection 4.1. The conventional standard error is given by
$$\mathrm{SE}_{\mathrm{CO}}(a'\hat\beta_{\mathrm{WLS}}) := \sqrt{\tilde s^2\, a'(\tilde X'\tilde X)^{-1}a} \quad\text{with}\quad \tilde s^2 := \frac{1}{n-K}\sum_{i=1}^n \tilde\varepsilon_i^2 \quad\text{and}\quad \tilde\varepsilon_i := \tilde y_i - \tilde x_i'\hat\beta_{\mathrm{WLS}}\;.$$
From here on, the extension of the inference methods for $\beta_k$ discussed in Subsection 4.1 to inference methods for $a'\beta$ is clear.
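As a concrete illustration of these two standard errors for the OLS case, consider the following sketch. The function name `ses_for_linear_combination` is our own, and the HC0 flavor is used for the estimated limiting covariance matrix.

```python
import numpy as np

def ses_for_linear_combination(a, X, y):
    """HC0 and conventional standard errors of a'beta_OLS, as in
    Appendix A.1. a is a length-K vector defining the linear combination.
    Returns (se_hc, se_co)."""
    n, K = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    eps = y - X @ beta
    bread = np.linalg.inv(X.T @ X / n)
    meat = (X * (eps**2)[:, None]).T @ X / n
    avar_hc = bread @ meat @ bread          # HC0 estimate of the limiting covariance
    se_hc = float(np.sqrt(a @ avar_hc @ a / n))
    s2 = eps @ eps / (n - K)                # conventional s^2
    se_co = float(np.sqrt(s2 * a @ np.linalg.inv(X.T @ X) @ a))
    return se_hc, se_co
```

Under conditional homoskedasticity, the two standard errors agree asymptotically; under conditional heteroskedasticity, only the HC one remains valid.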

A.2 Testing a Set of Linear Restrictions

Consider testing a set of linear restrictions on $\beta$ of the form
$$H_0: R\beta = r\;,$$
where $R \in \mathbb{R}^{p \times K}$ is a matrix of full row rank specifying $p \le K$ linear combinations of interest and $r \in \mathbb{R}^p$ is a vector specifying their respective values under the null.
A HC Wald statistic based on the OLS estimator is given by
$$W_{\mathrm{HC}}(\hat\beta_{\mathrm{OLS}}) := \frac{n}{p}\cdot(R\hat\beta_{\mathrm{OLS}} - r)'\bigl[R\,\widehat{\mathrm{Avar}}_{\mathrm{HC}}(\hat\beta_{\mathrm{OLS}})\,R'\bigr]^{-1}(R\hat\beta_{\mathrm{OLS}} - r)$$
and its conventional counterpart is given by
$$W_{\mathrm{CO}}(\hat\beta_{\mathrm{OLS}}) := \frac{n}{p s^2}\cdot(R\hat\beta_{\mathrm{OLS}} - r)'\bigl[R(X'X)^{-1}R'\bigr]^{-1}(R\hat\beta_{\mathrm{OLS}} - r)\;.$$
A HC Wald statistic based on the WLS estimator is given by
$$W_{\mathrm{HC}}(\hat\beta_{\mathrm{WLS}}) := \frac{n}{p}\cdot(R\hat\beta_{\mathrm{WLS}} - r)'\bigl[R\,\widehat{\mathrm{Avar}}_{\mathrm{HC}}(\hat\beta_{\mathrm{WLS}})\,R'\bigr]^{-1}(R\hat\beta_{\mathrm{WLS}} - r)$$
and its conventional counterpart is given by
$$W_{\mathrm{CO}}(\hat\beta_{\mathrm{WLS}}) := \frac{n}{p\tilde s^2}\cdot(R\hat\beta_{\mathrm{WLS}} - r)'\bigl[R(\tilde X'\tilde X)^{-1}R'\bigr]^{-1}(R\hat\beta_{\mathrm{WLS}} - r)\;.$$
For a generic Wald statistic $W$, the corresponding p-value is obtained as
$$PV(W) := \mathrm{Prob}\{F \ge W\}\;, \quad\text{where } F \sim F_{p,n}\;.$$
Here, $F_{p,n}$ denotes the F distribution with $p$ and $n$ degrees of freedom. HC inference based on the OLS estimator reports $PV(W_{\mathrm{HC}}(\hat\beta_{\mathrm{OLS}}))$, while more conservative inference based on the OLS estimator reports
$$PV_{\max}(\hat\beta_{\mathrm{OLS}}) := \max\bigl\{PV(W_{\mathrm{HC}}(\hat\beta_{\mathrm{OLS}})),\; PV(W_{\mathrm{CO}}(\hat\beta_{\mathrm{OLS}}))\bigr\}\;.$$
HC inference based on the WLS estimator reports $PV(W_{\mathrm{HC}}(\hat\beta_{\mathrm{WLS}}))$, while more conservative inference based on the WLS estimator reports
$$PV_{\max}(\hat\beta_{\mathrm{WLS}}) := \max\bigl\{PV(W_{\mathrm{HC}}(\hat\beta_{\mathrm{WLS}})),\; PV(W_{\mathrm{CO}}(\hat\beta_{\mathrm{WLS}}))\bigr\}\;.$$
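A sketch of the HC Wald statistic and its F-based p-value, under the assumption that an estimate of the limiting covariance matrix of the estimator is already available; the helper name `hc_wald_pvalue` is ours.

```python
import numpy as np
from scipy.stats import f as f_dist

def hc_wald_pvalue(R, r, beta_hat, avar_hc, n):
    """HC Wald statistic for H0: R beta = r and its p-value from the
    F_{p,n} reference distribution, as in Appendix A.2.

    R: (p, K) full-row-rank restriction matrix; r: (p,) null values;
    avar_hc: (K, K) estimated limiting covariance matrix of beta_hat."""
    p = R.shape[0]
    diff = R @ beta_hat - r
    middle = np.linalg.inv(R @ avar_hc @ R.T)
    w = (n / p) * diff @ middle @ diff      # W_HC of Appendix A.2
    return w, f_dist.sf(w, p, n)            # PV(W) = Prob{F_{p,n} >= W}
```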

B Mathematical Results

B.1 Proofs

Proof of Lemma 3.1. Replacing $y$ by $X\beta + \varepsilon$ in the definition of $\hat\beta_W$ in (3.8) yields
$$\sqrt{n}\,(\hat\beta_W - \beta) = \left(\frac{X'W^{-1}X}{n}\right)^{-1}\frac{X'W^{-1}\varepsilon}{\sqrt{n}}\;. \tag{B.1}$$
By Slutsky's Theorem, the proof consists in showing
$$\frac{X'W^{-1}X}{n} \overset{P}{\longrightarrow} \Omega_{1/w} \tag{B.2}$$
and
$$\frac{X'W^{-1}\varepsilon}{n^{1/2}} \overset{d}{\longrightarrow} N(0, \Omega_{v/w^2})\;. \tag{B.3}$$
To show (B.2), note that its left-hand side has $(j,k)$ element given by
$$\frac{1}{n}\sum_{i=1}^n \frac{x_{ij}x_{ik}}{w(x_i)} \overset{P}{\longrightarrow} E\!\left(\frac{x_{1j}x_{1k}}{w(x_1)}\right)$$
by the law of large numbers. To show (B.3), first note that $E(X'W^{-1}\varepsilon) = E\bigl(X'W^{-1}E(\varepsilon|X)\bigr) = 0$ by Assumption (A3). Furthermore, $X'W^{-1}\varepsilon$ is a sum of i.i.d. random vectors $x_i\cdot\varepsilon_i/w(x_i)$ with common covariance matrix having $(j,k)$ element
$$\mathrm{Cov}\!\left(\frac{x_{1j}\varepsilon_1}{w(x_1)}, \frac{x_{1k}\varepsilon_1}{w(x_1)}\right) = E\!\left(\frac{x_{1j}x_{1k}\varepsilon_1^2}{w^2(x_1)}\right) = E\!\left(\frac{x_{1j}x_{1k}}{w^2(x_1)}\,E(\varepsilon_1^2|x_1)\right) = E\!\left(\frac{x_{1j}x_{1k}\,v(x_1)}{w^2(x_1)}\right).$$

Thus, each vector $x_i\cdot\varepsilon_i/w(x_i)$ has covariance matrix $\Omega_{v/w^2}$. Therefore, by the multivariate Central Limit Theorem, (B.3) holds.

Proof of Theorem 3.1. Let $W$ be the diagonal matrix with $(i,i)$ element $v_{\theta_0}(x_i)$. Similarly to (B.1), we have
$$\sqrt{n}\,(\hat\beta_{\mathrm{WLS}} - \beta) = \left(\frac{X'\hat W^{-1}X}{n}\right)^{-1}\frac{X'\hat W^{-1}\varepsilon}{\sqrt{n}}\;. \tag{B.4}$$
First, we show that
$$\frac{X'\hat W^{-1}\varepsilon}{\sqrt{n}} - \frac{X'W^{-1}\varepsilon}{\sqrt{n}} \overset{P}{\longrightarrow} 0\;. \tag{B.5}$$
Even though the assumptions imply that $\hat W$ and $W$ are close, one needs to exercise some care, as the dimension of these matrices increases with the sample size $n$. The left-hand side of (B.5) is
$$\frac{X'(\hat W^{-1} - W^{-1})\varepsilon}{\sqrt{n}} = n^{-1/2}\sum_{i=1}^n x_i\cdot\varepsilon_i\left(\frac{1}{v_{\hat\theta}(x_i)} - \frac{1}{v_{\theta_0}(x_i)}\right) = A + B\;,$$
where
$$A := n^{-1/2}\sum_{i=1}^n x_i\,\varepsilon_i\, r_{\theta_0}(x_i)(\hat\theta - \theta_0)\;, \tag{B.6}$$
and, with probability tending to one, $B$ is a vector with $j$th component satisfying
$$|B_j| \le \frac{1}{2}\, n^{-1/2}\,|\hat\theta - \theta_0|^2 \sum_{i=1}^n |x_{ij}\,\varepsilon_i\, s_{\theta_0}(x_i)|\;. \tag{B.7}$$

The $j$th component of $A$ is
$$n^{-1/2}\sum_{i=1}^n x_{ij}\,\varepsilon_i \sum_{l=1}^K r_{\theta_0,l}(x_i)\,(\hat\theta_l - \theta_l)\;.$$
So to show $A = o_P(1)$, it suffices to show that, for each $j$ and $l$,
$$(\hat\theta_l - \theta_l)\; n^{-1/2}\sum_{i=1}^n x_{ij}\,\varepsilon_i\, r_{\theta_0,l}(x_i) \overset{P}{\longrightarrow} 0\;.$$
The first factor $(\hat\theta - \theta_0) = o_P(1)$, and so it suffices to show that
$$n^{-1/2}\sum_{i=1}^n x_{ij}\,\varepsilon_i\, r_{\theta_0}(x_i) = O_P(1)\;.$$
The terms in this sum are i.i.d. random variables with mean zero and finite second moments, where the finite second moments follow from (3.11), and so this normalized sum converges in distribution to a multivariate normal distribution. Therefore, $A = o_P(1)$.
To show $|B| = o_P(1)$, write the right-hand side of (B.7) as
$$\frac{1}{2}\,\sqrt{n}\,|\hat\theta - \theta_0|^2 \cdot \frac{1}{n}\sum_{i=1}^n |x_{ij}\,\varepsilon_i\, s_{\theta_0}(x_i)|\;. \tag{B.8}$$

The first factor $\sqrt{n}\,|\hat\theta - \theta_0|^2 = o_P(1)$ by assumption, while the average of the i.i.d. variables $|x_{ij}\,\varepsilon_i\, s_{\theta_0}(x_i)|$ obeys the law of large numbers by the moment assumption (3.12). Thus, $|B| = o_P(1)$ also, and (B.5) holds.
Next, we show that
$$\frac{X'\hat W^{-1}X}{n} - \frac{X'W^{-1}X}{n} \overset{P}{\longrightarrow} 0\;. \tag{B.9}$$
To this end, simply write the left-hand side of (B.9) as
$$\frac{X'(\hat W^{-1} - W^{-1})X}{n} = \frac{1}{n}\sum_i x_ix_i'\left(\frac{1}{v_{\hat\theta}(x_i)} - \frac{1}{v_{\theta_0}(x_i)}\right),$$
and then use the differentiability assumption as above (which is even easier now because one only needs to invoke the law of large numbers and not the central limit theorem). It now also follows, by the limit (B.2) and the fact that the limiting matrix there is positive definite, that
$$\left(\frac{X'\hat W^{-1}X}{n}\right)^{-1} - \left(\frac{X'W^{-1}X}{n}\right)^{-1} \overset{P}{\longrightarrow} 0\;. \tag{B.10}$$
Then, the convergences (B.5) and (B.10) are enough to show that the right-hand side of (B.4) satisfies
$$\left(\frac{X'\hat W^{-1}X}{n}\right)^{-1}\frac{X'\hat W^{-1}\varepsilon}{\sqrt{n}} - \left(\frac{X'W^{-1}X}{n}\right)^{-1}\frac{X'W^{-1}\varepsilon}{\sqrt{n}} \overset{P}{\longrightarrow} 0\;;$$

just by making simple use of the equality
$$\hat a\hat b - ab = \hat a(\hat b - b) + (\hat a - a)b\;.$$
Finally, Slutsky's theorem yields the result.

Proof of Theorem 4.1. First, the estimator (4.13) is consistent because of (B.2) and (B.9). To analyze (4.16), we first consider the behavior of this estimator with $v_{\hat\theta}(\cdot)$ replaced by the fixed $v_{\theta_0}(\cdot)$, but retaining the residuals (instead of the true error terms). From (4.15) it follows that
$$\hat\varepsilon_i^2 = \varepsilon_i^2 - 2(\hat\beta - \beta)'x_i\cdot\varepsilon_i + (\hat\beta - \beta)'x_i\cdot x_i'(\hat\beta - \beta)\;.$$
Then, multiplying the last expression by $x_ix_i'/v_{\theta_0}^2(x_i)$ and averaging over $i$ yields
$$\frac{1}{n}\sum_{i=1}^n\left(\frac{\hat\varepsilon_i^2}{v_{\theta_0}^2(x_i)}\cdot x_ix_i'\right) - \frac{1}{n}\sum_{i=1}^n\left(\frac{\varepsilon_i^2}{v_{\theta_0}^2(x_i)}\cdot x_ix_i'\right) = C_n + D_n\;, \tag{B.11}$$
where
$$C_n := -\frac{2}{n}\sum_{i=1}^n x_ix_i'\cdot(\hat\beta - \beta)'x_i\cdot\varepsilon_i/v_{\theta_0}^2(x_i)$$
and
$$D_n := \frac{1}{n}\sum_{i=1}^n x_ix_i'\cdot(\hat\beta - \beta)'x_i\,x_i'(\hat\beta - \beta)/v_{\theta_0}^2(x_i)\;.$$

The first goal is to show that both $C_n$ and $D_n$ tend to zero in probability. The $(j,k)$ term of the matrix $D_n$ is given by
$$D_n(j,k) = \frac{1}{n}\sum_{i=1}^n x_{ij}x_{ik}\sum_{l=1}^K\sum_{m=1}^K (\hat\beta_l - \beta_l)\,x_{il}\,(\hat\beta_m - \beta_m)\,x_{im}/v_{\theta_0}^2(x_i)\;.$$
Thus, it suffices to show that, for each $j$, $k$, $l$, and $m$,
$$(\hat\beta_l - \beta_l)(\hat\beta_m - \beta_m)\,\frac{1}{n}\sum_{i=1}^n x_{ij}x_{ik}x_{il}x_{im}/v_{\theta_0}^2(x_i) \overset{P}{\longrightarrow} 0\;. \tag{B.12}$$
But $(\hat\beta_l - \beta_l)(\hat\beta_m - \beta_m) \overset{P}{\longrightarrow} 0$, and the average on the right-hand side of (B.12) satisfies the law of large numbers under the "fourth moment condition" (4.19) and thus tends to something finite in probability. Therefore, (B.12) holds and so $D_n \overset{P}{\longrightarrow} 0$.
Next, we show $C_n \overset{P}{\longrightarrow} 0$. Note that $(-1/2)$ times the $(j,k)$ term of $C_n$ is given by
$$\frac{1}{n}\sum_{i=1}^n x_{ij}x_{ik}\sum_{l=1}^K (\hat\beta_l - \beta_l)\,x_{il}\,\varepsilon_i/v_{\theta_0}^2(x_i)\;.$$
So, it suffices to show that, for each $j$, $k$, and $l$,
$$(\hat\beta_l - \beta_l)\,\frac{1}{n}\sum_{i=1}^n x_{ij}x_{ik}x_{il}\,\varepsilon_i/v_{\theta_0}^2(x_i) \overset{P}{\longrightarrow} 0\;. \tag{B.13}$$
But $(\hat\beta_l - \beta_l) \overset{P}{\longrightarrow} 0$, and the average on the right-hand side of (B.13) satisfies the law of large numbers under the "fourth moment condition" (4.20) and thus tends to something finite in probability. Therefore, $C_n \overset{P}{\longrightarrow} 0$.
In summary, what we have shown so far is that (B.11) tends to zero in probability. Thus, the proof of consistency will be complete if we can show that also
$$\frac{1}{n}\sum_{i=1}^n\left(\frac{\varepsilon_i^2}{v_{\hat\theta}^2(x_i)}\cdot x_ix_i'\right) - \frac{1}{n}\sum_{i=1}^n\left(\frac{\varepsilon_i^2}{v_{\theta_0}^2(x_i)}\cdot x_ix_i'\right) \overset{P}{\longrightarrow} 0\;. \tag{B.14}$$

By property (4.17) of the function $R_{\theta_0}(\cdot)$, the $(j,k)$ component of the left-hand side of (B.14) can be bounded in absolute value by
$$|\hat\theta - \theta_0|\cdot\frac{1}{n}\sum_{i=1}^n x_{ij}x_{ik}\,\varepsilon_i^2\, R_{\theta_0}(x_i)\;. \tag{B.15}$$
But $(\hat\theta - \theta_0) \overset{P}{\longrightarrow} 0$ and the average in (B.15) obeys the law of large numbers under the moment condition (4.21), and thus tends to something finite in probability. Therefore, (B.14) holds.

B.2 Verification of Assumptions for the Parametric Specification $v_\theta(\cdot)$

The main theorems assume the family $v_\theta(\cdot)$ leads to a $\hat\theta$ satisfying (3.10). Assume the family $v_\theta(\cdot)$ is of the form (which is slightly more general than (3.5))
$$v_\theta(x) := \exp\!\Bigl(\sum_{j=1}^d \theta_j g_j(x)\Bigr)\;, \tag{B.16}$$
where $\theta = (\theta_1, \ldots, \theta_d)'$ and $g(x) = (g_1(x), \ldots, g_d(x))'$. It is tacitly assumed that $g_1(x) \equiv 1$ to ensure that this specification nests the case of conditional homoskedasticity. Fix $\delta > 0$ and let $h_\delta(\varepsilon) := \log\bigl(\max(\delta^2, \varepsilon^2)\bigr)$. The estimator $\hat\theta$ is obtained by regressing the residuals $\hat\varepsilon_i$, or, more precisely, $h_\delta(\hat\varepsilon_i)$, on $g(x_i)$. Before analyzing the behavior of $\hat\theta$, we first consider $\tilde\theta$, which is obtained by regressing $h_\delta(\varepsilon_i)$ on $g(x_i)$. (Needless to say, we do not know the $\varepsilon_i$, but we can view $\tilde\theta$ as an oracle 'estimator'.) As argued in Hayashi (2000, Section 2.9), $\tilde\theta$ is a consistent estimator of
$$\theta_0 := \bigl[E\bigl(g(x_i)g(x_i)'\bigr)\bigr]^{-1} E\bigl(g(x_i)\cdot h_\delta(\varepsilon_i)\bigr)\;.$$
To show that $\tilde\theta$ is moreover $\sqrt{n}$-consistent, note that $\tilde\theta = L_n^{-1} m_n$, where $L_n$ is the $d \times d$ matrix
$$L_n := \frac{1}{n}\sum_{i=1}^n g(x_i)g(x_i)'$$
and $m_n$ is the $d \times 1$ vector
$$m_n := \frac{1}{n}\sum_{i=1}^n g(x_i)\cdot h_\delta(\varepsilon_i)\;.$$
Since $L_n$ is an average of i.i.d. random matrices, it is a $\sqrt{n}$-consistent estimator of $L := E\bigl(g(x_i)g(x_i)'\bigr)$ (under the assumption of finite second moments of products and invertibility of $L$), and in fact is asymptotically multivariate normal as well.12 Similarly, $\sqrt{n}(m_n - m)$ is asymptotically multivariate normal under moment conditions, where $m := E\bigl(g(x_i)\cdot h_\delta(\varepsilon_i)\bigr)$. But if $L_n$ and $m_n$ are each $\sqrt{n}$-consistent estimators of $L$ and $m$, respectively, it is easy to see that $\tilde\theta = L_n^{-1} m_n$ is a $\sqrt{n}$-consistent estimator of $L^{-1} m = \theta_0$.13
However, our algorithm uses the residuals $\hat\varepsilon_i$ after an OLS fit of $y_i$ on $x_i$, rather than the true errors $\varepsilon_i$. So, we must argue that the difference between $\tilde\theta$ above and the $\hat\theta$ obtained when using the residuals is of order $o_P(n^{-1/4})$, which would then verify (3.10). Note that $\hat\theta = L_n^{-1}\hat m_n$, where
$$\hat m_n := \frac{1}{n}\sum_{i=1}^n g(x_i)\cdot h_\delta(\hat\varepsilon_i)\;.$$

12 Note that L is clearly invertible in the case g(x) := (1, log(x))′, as used in the Monte Carlo study of Section 5.

13 Alternatively, by the usual arguments that show asymptotic normality of OLS, under moment assumptions, √n(θ̃ − θ₀) is asymptotically normal, and hence θ̃ is √n-consistent.
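As an aside, the oracle regression just described is easy to simulate. The sketch below assumes an illustrative design (x_i uniform on [1, 4], v(x) = x² so that the log-variance is linear in log x, standard normal errors), which is not taken from the paper; it only illustrates that the slope component of θ̃ recovers the exponent, while the intercept absorbs E[log z²] ≈ −1.27.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma, delta = 200_000, 2.0, 1e-2

# Illustrative (assumed) design: x_i uniform on [1, 4], v(x) = x^gamma.
x = rng.uniform(1.0, 4.0, n)
eps = np.sqrt(x**gamma) * rng.standard_normal(n)

# h_delta(e) = log(max(delta^2, e^2)) guards against taking log of values near zero.
h = np.log(np.maximum(delta**2, eps**2))

# Oracle regression of h_delta(eps_i) on g(x_i) = (1, log x_i):
# theta_tilde = L_n^{-1} m_n, computed here via least squares.
G = np.column_stack([np.ones(n), np.log(x)])
theta_tilde, *_ = np.linalg.lstsq(G, h, rcond=None)

print(theta_tilde)  # slope close to gamma; intercept close to E[log z^2], about -1.27
```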


Hence, it suffices to show that

  m̂_n − m_n = o_P(n^{−1/4}) .   (B.17)

To do this, first note that

  |max(δ, |ε̂_i|) − max(δ, |ε_i|)| ≤ |ε̂_i − ε_i| .

Then,

  |h_δ(ε̂_i) − h_δ(ε_i)| = |log(max(δ², ε̂²_i)) − log(max(δ², ε²_i))|
            = 2 |log(max(δ, |ε̂_i|)) − log(max(δ, |ε_i|))|
            ≤ (2/δ) |max(δ, |ε̂_i|) − max(δ, |ε_i|)|
            ≤ (2/δ) |ε̂_i − ε_i|
            = (2/δ) |x_i′(β̂ − β)| ,

where the first inequality follows from the mean value theorem of calculus. Therefore,

  |m̂_n − m_n| ≤ (2/(nδ)) Σ_{i=1}^n |g(x_i)| · |x_i′(β̂ − β)| .

But assuming E|g_k(x_i) · x_{ij}| < ∞ for all j and k, one can apply the law of large numbers to conclude that

  |m̂_n − m_n| = O_P(|β̂ − β|/δ) = O_P(n^{−1/2}) ,

which certainly implies (B.17). As an added bonus, the argument shows that one can let δ := δ_n → 0 as long as δ_n goes to zero slowly enough; in particular, as long as δ_n n^{1/4} → ∞.

The argument for the linear specification

  v_θ(x) := Σ_{j=1}^d θ_j g_j(x)

is similar. Here, the estimator θ̂ is obtained by regressing the squared residuals ε̂²_i on g(x_i). As above, first consider θ̃, obtained by regressing the squared errors ε²_i on g(x_i). Then θ̃ is a consistent estimator of

  θ₀ := [E(g(x_i) g(x_i)′)]⁻¹ E( g(x_i) · ε²_i ) .

As before, it is √n-consistent, since θ̃ = L_n⁻¹ m_n, with L_n defined exactly as before but with m_n now defined as the d × 1 vector

  m_n := (1/n) Σ_{i=1}^n g(x_i) · ε²_i .

Again, we must argue that the difference between θ̃ and θ̂ is of order o_P(n^{−1/4}), and it suffices to show (B.17), where now

  m̂_n := (1/n) Σ_{i=1}^n g(x_i) · ε̂²_i .


But

  |m̂_n − m_n| = |(1/n) Σ_{i=1}^n g(x_i) · (ε̂²_i − ε²_i)|
       = |(1/n) Σ_{i=1}^n g(x_i) · ( −2 (β̂ − β)′x_i · ε_i + (β̂ − β)′x_i · x_i′(β̂ − β) )|
       ≤ 2 |(1/n) Σ_{i=1}^n g(x_i) · (β̂ − β)′x_i · ε_i| + |(1/n) Σ_{i=1}^n g(x_i) · (β̂ − β)′x_i · x_i′(β̂ − β)| .

Under moment assumptions, the first term is of order O_P(n^{−1}): β̂ − β is of order O_P(n^{−1/2}), and, once β̂ − β is factored out, what remains is an average of mean-zero i.i.d. random variables with finite variance, which is also of order O_P(n^{−1/2}). The second term does not have mean zero but, under moment assumptions, is of order |β̂ − β|², which is O_P(n^{−1}). Therefore, |m̂_n − m_n| is actually of order O_P(n^{−1}), which is much more than needed.
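Putting the pieces of this appendix together, the full two-step procedure for the exponential specification (OLS residuals, regression of h_δ(ε̂_i) on g(x_i), WLS with the fitted skedastic function, and HC standard errors computed on the weighted data) can be sketched as follows. The data-generating design and the tuning choices (δ, the form of g) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta = 500, 0.01

# Assumed design for illustration: y_i = b1 + b2 x_i + eps_i with v(x) = x^2.
x = rng.uniform(1.0, 4.0, n)
X = np.column_stack([np.ones(n), x])
beta = np.array([1.0, 1.0])
y = X @ beta + np.sqrt(x**2) * rng.standard_normal(n)

# Step 1: OLS fit and residuals.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols

# Step 2: estimate theta by regressing h_delta(resid_i) on g(x_i) = (1, log x_i).
G = np.column_stack([np.ones(n), np.log(x)])
h = np.log(np.maximum(delta**2, resid**2))
theta_hat, *_ = np.linalg.lstsq(G, h, rcond=None)

# Step 3: WLS with the estimated skedastic function v_theta(x) = exp(theta' g(x)).
v_hat = np.exp(G @ theta_hat)
w = 1.0 / np.sqrt(v_hat)
beta_wls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)

# Step 4: HC (sandwich) standard errors on the weighted data; these remain
# asymptotically valid even if v_theta(.) is misspecified.
Xw = X * w[:, None]
rw = (y - X @ beta_wls) * w
A = np.linalg.inv(Xw.T @ Xw)
V = A @ (Xw.T * rw**2) @ Xw @ A
se_wls = np.sqrt(np.diag(V))
print(beta_wls, se_wls)
```

The key design point, mirroring the text, is that Step 4 does not trust the fitted weights to be correct: the sandwich form protects inference against misspecification of v_θ(·).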


C Figures and Tables

[Figure 1: √v(x) plotted against x ∈ [1.0, 4.0], with one curve for each of γ = 0, 1, 2, 4.]

Figure 1: Graphical display of the parametric specification (5.3) for the skedastic function v(·). Note that for ease of interpretation, we actually plot √v(x) as a function of x.

[Figure 2: √v(x) plotted against x ∈ [1.0, 4.0], with one curve for each of γ = 2, 4.]

Figure 2: Graphical display of the parametric specification (5.4) for the skedastic function v(·). Note that for ease of interpretation, we actually plot √v(x) as a function of x.

[Figure 3: √v(x) plotted against x ∈ [1.0, 4.0], with one curve for each of γ = 0.1, 0.15.]

Figure 3: Graphical display of the parametric specification (5.5) for the skedastic function v(·). Note that for ease of interpretation, we actually plot √v(x) as a function of x.

[Figure 4: √v(x) plotted against x ∈ [1.0, 4.0], with one curve for each of γ = 1, 2.]

Figure 4: Graphical display of the parametric specification (5.6) for the skedastic function v(·). Note that for ease of interpretation, we actually plot √v(x) as a function of x.

            OLS      WLS           ALS

v(x) = x^γ

γ = 0
n =  20    0.073    0.082 (1.12)  0.077 (1.04)
n =  50    0.028    0.029 (1.05)  0.028 (1.02)
n = 100    0.014    0.014 (1.02)  0.014 (1.01)

γ = 1
n =  20    0.185    0.189 (1.03)  0.188 (1.02)
n =  50    0.070    0.066 (0.95)  0.069 (0.99)
n = 100    0.034    0.031 (0.92)  0.032 (0.95)

γ = 2
n =  20    0.555    0.461 (0.83)  0.513 (0.93)
n =  50    0.211    0.157 (0.74)  0.171 (0.81)
n = 100    0.103    0.072 (0.70)  0.073 (0.71)

γ = 4
n =  20    6.517    3.307 (0.51)  4.348 (0.67)
n =  50    2.534    0.957 (0.38)  0.994 (0.39)
n = 100    1.242    0.418 (0.34)  0.418 (0.34)

γ = 0, error terms ε_i of form (5.10)
n =  20    0.074    0.082 (1.12)  0.077 (1.04)
n =  50    0.028    0.029 (1.06)  0.028 (1.02)
n = 100    0.014    0.014 (1.03)  0.014 (1.01)

Table 1: Empirical mean squared errors (eMSEs) of estimators of β₂ when the parametric model used to estimate the skedastic function v(·) is correctly specified. (In parentheses are the ratios of the eMSE of a given estimator over the eMSE of OLS.) All numbers are based on 50,000 Monte Carlo repetitions.
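The flavor of Table 1 can be reproduced with a small simulation. The sketch below assumes a simple design (x_i uniform on [1, 4], β = (1, 1)′, normal errors, exponential variance model fitted to the log squared residuals), which need not match the paper's actual Monte Carlo design; it merely illustrates that the eMSE ratio of feasible WLS to OLS tends to fall below one under strong heteroskedasticity.

```python
import numpy as np

rng = np.random.default_rng(2)

def emse(gamma, n, reps):
    """Empirical MSE of the slope estimate for OLS and feasible WLS
    under v(x) = x^gamma; design details here are illustrative."""
    err_ols, err_wls = [], []
    for _ in range(reps):
        x = rng.uniform(1.0, 4.0, n)
        X = np.column_stack([np.ones(n), x])
        y = X @ np.array([1.0, 1.0]) + np.sqrt(x**gamma) * rng.standard_normal(n)
        # OLS fit and residuals.
        b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ b_ols
        # Fit the exponential variance model: regress log(max(delta^2, r^2))
        # on g(x) = (1, log x), with delta = 0.01 (an assumed tuning choice).
        G = np.column_stack([np.ones(n), np.log(x)])
        th, *_ = np.linalg.lstsq(G, np.log(np.maximum(1e-4, r**2)), rcond=None)
        # Reweight by 1/sqrt(v_hat) and refit.
        w = np.exp(-0.5 * G @ th)
        b_wls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
        err_ols.append((b_ols[1] - 1.0) ** 2)
        err_wls.append((b_wls[1] - 1.0) ** 2)
    return np.mean(err_ols), np.mean(err_wls)

m_ols, m_wls = emse(gamma=4, n=100, reps=500)
print(m_wls / m_ols)  # ratio below 1 under this strong heteroskedasticity
```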


            OLS      WLS           ALS

v(x) = log(x)^γ

γ = 2
n =  20    0.066    0.045 (0.69)  0.053 (0.81)
n =  50    0.025    0.014 (0.55)  0.015 (0.60)
n = 100    0.012    0.006 (0.50)  0.006 (0.50)

γ = 4
n =  20    0.101    0.047 (0.46)  0.058 (0.58)
n =  50    0.039    0.013 (0.33)  0.013 (0.33)
n = 100    0.019    0.005 (0.25)  0.005 (0.25)

v(x) = exp(γx + γx²)

γ = 0.1
n =  20    0.250    0.236 (0.94)  0.246 (0.98)
n =  50    0.096    0.083 (0.87)  0.089 (0.93)
n = 100    0.047    0.039 (0.83)  0.041 (0.86)

γ = 0.15
n =  20    0.530    0.413 (0.78)  0.473 (0.89)
n =  50    0.206    0.143 (0.70)  0.155 (0.75)
n = 100    0.101    0.067 (0.67)  0.068 (0.67)

v(x) of form (5.6)

γ = 1
n =  20    0.148    0.151 (1.02)  0.150 (1.02)
n =  50    0.056    0.054 (0.96)  0.056 (1.00)
n = 100    0.027    0.025 (0.93)  0.026 (0.96)

γ = 2
n =  20    0.365    0.303 (0.83)  0.337 (0.93)
n =  50    0.138    0.108 (0.77)  0.112 (0.81)
n = 100    0.067    0.051 (0.75)  0.051 (0.75)

Table 2: Empirical mean squared errors (eMSEs) of estimators of β₂ when the parametric model used to estimate the skedastic function v(·) is misspecified. (In parentheses are the ratios of the eMSE of a given estimator over the eMSE of OLS.) All numbers are based on 50,000 Monte Carlo repetitions.

            OLS-HC   OLS-Max       WLS-HC        WLS-Max       ALS-HC        ALS-Max

v(x) = x^γ

γ = 0
n =  20     95.4     96.5 (1.03)   93.5 (0.99)   95.0 (1.04)   94.5 (0.98)   95.8 (1.02)
n =  50     95.1     95.9 (1.03)   94.3 (0.99)   95.2 (1.02)   94.7 (1.00)   95.5 (1.02)
n = 100     95.1     95.7 (1.02)   94.8 (1.00)   95.4 (1.02)   95.0 (1.00)   95.5 (1.02)

γ = 1
n =  20     95.3     96.5 (1.04)   93.8 (0.94)   95.2 (0.98)   94.4 (0.96)   95.8 (1.01)
n =  50     95.1     95.9 (1.02)   94.5 (0.95)   95.4 (0.97)   94.5 (0.96)   95.3 (0.98)
n = 100     95.0     95.7 (1.02)   94.8 (0.95)   95.5 (0.97)   94.7 (0.97)   95.4 (0.99)

γ = 2
n =  20     94.8     96.0 (1.04)   94.0 (0.86)   95.2 (0.89)   93.9 (0.90)   95.1 (0.94)
n =  50     94.8     95.5 (1.02)   94.5 (0.84)   95.3 (0.86)   94.2 (0.85)   95.1 (0.88)
n = 100     94.8     95.3 (1.02)   94.8 (0.83)   95.4 (0.85)   94.8 (0.83)   95.3 (0.85)

γ = 4
n =  20     93.9     94.8 (1.02)   94.0 (0.66)   94.6 (0.67)   93.1 (0.70)   93.9 (0.71)
n =  50     94.4     94.7 (1.01)   94.3 (0.59)   94.7 (0.60)   94.2 (0.59)   94.6 (0.60)
n = 100     94.6     94.7 (1.00)   94.6 (0.57)   95.0 (0.58)   94.6 (0.57)   95.0 (0.58)

Table 3: Empirical coverage probabilities in percent of nominal 95% confidence intervals for β₂ when the parametric model used to estimate the skedastic function v(·) is correctly specified. (In parentheses are the ratios of the average length of a given confidence interval over the average length of OLS-HC.) All numbers are based on 50,000 Monte Carlo repetitions.


            OLS-HC   OLS-Max       WLS-HC        WLS-Max       ALS-HC        ALS-Max

v(x) = log(x)^γ

γ = 2
n =  20     94.8     96.3 (1.05)   94.6 (0.77)   95.9 (0.81)   94.1 (0.80)   95.6 (0.85)
n =  50     94.6     95.7 (1.04)   94.6 (0.72)   96.2 (0.78)   94.5 (0.72)   96.0 (0.78)
n = 100     94.8     95.5 (1.03)   94.8 (0.70)   96.6 (0.77)   94.8 (0.70)   96.5 (0.77)

γ = 4
n =  20     93.8     94.9 (1.02)   94.9 (0.61)   94.2 (0.62)   93.5 (0.63)   94.2 (0.64)
n =  50     94.3     94.7 (1.01)   94.3 (0.54)   94.6 (0.55)   94.2 (0.54)   94.6 (0.55)
n = 100     94.5     94.7 (1.00)   94.5 (0.52)   94.8 (0.53)   94.5 (0.52)   94.8 (0.53)

v(x) = exp(γx + γx²)

γ = 0.1
n =  20     94.9     95.8 (1.03)   93.3 (0.90)   94.5 (0.93)   93.7 (0.94)   94.8 (0.97)
n =  50     94.8     95.2 (1.01)   94.3 (0.91)   94.7 (0.92)   94.1 (0.93)   94.5 (0.94)
n = 100     94.9     95.0 (1.01)   94.7 (0.91)   94.9 (0.91)   94.6 (0.92)   94.8 (0.92)

γ = 0.15
n =  20     94.5     95.2 (1.02)   93.3 (0.83)   94.2 (0.85)   93.2 (0.88)   94.0 (0.90)
n =  50     94.6     94.9 (1.01)   94.1 (0.82)   94.4 (0.81)   93.9 (0.83)   94.2 (0.84)
n = 100     94.7     95.8 (1.00)   94.6 (0.81)   94.7 (0.81)   94.6 (0.81)   94.7 (0.81)

v(x) of form (5.6)

γ = 1
n =  20     95.2     96.4 (1.04)   93.7 (0.94)   95.1 (0.98)   94.2 (0.96)   95.6 (1.00)
n =  50     95.0     95.8 (1.03)   94.4 (0.95)   95.2 (0.98)   95.2 (0.97)   96.1 (1.00)
n = 100     95.0     95.7 (1.02)   94.7 (0.96)   95.2 (0.97)   94.6 (0.96)   95.2 (0.98)

γ = 2
n =  20     94.7     96.0 (1.04)   93.9 (0.86)   94.9 (0.89)   93.5 (0.89)   94.7 (0.93)
n =  50     94.8     95.5 (1.03)   94.4 (0.86)   94.9 (0.87)   94.1 (0.87)   94.7 (0.88)
n = 100     94.8     95.3 (1.02)   94.8 (0.86)   95.0 (0.87)   94.8 (0.86)   95.0 (0.87)

Table 4: Empirical coverage probabilities in percent of nominal 95% confidence intervals for β₂ when the parametric model used to estimate the skedastic function v(·) is misspecified. (In parentheses are the ratios of the average length of a given confidence interval over the average length of OLS-HC.) All numbers are based on 50,000 Monte Carlo repetitions.