In this section we indicate that the fully imputed estimators are efficient in the four models discussed at the end of Section 2. Throughout we assume that we have no structural information on the covariate distribution G.

1. Nonparametric conditional distribution. In this model, Q is completely unspecified. The usual partially imputed estimators for E[h(X,Y)] are of the form

\[
\hat H_1 = \frac{1}{n}\sum_{i=1}^n \bigl(Z_i h(X_i,Y_i) + (1 - Z_i)\hat\chi(X_i)\bigr),
\]

where $\hat\chi$ is a nonparametric estimator for $\chi$ of the form

\[
\hat\chi(X_i) = \sum_{j=1}^n W_{ij} Z_j h(X_j,Y_j)
\]

with weights $W_{ij}$ depending on $X_1,\dots,X_n, Z_1,\dots,Z_n$ only. This includes kernel-type estimators and linear smoothers. Under appropriate smoothness conditions on $\chi$ and $\pi$, and for properly chosen weights $W_{ij}$, the estimator $\hat H_1$ has the stochastic expansion

\[
\hat H_1 = \frac{1}{n}\sum_{i=1}^n \chi(X_i) + \frac{1}{n}\sum_{i=1}^n \frac{Z_i}{\pi(X_i)}\bigl(h(X_i,Y_i) - \chi(X_i)\bigr) + o_p(n^{-1/2}). \tag{4}
\]

In the case h(X,Y) = Y, such conditions are given by Cheng [Che94] and Wang and Rao [WR02]. These authors use weights $W_{ij}$ corresponding to truncated kernel estimators. Cheng [Che94] also shows that $\hat H_1$ is asymptotically equivalent to the fully imputed estimator $\hat H_2 = \frac{1}{n}\sum_{i=1}^n \hat\chi(X_i)$. It follows from (4) that $\hat H_1$ and $\hat H_2$ have influence function $\psi = \psi_{np}$ and are therefore efficient by Section 2.
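As a numerical illustration, the following is a minimal simulation sketch of $\hat H_1$ and $\hat H_2$ for h(X,Y) = Y. It uses simple Nadaraya–Watson weights in place of the truncated kernel weights of [Che94]; the data-generating model, missingness mechanism, and bandwidth are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4_000
X = rng.uniform(0.0, 1.0, n)
Y = np.sin(3.0 * X) + rng.normal(0.0, 0.3, n)   # so chi(x) = E[Y | X=x] = sin(3x)
pi = 0.4 + 0.4 * X                               # pi(x) = P(Z=1 | X=x): missing at random
Z = rng.uniform(size=n) < pi                     # Z_i = 1 iff Y_i is observed

def chi_hat(x, b=0.08):
    """Nadaraya-Watson estimate of chi(x) from the complete cases (kernel-type W_ij)."""
    w = np.exp(-0.5 * ((x - X[Z]) / b) ** 2)     # Gaussian kernel weights on observed pairs
    return np.sum(w * Y[Z]) / np.sum(w)

chi_vals = np.array([chi_hat(x) for x in X])
H1 = np.mean(np.where(Z, Y, chi_vals))           # partially imputed estimator
H2 = np.mean(chi_vals)                           # fully imputed estimator
print(H1, H2)                                    # both near E[Y] = (1 - cos 3)/3
```

Both estimators should be close to E[Y] here; the asymptotic equivalence shown by Cheng [Che94] is of course a statement about the limit, not about any single sample.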

2. Parametric conditional distribution. In this model, $Q = Q_\vartheta$ with $\vartheta$ an m-dimensional parameter. Then

\[
\chi(x) = \chi_\vartheta(x) = \int h(x,y)\, Q_\vartheta(x,dy).
\]

Here we use an estimator $\hat\vartheta$ of $\vartheta$ and obtain for E[h(X,Y)] the partially and fully imputed estimators

\[
\hat H_3 = \frac{1}{n}\sum_{i=1}^n \bigl(Z_i h(X_i,Y_i) + (1 - Z_i)\chi_{\hat\vartheta}(X_i)\bigr) \quad\text{and}\quad \hat H_4 = \frac{1}{n}\sum_{i=1}^n \chi_{\hat\vartheta}(X_i).
\]

For the following discussion, we assume again Hellinger differentiability of $Q_\vartheta$ as in Section 2 and write $\ell_\vartheta$ for the score function. A natural estimator for $\vartheta$ is the conditional maximum likelihood estimator $\hat\vartheta$, which solves $\sum_{i=1}^n Z_i \ell_{\hat\vartheta}(X_i,Y_i) = 0$. Under some additional regularity conditions, this estimator has the expansion

\[
\hat\vartheta = \vartheta + \Lambda^{-1}\,\frac{1}{n}\sum_{i=1}^n Z_i \ell_\vartheta(X_i,Y_i) + o_p(n^{-1/2})
\]

with $\Lambda = E[\pi(X)\,\ell_\vartheta(X,Y)\ell_\vartheta(X,Y)^\top]$. One can show that $\hat\vartheta$ is efficient for $\vartheta = \kappa(G, Q_\vartheta, \pi)$. Moreover, under regularity conditions, for any $n^{1/2}$-consistent $\hat\vartheta$,

\[
\frac{1}{n}\sum_{i=1}^n Z_i \chi_{\hat\vartheta}(X_i) = \frac{1}{n}\sum_{i=1}^n Z_i \chi_\vartheta(X_i) + D_1^\top(\hat\vartheta - \vartheta) + o_p(n^{-1/2}),
\]
\[
\frac{1}{n}\sum_{i=1}^n (1 - Z_i) \chi_{\hat\vartheta}(X_i) = \frac{1}{n}\sum_{i=1}^n (1 - Z_i) \chi_\vartheta(X_i) + D_0^\top(\hat\vartheta - \vartheta) + o_p(n^{-1/2}),
\]

where

\[
D_1 = E[Z\, h(X,Y)\,\ell_\vartheta(X,Y)] \quad\text{and}\quad D_0 = E[(1 - Z)\, h(X,Y)\,\ell_\vartheta(X,Y)].
\]

Thus, if we use the conditional maximum likelihood estimator for $\vartheta$, we have the expansions

\[
\hat H_3 = \frac{1}{n}\sum_{i=1}^n \Bigl(Z_i h(X_i,Y_i) + (1 - Z_i)\chi_\vartheta(X_i) + D_0^\top \Lambda^{-1} Z_i \ell_\vartheta(X_i,Y_i)\Bigr) + o_p(n^{-1/2}),
\]

\[
\hat H_4 = \frac{1}{n}\sum_{i=1}^n \Bigl(\chi_\vartheta(X_i) + (D_0 + D_1)^\top \Lambda^{-1} Z_i \ell_\vartheta(X_i,Y_i)\Bigr) + o_p(n^{-1/2}).
\]

Since $D_0 + D_1 = E[h(X,Y)\,\ell_\vartheta(X,Y)]$, we see that $\hat H_4$ has influence function $\psi = \psi_p$ and is therefore efficient. The difference between the two estimators is

\[
\hat H_3 - \hat H_4 = \frac{1}{n}\sum_{i=1}^n Z_i\Bigl(h(X_i,Y_i) - \chi_\vartheta(X_i) - D_1^\top \Lambda^{-1} \ell_\vartheta(X_i,Y_i)\Bigr) + o_p(n^{-1/2}).
\]

Hence $\hat H_3$ is asymptotically equivalent to $\hat H_4$, and therefore also efficient, if and only if $Z\bigl(h(X,Y) - \chi_\vartheta(X) - D_1^\top \Lambda^{-1} \ell_\vartheta(X,Y)\bigr)$ is zero almost surely. Since this is usually not the case, the partially imputed estimator $\hat H_3$ is typically inefficient.
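A minimal numerical sketch of $\hat H_3$ and $\hat H_4$ in a toy parametric model: with $Y \mid X = x \sim N(\vartheta x, 1)$ and h(X,Y) = Y one has $\chi_\vartheta(x) = \vartheta x$, and the conditional maximum likelihood estimator is complete-case least squares. All distributional choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
theta = 1.5
X = rng.uniform(0.0, 2.0, n)
Y = theta * X + rng.normal(0.0, 1.0, n)      # Q_theta: Y | X=x ~ N(theta*x, 1)
Z = rng.uniform(size=n) < (0.3 + 0.3 * X)    # missing at random, pi(x) = 0.3 + 0.3x

# conditional MLE: solves sum_i Z_i * ell(X_i, Y_i) = 0 with ell(x, y) = x * (y - theta*x)
theta_hat = np.sum(Z * X * Y) / np.sum(Z * X * X)

chi = theta_hat * X                          # chi_{theta_hat}(x) = theta_hat * x for h(X,Y)=Y
H3 = np.mean(np.where(Z, Y, chi))            # partially imputed
H4 = np.mean(chi)                            # fully imputed
print(theta_hat, H3, H4)                     # both near E[Y] = theta * E[X] = 1.5
```

Both estimators are consistent for E[Y]; the efficiency gap between them shows up only in their asymptotic variances, not in this single-sample check.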

3. Linear regression with independence. In this model, $Q(x,dy) = Q_{\vartheta,f}(x,dy) = f(y - \vartheta x)\,dy$. We assume that f has finite Fisher information J for location and that X has finite and positive variance. Now

\[
\chi(x) = \chi(x,\vartheta,f) = \int h(x, \vartheta x + u) f(u)\,du.
\]

This suggests the estimator

\[
\hat\chi(x,\hat\vartheta) = \frac{1}{N}\sum_{j=1}^n Z_j\, h\bigl(x, \hat\vartheta x + Y_j - \hat\vartheta X_j\bigr), \qquad N = \sum_{j=1}^n Z_j.
\]

Then the partially and fully imputed estimators for E[h(X,Y)] are

\[
\hat H_5 = \frac{1}{n}\sum_{i=1}^n \bigl(Z_i h(X_i,Y_i) + (1 - Z_i)\hat\chi(X_i,\hat\vartheta)\bigr) \quad\text{and}\quad \hat H_6 = \frac{1}{n}\sum_{i=1}^n \hat\chi(X_i,\hat\vartheta).
\]

Write $\varepsilon_j = Y_j - \vartheta X_j$ and set

\[
S = \frac{1}{nN}\sum_{i=1}^n\sum_{j=1}^n Z_j\, h(X_i, \vartheta X_i + \varepsilon_j).
\]

Then E[S] = E[h(X,Y)] = κ. By the Hoeffding decomposition,

\[
S = \frac{1}{n}\sum_{i=1}^n \chi(X_i,\vartheta,f) + \frac{1}{N}\sum_{j=1}^n Z_j\bigl(\tau(\varepsilon_j) - \kappa\bigr) + o_p(n^{-1/2}),
\]

with $\tau(u) = E[h(X, \vartheta X + u)]$.

Under additional assumptions,

\[
\hat H_6 = \frac{1}{nN}\sum_{i=1}^n\sum_{j=1}^n Z_j\, h(X_i, \vartheta X_i + \varepsilon_j) + D(\hat\vartheta - \vartheta) + o_p(n^{-1/2}),
\]

with D the constant obtained by differentiating the leading term with respect to $\vartheta$.

In the linear regression model without missing responses, efficient estimators for $\vartheta$ have been constructed by Bickel [Bic82], Koul and Susarla [KS83], and Schick [Sch87, Sch93]. Their influence function is $\ell/E[\ell^2]$ with $\ell = (X - E[X])\,\ell_f(\varepsilon)$, where $\ell_f = -f'/f$ is the score function for location of f. An analogous construction based on the observations $(X_i,Y_i)$ with $Z_i = 1$ yields an estimator $\hat\vartheta$ for $\vartheta$ with influence function $Z\ell_*/E[Z\ell_*^2]$, where $\ell_* = (X - E(X \mid Z = 1))\,\ell_f(\varepsilon)$. One can show that $\hat\vartheta$ is efficient for $\vartheta = \kappa(G, Q_{\vartheta,f}, \pi)$. If we use an estimator $\hat\vartheta$ with this influence function, then $\hat H_6$ has the stochastic expansion

Thus this estimator has influence function $\psi = \psi_i$ and is therefore efficient by Section 2. Note that in general the partially imputed estimator $\hat H_5$ is different from $\hat H_6$ and therefore inefficient. If h(X,Y) = Y, our estimator becomes

\[
\hat\vartheta \bar X + \frac{1}{N}\sum_{i=1}^n Z_i (Y_i - \hat\vartheta X_i),
\]

with $\bar X = \frac{1}{n}\sum_{i=1}^n X_i$.
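A minimal sketch of the residual-based estimator $\hat\chi(x,\hat\vartheta)$ and the fully imputed $\hat H_6$, checking numerically that for h(X,Y) = Y it reduces to the closed form above. An ordinary complete-case least squares estimator stands in for the efficient $\hat\vartheta$, and all distributional choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000
theta = 0.8
X = rng.uniform(0.0, 2.0, n)
eps = rng.laplace(0.0, 0.5, n)                   # errors independent of X, with density f
Y = theta * X + eps
Z = rng.uniform(size=n) < 0.6
N = Z.sum()

# stand-in for the efficient estimator of theta: complete-case least squares
theta_hat = np.sum(Z * X * Y) / np.sum(Z * X * X)
resid = Y - theta_hat * X                        # estimated errors Y_j - theta_hat * X_j

def chi_hat(x):
    """chi_hat(x, theta_hat) = N^{-1} sum_j Z_j h(x, theta_hat*x + resid_j), here with h(x, y) = y."""
    return np.sum(Z * (theta_hat * x + resid)) / N

H6 = np.mean([chi_hat(x) for x in X])            # fully imputed estimator
closed_form = theta_hat * X.mean() + np.sum(Z * resid) / N
print(H6, closed_form)                           # agree up to rounding; near E[Y] = 0.8
```

The agreement of `H6` with `closed_form` is exact algebra, not an asymptotic statement: averaging $\hat\chi(X_i,\hat\vartheta)$ over i factors into the two sums above.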

4. Linear regression without independence. In this model, Q satisfies the constraint $\int y\, Q(x,dy) = \vartheta x$. We estimate $\vartheta$ by a weighted least squares estimator based on the observations $(X_i,Y_i)$ with $Z_i = 1$,

\[
\hat\vartheta = \frac{\sum_{i=1}^n Z_i \hat\sigma^{-2}(X_i) X_i Y_i}{\sum_{i=1}^n Z_i \hat\sigma^{-2}(X_i) X_i^2},
\]

with $\hat\sigma^2(x)$ an estimator of $\sigma^2(x) = E(\varepsilon^2 \mid X = x)$. Such estimators have been studied, without missing responses, by Carroll [Car82], Müller and Stadtmüller [MS87], Robinson [Rob87], and Schick [Sch87]. In view of their results, we get under appropriate conditions that

\[
\hat\vartheta = \vartheta + \frac{1}{n\,E[Z\sigma^{-2}(X)X^2]}\sum_{i=1}^n Z_i \sigma^{-2}(X_i) X_i \varepsilon_i + o_p(n^{-1/2}).
\]

This estimator can be shown to be efficient for $\vartheta$.
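A minimal sketch of the weighted least squares estimator, with $\hat\sigma^2(x)$ obtained by kernel smoothing of squared complete-case residuals from an unweighted pilot fit. The variance function, bandwidth, and missingness mechanism are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
theta = 2.0
X = rng.uniform(0.5, 2.0, n)
sd_x = 0.2 + 0.5 * X                             # heteroscedastic: sigma^2(x) = (0.2 + 0.5x)^2
Y = theta * X + sd_x * rng.normal(size=n)
Z = rng.uniform(size=n) < 0.7

# pilot (unweighted) complete-case estimator and its squared residuals
theta0 = np.sum(Z * X * Y) / np.sum(Z * X * X)
r2 = (Y - theta0 * X) ** 2

def sigma2_hat(x, b=0.2):
    """Kernel estimate of sigma^2(x) = E(eps^2 | X=x) from the complete cases."""
    w = np.exp(-0.5 * ((x - X[Z]) / b) ** 2)
    return np.sum(w * r2[Z]) / np.sum(w)

s2 = np.array([sigma2_hat(x) for x in X])
theta_hat = np.sum(Z * X * Y / s2) / np.sum(Z * X * X / s2)   # weighted least squares
print(theta_hat)                                  # close to theta = 2.0
```

The weighting by $\hat\sigma^{-2}(X_i)$ downweights high-variance observations; with a constant variance function the estimator reduces to the pilot fit.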

A possible estimator for $\chi$ is the nonparametric estimator $\hat\chi$ introduced above for the nonparametric model. Here, however, we have the constraint $\int y\, Q(x,dy) = \vartheta x$ and use the estimator
