Goodness of Fit of a joint model for event time and nonignorable missing Longitudinal Quality of Life data

Sneh Gulati1 and Mounir Mesbah2

1 Department of Statistics, The Honors College, Florida International University, Miami, FL 33199, USA [email protected]

2 Laboratoire de Statistique Théorique et Appliquée (LSTA), Université Pierre et Marie Curie - Paris VI, Boîte 158, Bureau 8A25, Plateau A, 175 rue du Chevaleret, 75013 Paris, France [email protected]

Abstract: In many survival studies one is interested not only in the duration time to some terminal event, but also in repeated measurements made on a time-dependent covariate. In these studies, subjects often drop out of the study before the occurrence of the terminal event and the problem of interest then becomes modelling the relationship between the time to dropout and the internal covariate. Dupuy and Mesbah (2002) (DM) proposed a model that described this relationship when the value of the covariate at the dropout time is unobserved. This model combined a first-order Markov model for the longitudinally measured covariate with a time-dependent Cox model for the dropout process. Parameters were estimated using the EM algorithm and shown to be consistent and asymptotically normal. In this paper, we propose a test statistic to test the validity of Dupuy and Mesbah's model. Using the techniques developed by Lin (1991), we develop a class of estimators of the regression parameters using weight functions. The test statistic is a function of the standard maximum likelihood estimators and the estimators based on the weight function. Its asymptotic distribution and some related results are presented.

1 Introduction and Preliminaries

In survival studies, for each individual under study, one often makes repeated observations on covariates (possibly time dependent) until the occurrence of some terminal event. The survival time T in such situations is often modeled by the Cox regression model (Cox, 1972), which assumes that its hazard function has the proportional form:

$$\lambda(t \mid Z) = \lambda_0(t)\,\exp\big(\beta_0^T Z(t)\big). \qquad (1.1)$$

In the above, $t$ denotes the time to an event, $\lambda_0(t)$ denotes the baseline hazard function and $Z$ denotes the vector of covariates. If the covariates are time dependent, we distinguish between two main types: external and internal. An external covariate is one whose path is generated independently of the failure mechanism. An internal covariate is generated by the individual under study and therefore can be observed only as long as the individual remains in the study.

The survival times themselves may be censored on the right by a censoring variable C, so that what one observes is $X = \min(T, C)$ and the censoring indicator $\Delta = I\{T \le C\}$, where $I$ is the indicator function; conditional on $Z$, $T$ and $C$ are assumed to be independent.

The first step in making any inferences about the survival time is the estimation of the baseline hazard $\lambda_0$ and the vector $\beta_0$. When all the covariates are observed and are external (if they are time dependent), one estimates the parameter vector $\beta_0$ by maximizing the following partial likelihood function (see Cox, 1972 for details):

$$L(\beta) = \prod_{i=1}^{n}\left[\frac{\exp\big(\beta^T Z_i(X_i)\big)}{\sum_{j=1}^{n} Y_j(X_i)\exp\big(\beta^T Z_j(X_i)\big)}\right]^{\Delta_i}, \qquad (1.2)$$

where $(X_i, \Delta_i, Z_i)$, $1 \le i \le n$, is a random sample of the data and the variable $Y_j(t) = 1$ if $X_j \ge t$ and $0$ otherwise. Given the MLE $\hat\beta$ of $\beta$, the Breslow (1972, 1974) estimator of the cumulative hazard function is the one obtained by linear interpolation between failure times of the following function:

$$\hat\Lambda_0(t) = \sum_{i:\, X_i \le t} \frac{\Delta_i}{\sum_{j=1}^{n} Y_j(X_i)\exp\big(\hat\beta^T Z_j(X_i)\big)}. \qquad (1.3)$$
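To make the estimation recipe concrete, here is a minimal numerical sketch (ours, not part of the original paper): it maximizes the partial likelihood for a single scalar covariate and evaluates the Breslow-type cumulative hazard. The simulated data set and the use of `scipy.optimize.minimize` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_partial_likelihood(beta, x, delta, z):
    """Negative Cox log partial likelihood for one scalar covariate.
    The risk set at an event time x[i] is {j : x[j] >= x[i]}, i.e. Y_j(t) = 1 iff X_j >= t."""
    b = np.asarray(beta).reshape(-1)[0]
    ll = 0.0
    for i in range(len(x)):
        if delta[i] == 1:
            at_risk = x >= x[i]
            ll += b * z[i] - np.log(np.sum(np.exp(b * z[at_risk])))
    return -ll

def breslow_cumhaz(b, x, delta, z, t):
    """Breslow estimator of the baseline cumulative hazard evaluated at t
    (before the linear interpolation between failure times)."""
    h = 0.0
    for i in range(len(x)):
        if delta[i] == 1 and x[i] <= t:
            h += 1.0 / np.sum(np.exp(b * z[x >= x[i]]))
    return h

# Simulated data from a Cox model with true beta = 1 and unit baseline hazard.
rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=n)
t_event = rng.exponential(scale=np.exp(-1.0 * z))  # hazard = exp(1.0 * z)
c = rng.exponential(scale=2.0, size=n)             # independent right censoring
x = np.minimum(t_event, c)
delta = (t_event <= c).astype(int)

beta_hat = minimize(neg_log_partial_likelihood, 0.0, args=(x, delta, z)).x[0]
```

With a sample of this size, `beta_hat` lands close to the true value 1, and the estimated cumulative hazard is non-decreasing in t.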

In the case of internal covariates, however, as noted before, the observed value of the covariate carries information about the survival time of the corresponding individual, and thus such covariates must be handled a little differently. For internal covariates, Kalbfleisch and Prentice (1980) define the hazard of the survival time at time $t$ as:

$$\lambda\big(t \mid \bar Z(t)\big) = \lim_{h \to 0} \frac{1}{h}\, P\big(t \le T < t + h \mid T \ge t, \bar Z(t)\big), \qquad (1.4)$$

where $\bar Z(t)$ denotes the history of the covariate up to time $t$. Thus the hazard rate conditions on the covariate only up to time $t$, but no further. As pointed out by Dupuy and Mesbah (2002), fitting the Cox model with internal covariates can lead to several problems. The inclusion of a covariate whose path is directly affected by the individual can mask treatment effects when comparing two treatments. Similarly, inferences about the survival time $T$ will require integration over the distribution of $\bar Z(t)$, or a model for failure time in which $Z$ is suppressed. Several authors have dealt with the problem of fitting a Cox model involving internal covariates (see for example Tsiatis et al. (1995), Wulfsohn and Tsiatis (1997), Dafni and Tsiatis (1998)).

We focus here on the model developed by Dupuy and Mesbah (2002), who considered experiments where it is assumed that each subject leaves the study at a random time $T > 0$ called the dropout time; the objective was to model the relationship between the time to dropout and the longitudinally measured covariate. The work of Dupuy and Mesbah was motivated by a data set concerning quality of life (QoL) of subjects involved in a cancer clinical trial. The QoL values formed the covariate of interest. However, patients were likely to drop out of the study before disease progression, and for such patients the value of the covariate was unobserved at the time of dropout. Following the approach of Diggle and Kenward (1994), Dupuy and Mesbah (2002) fit a joint model to describe the relationship between the covariate and the time to dropout. Their model combined a first-order Markov model for the longitudinally measured covariate with a time-dependent Cox model for the dropout process, whereas Diggle and Kenward (1994) had specified a logistic regression model for the dropout process. In this paper, we propose a test statistic to validate the model proposed by Dupuy and Mesbah (2002).

In Section 2, we describe the various types of dropout processes which can be observed and the methods of dealing with them. The work of Dupuy and Mesbah (2002, 2004) is described in Section 3. In Section 4, we develop the test statistic and study its properties.

2 The Dropout Process

Little and Rubin (1987) identify three main classifications of the drop-out process in longitudinal studies:

i) Completely Random Drop-out (CRD): A drop-out process is said to be completely random when the drop-out is independent of both the observed and the unobserved measurements.

ii) Random Drop-Out (RD): Here the drop-out process is dependent on the observed measurements, but is independent of the unobserved ones.

iii) Informative (or Nonignorable) Drop-out (ID): A drop-out process is nonignorable when it depends on the unobserved measurements, that is, those that would have been observed if the unit had not dropped out.

Under the completely random drop-out process, drop-outs are equivalent to randomly missing values, and so the data can be analyzed without requiring any special methods. In the random drop-out case, provided there are no parameters in common between the measurement and drop-out processes, nor any functional relationship between them, the longitudinal process can be completely ignored for the purpose of making likelihood-based inference about the time-to-drop-out model. However, special methods are needed for the nonignorable case.
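As an illustration of the three mechanisms (ours, not from the paper), they can be mimicked in a small simulation; the autoregressive covariate path, the logistic drop-out probabilities and the threshold 1.5 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, J = 500, 5

# A first-order Markov (AR(1)) covariate path, as in the QoL setting.
z = np.zeros((n, J))
z[:, 0] = rng.normal(size=n)
for j in range(1, J):
    z[:, j] = 0.8 * z[:, j - 1] + rng.normal(scale=0.6, size=n)

def dropout_time(zi, mechanism, rng):
    """First occasion at which the subject drops out (J means 'completed the study')."""
    for j in range(1, J):
        if mechanism == "CRD":    # independent of all measurements
            p = 0.15
        elif mechanism == "RD":   # depends only on the last OBSERVED value
            p = 1.0 / (1.0 + np.exp(-(zi[j - 1] - 1.5)))
        else:                     # ID: depends on the current UNOBSERVED value
            p = 1.0 / (1.0 + np.exp(-(zi[j] - 1.5)))
        if rng.uniform() < p:
            return j
    return J

times = {m: [dropout_time(z[i], m, rng) for i in range(n)]
         for m in ("CRD", "RD", "ID")}
```

Only under ID does the drop-out probability involve the value that is missing for the dropped-out subject, which is what makes the mechanism nonignorable.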

A number of authors have considered analysis under the ID model. Wu and Carroll (1988) considered informative drop-out in a random-effects model where the experimental units follow a linear time trend whose slope and intercept vary according to a bivariate Gaussian distribution. Wang et al. (1995) report on a simulation study comparing different methods of estimation under different assumptions about the drop-out process. Diggle and Kenward (1994) combined a multivariate linear model for the measurement process with a logistic regression model for the drop-out. Molenberghs et al. (1997) adopted a similar approach for repeated categorical data to handle informative drop-out, amongst others (see for example Little, 1995; Hogan and Laird, 1997; Troxel et al., 1998; Verbeke and Molenberghs, 2000).

In the setting of Dupuy and Mesbah (2002), the value of the covariate at drop-out is unobserved, and since drop-out is assumed to depend on the quality of life of the individual, the drop-out may be treated as nonignorable. Their model is a generalization of the model of Diggle and Kenward (1994), since, unlike Diggle and Kenward, Dupuy and Mesbah (2002) allow censoring in their model. Next, we describe their model.

3 The Model of Dupuy and Mesbah (2002)

Assume that $n$ subjects are to be observed at a set of fixed times $t_j$, $j = 0, 1, 2, \ldots$, such that $t_0 = 0 < \ldots < t_{j-1} < t_j < \ldots$ and $0 < e_0 \le \Delta t = t_j - t_{j-1} \le e_1 < \infty$ (times of measurement are fixed by study design). Let $Z$ denote the internal covariate and $Z_i(t)$ denote the value of $Z$ at time $t$ for the $i$th individual under study ($i = 1, 2, \ldots, n$). Repeated measurements of $Z$ are taken on each subject at the common fixed times $t_j$, $j = 1, 2, \ldots$ ($t_0 = 0$). Let $Z_{i,j}$ denote the response for the $i$th subject on $(t_j, t_{j+1}]$. The time-to-dropout model proposed by Dupuy and Mesbah (2002) assumes that the hazard of dropout is related to the covariate by the time-dependent Cox model:

$$\lambda\big(t \mid z(t)\big) = \lambda(t)\,\exp\big(r(z(t), \beta)\big), \qquad (1.5)$$

where $\lambda(\cdot)$ is the unspecified baseline hazard function and $r(z(t), \beta)$ is a functional of the covariate history up to time $t$. The functionals $r(z(t), \beta)$ considered by Dupuy and Mesbah (2004) are of the form $\beta_1 z(t - \Delta t) + \beta_2 z(t)$ and $\beta_3\big(z(t) - z(t - \Delta t)\big)$. The reason for using these functionals is the intuitive meaning behind them: in the first functional, we assume that the probability that $T$ belongs to $(t_{j-1}, t_j]$ depends on the current unobserved covariate at the time of the dropout and on the last observed covariate value before $t$, $Z(t - \Delta t)$. In this setting, the covariate $Z(t)$ is referred to as nonignorable missing data. The second form for $r$ is used when the interest is in studying whether an increase or decrease in the value of $Z$ between the times $t - \Delta t$ and $t$ influences dropout. As a result, equation (1.5) is reformulated as

$$\lambda\big(t \mid w(t)\big) = \lambda(t)\,\exp\big(\beta^T w(t)\big), \qquad (1.6)$$

where $w(t) = \big(z(t - \Delta t), z(t)\big)^T$ and $\beta = (\beta_1, \beta_2)^T$ or $\beta = (-\beta_3, \beta_3)^T$.
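As a quick check (ours, not the authors'), the increment form of $r$ is exactly the linear form $\beta^T w(t)$ with $\beta = (-\beta_3, \beta_3)^T$:

```python
import numpy as np

def r_level(z_prev, z_curr, b1, b2):
    """r(z(t), beta) = b1*z(t - dt) + b2*z(t): dropout depends on the last
    observed value and the current (possibly missing) value."""
    return b1 * z_prev + b2 * z_curr

def r_increment(z_prev, z_curr, b3):
    """r(z(t), beta) = b3*(z(t) - z(t - dt)): dropout depends on the change in Z."""
    return b3 * (z_curr - z_prev)

# The increment form equals beta^T w(t) with beta = (-b3, b3)^T:
w = np.array([0.7, 1.9])   # w(t) = (z(t - dt), z(t))^T, illustrative values
b3 = 0.5
assert np.isclose(r_increment(w[0], w[1], b3),
                  np.dot(np.array([-b3, b3]), w))
```

This is why a single reformulation (1.6) covers both functionals: only the parameter vector changes.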

Once again, the dropout times are assumed to be censored on the right by the random variable C. In addition, the following conditions are assumed:

i) The covariate vector $Z$ is assumed to have a uniformly bounded continuous density $f(z, \alpha)$, $z = (z_0, z_1, z_2, \ldots)$, depending on an unknown parameter $\alpha$.

ii) The censoring time $C$ is assumed to have a continuous distribution function $G_C(u)$ on $\mathbb{R}^+ = (0, \infty)$.

iii) The censoring distribution is assumed to be independent of the unobserved covariate, and of the parameters $\alpha$, $\beta$ and $\Lambda$.

Now, let $\tau$ denote a finite time point at which any individual who has not dropped out is censored, assume that $P(X \ge \tau) > 0$, and let $a_t - 1 = j$ if $t \in (t_j, t_{j+1}]$. With this notation, $w(t)$ can be rewritten as $(z_{a_t - 1}, z(t))^T$. We observe $n$ independent replicates of $X = \min(T, C)$, $\Delta$ and $Z$, represented by the vector $y = (x, \delta, z_0, \ldots, z_{a_x - 1})$. The problem of interest is to estimate via maximum likelihood the true regression parameters, denoted by $\alpha_0$, $\beta_0$, and the baseline cumulative hazard function $\Lambda_0(t) = \int_0^t \lambda_0(u)\,du$. The probability measure induced by the observed $y$ will be denoted by $P_\theta(dy) = f_Y(y; \theta)\,dy$, where $\theta = (\alpha, \beta, \Lambda)$.

The first step in the problem of maximum likelihood estimation is the development of the likelihood function. The likelihood $f_Y(y; \theta)$ for the vector of observations $y = (x, \delta, z_0, \ldots, z_{a_x - 1})$ was obtained by Dupuy and Mesbah (2002) by first writing the density of $(y, z)$ for some value $z$ of the covariate on $(t_{a_x - 1}, t_{a_x}]$ and then integrating over $z$. This gives the likelihood function as:

$$f_Y(y; \theta) = \int \left[\lambda(x)\, e^{\beta^T w(x)}\right]^{\delta} \exp\left(-\int_0^x \lambda(u)\, e^{\beta^T w(u)}\,du\right) f(z_0, \ldots, z_{a_x - 1}, z; \alpha)\,dz, \qquad (1.7)$$

where $w(t) = (z_{a_x - 1}, z)^T$ if $t \in (t_{a_x - 1}, t_{a_x}]$. To estimate the parameters, especially the hazard function, Dupuy and Mesbah (2002) use the well-known method of sieves. The method consists of replacing the original parameter space $\Theta$ of the parameters $(\alpha, \beta, \Lambda)$ by an approximating space $\Theta_n$, called the sieve. More precisely, instead of considering the hazard function $\Lambda = \Lambda(t)$, one considers increasing stepwise versions $\Lambda_{n,i} = \Lambda_n(T_{(i)})$ at the points $T_{(i)}$, $i = 1, 2, \ldots, p(n)$, where $T_{(1)} < T_{(2)} < \ldots < T_{(p(n))}$ are the order statistics corresponding to the distinct dropout times $T_1, T_2, \ldots, T_{p(n)}$. Hence the approximating sieve is $\Theta_n = \{\theta = (\alpha, \beta, \Lambda_n) : \alpha \in \mathbb{R}^p,\ \beta \in \mathbb{R}^2,\ \Lambda_{n,1} \le \Lambda_{n,2} \le \ldots \le \Lambda_{n,p(n)}\}$. The estimates of the parameters $\alpha$ and
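To fix ideas, the stepwise sieve hazard can be sketched as follows; the dropout times and jump sizes here are illustrative values, not estimates from any data.

```python
import numpy as np

def stepwise_cumhaz(jumps, times, t):
    """Sieve approximation: Lambda_n(t) is the sum of the jumps at the ordered
    distinct dropout times T_(k) <= t, i.e. a non-decreasing step function."""
    order = np.argsort(times)
    return float(np.sum(np.asarray(jumps)[order][np.sort(times) <= t]))

T = np.array([0.5, 1.2, 2.0])     # distinct ordered dropout times T_(k)
dL = np.array([0.1, 0.25, 0.4])   # jump sizes Delta Lambda_{n,k} (the sieve parameters)
vals = [stepwise_cumhaz(dL, T, t) for t in [0.4, 0.6, 1.5, 2.5]]
```

The continuous baseline hazard is thus replaced by finitely many ordered jump parameters, which is what makes maximization over $\Theta_n$ tractable.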

$\beta$ and the values $\Lambda_{n,i}$ are obtained by maximizing the likelihood in (1.7) over the space $\Theta_n$; in other words, one maximizes the pseudo-likelihood

$$L_n(\theta) = \prod_{i=1}^{n}\left\{\left[\Delta\Lambda_n(x_i)\, e^{\beta^T w_i(x_i)}\right]^{\delta_i} \exp\left(-\sum_{k :\, T_{(k)} \le x_i} \Delta\Lambda_{n,k}\, e^{\beta^T w_i(T_{(k)})}\right) \int f(z_{i,0}, \ldots, z_{i,a_{x_i}-1}, z; \alpha)\,dz\right\}. \qquad (1.8)$$

The above is obtained by multiplying, over the subjects $i$ ($i = 1, \ldots, n$), the following individual contributions:

$$\left[\Delta\Lambda_n(x_i)\, e^{\beta^T w_i(x_i)}\right]^{\delta_i} \exp\left(-\sum_{k :\, T_{(k)} \le x_i} \Delta\Lambda_{n,k}\, e^{\beta^T w_i(T_{(k)})}\right) \int f(z_{i,0}, \ldots, z_{i,a_{x_i}-1}, z; \alpha)\,dz. \qquad (1.9)$$

The semiparametric maximum likelihood estimator $\hat\theta_n = (\hat\alpha_n, \hat\beta_n, \hat\Lambda_n)$ of the parameter vector is obtained by using the EM algorithm. This is an iterative method, which alternates between an E-step, where the expected log-likelihood of the complete data conditional on the observed data and the current estimate of the parameters is computed, and an M-step, where the parameter estimates are updated by maximizing this expected log-likelihood. Dupuy, Grama and Mesbah (2003) and Dupuy and Mesbah (2004) have shown that the estimator $\hat\theta_n$ is identifiable and that, suitably normalized, it converges to a Gaussian limit with a covariance that can be consistently estimated.
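The E-step/M-step alternation can be illustrated on a far simpler missing-data problem than the joint model: maximum likelihood for an exponential mean under right censoring, where the E-step imputes the expected event time. This toy example is ours, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
t = rng.exponential(scale=2.0, size=n)   # latent event times, true mean 2.0
c = rng.exponential(scale=3.0, size=n)   # censoring times
x = np.minimum(t, c)                     # observed time
delta = (t <= c).astype(int)             # event indicator

mu = float(np.mean(x))                   # starting value
for _ in range(200):
    # E-step: expected complete-data event time given the observed data and the
    # current mu (memorylessness: E[T | T > c] = c + mu for an exponential).
    imputed = np.where(delta == 1, x, x + mu)
    # M-step: the complete-data MLE of an exponential mean is the sample mean.
    mu_new = float(np.mean(imputed))
    if abs(mu_new - mu) < 1e-10:
        mu = mu_new
        break
    mu = mu_new
```

The iteration converges to the censored-data MLE $\sum_i x_i / \sum_i \delta_i$; in the joint model, the closed-form M-step is replaced by maximization of the expected logarithm of (1.8).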

The purpose of this paper is to validate the model in (1.7). In order to do so, we adapt a method developed by Lin (1991) for the general Cox model. The method involves a class of weighted parameter estimates and works on the following idea: parameter estimates for the standard Cox model are obtained by maximizing the partial likelihood, whose score function assigns an equal weight to all the failures. The weighted parameter estimates are calculated by maximizing a weighted version, where different observations get different weights depending on their times of occurrence. Since both the weighted and the unweighted estimators are consistent under the assumed model, the rationale is that if there is no misspecification, they should be close to each other; under model misspecification, the two estimators will tend to differ. We propose the use of this method to test the validity of the model of Dupuy and Mesbah (2002).
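Lin's idea can be sketched numerically (our illustration, for the ordinary Cox model rather than the joint model): fit the same data once with unit weights and once with a time-dependent weight, and compare the two estimates. The weight function $W(t) = e^{-t}$ and the simulated design are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def neg_weighted_log_pl(beta, x, delta, z, weight):
    """Weighted Cox log partial likelihood in the spirit of Lin (1991): the
    contribution of each event at time x[i] is multiplied by W(x[i]);
    W identically 1 recovers the usual maximum partial likelihood estimator."""
    b = np.asarray(beta).reshape(-1)[0]
    ll = 0.0
    for i in range(len(x)):
        if delta[i] == 1:
            at_risk = x >= x[i]
            ll += weight(x[i]) * (b * z[i] - np.log(np.sum(np.exp(b * z[at_risk]))))
    return -ll

# Data generated so that the Cox model holds with true beta = 0.8.
rng = np.random.default_rng(3)
n = 300
z = rng.normal(size=n)
t = rng.exponential(scale=np.exp(-0.8 * z))
c = rng.exponential(scale=2.0, size=n)
x, delta = np.minimum(t, c), (t <= c).astype(int)

beta_unw = minimize(neg_weighted_log_pl, 0.0, args=(x, delta, z, lambda u: 1.0)).x[0]
beta_w = minimize(neg_weighted_log_pl, 0.0, args=(x, delta, z, lambda u: np.exp(-u))).x[0]
diff = beta_w - beta_unw   # small when the model is correctly specified
```

The goodness-of-fit statistic of Section 4 is built from a standardized version of such a difference; `diff` here is only the raw contrast.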

The proposed test statistic is studied and developed in the next section.

4 The Test of Goodness of Fit

As mentioned earlier, the results in this section are based on the methods of Lin (1991). To verify the model in (1.7), first define a class of weighted pseudo-likelihood functions (for a random weight function $W_G(\cdot)$) as follows:

$$L_n^{W}(\theta) = \prod_{i=1}^{n} \prod_{k=1}^{p(n)} \left[\Delta\Lambda_{n,k}\, e^{\beta^T w_i(x_i)}\right]^{\delta_i W_G(x_i)\, 1\{T_{(k)} = x_i\}} \exp\left(-\sum_{k :\, T_{(k)} \le x_i} W_G(T_{(k)})\, \Delta\Lambda_{n,k}\, e^{\beta^T w_i(T_{(k)})}\right)$$
