Vlad Barbu1 and Nikolaos Limnios2

1 Université de Technologie de Compiègne, Laboratoire de Mathématiques Appliquées de Compiègne, BP 20529, 60205 Compiègne, France

2 Université de Technologie de Compiègne, Laboratoire de Mathématiques Appliquées de Compiègne

Summary. We consider a semi-Markov chain with a finite state space. Taking a censored history, we obtain empirical estimators for the discrete semi-Markov kernel, the renewal function and the semi-Markov transition function. We propose estimators for two different failure rate functions: the usual failure rate, the BMP-failure rate, defined by [BMP63], and the newly introduced failure rate, the RG-failure rate, proposed by [RG92]. We study the strong consistency and the asymptotic normality of each estimator and construct confidence intervals. We illustrate our results by a numerical example.

Key words: semi-Markov chain, discrete time semi-Markov kernel, reliability, BMP-failure rate, RG-failure rate, nonparametric estimation, asymptotic properties.

1 Introduction

Continuous time semi-Markov systems are an important framework for reliability studies, cf. [LO99, LO01]. Estimation of reliability and related measures for semi-Markov systems can be found in [LO03], [OL99].

As compared to the attention given to the continuous time semi-Markov processes and related reliability matters, the discrete time semi-Markov processes (DTSMP) are less studied. This is rather surprising, because considering a discrete time semi-Markov system instead of a continuous one offers important advantages, especially for applications. Indeed, a semi-Markov chain makes only a finite number of transitions in a finite time interval and the Markov renewal function can be expressed as a finite sum, which is not the case for a continuous semi-Markov process. Consequently, the related numerical computations for a discrete time semi-Markov process are much faster and more accurate than for a continuous one.

An introduction to discrete time Markov renewal processes can be found in [How71], [MS00], [BBL04]. For the discrete time semi-Markov model for reliability see [Cse02], [BBL04]. Estimators of the kernel, transition matrix, renewal function, reliability, availability of a discrete time semi-Markov system and their asymptotic properties are given in [BL04a] and [BL04b].

The aim of this paper is to construct estimators for failure rate functions of a discrete time semi-Markov process and to give their properties. We consider two different failure rate functions: the usual failure rate, the BMP-failure rate, defined by [BMP63, BP75], and the newly introduced failure rate, the RG-failure rate, proposed by [RG92]. The reasons for introducing this new definition of a failure rate for discrete time models are given in [Bra01, BGX01].

The estimator of the classical failure rate for a continuous time semi-Markov system has been studied in [OL98]. Statistical estimation and asymptotic properties for the RG-failure rate and the BMP-failure rate of a discrete time homogeneous Markov system are presented in [SL02].

The present paper is organized as follows. In Section 2 we give some definitions and recall some previously obtained results concerning discrete time semi-Markov processes and the associated reliability metrics (see [BBL04], [BL04a, BL04b]). In Section 3 we construct empirical estimators for failure rates and we study the strong consistency and the asymptotic normality for the proposed estimators. Asymptotic confidence intervals are also given. All the above results are proved in Section 4. In Section 5 we illustrate our results by a numerical example of a three state discrete time semi-Markov system.

2 Preliminaries

2.1 The Discrete Time semi-Markov Model

Let us consider:

• E, the state space. We suppose E to be finite, say E = {1,...,s}.

• The stochastic process $J = (J_n;\ n \in \mathbb{N})$ with state space $E$ for the system state at the $n$-th jump.

• The stochastic process $S = (S_n;\ n \in \mathbb{N})$ with state space $\mathbb{N}$ for the time of the $n$-th jump of the process. We suppose $S_0 = 0$ and $0 < S_1 < S_2 < \cdots < S_n < S_{n+1} < \cdots$.

• The stochastic process $X = (X_n;\ n \in \mathbb{N}^*)$ with state space $\mathbb{N}^*$ for the sojourn time $X_n$ in state $J_{n-1}$ before the $n$-th jump. Thus, we have for all $n \in \mathbb{N}^*$

$X_n = S_n - S_{n-1}.$

We denote by $\mathcal{M}_E$ the set of nonnegative matrices on $E \times E$ and by $\mathcal{M}_E(\mathbb{N})$ the set of matrix-valued functions defined on $\mathbb{N}$, with values in $\mathcal{M}_E$.

Definition 1. The stochastic process $(J, S) = ((J_n, S_n);\ n \in \mathbb{N})$ is said to be a discrete time Markov renewal process (DTMRP) if for all $n \in \mathbb{N}$, for all $i, j \in E$ and for all $k \in \mathbb{N}$ it almost surely satisfies

$P(J_{n+1} = j,\ S_{n+1} - S_n = k \mid J_0, \ldots, J_n;\ S_0, \ldots, S_n) = P(J_{n+1} = j,\ S_{n+1} - S_n = k \mid J_n). \quad (1)$

Moreover, if (1) is independent of $n$, $(J, S)$ is said to be homogeneous, with discrete semi-Markov kernel $q(k) = (q_{ij}(k);\ i,j \in E) \in \mathcal{M}_E$ defined by

$q_{ij}(k) := P(J_{n+1} = j,\ S_{n+1} - S_n = k \mid J_n = i).$

Definition 2. The transition function of the embedded Markov chain $(J_n;\ n \in \mathbb{N})$ is the matrix $\mathbf{p} \in \mathcal{M}_E$ defined by

$\mathbf{p} = (p_{ij})_{i,j \in E}, \qquad p_{ij} := P(J_{n+1} = j \mid J_n = i), \quad i,j \in E,\ n \in \mathbb{N}. \quad (2)$

Definition 3. For all $i,j \in E$ such that $p_{ij} \neq 0$, let us denote by:

1. $f_{ij}(\cdot)$, the conditional distribution of the sojourn time in state $i$ before going to state $j$:

$f_{ij}(k) = P(X_{n+1} = k \mid J_n = i,\ J_{n+1} = j), \quad k \in \mathbb{N}, \quad (3)$

2. $h_i(\cdot)$, the sojourn time distribution in state $i$:

$h_i(k) = P(X_{n+1} = k \mid J_n = i) = \sum_{l \in E} q_{il}(k), \quad k \in \mathbb{N}^*,$

3. $H_i(\cdot)$, the sojourn time cumulative distribution function in state $i$:

$H_i(k) := P(X_{n+1} \le k \mid J_n = i) = \sum_{l=1}^{k} h_i(l), \quad k \in \mathbb{N}^*.$

Obviously, for all $i,j \in E$ such that $p_{ij} \neq 0$ and for all $k \in \mathbb{N}$, we have

$q_{ij}(k) = p_{ij}\, f_{ij}(k). \quad (4)$

Let us give some definitions and results from [BBL04], which will be useful for the estimation presented in this paper.

Definition 4. (discrete time convolution product) Let $A, B \in \mathcal{M}_E(\mathbb{N})$ be two matrix-valued functions. The matrix convolution product $A * B$ is the matrix-valued function $C \in \mathcal{M}_E(\mathbb{N})$ defined by

$C_{ij}(k) := \sum_{l \in E} \sum_{s=0}^{k} A_{il}(s)\, B_{lj}(k - s), \quad i,j \in E,\ k \in \mathbb{N}.$
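In matrix form, the convolution product above reads $C(k) = \sum_{s=0}^{k} A(s) B(k-s)$. A minimal numerical sketch, assuming each matrix-valued function is stored as a numpy array of shape $(K+1, s, s)$ so that `A[k]` holds the matrix $A(k)$ (the function name `mat_conv` is ours):

```python
import numpy as np

def mat_conv(A, B):
    """Discrete-time matrix convolution product C = A * B:
    C(k) = sum_{s=0}^{k} A(s) @ B(k - s)   (Definition 4, in matrix form)."""
    C = np.zeros_like(A)
    for k in range(A.shape[0]):
        for u in range(k + 1):
            C[k] += A[u] @ B[k - u]
    return C
```

As a sanity check, convolving with the neutral element $\delta I$ (identity matrix at $k = 0$, zero elsewhere) returns the argument unchanged.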

Lemma 1. Let $\delta I = (d_{ij}(k);\ i,j \in E) \in \mathcal{M}_E(\mathbb{N})$ be the matrix-valued function defined by

$d_{ij}(k) := \begin{cases} 1 & \text{if } i = j \text{ and } k = 0, \\ 0 & \text{elsewhere.} \end{cases}$

Then, $\delta I$ is the neutral element for the discrete time matrix convolution product, i.e., $\delta I$ satisfies

$\delta I * A = A * \delta I = A, \quad A \in \mathcal{M}_E(\mathbb{N}).$

Definition 5. Let $A \in \mathcal{M}_E(\mathbb{N})$ be a matrix-valued function. If there exists some $B \in \mathcal{M}_E(\mathbb{N})$ such that

$B * A = \delta I, \quad (5)$

then $B$ is called the left inverse of $A$ in the convolution sense and it is denoted by $A^{(-1)}$.

We stress the fact that the left inverse of A is not always defined. The next proposition gives a sufficient condition for the existence and uniqueness of the left inverse.

Proposition 1. Let $A \in \mathcal{M}_E(\mathbb{N})$ be a matrix-valued function. If $\det A(0) \neq 0$, then the left inverse of $A$ exists and is unique.

Definition 6. (discrete time n-fold convolution) Let $A \in \mathcal{M}_E(\mathbb{N})$ be a matrix-valued function and $n \in \mathbb{N}$. The $n$-fold convolution $A^{(n)}$ is the matrix-valued function $C \in \mathcal{M}_E(\mathbb{N})$ defined recursively by

$A^{(0)}_{ij}(k) := d_{ij}(k), \qquad A^{(1)}_{ij}(k) := A_{ij}(k),$

$A^{(n)}_{ij}(k) := \sum_{l \in E} \sum_{s=0}^{k} A^{(n-1)}_{il}(k - s)\, A_{lj}(s), \quad n \ge 2,\ k \ge 1.$
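The recursion can be sketched directly on arrays (a minimal illustration, assuming a matrix-valued function is stored as a numpy array of shape $(K+1, s, s)$; the helper name `n_fold` is ours):

```python
import numpy as np

def n_fold(A, n):
    """n-fold convolution A^(n) of Definition 6:
    A^(0) = delta I, and A^(m) = A^(m-1) * A for m >= 1."""
    s = A.shape[1]
    C = np.zeros_like(A)
    C[0] = np.eye(s)                  # A^(0) = delta I
    for _ in range(n):                # C <- C * A (discrete convolution)
        nxt = np.zeros_like(A)
        for k in range(A.shape[0]):
            for u in range(k + 1):
                nxt[k] += C[u] @ A[k - u]
        C = nxt
    return C
```

For a semi-Markov kernel $q$ (with $q(0) = 0$, since sojourn times are in $\mathbb{N}^*$), $q^{(n)}(k)$ vanishes for $k < n$, which is why the renewal-type sums below are finite.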

For a DTMRP (J,S), the n-fold convolution of the semi-Markov kernel can be expressed as follows.

Proposition 2. For all $i,j \in E$ and for all $n, k \in \mathbb{N}$, we have

$q^{(n)}_{ij}(k) = P(J_n = j,\ S_n = k \mid J_0 = i). \quad (6)$

Let us consider the matrix-valued function $Q = (Q(k);\ k \in \mathbb{N}) \in \mathcal{M}_E(\mathbb{N})$, defined by

$Q_{ij}(k) := P(J_{n+1} = j,\ X_{n+1} \le k \mid J_n = i) = \sum_{l=0}^{k} q_{ij}(l), \quad i,j \in E,\ k \in \mathbb{N}, \quad (7)$

and $\psi = (\psi(k);\ k \in \mathbb{N}) \in \mathcal{M}_E(\mathbb{N})$, defined by

$\psi(k) := \sum_{n=0}^{k} q^{(n)}(k).$

Proposition 3. The matrix-valued function $\psi = (\psi(k);\ k \in \mathbb{N})$ is given by

$\psi(k) = (\delta I - q)^{(-1)}(k),$

where $(\delta I - q)^{(-1)}$ denotes the left convolution inverse of the matrix-valued function $(\delta I - q)$ and is computed using the following forward algorithm:

$(\delta I - q)^{(-1)}(k) = \begin{cases} I & \text{if } k = 0, \\ -\displaystyle\sum_{s=0}^{k-1} (\delta I - q)^{(-1)}(s)\, (\delta I - q)(k - s) & \text{if } k \in \mathbb{N}^*. \end{cases}$
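The forward algorithm is a sequential triangular solve: each $B(k)$ follows from the previously computed $B(0), \ldots, B(k-1)$. A minimal sketch for a general $A$ with $\det A(0) \neq 0$ (applied to $A = \delta I - q$, which has $A(0) = I$ since $q(0) = 0$, it yields $\psi$; the function name `left_inverse` is ours):

```python
import numpy as np

def left_inverse(A):
    """Left convolution inverse B = A^(-1), i.e. B * A = delta I,
    via the forward algorithm; requires det A(0) != 0."""
    B = np.zeros_like(A)
    A0_inv = np.linalg.inv(A[0])
    B[0] = A0_inv
    for k in range(1, A.shape[0]):
        # (B * A)(k) = 0  =>  B(k) A(0) = -sum_{s < k} B(s) A(k - s)
        acc = sum(B[u] @ A[k - u] for u in range(k))
        B[k] = -acc @ A0_inv
    return B
```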

Definition 7. The matrix renewal function $\Psi = (\Psi(k);\ k \in \mathbb{N}) \in \mathcal{M}_E(\mathbb{N})$ of the DTMRP is defined by

$\Psi_{ij}(k) := E[N_j(k) \mid J_0 = i], \quad i,j \in E,\ k \in \mathbb{N},$

where $N_j(k)$ is the number of visits to state $j$ before time $k$.

The matrix renewal function can be expressed in the following form:

$\Psi(k) = \sum_{l=0}^{k} \psi(l), \quad k \in \mathbb{N}.$

Definition 8. A stochastic process $Z = (Z_k;\ k \in \mathbb{N})$ is called the discrete time semi-Markov process associated with the DTMRP $(J, S)$ if

$Z_k = J_{N(k)}, \quad k \in \mathbb{N},$

where $N(k) := \max\{n \ge 0;\ S_n \le k\}$ is the discrete time counting process of the number of jumps in $[1, k] \subset \mathbb{N}$.

Thus, $Z_k$ gives the state of the process at time $k$. We also have $J_n = Z_{S_n}$, $n \in \mathbb{N}$.

Let us now define the discrete time semi-Markov transition matrix and propose a computation procedure.

Definition 9. The transition matrix of the semi-Markov process $Z$ is the matrix-valued function $P \in \mathcal{M}_E(\mathbb{N})$ defined by

$P_{ij}(k) := P(Z_k = j \mid Z_0 = i), \quad i,j \in E,\ k \in \mathbb{N}.$

The Markov renewal equation for the semi-Markov transition function $P$ is (see [BBL04])

$P(k) = \big(I - \operatorname{diag}(Q \cdot \mathbf{1})\big)(k) + (q * P)(k). \quad (13)$

Solving the Markov renewal equation (13) (see [BBL04]) we obtain that the unique solution is

$P(k) = \big[(\delta I - q)^{(-1)} * (I - H)\big](k) = \big[\psi * \big(I - \operatorname{diag}(Q \cdot \mathbf{1})\big)\big](k), \quad (14)$

where $H(k) := \operatorname{diag}(H_i(k);\ i \in E)$ and $\mathbf{1}$ denotes the $s$-column vector all of whose elements are 1.
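Formula (14) is straightforward to evaluate numerically. A sketch on a toy two-state kernel (the geometric sojourn laws and all numerical values below are our own illustration, not the paper's example):

```python
import numpy as np

# Toy 2-state kernel q_ij(k) = p_ij * f_ij(k) on k = 0..K (illustrative values).
K, s = 30, 2
p = np.array([[0.0, 1.0],
              [1.0, 0.0]])                 # embedded chain alternates states
f = np.zeros((K + 1, s, s))
ks = np.arange(1, K + 1)
f[1:, 0, 1] = 0.5 * 0.5 ** (ks - 1)        # Geometric(0.5) sojourn in state 0
f[1:, 1, 0] = 0.3 * 0.7 ** (ks - 1)        # Geometric(0.3) sojourn in state 1
q = p * f                                   # q(k) = (p_ij f_ij(k)), cf. (4)

# psi = (delta I - q)^(-1) via the forward algorithm of Proposition 3
A = -q.copy()
A[0] += np.eye(s)                           # A = delta I - q, A(0) = I
A0_inv = np.linalg.inv(A[0])
psi = np.zeros_like(q)
psi[0] = A0_inv
for k in range(1, K + 1):
    psi[k] = -sum(psi[u] @ A[k - u] for u in range(k)) @ A0_inv

# P(k) = [psi * (I - diag(Q . 1))](k), with Q(k) the cumulative kernel (7)
Q = np.cumsum(q, axis=0)
IH = np.eye(s) - np.stack([np.diag(Q[k] @ np.ones(s)) for k in range(K + 1)])
P = np.zeros_like(q)
for k in range(K + 1):
    for u in range(k + 1):
        P[k] += psi[u] @ IH[k - u]
```

Each row of $P(k)$ sums to 1 for every $k \le K$: computing $P(k)$ only uses the kernel values $q(0), \ldots, q(k)$, so truncating the sojourn laws at $K$ does not bias the result on $[0, K]$.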

2.2 Basic Results on semi-Markov Chains Estimation

We give the following results for a MRP which satisfies the assumptions below.

Assumptions

1. The Markov chain $(J_n;\ n \in \mathbb{N})$ is irreducible;

2. The mean sojourn times are finite, i.e., $\sum_{k \ge 0} k\, h_i(k) < \infty$ for any state $i \in E$.

Let us consider a history $\mathcal{H}(M)$ of the MRP $((J_n, S_n);\ n \in \mathbb{N})$, censored at time $M \in \mathbb{N}$,

$\mathcal{H}(M) := (J_0, X_1, \ldots, J_{N(M)-1}, X_{N(M)}, J_{N(M)}, U_M),$

where we set $N(M) := \max\{n \mid S_n \le M\}$ and $U_M := M - S_{N(M)}$.

Definition 10. For all $i,j \in E$ and $k \le M$, we define:

1. $N_i(M) := \sum_{n=0}^{N(M)-1} 1_{\{J_n = i\}}$, the number of visits to state $i$, up to time $M$;

2. $N_{ij}(M) := \sum_{n=1}^{N(M)} 1_{\{J_{n-1} = i,\, J_n = j\}}$, the number of transitions from $i$ to $j$, up to time $M$;

3. $N_{ij}(k, M) := \sum_{n=1}^{N(M)} 1_{\{J_{n-1} = i,\, J_n = j,\, X_n = k\}}$, the number of transitions from $i$ to $j$, up to time $M$, with sojourn time in state $i$ equal to $k$, $1 \le k \le M$.

Taking a history $\mathcal{H}(M)$ of a discrete time MRP, for all $i,j \in E$ and $k \in \mathbb{N}$, $k \le M$, we define the empirical estimators of the transition probability $p_{ij}$, of the conditional sojourn time distribution $f_{ij}(k)$ and of the discrete semi-Markov kernel $q_{ij}(k)$ by

$\hat{p}_{ij}(M) := \frac{N_{ij}(M)}{N_i(M)}, \qquad \hat{f}_{ij}(k, M) := \frac{N_{ij}(k, M)}{N_{ij}(M)}, \qquad \hat{q}_{ij}(k, M) := \frac{N_{ij}(k, M)}{N_i(M)}.$

Remark. The above empirical estimators are approximate nonparametric maximum likelihood estimators, i.e., they maximize an approximate likelihood function, obtained by neglecting the part corresponding to $U_M := M - S_{N(M)}$.

Replacing $q$ by its estimator in the expressions of $\psi$ and $P$, we obtain the corresponding estimators:

$\hat{P}(k, M) := \Big[\big(\delta I - \hat{q}(\cdot, M)\big)^{(-1)} * \big(I - \operatorname{diag}(\hat{Q}(\cdot, M) \cdot \mathbf{1})\big)\Big](k), \quad (20)$

where $\hat{q}^{(n)}(k, M)$ is the $n$-fold convolution of $\hat{q}(k, M)$ (see Definition 6).
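The counts of Definition 10 and the resulting estimators can be computed directly from an observed trajectory. A sketch on a hypothetical fully observed path (states relabelled $\{0, 1, 2\}$; the censoring term $U_M$ plays no role in the counts):

```python
import numpy as np

# Hypothetical observed path (J_0, X_1, J_1, ..., X_8, J_8), 0-based states.
J = [0, 1, 0, 2, 0, 1, 2, 1, 0]    # successive visited states
X = [2, 1, 3, 1, 2, 2, 1, 4]       # X[n] = sojourn in J[n] before jump n+1
s, K = 3, max(X)

Ni = np.zeros(s)                    # N_i(M): visits to i
Nij = np.zeros((s, s))              # N_ij(M): transitions i -> j
Nijk = np.zeros((K + 1, s, s))      # N_ij(k, M): i -> j with sojourn time k
for n, k in enumerate(X):
    i, j = J[n], J[n + 1]
    Ni[i] += 1
    Nij[i, j] += 1
    Nijk[k, i, j] += 1

with np.errstate(invalid="ignore"):
    p_hat = np.nan_to_num(Nij / Ni[:, None])     # p_hat_ij = N_ij / N_i
    f_hat = np.nan_to_num(Nijk / Nij)            # f_hat_ij(k) = N_ij(k) / N_ij
    q_hat = np.nan_to_num(Nijk / Ni[:, None])    # q_hat_ij(k) = N_ij(k) / N_i
```

By construction, the estimators reproduce the kernel factorization (4): $\hat{q}_{ij}(k, M) = \hat{p}_{ij}(M)\, \hat{f}_{ij}(k, M)$ whenever $N_{ij}(M) > 0$.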

We can prove the strong consistency of the proposed estimators and the asymptotic normality of $\hat{q}_{ij}(k, M)$, $\hat{\psi}_{ij}(k, M)$ and $\hat{P}_{ij}(k, M)$ (see [BL04b]).
