
near the critical point have the same scaling behavior, i.e., they have exactly the same critical exponents, namely ν ≈ 0.64 and β ≈ 0.33. Moreover, the theory predicts that critical exponents are connected by several scaling relations, so that knowing any two exponents, for example ν and β, one can predict the values of all the others. It turns out that critical exponents depend only on the dimensionality of space and a few other major characteristics, such as the dimensionality of spin orientations for magnets. Thus, the whole variety of critical points can be classified into a few universality classes, so that all systems belonging to the same universality class have exactly the same values of critical exponents.

One of the simplest models for critical phenomena, the Ising model,17 belongs to the same universality class as the liquid-gas critical point. We will discuss this model in greater detail, since it was first used by M. Ya. Azbel to describe possible correlations of nucleotides in DNA.18-20

In the Ising model, atoms occupy sites of a d-dimensional lattice, for example a square or a cubic lattice. In a one-dimensional system, atoms are placed equidistantly on a line. Each atom has a magnetic moment or spin, which may have only two orientations: up (s = +1) or down (s = -1). All pairs of spins occupying nearest-neighboring sites interact with each other, so that they have a tendency to acquire the same orientation. A pair with the same orientation has negative potential energy -ε, while a pair with different orientations has positive potential energy +ε. Note that ε < 0 corresponds to the model of anti-ferromagnetic interactions. In addition, spins may interact with an external magnetic field with energies -h for positive spins and +h for negative spins. It can be shown that this model is equivalent to the model of a lattice gas, in which a positive orientation of spins corresponds to sites occupied by molecules, a negative orientation indicates empty sites, two neighboring molecules attract with energy -ε, and the external field h corresponds to a chemical potential which defines the average number of molecules in the system.
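The energies described above are easy to evaluate for a concrete configuration. A minimal sketch in Python, assuming a free (non-periodic) boundary; the parameter values are illustrative only:

```python
# Total potential energy of a 1D spin configuration: each nearest-neighbor
# pair contributes -eps if the spins agree and +eps if they differ, and
# each spin contributes -h*s in the external field h.
def ising_energy(spins, eps=1.0, h=0.0):
    pair = -eps * sum(a * b for a, b in zip(spins, spins[1:]))
    field = -h * sum(spins)
    return pair + field

print(ising_energy([1, 1, 1, 1]))    # -3.0: three aligned pairs
print(ising_energy([1, -1, 1, -1]))  # +3.0: three anti-aligned pairs
```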

In 1973, M. Ya. Azbel18 mapped a DNA sequence onto a one-dimensional Ising model by assigning positive spins s = +1 to the strongly bonded pairs cytosine (C) and guanine (G) and negative spins s = -1 to the weakly bonded pairs adenine (A) and thymine (T). (Complementary base pairs C and G located on the opposite strands of the DNA double helix are bonded by three hydrogen bonds, while A and T are bonded by only two hydrogen bonds.)
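Azbel's mapping is straightforward to express in code; a minimal sketch (the example sequence is made up for illustration):

```python
# Map each nucleotide to an Ising spin: strongly bonded C/G -> +1,
# weakly bonded A/T -> -1, following Azbel's assignment.
SPIN = {"C": +1, "G": +1, "A": -1, "T": -1}

def dna_to_spins(sequence):
    return [SPIN[base] for base in sequence.upper()]

print(dna_to_spins("ACGTGC"))  # [-1, 1, 1, -1, 1, 1]
```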

### One-Dimensional Ising Model

It is easy to solve the one-dimensional Ising model. According to the Boltzmann equation, the probability p(U) to find a thermally equilibrated system in a state with potential energy U is proportional to

p(U) ~ exp(-U/k_BT),   (3)

where T is the absolute temperature and k_B is the Boltzmann constant. The striking simplicity of this equation is that it does not depend on any details of the inter-atomic interactions or of the motion of individual molecules. Once we know U and T, we can completely characterize our system in terms of probability theory.

In the one-dimensional Ising model, a spin at position i can affect a spin at position i + 1 only through their direct interaction, which is either -ε if they have the same orientation or +ε if they have opposite orientations. In the absence of a magnetic field, the probabilities of these two orientations are proportional to exp(-U/k_BT), where U = ±ε. Hence the probability of the same orientation is

p = exp(ε/k_BT) / [exp(ε/k_BT) + exp(-ε/k_BT)] = 1/(1 + b),   (4)

where b = exp(-2ε/k_BT), and the probability of the opposite orientation is q = 1 - p = b/(1 + b). Clearly, if T is small enough, b is also very small, and hence the probability for two neighboring spins to have the same orientation is almost equal to one.
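Equation (4) is easy to tabulate; a small sketch, taking the dimensionless ratio ε/k_BT as input:

```python
import math

# Probability p that two neighboring spins share the same orientation,
# Eq. (4), with b = exp(-2*eps/kBT).
def same_orientation_prob(eps_over_kT):
    b = math.exp(-2.0 * eps_over_kT)
    return 1.0 / (1.0 + b)

for x in (0.1, 1.0, 3.0):
    p = same_orientation_prob(x)
    print(f"eps/kBT = {x}: p = {p:.4f}, q = {1 - p:.4f}")
```

As ε/k_BT grows (low temperature), p approaches one, as the text notes.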

Do spins at distant positions i and (i + r) affect each other? To answer this question we must quantify this effect in mathematical terms. Two random variables s(i) and s(i + r) are called independent if the average of their product ⟨s(i)s(i + r)⟩ is equal to the product of their averages ⟨s(i)⟩ and ⟨s(i + r)⟩. Here and throughout the entire chapter ⟨...⟩ denotes an average taken over all possible positions i of the spins or nucleotide positions in a DNA sequence. The difference between these two quantities

C(r) = ⟨s(i)s(i + r)⟩ - ⟨s(i)⟩⟨s(i + r)⟩   (5)

characterizes the mutual dependence of the two spins and is called the correlation function. If C(r) > 0, the spins are correlated. If C(r) < 0, the spins are anti-correlated. Note that C(0) coincides with the definition of the variance of the variable s(i). Note also that in general, for a finite system of size L, ⟨s(i)⟩ ≠ ⟨s(i + r)⟩, because these two averages are taken over two different sets of positions: i = 1, 2, ..., L - r and i + r = r + 1, r + 2, ..., L. When r is comparable to L, this difference becomes substantial.
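The finite-size definition above, with its two separate averages over i = 1, ..., L - r and i + r = r + 1, ..., L, can be sketched as:

```python
# Correlation function C(r) of a finite +/-1 sequence, Eq. (5): the two
# averages are taken over the two overlapping index ranges separately.
def correlation(s, r):
    n = len(s) - r
    left, right = s[:n], s[r:]          # s(i) and s(i + r)
    mean_product = sum(a * b for a, b in zip(left, right)) / n
    return mean_product - (sum(left) / n) * (sum(right) / n)

s = [1, 1, 1, -1, -1, 1, 1, -1]
print(correlation(s, 0))  # variance of s: 0.9375
print(correlation(s, 2))
```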

It can be easily shown (see the next section) that for the one-dimensional Ising model the correlations decay exponentially, C(r) ~ exp(-r/ξ), at any temperature. The inverse rate ξ of this exponential decay is the correlation length. In the one-dimensional model, the correlation length can diverge only if the temperature approaches absolute zero. Thus the critical point for the one-dimensional model is T_c = 0.

In the next section we will show this by making a mathematical excursion into the theory of Markovian processes, which is a very useful tool in bioinformatics. This section may be omitted by a reader who does not want to go deep into mathematical details, but it is useful for those whose goal is to apply mathematics in biology.

### Markovian Processes

In order to compute the correlation function, we will represent a sequence of spins in the Ising model as a Markovian process. Markovian processes are very important in bioinformatics, so we briefly summarize their definition and properties.

A Markovian process21 is defined as a process obeying the following rules. (i) At any time step t, the system can be in one of n possible states e_1, e_2, ..., e_n. (ii) The probability to find the system in a certain state at any time step depends only on its state at the previous time step. Thus, to fully characterize a Markovian process, we must define an n × n set of transition probabilities p_ij, which are the probabilities to find the system in state e_i at time t + 1 provided that at time t it was in state e_j. Obviously, Σ_i p_ij = 1. (iii) It is assumed that the p_ij do not depend on time.

It is convenient to describe the behavior of a Markovian process in terms of vector algebra, so that the probabilities p_i(t) to find the system in each of its n states at time t form an n-dimensional column vector p(t). The sum of its components p_i(t) is equal to unity. Accordingly, it is natural to arrange the transition probabilities p_ij into an n × n matrix P. The j-th column of this matrix is the set of transition probabilities p_ij. Using the rule of matrix multiplication combined with the law of probability multiplication for independent events, we can find

p(t + r) = P^r p(t),   (6)
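Equation (6) can be illustrated numerically. A sketch using NumPy with a hypothetical 2 × 2 column-stochastic matrix (columns sum to one, matching the convention for p_ij above):

```python
import numpy as np

# Evolve a probability vector r steps forward: p(t + r) = P^r p(t).
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])   # P[i, j] = probability of the jump j -> i
p0 = np.array([1.0, 0.0])    # the system starts surely in state e_1

p10 = np.linalg.matrix_power(P, 10) @ p0
print(p10)                   # still a probability vector...
print(p10.sum())             # ...its components sum to one
```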

where P^r is the r-th power of the matrix P, which can be easily found once we determine the eigenvectors a_i and eigenvalues λ_i of the matrix P. By definition, the eigenvectors and eigenvalues satisfy a homogeneous system of linear equations

(P - λ_i I) a_i = 0,   (7)

which has a nontrivial solution only if its determinant is equal to zero. Accordingly, the eigenvalues satisfy an algebraic equation of n-th degree which turns the determinant of the matrix P - λI, where I is the unit matrix, into zero.

Once we have determined the eigenvectors and eigenvalues, we can write

P^r = A Λ^r A^(-1),   (8)

where Λ is the diagonal matrix formed by the eigenvalues λ_i and A is the matrix whose columns are the eigenvectors a_i.

Since the sum of the elements in every column of the matrix P is unity, the determinant of the matrix P - I is equal to zero, and one of the eigenvalues must be equal to unity: λ_1 = 1. The eigenvector a_1 corresponding to this eigenvalue has a very special meaning: its components yield the probabilities to find the system in each of its states for r → ∞. We will show this in the following paragraphs.

Except in some special degenerate cases, all the eigenvalues of a matrix are different. Assuming this, we can express the state of the system at time t + r as a linear combination of the eigenvectors:

p(t + r) = c_1 a_1 + c_2 λ_2^r a_2 + ... + c_n λ_n^r a_n,

where the c_i are coefficients which can be obtained by multiplying the initial state of the system p(t) by the matrix A^(-1) (the first term carries no factor λ_1^r because λ_1 = 1). It can be easily seen from this equation that all eigenvalues must satisfy |λ_i| ≤ 1. Indeed, if any |λ_i| > 1, the corresponding term in the above equation would diverge for r → ∞, contradicting the inequality p_i(t) ≤ 1, which must be satisfied by the probabilities. Thus for all i > 1, |λ_i| < 1, and for any initial state of the system we have lim_{r→∞} p(t + r) = c_1 a_1.

Thus, the average probability of finding the system in each of its states in a very long process is determined by the vector c_1 a_1, which can be readily found from the system of linear equations

(P - I) a_1 = 0.   (9)

Since the determinant of this system is equal to zero, it has a nontrivial solution c_1 a_1, where c_1 is an arbitrary constant. Since the components of the vector c_1 a_1 have the meaning of probabilities and, therefore, their sum must be equal to one, the coefficient c_1 must be the reciprocal of the sum of the elements of an arbitrary nontrivial solution a_1 of Eq. (9).
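The chain of reasoning above — λ_1 = 1, the limit c_1 a_1, and the normalization fixing c_1 — can be checked with NumPy; the transition matrix here is hypothetical:

```python
import numpy as np

# For a column-stochastic matrix, the largest eigenvalue is 1, and the
# corresponding eigenvector, normalized so that its elements sum to one
# (the constant c_1 in the text), is the stationary distribution.
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])
eigvals, eigvecs = np.linalg.eig(P)

k = np.argmax(eigvals.real)            # index of lambda_1 = 1
a1 = eigvecs[:, k].real
stationary = a1 / a1.sum()             # fix the arbitrary constant c_1

print(np.sort(eigvals.real))           # [0.7, 1.0] for this matrix
print(stationary)                      # [2/3, 1/3]
```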

The second-largest eigenvalue determines the decay of the correlations: C(r) ~ λ_2^r = exp(r ln λ_2). By definition, the correlation length is the characteristic length of the correlation decay, determined by the relation C(r) ~ exp(-r/ξ). Thus ξ = 1/ln(1/λ_2).
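A one-line sketch of the relation ξ = 1/ln(1/λ_2), tabulated for a few values of the second eigenvalue:

```python
import math

# Correlation length from the second-largest eigenvalue (0 < lambda_2 < 1):
# C(r) ~ lambda_2**r = exp(-r/xi) gives xi = 1/ln(1/lambda_2).
def correlation_length(lambda2):
    return 1.0 / math.log(1.0 / lambda2)

for lam in (0.5, 0.9, 0.99):
    print(f"lambda_2 = {lam}: xi = {correlation_length(lam):.2f}")
```

As λ_2 approaches one, ξ diverges, which is exactly the mechanism behind T_c = 0 in the one-dimensional model.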

As an illustration of the Markovian formalism, we can apply it to the one-dimensional Ising model. The matrix P in this case is simply

P = ( p      1 - p )
    ( 1 - p  p     ),

where p is determined by Eq. (4). In order to find the eigenvalues, we must find the values of λ which turn the determinant of the matrix P - λI into zero:

This gives us a quadratic equation (p - λ)^2 - (1 - p)^2 = 0, with the solutions λ_1 = 1 and λ_2 = 2p - 1. The corresponding eigenvectors are a_1 = (1/2, 1/2) and a_2 = (1/2, -1/2).

Accordingly, we have

C(r) ~ λ_2^r = (2p - 1)^r = exp(-r/ξ),   with   ξ = 1/ln[1/(2p - 1)].

Since 2p - 1 = (1 - b)/(1 + b) = tanh(ε/k_BT), the correlation length ξ diverges only as T → 0, confirming that T_c = 0 for the one-dimensional Ising model.
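As a numerical sanity check (a sketch, not part of the original derivation), one can generate a long spin chain as a Markov process with "same orientation" probability p and compare the measured correlation function with the predicted decay (2p - 1)^r:

```python
import random

random.seed(7)
p, L = 0.8, 100_000

# Generate a spin chain: keep the previous orientation with probability p.
s = [1]
for _ in range(L - 1):
    s.append(s[-1] if random.random() < p else -s[-1])

# Correlation function of a finite +/-1 sequence, as defined earlier.
def correlation(s, r):
    n = len(s) - r
    left, right = s[:n], s[r:]
    mean_product = sum(a * b for a, b in zip(left, right)) / n
    return mean_product - (sum(left) / n) * (sum(right) / n)

for r in (1, 2, 4):
    print(r, round(correlation(s, r), 3), round((2 * p - 1) ** r, 3))
```

The measured values agree with (2p - 1)^r up to statistical noise of order 1/sqrt(L).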
