# Information Theoretic Measures for Multimodality Registration

Plotting the joint histogram of the two images provides useful insight into how voxel similarity measures might be used for multi-modality registration. Figure 7.5 shows plots of the joint histogram computed for identical MR images and for an MR image and a PET image of the same subject. The joint histograms are plotted at registration and at two levels of mis-registration. A distinctive pattern emerges at registration of each pair, and this pattern diffuses as mis-registration increases, which suggests certain statistical measures of mis-registration. Interestingly, the MR-PET joint histogram also explains why the PIU measure works, and why it works better when the scalp is edited out of the image. The scalp corresponds to the horizontal line in these plots, i.e., a low PET intensity combined with a wide range of MR intensities, with partial voluming up to very bright values. Scalp editing removes this line, and the resulting plot shows a narrow distribution of PET intensities for each MR intensity.
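As a concrete sketch, the joint histogram described above can be estimated by binning corresponding intensity pairs from the overlap region of the two images. The following NumPy illustration uses synthetic images; the function name and image sizes are hypothetical choices for this example, not from the original text.

```python
import numpy as np

def joint_histogram(img_a, img_b, bins=64):
    """Joint intensity histogram of two spatially aligned images.

    img_a, img_b: arrays of identical shape covering the overlap region
    (hypothetical inputs for illustration).
    """
    # Bin the (intensity in A, intensity in B) pairs over all overlapping voxels.
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    return hist

# Toy example: a synthetic "MR" image and a monotonically remapped copy
# standing in for a second modality with a functional intensity relationship.
rng = np.random.default_rng(0)
mr = rng.integers(0, 256, size=(64, 64)).astype(float)
other = np.sqrt(mr)  # deterministic remapping -> a sharp joint histogram
h = joint_histogram(mr, other)
```

At registration a deterministic intensity relationship like the one above concentrates the counts along a thin curve in the histogram; mis-registration disperses them, as in Figure 7.5.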

It can also be useful to think of image registration as trying to maximize the amount of shared information in two images. Qualitatively, the combined image of, say, two identical images of the head will contain just two eyes at registration but four eyes at mis-registration.

Figure 7.5. Example 2D image intensity histograms from Hill et al. for identical MR images of the head (top row) and MR and PET images of the same individual (bottom row; PET intensity plotted on the y-axis and MR intensity on the x-axis) at registration (left), at a mis-registration of 2 mm (middle) and at a mis-registration of 5 mm (right). Although the histograms for the different image pairs have very different appearances, both sets show a dispersion of the histogram with increasing mis-registration.

This suggests the use of a measure of joint information as a measure of mis-registration. The signal processing literature contains such a measure of information, the Shannon-Wiener entropy H(A), and its analog for dual signals, the joint entropy H(A,B). These can be defined over the region of overlap between the two images as follows:

$$H(A) = -\sum_{a} p_A(a) \log p_A(a)$$

$$H(A,B) = -\sum_{a,b} p^T_{AB}(a,b) \log p^T_{AB}(a,b)$$
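In code, these entropies can be estimated directly from the marginal and joint intensity histograms. A minimal NumPy sketch (function name and toy histogram are illustrative assumptions):

```python
import numpy as np

def entropy(hist):
    """Shannon entropy H = -sum p log p, from a histogram of any dimension."""
    p = hist / hist.sum()   # normalise counts to probabilities
    p = p[p > 0]            # by convention, 0 log 0 contributes nothing
    return -np.sum(p * np.log(p))

# The marginal histograms are the row and column sums of the joint histogram,
# so H(A), H(B) and H(A,B) can all be read off one 2D histogram.
joint = np.array([[8., 2.],
                  [2., 8.]])        # toy joint histogram over the overlap
h_a = entropy(joint.sum(axis=1))    # H(A)
h_b = entropy(joint.sum(axis=0))    # H(B)
h_ab = entropy(joint)               # H(A,B)
```

Note that H(A,B) is bounded below by each marginal entropy and above by their sum, which is what makes the difference between the two usable as a similarity measure.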

The joint entropy measures the information contained in the combined image, which suggests a registration metric: registration should occur when the joint entropy is at a minimum. This measure was proposed independently by Collignon  and Studholme . Unfortunately, the volume of overlap between the two images to be registered also changes as they are transformed relative to one another, and because of this joint entropy was found not to provide a reliable measure of alignment.

The solution, spotted independently by Collignon et al.  and Wells et al. , was to use mutual information (MI) as the registration metric instead. MI is the difference between the sum of the individual (marginal) entropies of the two images and their joint entropy. According to information theory, this difference is zero when there is no statistical relationship between the two images. If there is a statistical relationship, the mutual information will be greater than zero, and the stronger the statistical relationship, the larger its value will be. This suggests that it can be used as a measure of alignment, since registration should maximize the statistical dependence of one image on the other. The measure also largely overcomes the problem of the volume of overlap changing as the transformation is changed. Mutual information is expressed as

$$I(A,B) = H(A) + H(B) - H(A,B) = \sum_{a,b} p^T_{AB}(a,b) \log \frac{p^T_{AB}(a,b)}{p_A(a)\,p_B(b)},$$

where pA(a) is the marginal probability of intensity a occurring at position xA in image A, pB(b) the marginal probability of intensity b occurring at the corresponding position in image B, and pTAB(a,b) the joint probability of intensity a occurring in image A together with intensity b in image B. These probabilities are estimated from the histograms of images A and B (marginal probabilities) and from their joint histogram (joint probability) in the overlap domain with transformation T.
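Estimating mutual information from a joint histogram is a short computation; the NumPy sketch below is an illustration (the function name and toy histograms are assumptions, not from the original text).

```python
import numpy as np

def mutual_information(joint_hist):
    """I(A,B) = H(A) + H(B) - H(A,B), estimated from a 2D joint histogram."""
    p_ab = joint_hist / joint_hist.sum()   # joint probabilities
    p_a = p_ab.sum(axis=1)                 # marginal over image A intensities
    p_b = p_ab.sum(axis=0)                 # marginal over image B intensities
    nz = p_ab > 0                          # only nonzero terms contribute
    return np.sum(p_ab[nz] * np.log(p_ab[nz] / np.outer(p_a, p_b)[nz]))

# Statistically independent intensities give MI = 0; a one-to-one intensity
# relationship gives the maximum possible value (here log 2 for a 2x2 case).
independent = np.outer([0.5, 0.5], [0.5, 0.5]) * 100   # counts
dependent = np.diag([50., 50.])
mi_indep = mutual_information(independent)   # 0
mi_dep = mutual_information(dependent)       # log 2
```

In a registration loop, the joint histogram (and hence this value) is recomputed over the overlap region each time the transformation is updated, and the transformation maximizing MI is sought.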

This measure has proved remarkably robust in a very wide range of applications. The Vanderbilt study  showed that it performed with accuracy comparable to the PIU algorithm for PET-MR registration, without requiring scalp editing. Studholme et al.  showed that the measure was the most robust and accurate of a range of different measures tested.

However, mutual information sometimes fails to find the correct solution when the volume of overlap varies significantly with alignment or when there is a large volume of background (i.e., air) in the field of view. Studholme et al.  showed that a simple reformulation of the measure as the ratio of the sum of the marginal entropies to the joint entropy in the overlapping region provides a more robust measure of alignment,

$$\tilde{I}(A,B) = \frac{H(A) + H(B)}{H(A,B)}.$$
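This normalized variant is the same histogram computation with a ratio in place of a difference; a minimal sketch (function name is an illustrative assumption):

```python
import numpy as np

def normalised_mutual_information(joint_hist):
    """Studholme's normalised MI: (H(A) + H(B)) / H(A,B)."""
    p_ab = joint_hist / joint_hist.sum()

    def h(p):
        # Shannon entropy of a probability array, ignoring zero bins.
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (h(p_ab.sum(axis=1)) + h(p_ab.sum(axis=0))) / h(p_ab)

# The ratio ranges from 1 (independent intensities) up to 2
# (a one-to-one intensity relationship), independent of overlap size.
nmi_dep = normalised_mutual_information(np.diag([50., 50.]))          # 2
nmi_indep = normalised_mutual_information(25. * np.ones((2, 2)))      # 1
```

Because both numerator and denominator are recomputed over the current overlap, the ratio is less sensitive to how much of each image lies in the overlap region than the plain difference.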

This measure and mutual information have now been widely adopted, and registration software based on these measures has been extensively validated for rigid-body registration of MR and PET images of the head [16,50,51].

While qualitative arguments, such as those outlined above, have been used to justify mutual information or its normalized version, there is as yet no firm theoretical basis for these measures and further research is required to provide this theoretical underpinning. This research may lead to measures that outperform the information theoretic measures outlined above.