The intersecting cortical model in image processing
Introduction
The Intersecting Cortical Model (ICM), introduced in Ref. [1], is a simplified version of the Pulse-Coupled Neural Network (PCNN) model of Eckhorn [2]. The ICM is based on neural network techniques and is designed specifically for image processing; it was introduced as a computationally faster alternative to the full PCNN model. It was derived from several visual cortex models and is essentially the intersection of these models, i.e. the elements they have in common. Two similar images will produce similar pulse images, whereas images that differ will produce pulse images that differ in the corresponding regions of the input images; change detection is then obtained by comparing the corresponding pulse images. Some advantages of the ICM are: distinct objects are readily detectable because the pulse stimulations highlight them; aligned images remain aligned after the ICM process; and image features that cannot easily be enhanced with more conventional image enhancement techniques are enhanced. A drawback is that small features tend to disappear.
Attempts to detect motion in imagery started early in computerised image processing. The first paper on the subject known to us is Ref. [3] in which an autocorrelation model for motion detection is presented. Since then, there have been many papers published. One of the early papers on motion detection is Ref. [4] where the effects of occlusions were analysed in order to detect motion. Here we will use synchronised pulses from a neural network as the foundation of our attempt to solve the motion-detection problem in imagery.
The intersecting cortical model
In the ICM the state oscillators of all the neurons are represented by a 2D array F (the internal neuron states; initially Fij=0 ∀i, j) and the threshold oscillators of all the neurons by a 2D array Θ (initially Θij=0 ∀i, j). Thus, the ijth neuron has state Fij and threshold Θij. They are computed from

Fij[n+1] = fFij[n] + Sij + W{Y[n]}ij,
Yij[n+1] = 1 if Fij[n+1] > Θij[n], otherwise 0,
Θij[n+1] = gΘij[n] + hYij[n+1],

where Sij is the stimulus (the input image, scaled so that the largest pixel value is 1.0); Yij is the
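One iteration of these update rules can be sketched in Python (the language used for the implementation in this work). The values f=0.9 and g=0.8 are those reported below; the value of h, the 4-neighbour form of the pulse coupling W{Y}, and the wrap-around boundary handling of np.roll are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def icm_step(F, Theta, Y, S, f=0.9, g=0.8, h=20.0):
    """One ICM iteration on 2D arrays.

    f and g follow the text; h and the 4-neighbour coupling
    kernel (with periodic boundaries) are illustrative assumptions.
    """
    # W{Y}: sum of the four neighbouring pulses (a simple coupling choice)
    coupling = (np.roll(Y, 1, 0) + np.roll(Y, -1, 0) +
                np.roll(Y, 1, 1) + np.roll(Y, -1, 1))
    F = f * F + S + coupling            # state decays and integrates input
    Y = (F > Theta).astype(float)       # neuron pulses when state exceeds threshold
    Theta = g * Theta + h * Y           # threshold decays, then jumps after a pulse
    return F, Theta, Y
```

Starting from F=0 and Θ=0 as in the text, every neuron with a positive stimulus fires on the first iteration, after which the raised thresholds suppress firing until they have decayed sufficiently.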
Aircraft detection
The ICM algorithm was used on a series of 16 images of an aircraft moving against a blue sky. To the unaided eye the sky looks rather homogeneous. Apart from the aircraft, the only other object in the images is the moon. The images are roughly equally spaced in time. The first of the 16 images is shown in Fig. 1. The algorithm was implemented in Python (an interpreted, interactive, object-oriented programming language), and the values of the scalars f, g, and h were 0.9, 0.8,
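The change-detection idea described in the Introduction, comparing the pulse images of consecutive frames, can be sketched as follows. This is a minimal, uncoupled variant for illustration only: the neighbour coupling is omitted, and the values of h, the iteration count, and the threshold tol are assumptions rather than the paper's settings:

```python
import numpy as np

def pulse_signature(S, n_iter=10, f=0.9, g=0.8, h=20.0):
    """Run a minimal, uncoupled ICM on one frame and sum its pulse images.

    f and g follow the text; h and n_iter are illustrative choices,
    and the coupling term W{Y} is omitted for brevity.
    """
    S = S / S.max()                 # scale so the largest pixel value is 1.0
    F = np.zeros_like(S)
    Theta = np.zeros_like(S)
    acc = np.zeros_like(S)
    for _ in range(n_iter):
        F = f * F + S               # state update (no neighbour coupling here)
        Y = (F > Theta).astype(float)
        Theta = g * Theta + h * Y
        acc += Y                    # accumulate the pulse images
    return acc

def change_map(frame_a, frame_b, tol=0.5):
    """Mark pixels whose pulse signatures differ between two frames."""
    return np.abs(pulse_signature(frame_a) - pulse_signature(frame_b)) > tol
```

Bright pixels pulse more often than the dim background, so a bright object that has moved between two frames shows up at both its old and its new position in the change map.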
Conclusions
As the misalignment of the aircraft in the fused image (Fig. 4) shows, at least one of two prerequisites has to be fulfilled in order to obtain a good result from the fusion process: either the images must be taken with a perfectly fixed camera, or each image must contain enough identifiable non-moving pixels that the images can be matched to one another. Hence, when detecting moving objects, such as aircraft, against a more or less homogeneous background or a background devoid
Acknowledgements
The authors are grateful to Prof. Thomas Lindblad of the Royal Institute of Technology (KTH), Stockholm (Sweden), for giving us the idea for this work. We also thank Dr Sten Nyberg and Dr Morgan Ulvklo at the Swedish Defence Research Agency (FOI) for their help in providing images.
References (4)
- et al., Comput. Graphics Image Process. (1979)
- A simplified pulse-coupled neural network, Proc. SPIE (1996)