Comparison of Standard and Image-Filter Fusion Techniques
Image fusion, in a general sense, can be defined as “the combination of two or more different images to form a new image by using a certain algorithm”. It aims to extract from a set of different images of the same area those properties that are unavailable from any single sensor, and to combine them into a new image of higher information content. In standard image fusion, one multispectral band is entirely substituted by the panchromatic band. It is well known, however, that because the performances of the two sensors differ, the original multispectral information becomes distorted; the fused data is therefore of only limited use for the spectral classification of textural image information. In contrast, newly evolved image-filter data fusion techniques preserve both the spatial and the multispectral information. Results of both approaches are shown here using a higher-resolution panchromatic image (IRS-1C) and a lower-resolution multispectral image (SPOT XS). In detail, this paper discusses, on the one hand, the performance of three standard image fusion techniques, Intensity-Hue-Saturation (IHS), Principal Component Analysis (PCA) and High-Pass Filter (HPF) fusion, and, on the other, a newly evolved technique, Adaptive Image Filter (AIF) fusion. Results are assessed visually for image interpretability and statistically for the spectral classification of textural data. In this assessment, IHS and PCA fusion achieved similarly good results for visual image interpretation. If the fused images are to be used for subsequent image classification, however, AIF fusion is preferable, as it preserves the multispectral information while adding textural and structural detail.
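To illustrate the component-substitution idea behind standard fusion, the following is a minimal sketch of IHS-style pan-sharpening in its fast additive formulation: the per-pixel intensity of the multispectral bands is replaced by the panchromatic band, which injects spatial detail but can distort the spectral content when the two sensors' responses differ. The function name and array layout are illustrative, not taken from the paper.

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Fast additive IHS fusion (a sketch, not the paper's implementation).

    ms  : (H, W, 3) float array of multispectral bands.
    pan : (H, W)    float array, panchromatic band resampled to the MS grid.

    Substituting the intensity I = mean(bands) by pan is equivalent to
    adding the difference (pan - I) to every band.
    """
    intensity = ms.mean(axis=2)          # per-pixel intensity component
    delta = pan - intensity              # spatial detail missing from MS
    return ms + delta[..., None]         # inject detail into each band

# Illustrative usage with synthetic data:
rng = np.random.default_rng(0)
ms = rng.random((4, 4, 3))
pan = rng.random((4, 4))
fused = ihs_fuse(ms, pan)
```

By construction, the intensity of the fused image equals the panchromatic band exactly, which is why this family of methods sharpens well visually while the individual band values, and hence the spectral signatures used in classification, are altered.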