International Science Index
The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books’ Background
Ancient books are significant carriers of cultural heritage, and their background textures convey latent historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by the scarcity of ancient texture samples and the complexity of the handling process, the generation of ancient textures faces new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to look fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision, and breakthroughs within the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we propose a layout analysis and image fusion system. First, we train models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyze layouts based on the Position Rearrangement (PR) algorithm that we propose to adjust the layout structure of the foreground content; finally, we realize our goal by fusing the rearranged foreground texts with the generated background. In the experiments, diversified samples such as ancient Yi, Jurchen, and Seal scripts were selected as our training sets, and the performance of different fine-tuned models was gradually improved by adjusting the parameters and structure of the DCGAN model. To evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as our assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
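The FID criterion used above compares the Gaussian statistics of real and generated feature embeddings: FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). A minimal numpy sketch of that formula, operating on precomputed feature vectors (the Inception feature extraction is omitted, and the function names are illustrative, not the authors' code):

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)
    return (v * np.sqrt(w)) @ v.T

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two sets of feature vectors (n, d)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    # tr((s1 s2)^{1/2}) computed via the symmetric form s1^{1/2} s2 s1^{1/2}
    s1_half = _sqrtm_psd(s1)
    covmean_trace = np.trace(_sqrtm_psd(s1_half @ s2 @ s1_half))
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1) + np.trace(s2) - 2.0 * covmean_trace)
```

Identical feature sets give a score of zero, and a pure mean shift of 5 in each of d dimensions contributes 25·d, which makes the implementation easy to sanity-check.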
A Hybrid Image Fusion Model for Generating High Spatial-Temporal-Spectral Resolution Data Using OLI-MODIS-Hyperion Satellite Imagery
Spatial, Temporal, and Spectral Resolution (STSR) are three key characteristics of Earth observation satellite sensors; however, no single satellite sensor can provide Earth observations with high STSR simultaneously because of the hardware limitations of sensor technology. On the other hand, the demand for high STSR has been growing as remote sensing applications develop. Although image fusion technology provides a feasible means to overcome the limitations of current Earth observation data, existing fusion technologies cannot enhance all three resolutions simultaneously nor provide a sufficient level of resolution improvement. This study proposes a Hybrid Spatial-Temporal-Spectral image Fusion Model (HSTSFM) to generate synthetic satellite data with high STSR simultaneously, which blends the high spatial resolution of the panchromatic image of the Landsat-8 Operational Land Imager (OLI), the high temporal resolution of the multi-spectral image of the Moderate Resolution Imaging Spectroradiometer (MODIS), and the high spectral resolution of the hyper-spectral image of Hyperion to produce high-STSR images. The proposed HSTSFM contains three fusion modules: (1) spatial-spectral image fusion; (2) spatial-temporal image fusion; (3) temporal-spectral image fusion. A set of test data with both phenological and land cover type changes in a suburban area of Beijing, China is adopted to demonstrate the performance of the proposed method. The experimental results indicate that HSTSFM can produce fused images with good spatial and spectral fidelity to the reference image, which means it has the potential to generate synthetic data to support studies that require high-STSR satellite imagery.
Efficient Feature Fusion for Noise Iris in Unconstrained Environment
This paper presents an efficient fusion algorithm for iris images that generates stable features for recognition in unconstrained environments. Recently, iris recognition systems have focused on real scenarios in daily life without the subject's cooperation. Under large variations in the environment, the objective of this paper is to combine information from multiple images of the same iris. The result of image fusion is a new image that is more stable for subsequent iris recognition than each original noisy iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. Iris detection is based on the AdaBoost algorithm, and a local binary pattern (LBP) histogram is then applied to texture classification with a weighting scheme. Experiments showed that the features generated by the proposed fusion algorithm can improve the performance of an iris-recognition-based verification system.
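The LBP histogram used here as a texture feature compares each pixel with its eight neighbours and accumulates the resulting 8-bit codes. A minimal numpy sketch of the standard 3×3 LBP operator (a generic illustration, not the paper's weighted classification scheme):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern codes for the interior pixels."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    codes = np.zeros(center.shape, dtype=np.uint16)
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(neighbours):
        # neighbour plane shifted by (dy, dx) relative to the centre pixels
        shifted = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (shifted >= center).astype(np.uint16) << bit
    return codes

def lbp_histogram(img):
    """Normalised 256-bin LBP histogram, usable as a texture feature vector."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

On a constant image every neighbour ties with the centre, so every code is 255 and the histogram collapses into that single bin.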
Color Image Enhancement Using Multiscale Retinex and Image Fusion Techniques
In this paper, an edge-strength guided multiscale retinex (EGMSR) approach is proposed for color image contrast enhancement. In EGMSR, the pixel-dependent weight associated with each pixel in the single-scale retinex output image is computed according to the edge strength around that pixel, in order to avoid over-enhancing the noise contained in smooth dark/bright regions. Further, by fusing the enhanced results of EGMSR and adaptive multiscale retinex (AMSR), we obtain a natural fused image with high contrast and proper tonal rendition. Experimental results on several low-contrast images show that the proposed approach can produce natural and appealing enhanced images.
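The building block underlying EGMSR is the single-scale retinex, R = log(I) − log(G_σ ∗ I), i.e. the log ratio of the image to its Gaussian-blurred illumination estimate. A minimal numpy sketch of that step (the edge-strength weighting and AMSR fusion that constitute the paper's contribution are not reproduced here):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalised 1-D Gaussian kernel truncated at 3 sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, kernel):
    """Separable Gaussian blur with reflective border handling."""
    r = len(kernel) // 2
    padded = np.pad(img, r, mode='reflect')
    rows = np.apply_along_axis(np.convolve, 1, padded, kernel, 'valid')
    return np.apply_along_axis(np.convolve, 0, rows, kernel, 'valid')

def single_scale_retinex(img, sigma=2.0):
    """R = log(I) - log(G_sigma * I), the basic retinex output."""
    img = np.asarray(img, dtype=float) + 1.0   # avoid log(0)
    return np.log(img) - np.log(gaussian_blur(img, gaussian_kernel(sigma)))
```

A perfectly flat region produces a retinex response of zero everywhere, which is exactly the smooth dark/bright case where EGMSR down-weights the enhancement.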
Medical Image Fusion Based On Redundant Wavelet Transform and Morphological Processing
Image fusion is the process in which complementary information from multiple images is integrated to produce a composite image that contains more information than any of the original inputs. Medical image fusion combines multimodality medical images to provide additional information that helps the doctor diagnose diseases more effectively. This paper presents a wavelet-based medical image fusion algorithm for different multimodality medical images. To fuse the medical images, the images are decomposed using the Redundant Wavelet Transform (RWT). The high-frequency coefficients are convolved with a morphological operator followed by the maximum-selection (MS) rule; the low-frequency coefficients are processed by the MS rule. The fused image is obtained by the inverse RWT. Quantitative measures, including mean, standard deviation, average gradient, spatial frequency, and edge-based similarity, are used to evaluate the fused images. The performance of the proposed method is compared with pixel-averaging, PCA, and DWT fusion methods; compared with these conventional methods, the proposed framework provides better performance for the analysis of multimodality medical images.
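The maximum-selection (MS) rule applied to the wavelet coefficients above simply keeps, at each position, the coefficient of larger magnitude from the two decompositions. A one-function numpy sketch of the rule itself (the RWT decomposition and morphological processing are omitted):

```python
import numpy as np

def maximum_selection(coeff_a, coeff_b):
    """Keep, at every position, the coefficient with the larger magnitude."""
    coeff_a = np.asarray(coeff_a, dtype=float)
    coeff_b = np.asarray(coeff_b, dtype=float)
    # ties fall back to the first source's coefficient
    return np.where(np.abs(coeff_a) >= np.abs(coeff_b), coeff_a, coeff_b)
```

Because large wavelet coefficients correspond to salient detail, this rule transfers the stronger edge response from either source into the fused subband.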
Feature Level Fusion of Multimodal Images Using Haar Lifting Wavelet Transform
This paper presents feature-level image fusion using the Haar lifting wavelet transform. The fused features are edge and boundary information, obtained using the wavelet transform modulus maxima criterion. Simulation results show that entropy, gradient, and standard deviation are all higher for the fused image than for the input images. The proposed method has the advantages of simplicity of implementation, a fast algorithm, perfect reconstruction, and reduced computational complexity. (The computational cost of the Haar wavelet is very small compared to other lifting wavelets.)
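The lifting formulation of the Haar wavelet mentioned here splits a signal into even and odd samples, predicts the odd from the even (yielding details), and updates the even with half the detail (yielding pairwise means). A minimal numpy sketch of one lifting level and its inverse, illustrating the perfect-reconstruction property the abstract claims:

```python
import numpy as np

def haar_lift(x):
    """One level of the lifting-scheme Haar transform (predict then update)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    detail = odd - even            # predict step: difference of each pair
    approx = even + detail / 2.0   # update step: pairwise mean of each pair
    return approx, detail

def haar_unlift(approx, detail):
    """Inverse lifting: undo the update step, then the predict step."""
    even = approx - detail / 2.0
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Since each lifting step is trivially invertible, reconstruction is exact regardless of rounding in the input, which is what makes lifting so cheap for fusion pipelines.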
Multi-Focus Image Fusion Using SFM and Wavelet Packet
In this paper, a multi-focus image fusion method using Spatial Frequency Measurement (SFM) and wavelet packets is proposed. In the proposed approach, the two source images are first transformed and decomposed into sixteen subbands using the wavelet packet transform. Next, each subband is partitioned into sub-blocks, and the clearer region in each block is identified using the SFM. Finally, the fused image is reconstructed by performing the inverse wavelet transform. The experimental results show that the proposed method outperforms traditional SFM-based methods in terms of both objective and subjective assessments.
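The spatial frequency measure used to pick the clearer block is SF = √(RF² + CF²), where RF and CF are the root-mean-square horizontal and vertical pixel differences. A minimal numpy sketch of the measure and the block-selection step (the wavelet packet decomposition around it is omitted):

```python
import numpy as np

def spatial_frequency(block):
    """Row/column-frequency sharpness measure of an image block."""
    b = np.asarray(block, dtype=float)
    rf = np.sqrt(np.mean((b[:, 1:] - b[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((b[1:, :] - b[:-1, :]) ** 2))  # column frequency
    return float(np.hypot(rf, cf))

def pick_clearer(block_a, block_b):
    """Select the block with the higher spatial frequency (the in-focus one)."""
    if spatial_frequency(block_a) >= spatial_frequency(block_b):
        return block_a
    return block_b
```

An out-of-focus (smooth) block has small pixel-to-pixel differences and hence a low SF, so the rule favours the sharper source at each location.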
Integral Image-Based Differential Filters
We describe a relationship between integral images and differential images. First, we derive a simple difference filter from the conventional integral image. In the derivation, we show that an integral image and the corresponding differential image are related to each other by simultaneous linear equations, where the numbers of unknowns and equations are equal; therefore, we can perform the integration and differentiation by solving the simultaneous equations. We applied this relationship to an image fusion problem and experimentally verified the effectiveness of the proposed method.
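An integral image (summed-area table) lets any rectangular sum be read off in four look-ups, and simple difference filters follow by subtracting adjacent box sums. A minimal numpy sketch of these two standard building blocks (the paper's derivation via simultaneous linear equations is not reproduced; `horizontal_difference` is an illustrative example filter, not the authors' exact one):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended for clean indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.asarray(img, dtype=float).cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] from four look-ups into the integral image."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def horizontal_difference(ii, y0, x0, y1, x1):
    """Simple difference filter: right half-box sum minus left half-box sum."""
    xm = (x0 + x1) // 2
    return box_sum(ii, y0, xm, y1, x1) - box_sum(ii, y0, x0, y1, xm)
```

Each filter response costs a constant number of array reads regardless of box size, which is the practical appeal of integral-image-based filtering.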
Performance Analysis of Brain Tumor Detection Based On Image Fusion
Medical image fusion plays a vital role in the medical field in diagnosing brain tumors, which can be classified as benign or malignant. It is the process of integrating multiple images of the same scene into a single fused image so as to reduce uncertainty and minimize redundancy while extracting all the useful information from the source images. Fuzzy logic is used to fuse two brain MRI images with different views; the fused image is more informative than either source image. Texture and wavelet features are extracted from the fused image, and a multilevel Adaptive Neuro-Fuzzy Classifier classifies the brain tumors based on the trained and tested features. The proposed method achieved 80.48% sensitivity, 99.9% specificity, and 99.69% accuracy. Experimental results show that the proposed image fusion approach performs better than conventional fusion methodologies.
Video Data Mining based on Information Fusion for Tamper Detection
In this paper, we propose novel algorithmic models based on information fusion and feature transformation in a cross-modal subspace for detecting digital video tampering or forgery. Different types of residue features are extracted from several intra-frame and inter-frame pixel sub-blocks in video sequences. An evaluation of the proposed residue features (the noise residue features and the quantization features), their transformation in the cross-modal subspace, and their multimodal fusion on an emulated copy-move tamper scenario shows a significant improvement in tamper detection accuracy compared with single-mode features without cross-modal transformation.
A Novel Architecture for Wavelet based Image Fusion
In this paper, we focus on the fusion of images from different sources using multiresolution wavelet transforms. Based on reviews of popular image fusion techniques used in data analysis, different pixel- and energy-based methods are evaluated experimentally. A novel architecture with a hybrid algorithm is proposed, which applies a pixel-based maximum-selection rule to the low-frequency approximations and filter-mask-based fusion to the high-frequency details of the wavelet decomposition. The key feature of the hybrid architecture is that it combines the advantages of pixel- and region-based fusion in a single image, which can support the development of sophisticated algorithms that enhance edges and structural details. A graphical user interface (GUI) is developed for image fusion to make the research outcomes available to end users. To make the GUI usable for medical, industrial, and commercial activities without a MATLAB installation, a standalone executable application is also built using the MATLAB Compiler Runtime.
Region-Based Image Fusion with Artificial Neural Network
Most image fusion algorithms treat the pixels of an image separately and more or less independently; in addition, their parameters have to be re-adjusted for different times of day or weather conditions. In this paper, we propose a region-based image fusion method that combines aspects of feature- and pixel-level fusion rather than operating on pixels alone. The basic idea is to segment only the far-infrared image and to add the information of each segmented region to the visible image. Fusion parameters are then determined separately for each region. Finally, because the relationship between fusion parameters and image features is nonlinear, we adopt an artificial neural network to handle varying time and weather conditions, so that the fusion parameters can be produced automatically according to the current state. The experimental results show that the proposed method has good adaptive capacity with automatically determined fusion parameters, and the architecture can be used for many applications.
On the EM Algorithm and Bootstrap Approach Combination for Improving Satellite Image Fusion
This paper discusses the combination of the EM algorithm and the bootstrap approach applied to improving the satellite image fusion process. This novel satellite image fusion method, based on the estimation-theoretic EM algorithm and reinforced by the bootstrap approach, was successfully implemented and tested. The sensor images are first split by a Bayesian segmentation method to determine a joint region map for the fused image. Then, we use the EM algorithm in conjunction with the bootstrap approach to develop the bootstrap EM (BEM) fusion algorithm, producing the fused target image. In this research, we estimate the statistical parameters from the iterative equations of the EM algorithm using a reference set of representative bootstrap samples of images, whose sizes are determined by a new criterion called the 'hybrid criterion'. The obtained results show that using BEM in image fusion improves the performance of the estimated parameters, which leads to better fused image quality and reduces the computing time of the fusion process.
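To make the EM-plus-bootstrap idea concrete, the sketch below runs EM on a one-dimensional two-component Gaussian mixture and then averages the estimates over bootstrap resamples. This is a generic illustration under simplifying assumptions (1-D data, two components, fixed iteration count); the paper's Bayesian segmentation, image model, and 'hybrid criterion' for sample sizes are not reproduced:

```python
import numpy as np

def em_gmm_1d(x, iters=60):
    """EM for a two-component 1-D Gaussian mixture: (weights, means, variances)."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])        # crude but stable initialisation
    var = np.full(2, x.var())
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

def bootstrap_em_1d(x, n_samples=5, sample_size=None, seed=0):
    """Average EM estimates over bootstrap resamples of the data (BEM-style)."""
    rng = np.random.default_rng(seed)
    sample_size = sample_size or x.size
    runs = [em_gmm_1d(rng.choice(x, size=sample_size, replace=True))
            for _ in range(n_samples)]
    return tuple(np.mean([r[i] for r in runs], axis=0) for i in range(3))
```

Averaging the estimates across resamples reduces their variance, which is the mechanism by which the bootstrap is claimed to improve the EM parameter estimates.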
A Similarity Metric for Assessment of Image Fusion Algorithms
In this paper, we present a novel objective non-reference performance assessment algorithm for image fusion. It takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is based on the Universal Image Quality Index and uses the similarity between blocks of pixels in the input images and the fused image as the weighting factors for the metrics. Experimental results confirm that the values of the proposed metrics correlate well with the subjective quality of the fused images, giving a significant improvement over standard measures based on mean squared error and mutual information.
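The Universal Image Quality Index underlying the metric is Q = 4σ_xy x̄ȳ / ((σ_x² + σ_y²)(x̄² + ȳ²)), combining correlation, luminance, and contrast distortion in one score. A minimal numpy sketch of the global index (the paper's block-wise weighting scheme is not reproduced):

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index: 4*cov*mx*my / ((vx+vy)*(mx^2+my^2))."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx * mx + my * my))
```

Identical images score exactly 1, and any luminance or contrast distortion pulls the score below 1 (doubling the intensities of a simple ramp, for example, yields 0.64).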
The Use of Complex Contourlet Transform on Fusion Scheme
Image fusion aims to enhance the perception of a scene by combining important information captured by different sensors. The Dual-Tree Complex Wavelet Transform (DT-CWT) has been thoroughly investigated for image fusion, since it offers approximate shift invariance and directional selectivity, but it can only handle limited directional information. To allow a more flexible directional expansion for images, we propose a novel fusion scheme, referred to as the complex contourlet transform (CCT), which incorporates directional filter banks (DFB) into the DT-CWT. As a result, it deals efficiently with images containing contours and textures while retaining the property of shift invariance. Experimental results demonstrate that the method achieves high-quality fusion performance and can facilitate many image processing applications.
Remote-Sensing Sunspot Images to Obtain the Sunspot Roads
A combination of image fusion and the quad-tree decomposition method is used to detect the sunspot trajectories in each month and to compute the latitudes of these trajectories in each solar hemisphere. Daily solar images taken with the SOHO satellite are fused for each month, and the fused image is decomposed with the quad-tree decomposition method in order to classify the sunspot trajectories and obtain precise information about their latitudes. From the fusion we also deduce some remarkable physical conclusions about the behavior of the sun's magnetic fields. Using quad-tree decomposition, we obtain information about the regions on the sun's surface and the angles in space from which tremendous flares and hot plasma gases permeate interplanetary space and strike satellites and human technical systems. Here, sunspot images from June, July, and August 2001 are used, and a method is given to compute the latitude of the sunspot trajectories in each month from the sunspot images.
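Quad-tree decomposition recursively splits an image into quadrants until each leaf block is homogeneous, which is how inhomogeneous sunspot regions are isolated from the quiet solar disk. A minimal numpy sketch of a variance-based quad-tree on a square image (a generic illustration with an assumed variance threshold, not the authors' SOHO pipeline):

```python
import numpy as np

def quadtree_leaves(img, var_threshold=0.01, min_size=2):
    """Split a square image into quadrants until each leaf is homogeneous."""
    img = np.asarray(img, dtype=float)
    leaves = []

    def split(y, x, size):
        block = img[y:y + size, x:x + size]
        if size <= min_size or block.var() <= var_threshold:
            leaves.append((y, x, size))   # homogeneous (or minimal) region
        else:
            half = size // 2
            for dy, dx in ((0, 0), (0, half), (half, 0), (half, half)):
                split(y + dy, x + dx, half)

    split(0, 0, img.shape[0])
    return leaves
```

A uniform image stays a single leaf, while an image with one bright quadrant splits exactly once into four leaves, localizing the active region.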
Tree Based Decomposition of Sunspot Images
Solar sunspot rotation and latitudinal bands are studied using intelligent computation methods. A combination of the image fusion method and tree decomposition is used to obtain quantitative values for the latitudes of the trajectories on the sun's surface around which sunspots rotate. Daily solar images taken with the SOlar and Heliospheric Observatory (SOHO) satellite are fused for each month separately. The fused image is decomposed with the quad-tree decomposition method in order to obtain precise information about the latitudes of the sunspot trajectories. Such analysis is useful for gathering information about the regions on the sun's surface, and the coordinates in space, that are more exposed to solar geomagnetic storms, tremendous flares, and hot plasma gases permeating interplanetary space, and it helps humans protect their technical systems. Here, sunspot images from September, October, and November 2001 are used to study the magnetic behavior of the sun.