International Science Index
Building and Tree Detection Using Multiscale Matched Filtering
In this study, an automated building and tree detection method is proposed using DSM data and a true orthophoto image. Multiscale matched filtering is applied to the DSM data. First, the watershed transform is applied; then, Otsu's thresholding method is used as an adaptive threshold to segment each watershed region. Detected objects are masked with NDVI to separate buildings from trees. The proposed method is able to detect buildings and trees without requiring any elevation threshold. We tested our method on the ISPRS semantic labeling dataset and obtained promising results.
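As an illustration of two core operations named above, here is a minimal pure-Python sketch of Otsu's threshold selection and a per-pixel NDVI computation. It is a simplified stand-in, not the authors' implementation; the histogram size and the band scaling are assumptions:

```python
def otsu_threshold(values, bins=256):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return lo
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / (hi - lo) * (bins - 1)), bins - 1)] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_var, best_idx = -1.0, 0
    for i, h in enumerate(hist):
        w_bg += h
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += i * h
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_idx = var_between, i
    return lo + best_idx / (bins - 1) * (hi - lo)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel (reflectances)."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0
```

Pixels above the Otsu threshold in a watershed region would be kept as object candidates, and a high NDVI value would route a candidate to the tree class.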
Speech Intelligibility Improvement Using Variable Level Decomposition DWT
Intelligibility is an essential characteristic of a speech signal, as it determines how well the information in the signal can be understood. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves the intelligibility of speech. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The algorithm decides a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the Short-Time Objective Intelligibility (STOI) measure, and the results obtained are compared with universal Discrete Wavelet Transform (DWT) thresholding and Minimum Mean Square Error (MMSE) methods. The experimental results revealed that the proposed scheme outperformed the competing methods.
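The frame-wise wavelet denoising idea can be sketched with a single-level Haar DWT and soft thresholding of the detail coefficients. This is an illustration only: the paper's variance-subtracted, variable-level transform selects a decomposition depth per frame, which is not reproduced here:

```python
def haar_dwt(x):
    """One level of the Haar discrete wavelet transform (even-length input)."""
    s = 2 ** 0.5
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: perfect reconstruction when detail is untouched."""
    s = 2 ** 0.5
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / s)
        x.append((a - d) / s)
    return x

def soft(c, t):
    """Soft thresholding: shrink a coefficient toward zero by t."""
    return max(abs(c) - t, 0.0) * (1 if c >= 0 else -1)

def denoise_frame(frame, t):
    """Threshold only the detail coefficients of one speech frame."""
    a, d = haar_dwt(frame)
    return haar_idwt(a, [soft(c, t) for c in d])
```

With t = 0 the frame is reconstructed exactly, which is a convenient sanity check for the transform pair.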
Automatic Detection of Proliferative Cells in Immunohistochemically Images of Meningioma Using Fuzzy C-Means Clustering and HSV Color Space
Visual search and identification of immunohistochemically stained meningioma tissue is performed manually in pathology laboratories to detect and diagnose cancerous meningioma. This task is tedious and time-consuming. Moreover, because of the complex nature of cells, segmenting cells from their background and analyzing them automatically remains a challenging task. In this paper, we develop and test a computerized scheme that can automatically identify cells in microscopic images of meningioma and classify them into positive (proliferative) and negative (normal) cells. A dataset of 150 images is used to test the scheme. The scheme uses the Fuzzy C-means algorithm as a color clustering method based on the perceptually uniform hue, saturation, value (HSV) color space. Since the cells are distinguishable by the human eye, the accuracy and stability of the algorithm are quantitatively evaluated through application to a wide variety of real images.
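To illustrate the clustering ingredient, the sketch below converts RGB pixels to HSV with Python's standard `colorsys` module and computes Fuzzy C-means memberships for a one-dimensional hue feature. The membership formula is the standard FCM update; the stain colors used are illustrative assumptions, not values from the paper:

```python
import colorsys

def fcm_memberships(x, centers, m=2.0):
    """Fuzzy C-means membership of sample x to each cluster center."""
    d = [abs(x - c) for c in centers]
    if any(di == 0 for di in d):
        return [1.0 if di == 0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** p for j in range(len(centers)))
            for i in range(len(centers))]

# Hue of an assumed brownish (positive-stain) vs bluish (negative-stain) pixel:
hue_pos, _, _ = colorsys.rgb_to_hsv(0.55, 0.35, 0.15)  # brownish
hue_neg, _, _ = colorsys.rgb_to_hsv(0.25, 0.35, 0.65)  # bluish
```

Memberships sum to one per pixel; a pixel is assigned to the cluster with the larger membership, which is how the two-class (positive/negative) decision would be made.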
Analysis of Image Segmentation Techniques for Diagnosis of Dental Caries in X-ray Images
Early diagnosis of dental caries is essential for maintaining dental health. In this paper, a method for the diagnosis of dental caries is proposed using a Laplacian filter, adaptive thresholding, texture analysis, and a Support Vector Machine (SVM) classifier. The proposed method is compared with Otsu thresholding, watershed segmentation, and active contour methods. Adaptive thresholding shows comparatively better performance, with 96.9% accuracy and 96.1% precision. The results are validated using a statistical method, two-way ANOVA, at a significance level of 5%, which shows that the effect of the proposed method on the performance measures is significant. Hence, the proposed technique could be used for the detection of dental caries in an automated computer-assisted diagnosis system.
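The adaptive thresholding step can be illustrated with a local-mean rule: a pixel is kept as foreground when it exceeds the mean of its neighborhood minus a small offset. This is a generic sketch; the window size and offset used in the paper are not specified here:

```python
def adaptive_threshold(img, win=3, c=0.0):
    """Local-mean adaptive thresholding on a 2-D list of gray values.
    A pixel is foreground (1) if it exceeds the mean of its win x win
    neighborhood minus offset c; the window is clipped at the borders."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            row.append(1 if img[y][x] > sum(vals) / len(vals) - c else 0)
        out.append(row)
    return out
```

Unlike a global Otsu threshold, the decision adapts to local brightness, which is why it copes better with unevenly exposed radiographs.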
Automatic Thresholding for Data Gap Detection for a Set of Sensors in Instrumented Buildings
Building systems are highly vulnerable to different kinds of faults and failures. In fact, various faults, failures, and human behaviors can affect building performance. This paper tackles the detection of unreliable sensors in buildings. Different literature surveys on diagnosis techniques for sensor grids in buildings have been published, but all of them treat only bias and outliers. Occurrences of data gaps have also not been given adequate attention in academia. The proposed methodology comprises automatic thresholding for data gap detection for a set of heterogeneous sensors in instrumented buildings. Sensor measurements are assumed to be regular time series; however, in reality, sensor values are not uniformly sampled. So, the issue to solve is: beyond which delay should each sensor be considered faulty? The use of time series is required for the detection of abnormalities in the delays. The efficiency of the method is evaluated on measurements obtained from a real building: an office at the Grenoble Institute of Technology equipped with 30 sensors.
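A minimal sketch of automatic thresholding for gap detection: compute a sensor's inter-sample delays and flag any delay exceeding the mean plus k standard deviations. The mean-plus-k-sigma statistic is an assumption for illustration, not necessarily the paper's rule:

```python
def detect_gaps(timestamps, k=3.0):
    """Flag inter-sample delays larger than mean + k * std as data gaps.
    Returns (threshold, list of (gap_start, gap_end) intervals)."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mu = sum(deltas) / len(deltas)
    var = sum((d - mu) ** 2 for d in deltas) / len(deltas)
    thr = mu + k * var ** 0.5
    gaps = [(timestamps[i], timestamps[i + 1])
            for i, d in enumerate(deltas) if d > thr]
    return thr, gaps
```

Because the threshold is derived from each sensor's own sampling statistics, the same code serves a heterogeneous sensor set without per-sensor tuning.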
Cost Effective Real-Time Image Processing Based Optical Mark Reader
In this modern era of automation, most academic and competitive exams use Multiple Choice Questions (MCQs). The responses to these MCQ-based exams are recorded on Optical Mark Reader (OMR) sheets. Evaluation of OMR sheets requires separate specialized machines for scanning and marking. The sheets used by these machines are special and cost more than a normal sheet. The available process is uneconomical and depends on paper thickness, scanning quality, paper orientation, special hardware, and customized software. This study tackles the problem of evaluating OMR sheets without any special hardware, making the whole process economical. We propose an image processing based algorithm that can read and evaluate scanned OMR sheets with no special hardware required. It eliminates the use of special OMR sheets; responses recorded on a normal sheet are enough for evaluation. The proposed system takes care of color, brightness, rotation, and small imperfections in the OMR sheet images.
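The bubble-reading core of such a system can be sketched by counting dark pixels in each candidate bubble region and comparing the fill ratio against a threshold. The patch extraction, deskewing, and brightness normalization steps are omitted, and the threshold values are illustrative assumptions:

```python
def bubble_filled(gray_patch, dark_thresh=128, fill_ratio=0.4):
    """Decide whether a bubble patch (2-D list of gray values) is marked:
    count dark pixels and compare against a fill-ratio threshold."""
    pixels = [p for row in gray_patch for p in row]
    dark = sum(1 for p in pixels if p < dark_thresh)
    return dark / len(pixels) >= fill_ratio

def read_answer(bubble_patches):
    """Return indices of marked bubbles for one question (A=0, B=1, ...)."""
    return [i for i, patch in enumerate(bubble_patches)
            if bubble_filled(patch)]
```

Returning a list rather than a single index lets the caller flag multiple or missing marks as invalid responses.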
Ice Load Measurements on Known Structures Using Image Processing Methods
This study employs a method based on image analysis and structure information to detect accumulated ice on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions. Image processing methods are used to measure ice loads automatically. Most image processing methods are developed based on captured image analysis. In this method, ice loads on structures are calculated by defining structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars is designed as the known structure of the experimental setup. Asymmetric ice accumulated on the structure in a cold room represents the actual case in the experiments. Camera intrinsic and extrinsic parameters are used to define structure coordinates in the image coordinate system according to the camera location and angle. A thresholding method is applied to the captured images to detect iced structures in a binary image. The ice thickness of each element is calculated by combining information from the binary image and the structure coordinates. Ice thicknesses of structure elements are obtained by averaging ice diameters from different camera views. A comparison between ice load measurements using this method and the actual ice loads shows positive correlations within an acceptable range of error. The method can be applied to complex structures by defining structure and camera coordinates.
Sparsity-Based Unsupervised Unmixing of Hyperspectral Imaging Data Using Basis Pursuit
Mixing in hyperspectral imaging occurs due to the low spatial resolution of the cameras used. The pure materials present in the scene, called “endmembers,” contribute to each pixel's spectrum in different amounts, called “abundances.” Unmixing of the data cube is an important task for identifying the endmembers present in the cube for the analysis of these images. Unsupervised unmixing is performed with no prior information about the given data cube. Sparsity is one of the recent approaches used in source recovery and unmixing techniques. The l1-norm optimization problem “basis pursuit” can be used as a sparsity-based approach to solve this unmixing problem, where the endmembers are assumed to be sparse in an appropriate domain known as a dictionary. This optimization problem is solved using a proximal method, “iterative thresholding.” The l1-norm basis pursuit optimization problem, as a sparsity-based unmixing technique, was used to unmix real and synthetic hyperspectral data cubes.
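The iterative soft-thresholding proximal method for the l1-regularized form of this problem can be sketched as follows on a toy dense system; a practical unmixing code would operate on a spectral dictionary rather than the tiny matrix used here, and the step size and regularization weight are illustrative assumptions:

```python
def soft(v, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def ista(A, y, lam=0.05, step=0.5, iters=200):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.
    A is a list of rows; x plays the role of the sparse abundance vector."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - y, gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i]
             for i in range(len(A))]
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```

Each iteration takes a gradient step on the data-fit term and then applies the soft-thresholding proximal operator, which is what drives most coefficients exactly to zero.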
Assessment of the Number of Damaged Buildings from a Flood Event Using Remote Sensing Technique
Heavy rainfall from 3rd to 22nd January 2017 swamped much of Ranot district in southern Thailand. The resulting flood caused substantial economic and social losses. The major objective of this study is to detect the flooding extent using Sentinel-1A data and identify the number of damaged buildings in the area. The data were collected in two stages: pre-flood and during the flood event. Calibration, speckle filtering, geometric correction, and histogram thresholding were performed on the data, based on intensity spectral values, to classify thematic maps. The maps were used to identify the flooding extent using change detection, along with the buildings digitized and collected on JOSM desktop. The numbers of damaged buildings were counted within the flooding extent with respect to the building data. The total flooded area was observed to be 181.45 sq.km. These areas mostly occurred in Ban Khao, Ranot, Takhria, and Phang Yang sub-districts, in that order. The Ban Khao sub-district had more occurrence than the others because it is located at a lower altitude and closer to the Thale Noi and Thale Luang lakes. The numbers of damaged buildings were highest in Khlong Daen (726 features), Tha Bon (645 features), and Ranot (604 features) sub-districts, in that order. The final flood extent map may be very useful for the planning, prevention, and management of flood-prone areas. The building damage map can be used for quick response, recovery, and mitigation in the affected areas by the organizations concerned.
Wavelet-Based ECG Signal Analysis and Classification
This paper presents the processing and analysis of ECG signals. The study is based on the wavelet transform and uses exclusively the MATLAB environment. The study includes removing baseline wander and further de-noising through the wavelet transform; metrics such as signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and mean squared error (MSE) are used to assess the efficiency of the de-noising techniques. Feature extraction is subsequently performed, whereby signal features such as heart rate and rise and fall levels are extracted, and the QRS complex is detected, which helps in classifying the ECG signal. Classification is the last step in the analysis of the ECG signals, and it is shown that these are successfully classified as normal rhythm or abnormal rhythm. The final results prove the adequacy of using the wavelet transform for the analysis of ECG signals.
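The de-noising quality metrics mentioned above follow from their standard definitions; a minimal sketch (plain lists rather than MATLAB arrays, with `peak` assumed to be the signal's nominal maximum amplitude):

```python
import math

def mse(ref, est):
    """Mean squared error between a clean reference and a denoised estimate."""
    return sum((a - b) ** 2 for a, b in zip(ref, est)) / len(ref)

def snr_db(ref, est):
    """Signal-to-noise ratio in dB of the estimate against the reference."""
    p_sig = sum(a * a for a in ref) / len(ref)
    return 10.0 * math.log10(p_sig / mse(ref, est))

def psnr_db(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB for a given peak amplitude."""
    return 10.0 * math.log10(peak ** 2 / mse(ref, est))
```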
An Image Enhancement Method Based on Curvelet Transform for CBCT-Images
Image denoising plays an extremely important role in digital image processing. Enhancement of clinical images based on the curvelet transform has developed rapidly in recent years. In this paper, we present a contrast enhancement method for cone beam CT (CBCT) images based on the fast discrete curvelet transform (FDCT) computed via the Unequally Spaced Fast Fourier Transform (USFFT). This transform returns a table of curvelet transform coefficients indexed by a scale parameter, an orientation, and a spatial location. Accordingly, the coefficients obtained from FDCT-USFFT can be modified in order to enhance contrast in an image. Our proposed method first applies this two-dimensional mathematical transform, namely the FDCT via the unequally spaced fast Fourier transform, to the input image and then applies thresholding to the curvelet coefficients to enhance the CBCT images. Consequently, applying the unequally spaced fast Fourier transform leads to an accurate reconstruction of the image with high resolution. The experimental results indicate that the performance of the proposed method is superior to that of existing ones in terms of Peak Signal-to-Noise Ratio (PSNR) and Effective Measure of Enhancement (EME).
Empirical Mode Decomposition Based Denoising by Customized Thresholding
This paper presents a denoising method called EMD-Custom, based on Empirical Mode Decomposition (EMD) and a modified customized thresholding function (Custom). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). Then, all the noisy IMFs are thresholded using the presented thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared with EMD-based signal denoising methods using soft and hard thresholding. The results showed the superior performance of the proposed EMD-Custom denoising over the traditional approaches. Performance was evaluated in terms of SNR in dB and Mean Square Error (MSE).
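For reference, the classical hard and soft thresholding rules, plus one illustrative "customized" rule that interpolates between them, can be written as follows. The `custom` form is an assumption for illustration only; the paper's exact customized function is not reproduced here:

```python
def hard(c, t):
    """Hard thresholding: kill coefficients at or below t, keep the rest."""
    return c if abs(c) > t else 0.0

def soft(c, t):
    """Soft thresholding: kill at or below t, shrink survivors by t."""
    return (abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0

def custom(c, t, alpha=0.5):
    """Illustrative customized rule between soft (alpha=1) and hard
    (alpha=0): surviving coefficients are shrunk by alpha * t."""
    return (abs(c) - alpha * t) * (1 if c > 0 else -1) if abs(c) > t else 0.0
```

Hard thresholding preserves large-coefficient amplitudes but can leave spiky artifacts; soft thresholding is smoother but biases amplitudes downward, which is the trade-off a customized rule tries to balance.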
Detecting Tomato Flowers in Greenhouses Using Computer Vision
This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination, complex growth conditions, and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks, such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor on the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on the hue, saturation, and value channels is performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB cameras: an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various times of day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle, time of day, camera, and thresholding type were performed. Precision, recall, and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon yielded the best precision and recall results.
Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall averages for all the images when using these values were 74% and 75%, respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
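The precision, recall, and F1 figures above follow from the standard definitions over true positives, false positives, and false negatives; a minimal sketch:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```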
Automatic Method for Exudates and Hemorrhages Detection from Fundus Retinal Images
Diabetic Retinopathy (DR) is an eye disease that leads to blindness. The earliest signs of DR are the appearance of red and yellow lesions on the retina, called hemorrhages and exudates. Early diagnosis of DR prevents blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using different image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. Since the optic disc is the same color as the exudates, it is first localized and detected. The presented method has been tested on fundus images from the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases using MATLAB code. The results show that this method is capable of detecting hard exudates and highly probable soft exudates. It is also capable of detecting hemorrhages and distinguishing them from blood vessels.
Automatic Detection and Classification of Diabetic Retinopathy Using Retinal Fundus Images
Diabetic Retinopathy (DR) is a severe retinal disease caused by diabetes mellitus. It leads to blindness when it progresses to the proliferative level. Early indications of DR are the appearance of microaneurysms, hemorrhages, and hard exudates. In this paper, an automatic algorithm for the detection of DR is proposed. The algorithm is based on a combination of several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. Also, a Support Vector Machine (SVM) classifier is used to classify retinal images as normal or abnormal cases, the latter including non-proliferative and proliferative DR. The proposed method has been tested on images selected from the Structured Analysis of the Retina (STARE) database using MATLAB code. The method is able to detect DR. The sensitivity, specificity, and accuracy of this approach are 90%, 87.5%, and 91.4%, respectively.
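The reported sensitivity, specificity, and accuracy follow from the standard confusion-matrix definitions; a minimal sketch (the counts in the test are illustrative, not the paper's data):

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, and accuracy from confusion counts."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```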
CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet
Noise is one of the most important challenging factors in medical images. Image denoising refers to the improvement of a digital medical image that has been corrupted by Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise, and AWGN. Computed tomography (CT) images are subject to low quality due to noise. The quality of CT images depends directly on the dose absorbed by patients, in such a way that an increase in absorbed radiation, and consequently in the absorbed dose to patients (ADP), enhances CT image quality. Accordingly, noise reduction techniques aimed at enhancing image quality without exposing patients to excess radiation are one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two different directional two-dimensional (2D) transforms, the curvelet and the contourlet, and the Discrete Wavelet Transform (DWT) thresholding methods BayesShrink and AdaptShrink, compared with each other. We also propose a new threshold in the wavelet domain for not only noise reduction but also edge retention; the proposed method retains the significant modified coefficients, which results in good visual quality. Evaluations were accomplished using two criteria, namely peak signal-to-noise ratio (PSNR) and structural similarity.
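The BayesShrink rule mentioned above selects a per-subband threshold T = sigma_n^2 / sigma_x, with the noise standard deviation estimated from the median absolute deviation of the detail coefficients. A minimal sketch, assuming the usual MAD/0.6745 noise estimator:

```python
def bayes_shrink_threshold(detail_coeffs):
    """BayesShrink threshold T = sigma_n^2 / sigma_x for one wavelet subband.
    Noise std is estimated from the median absolute deviation of the detail
    coefficients; sigma_x is the estimated (noise-free) signal std."""
    n = len(detail_coeffs)
    med = sorted(abs(c) for c in detail_coeffs)[n // 2]
    sigma_n = med / 0.6745                       # robust noise estimate
    var_y = sum(c * c for c in detail_coeffs) / n
    var_x = max(var_y - sigma_n ** 2, 0.0)       # signal variance estimate
    if var_x == 0.0:
        # subband judged to be pure noise: threshold everything away
        return max(abs(c) for c in detail_coeffs)
    return sigma_n ** 2 / var_x ** 0.5
```

When the subband is mostly noise the estimated signal variance collapses to zero and every coefficient is suppressed; otherwise the threshold shrinks as the signal grows stronger, which is what preserves edges.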
A Real-Time Image Change Detection System
Detecting changes in multiple images of the same
scene has recently seen increased interest due to the many
contemporary applications including smart security systems, smart
homes, remote sensing, surveillance, medical diagnosis, weather
forecasting, speed and distance measurement, post-disaster forensics
and much more. These applications differ in the scale, nature, and
speed of change. This paper presents an application of image
processing techniques to implement a real-time change detection
system. Change is identified by comparing the RGB representation of
two consecutive frames captured in real-time. The detection threshold
can be controlled to account for various luminance levels. The
comparison result is passed through a filter before decision making to
reduce false positives, especially under low-luminance conditions. The system is implemented with a MATLAB graphical user interface with several controls to manage its operation and performance.
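The frame-comparison core of such a system can be sketched as below: a pixel is marked as changed when any RGB channel differs by more than a threshold, and a simple count filter suppresses isolated false positives. The channel-wise rule and the count filter are illustrative assumptions, not the paper's exact filter:

```python
def changed_mask(frame_a, frame_b, thresh=30, min_count=2):
    """Compare two frames given as 2-D grids of (R, G, B) tuples.
    A pixel 'changes' when any channel differs by more than thresh;
    the alarm fires only if at least min_count pixels changed."""
    mask = [[1 if any(abs(a - b) > thresh for a, b in zip(pa, pb)) else 0
             for pa, pb in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]
    changed = sum(sum(row) for row in mask)
    return mask, changed >= min_count
```

Raising `thresh` makes the detector tolerant of sensor noise at low luminance, at the cost of missing subtle changes, which is the trade-off the controllable threshold exposes.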
Liver Lesion Extraction with Fuzzy Thresholding in Contrast Enhanced Ultrasound Images
In this paper, we present a new segmentation approach for focal liver lesions in contrast-enhanced ultrasound imaging. This
approach, based on a two-cluster Fuzzy C-Means methodology,
considers type-II fuzzy sets to handle uncertainty due to the image
modality (presence of speckle noise, low contrast, etc.), and to
calculate the optimum inter-cluster threshold. Fine boundaries are
detected by a local recursive merging of ambiguous pixels. The
method has been tested on a representative database. Compared to
both Otsu and type-I Fuzzy C-Means techniques, the proposed
method significantly reduces the segmentation errors.
Discrete and Stationary Adaptive Sub-Band Threshold Method for Improving Image Resolution
Image processing is a form of signal processing in which the input is an image and the output is either an image or a set of image parameters. Resolution is frequently cited as an important aspect of an image. In image resolution enhancement, images are processed in order to obtain an enhanced resolution; the goal is to generate a high-resolution image, with a high PSNR value, from a low-resolution input. Downsampling in each of the DWT subbands causes information loss in the respective subbands; the Stationary Wavelet Transform (SWT) is used for edge detection and to minimize this loss. The Inverse Discrete Wavelet Transform (IDWT) then converts the object downsampled using the DWT into a high-resolution image. A noisy input would generate an output with a low PSNR value, so a noise-robust resolution enhancement technique based on adaptive sub-band thresholding is used. The combined image denoising and resolution enhancement techniques generate images with high PSNR values; the proposed method improves image resolution and reaches the optimized threshold.
Scintigraphic Image Coding of Region of Interest Based On SPIHT Algorithm Using Global Thresholding and Huffman Coding
Medical imaging produces pictures of the human body in digital form. Since these imaging techniques produce prohibitive amounts of data, compression is necessary for storage and communication purposes. Many current compression schemes provide a very high compression rate but with considerable loss of quality. On the other hand, in some areas of medicine, it may be sufficient to maintain high image quality only in the region of interest (ROI). This paper discusses a contribution to lossless compression in the region of interest of scintigraphic images, based on the SPIHT algorithm and global thresholding using Huffman coding.
Nature Inspired Metaheuristic Algorithms for Multilevel Thresholding Image Segmentation - A Survey
Segmentation is one of the essential tasks in image processing, and thresholding is one of the simplest techniques for performing it. Multilevel thresholding is a simple and effective technique. The primary objective of bi-level or multilevel thresholding for image segmentation is to determine the best threshold values. Various techniques have been proposed to achieve multilevel thresholding. Here, a study of some nature inspired metaheuristic algorithms for multilevel thresholding for image segmentation is conducted, covering the Particle Swarm Optimization (PSO) algorithm, Artificial Bee Colony optimization (ABC), the Ant Colony Optimization (ACO) algorithm, and the Cuckoo Search (CS) algorithm.
Level Set and Morphological Operation Techniques in Application of Dental Image Segmentation
Medical image analysis is one of the major applications of computer image processing. Analyzing medical images involves several processes, of which segmentation is one of the most challenging and important steps. In this paper, a segmentation method is proposed for dental radiograph images. A thresholding method is applied to simplify the images, and morphological binary opening is performed to eliminate unnecessary regions in the images. Furthermore, horizontal and vertical integral projection techniques are used to extract each individual tooth from the radiograph images. Segmentation is done by applying the level set method to each extracted image. The experimental results, with 90% accuracy, demonstrate that the proposed method achieves high accuracy and promising results.
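The integral projection step can be sketched as row and column sums of the binary image; minima of the vertical projection suggest separation lines between adjacent teeth. This is a generic illustration, not the paper's exact procedure:

```python
def integral_projections(binary):
    """Horizontal and vertical integral projections of a binary image:
    row sums and column sums."""
    h_proj = [sum(row) for row in binary]
    v_proj = [sum(col) for col in zip(*binary)]
    return h_proj, v_proj

def split_columns(v_proj, min_gap=0):
    """Column indices where the vertical projection drops to the gap level,
    i.e., candidate separation lines between adjacent teeth."""
    return [i for i, v in enumerate(v_proj) if v <= min_gap]
```

In practice the horizontal projection separates upper and lower jaws first, and the vertical projection then isolates individual teeth within each jaw.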
A Trends Analysis of Image Processing in Unmanned Aerial Vehicle
This paper describes an analysis of domestic and international trends in image processing for data from UAVs (unmanned aerial vehicles) and also explains UAVs and quadcopters. Overseas examples of image processing using UAVs include image processing for counting the total number of vehicles, edge/target detection, detection and avoidance algorithms, image processing using SIFT (scale-invariant feature transform) matching, and the application of median filtering and thresholding. In Korea, many studies are underway, including the visualization of new urban buildings.
Enhanced Approaches to Rectify the Noise, Illumination and Shadow Artifacts
Enhancing the quality of two-dimensional signals is one of the most important factors in the fields of video surveillance and computer vision. In real-life video surveillance, false detections usually occur due to the presence of random noise, illumination variations, and shadow artifacts. Detection methods based on background subtraction face several problems in accurately detecting objects in realistic environments. In this paper, we propose a noise removal algorithm using a neighborhood comparison method with thresholding. Illumination variations in the detected foreground objects are corrected using an amalgamation of techniques: homomorphic decomposition, curvelet transformation, and a gamma adjustment operator. Shadows are removed using a chromaticity estimator with a local relation estimator. The results are compared with those of existing methods and demonstrate high robustness in video surveillance.
Optical Flow Based Moving Object Detection and Tracking for Traffic Surveillance
Automated motion detection and tracking is a challenging task in traffic surveillance. In this paper, a system is developed to gather useful information from stationary cameras for detecting moving objects in digital videos. The motion detection and tracking system is developed based on optical flow estimation, together with the application and combination of various relevant computer vision and image processing techniques to enhance the process. To remove noise, a median filter is used, and unwanted objects are removed by applying thresholding algorithms in morphological operations. Also, object type restrictions are set using blob analysis. The results show that the proposed system successfully detects and tracks moving objects in urban videos.
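The median filtering step can be sketched in one dimension (applied per row or per pixel trace); clipping the window at the borders is an implementation choice assumed here:

```python
def median_filter_1d(x, win=3):
    """Sliding-window median filter; the window is clipped at the borders."""
    r = win // 2
    out = []
    for i in range(len(x)):
        window = sorted(x[max(0, i - r):i + r + 1])
        out.append(window[len(window) // 2])
    return out
```

The median replaces impulsive outliers (salt-and-pepper noise) without blurring step edges, which is why it precedes the thresholding and morphological stages.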
Texture Feature-Based Language Identification Using Wavelet-Domain BDIP and BVLC Features and FFT Feature
In this paper, we propose a texture feature-based
language identification using wavelet-domain BDIP (block difference
of inverse probabilities) and BVLC (block variance of local
correlation coefficients) features and FFT (fast Fourier transform)
feature. In the proposed method, wavelet subbands are first obtained by wavelet transform from a test image and denoised by Donoho's soft thresholding. The BDIP and BVLC operators are next applied to the wavelet subbands. FFT blocks are also obtained by applying the two-dimensional (2D) FFT to the blocks into which the test image is partitioned. Some significant FFT coefficients in each block are selected, and a magnitude operator is applied to them. Moments for each
subband of BDIP and BVLC and for each magnitude of significant
FFT coefficients are then computed and fused into a feature vector. In
classification, a stabilized Bayesian classifier, which adopts variance
thresholding, searches the training feature vector most similar to the
test feature vector. Experimental results show that the proposed
method with the three operations yields excellent language
identification even with rather low feature dimension.
Bridging Quantitative and Qualitative of Glaucoma Detection
Glaucoma diagnosis involves extracting three features of the fundus image: the optic cup, the optic disc, and the vasculature. Present manual diagnosis is expensive, tedious, and time-consuming. A number of studies have been conducted to automate this process. However, the variability between the diagnostic capability of an automated system and that of an ophthalmologist has yet to be established. This paper discusses the efficiency of, and the variability between, ophthalmologist opinion and a digital technique, thresholding. The efficiency and variability measures are based on image quality grading: poor, satisfactory, or good. The images are separated into four channels: gray, red, green, and blue. A scientific investigation was conducted with three ophthalmologists, who graded the images based on image quality. The images are thresholded using multi-thresholding and graded in the same way as by the ophthalmologists. A comparison of the grades from the ophthalmologists and from thresholding is made. The results show a small variability between the results of the ophthalmologists and the digital threshold.
Fragile Watermarking for Color Images Using Thresholding Technique
In this paper, we propose a block-wise watermarking scheme for color image authentication to resist malicious tampering of digital media. A thresholding technique is incorporated into the scheme such that the tampered region of the color image can be recovered with high quality while the proofing result is obtained. The watermark for each block consists of its dual authentication data and the corresponding feature information. The feature information for recovery is computed by the thresholding technique. In the proofing process, we propose a dual-option parity check method to prove the validity of image blocks. In the recovery process, the feature information of each block embedded into the color image is rebuilt for high-quality recovery. The simulation results show that the proposed watermarking scheme can effectively detect the tampered region with a high detection rate and can recover the tampered region with high quality.
Coding of DWT Coefficients using Run-length Coding and Huffman Coding for the Purpose of Color Image Compression
In the present paper, we propose a simple and effective method to compress an image. We achieved a reduction in image size without much compromise to image quality. The Haar wavelet transform is used to transform the original image; after quantization and thresholding of the DWT coefficients, run-length coding and Huffman coding schemes are used to encode the image. The DWT is the basis of the quite popular JPEG 2000 technique.
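The two entropy-coding stages can be sketched in pure Python: run-length coding collapses runs of equal quantized coefficients, and Huffman coding assigns shorter bitstrings to more frequent symbols. This is a generic sketch of the two schemes, not the authors' coder:

```python
import heapq
from collections import Counter

def run_length_encode(symbols):
    """Collapse runs of equal symbols into (symbol, count) pairs."""
    out = []
    for s in symbols:
        if out and out[-1][0] == s:
            out[-1] = (s, out[-1][1] + 1)
        else:
            out.append((s, 1))
    return out

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    # heap entries: [weight, tie-breaker, partial codebook]
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, [w1 + w2, i, merged])
        i += 1
    return heap[0][2]
```

Thresholding zeroes most small DWT coefficients, so run-length coding of the resulting zero runs followed by Huffman coding of the run symbols is what delivers the size reduction.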
An Amalgam Approach for DICOM Image Classification and Recognition
This paper describes the process of recognition and classification of brain images as normal or abnormal based on PSO-SVM. Image classification is becoming increasingly important for the medical diagnosis process. In the medical area, especially for diagnosis, the patient's abnormality is classified, which plays a great role in helping doctors diagnose the patient according to the severity of the disease. In the case of DICOM images, optimal recognition and early detection of diseases are very difficult. Our work focuses on the recognition and classification of DICOM images based on a collective approach of digital image processing. For optimal recognition and classification, Particle Swarm Optimization (PSO), a Genetic Algorithm (GA), and a Support Vector Machine (SVM) are used. The collective PSO-SVM approach gives high approximation capability and much faster convergence.