Here you can find the abstracts of conference papers (in reverse chronological order). Please visit the Selected Publications page for recent journal papers and preprints, and All Publications for a detailed list.
Title: Building Novel Approach for Context-based Image Retrieval in the Area of Healthcare
ADSSS-22. (Best paper award). Full paper@AIP Conference Proceedings.
In this study, we propose the use of a hybrid feature extraction method for content-based image retrieval relevant to healthcare. Three alternative scenarios are simulated in the investigation. As a starting point, we combine DWT, PCA, and GLCM in a particular case study. The second use case incorporates OKE, DWT, PCA, and GLCM; in the OKE hybrid method, the Otsu binarization technique converts a grayscale image into a binary image, and the medical image is converted to grayscale before clustering. The third scenario considers Canny edge detection, DWT, PCA, and GLCM: before clustering, the image's edges are found using edge detection. K-means is used for grouping, and the Euclidean distance metric is used to calculate distances. Finally, a hybrid feature extraction method combining DWT, PCA, and GLCM is proposed. A detailed comparative analysis of the many aspects of the proposed work is presented, with the suggested approaches simulated in MATLAB 2022a.
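The Otsu step mentioned above picks the grey level that maximizes the between-class variance of foreground and background. A minimal pure-Python sketch of that idea (illustrative only, not the MATLAB implementation used in the paper):

```python
def otsu_threshold(pixels):
    """Return the threshold that maximizes between-class variance (Otsu)."""
    levels = sorted(set(pixels))
    n = len(pixels)
    best_t, best_var = levels[0], -1.0
    for t in levels:
        bg = [p for p in pixels if p <= t]   # background class
        fg = [p for p in pixels if p > t]    # foreground class
        if not fg or not bg:
            continue
        w_bg, w_fg = len(bg) / n, len(fg) / n
        mu_bg, mu_fg = sum(bg) / len(bg), sum(fg) / len(fg)
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels, t):
    return [1 if p > t else 0 for p in pixels]

# Two clearly separated intensity groups: the threshold lands between them.
image = [10, 12, 11, 200, 205, 198, 13, 202]
t = otsu_threshold(image)
mask = binarize(image, t)
```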
Title: Simulation of Content Based on Image retrieval for Financial Institutions
ADSSS-22. Full paper@AIP Conference Proceedings.
CCTV systems are commonly employed by financial institutions and banks to ensure building security, manage currency, and keep an eye on customers. CBIR can play a significant role in this situation. Content-based image retrieval, also known as QBIC and CBVIR, uses computer vision techniques to find digital images in large databases based on their visual content: if you are looking for a specific image rather than its metadata, a "content-based" search examines the image itself. In this research, different face images have been considered for the security of financial institutions. Content-based image retrieval is applied to extract image features such as contrast, energy, and entropy. We consider Canny edge detection, DWT, PCA, and GLCM to obtain image edges before clustering. For the real-world implementation, images are captured from a camera and stored in a dataset; the data considered in the present work belongs to a real-world dataset. Several features of the resulting images, such as Contrast, Energy, Correlation, RMS, Mean, Standard Deviation, Homogeneity, Entropy, Variance, Smoothness, Kurtosis, Skewness, and IDM, have been compared and illustrated.
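Several of the listed features (contrast, energy, entropy, homogeneity) are read off a grey-level co-occurrence matrix. A small pure-Python sketch of how such GLCM features can be computed; the pixel offset and normalization used here are illustrative assumptions, not the paper's exact settings:

```python
import math

def glcm(image, levels):
    """Normalized grey-level co-occurrence matrix for horizontally
    adjacent pixel pairs (offset (0, 1))."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in counts]

def glcm_features(p):
    """Texture features commonly derived from a GLCM."""
    n = len(p)
    pairs = [(i, j) for i in range(n) for j in range(n)]
    return {
        "contrast": sum(p[i][j] * (i - j) ** 2 for i, j in pairs),
        "energy": sum(p[i][j] ** 2 for i, j in pairs),
        "entropy": -sum(p[i][j] * math.log2(p[i][j]) for i, j in pairs if p[i][j] > 0),
        "homogeneity": sum(p[i][j] / (1 + abs(i - j)) for i, j in pairs),
    }

# A tiny 4-level image patch.
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]
feats = glcm_features(glcm(patch, 4))
```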
Title: Ensemble Learning for Data-driven Diagnosis of Polycystic Ovary Syndrome
ISDA-21. Full paper@Springer LNNS.
The emphasis of this article is on the data-driven diagnosis of polycystic ovary syndrome (PCOS) in women. Data from the Kaggle repository is used to train ensemble machine learning algorithms. There are 177 women with PCOS in this dataset, which includes 43 different characteristics. To begin, univariate feature selection and feature elimination methods are used to identify the characteristics most predictive of PCOS. The characteristics are ranked, and the ratio of follicle-stimulating hormone (FSH) to luteinizing hormone (LH) is determined to be the most significant one. Cross-validation is applied during feature selection and feature elimination. Hard voting, soft voting, and CatBoost are among the classifiers used on the dataset. According to the findings, the top 13 most significant risk factors accurately predict the onset of PCOS. With 5-, 10-, and 20-fold cross-validation on the 13 most critical characteristics, results show that soft voting has the highest accuracy of 91.12%. As a result, ensemble learning can be used to accurately identify PCOS patients.
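Hard versus soft voting can be sketched in a few lines; the base classifiers, labels, and probabilities below are hypothetical, not taken from the PCOS dataset:

```python
def hard_vote(labels):
    """Majority vote over predicted class labels (sorted keys give a
    deterministic tie-break)."""
    votes = {}
    for label in labels:
        votes[label] = votes.get(label, 0) + 1
    return max(sorted(votes), key=votes.get)

def soft_vote(probabilities):
    """Average the per-class probability vectors, then pick the argmax class."""
    n, k = len(probabilities), len(probabilities[0])
    mean = [sum(p[i] for p in probabilities) / n for i in range(k)]
    return mean.index(max(mean))

# Three hypothetical base classifiers scoring one sample
# (class 0 = no PCOS, class 1 = PCOS); values are illustrative only.
labels = [1, 0, 1]
probs = [[0.40, 0.60], [0.55, 0.45], [0.20, 0.80]]
```

Soft voting keeps the confidence information that hard voting discards, which is one reason it can edge out hard voting in accuracy.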
Title: SIMRES-TV: Noise and Residual Similarity for Parameter Estimation in Total Variation
PSBB-21. Full paper@Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci.
Title: Multiregion Multiscale Image Segmentation with Anisotropic Diffusion
ICPR-21. Full paper@Springer LNCS.
We present a multiregion image segmentation approach which utilizes multiscale anisotropic diffusion based scale spaces. By combining powerful edge preserving anisotropic diffusion smoothing with isolevel set linking and merging, we obtain coherent segments which are tracked across multiple scales. A hierarchical tree representation of the given input image with progressively simplified regions is used with intra-scale splitting and inter-scale merging for obtaining multiregion segmentations. Experimental results on natural and medical images indicate that the multiregion, multiscale image segmentation (MMIS) approach obtains coherent segmentation results.
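The edge-preserving behaviour of anisotropic diffusion can be illustrated with a one-dimensional Perona-Malik sketch; the paper works with 2D multiscale diffusion, so this minimal version only shows how the conductance suppresses smoothing across large jumps:

```python
def perona_malik_1d(signal, kappa=10.0, step=0.2, iters=20):
    """1D Perona-Malik diffusion: smooths small fluctuations while the
    edge-stopping conductance g = 1/(1 + (grad/kappa)^2) preserves jumps."""
    u = list(signal)
    for _ in range(iters):
        new = u[:]
        for i in range(1, len(u) - 1):
            grad_r = u[i + 1] - u[i]
            grad_l = u[i - 1] - u[i]
            g_r = 1.0 / (1.0 + (grad_r / kappa) ** 2)
            g_l = 1.0 / (1.0 + (grad_l / kappa) ** 2)
            new[i] = u[i] + step * (g_r * grad_r + g_l * grad_l)
        u = new
    return u

# A noisy step edge: the wiggles are smoothed, the 0 -> 100 jump survives.
noisy = [0, 2, -1, 1, 100, 101, 99, 100]
smooth = perona_malik_1d(noisy)
```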
Title: Single Image Dehazing with Optimal Color Channels and Nonlinear Transformation
ICCE-21. Full paper@IEEE.
Image dehazing is an important problem and it is useful as a preprocessing step in various automatic image analysis systems. The goal of image dehazing is the quality improvement of digital images by removing haze across the scene. In the present work, we consider an automatic image dehazing approach that is based on optimal color channels and nonlinear transformations. The proposed approach can remove haze fast and effectively while preserving features. In our experiments, we compare the results with related image dehazing methods from the literature. Visual as well as quantitative assessments are done to show the improvements obtained by the dehazing model across different natural images. The experimental results indicate that the dehazing approach proposed here performs better than other dehazing models, with overall better visual quality and higher blind image quality metric values.
Title: On Numerical Implementation of the Laplace Equation-Based Image Inpainting
ICCRDA-20. Full paper@IOP Conf. Series.
In the current study, we focus on the development of a numerical algorithm for the Laplace equation-based image inpainting problem. A software program to implement the proposed algorithm is also developed. In experiments, the proposed method recovered corrupted images effectively. It can also remove defects as well as unnecessary objects successfully. The program provides an effective, convenient, and easy solution to perform the image inpainting in practical tasks. It also provides a useful feature to assess image quality after processing with both full reference and blind image quality assessment.
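The core of Laplace equation-based inpainting is solving for the masked pixels so that each equals the average of its neighbours, with the known pixels acting as boundary data. A minimal iterative sketch; the paper's numerical algorithm may differ in discretization and stopping criteria:

```python
def laplace_inpaint(image, mask, iters=500):
    """Fill masked pixels by repeatedly averaging their 4-neighbours,
    i.e. solving the discrete Laplace equation with known pixels fixed."""
    h, w = len(image), len(image[0])
    u = [row[:] for row in image]
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                if mask[i][j]:
                    nbrs = [u[i + di][j + dj]
                            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= i + di < h and 0 <= j + dj < w]
                    u[i][j] = sum(nbrs) / len(nbrs)
    return u

# Known 0-valued left column, known 100-valued right column; the missing
# middle column should interpolate harmonically to 50.
img = [[0, 0, 100], [0, 0, 100], [0, 0, 100]]
msk = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
restored = laplace_inpaint(img, msk)
```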
Title: Chest X-ray Image Denoising using Nesterov Optimization Method with Total Variation Regularization
CoCoNet-19. Full paper@Procedia Computer Science.
We propose a chest X-ray image denoising method based on total variation regularization, implemented with the Nesterov optimization method. The denoising problem is formulated as a second-order cone programming problem and then transformed into a saddle point problem in min-max form. The chest X-ray images are also processed with the Anscombe transform to make them appropriate for the formulated denoising problem. In the experiments, we test on chest X-ray images from the Radiopaedia dataset. Denoising results are evaluated using the peak signal-to-noise ratio and structural similarity metrics. Based on these image quality assessment metrics, we compared the image quality after denoising with that of other similar denoising methods. The results confirm that the proposed method outperforms the compared denoising methods.
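The Anscombe transform used here stabilizes the variance of Poisson noise so that Gaussian-oriented denoisers apply. A short sketch with its simple algebraic inverse (an unbiased inverse would add further correction terms):

```python
import math

def anscombe(x):
    """Anscombe transform 2*sqrt(x + 3/8): maps Poisson-distributed counts
    to approximately Gaussian data with unit variance."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the Anscombe transform."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Round trip on a single count value.
y = anscombe(7.0)
```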
Title: Adaptive Switching Weight Mean Filter for Salt and Pepper Image Denoising
CoCoNet-19. Full paper@Procedia Computer Science.
We propose the Adaptive Switching Weight Mean Filter (ASWMF) to remove salt-and-pepper noise. Instead of using the median or mean, ASWMF assigns the value of a switching weight mean (SWM) to the grey value of the centre pixel of an adaptive window. The SWM is evaluated by eliminating all noisy pixels from the adaptive window and assigning a low weight to pixels on the diagonals and a high weight to pixels off the diagonals. ASWMF can remove noise at various noise levels effectively: it not only succeeds at low-density denoising, but also removes medium-density and high-density noise impressively. In experiments, we compare denoising results with other similar denoising methods. According to visual inspection as well as error metrics such as the peak signal-to-noise ratio and the structural similarity, ASWMF outperforms the other methods.
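The switching weight mean idea, excluding extreme-valued (noisy) pixels and down-weighting diagonal positions, can be sketched as follows; the specific weight values are illustrative assumptions, not the paper's exact choice:

```python
def switching_weight_mean(window, low=1.0, high=2.0):
    """Weighted mean of non-noisy pixels in a square window: pixels at the
    extremes 0/255 are treated as salt-and-pepper noise and excluded;
    diagonal positions get a low weight, off-diagonal positions a high one."""
    n = len(window)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            p = window[i][j]
            if p in (0, 255):                       # treated as noise, skipped
                continue
            w = low if (i == j or i + j == n - 1) else high
            num += w * p
            den += w
    return num / den if den else None

# A 3x3 window with four corrupted pixels (0 or 255).
win = [[255, 90, 0],
       [88, 0, 92],
       [91, 255, 89]]
value = switching_weight_mean(win)
```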
Title: Image Denoising with Overlapping Group Sparsity and Second Order Total Variation Regularization
NICS-19. Full paper@Proc. IEEE.
We propose an image denoising method that combines overlapping group sparsity and second-order total variation regularization, named OGS-SOTV. The method utilizes the noise removal performance of overlapping group sparsity and the artifact elimination performance of second-order total variation. A regularization parameter estimation is also proposed so the method can run automatically. In experiments, we compare the denoising results of OGS-SOTV with those of other similar methods. The results confirm that OGS-SOTV works effectively and outperforms the other similar denoising methods.
Title: An Image Inpainting Method based on Adaptive Fuzzy Switching Median
NICS-19. Full paper@Proc. IEEE.
Image inpainting is an important image processing problem with many applications. The goal of image inpainting is to restore or fill the corrupted or missing regions of an image. In this paper, we propose an image inpainting method based on the adaptive fuzzy switching median, which provides an accurate estimation of the values of corrupted pixels even when many pixels of the image are corrupted. In the experiments, we implement the proposed method on an open dataset of real natural images. We use standard image quality assessment metrics, such as the peak signal-to-noise ratio and the structural similarity, to compare the inpainting results of the proposed method with other similar inpainting methods and demonstrate its effectiveness. The proposed inpainting method is also extended to process color images.
Title: Multi-class Segmentation of Lung Immunofluorescence Confocal Images using Deep CNN
BIBM-19. Full paper@Proc. IEEE.
Deep learning models are now widely applied to various biomedical image analysis tasks such as image segmentation and classification. However, automation of biomedical image analysis with deep learning is challenging, since it requires highly specialized knowledge and large amounts of training data. In this work, we detail automatic multi-class segmentations using deep learning models for lung immunofluorescent (IF) confocal images, along with synthetic image generation of lung images for training these models. Analysis of lung imaging data is important for understanding lung development at the molecular level, and cross-sectional IF images are useful in identifying various structures of the lung. We tested multi-class segmentation using deep learning convolutional neural network (CNN) models with an overlapping cropping method as preprocessing to enlarge the dataset. Further, we generated synthetic images using a deep convolutional generative adversarial network (DCGAN) and used them with the learned segmentation networks to create segmentation masks. In terms of deep learning segmentation models, we adapted the state-of-the-art U-Net, SegNet, and DeepLabv3+ based models for multi-class segmentation of lung IF images. Our experimental results on these challenging lung IF images show that the highest Dice scores, 98.7% in training and 87.0% in testing, are obtained by an adapted multi-class U-Net method. Further, our synthetic image generation shows promise for future training paradigms in improving the segmentation of various lung structures in IF confocal images.
Title: Adaptive Texts Deconvolution Method for Real Natural Images
APCC-19. Full paper@Proc. IEEE.
Understanding real scenes is an important task in augmented reality (AR). Identifying and comprehending text from real scene images is useful in implementing robust AR devices. Therefore, improving the quality of text images for better readability is very important, and text image deconvolution is useful to increase the accuracy of AR pattern recognition algorithms. In this work, we propose an estimation method for the filtering operator within a total variation deconvolution model, applied to the text image deconvolution problem for natural images. In the experiments, we use the NIQE score, a blind quality assessment metric, to assess the deconvolution quality. We further compare the proposed method with other image deconvolution models such as the blind deconvolution, Lucy-Richardson, and Wiener methods. The experimental results indicate that our deconvolution method works effectively for text enhancement across different scenes, with high quality results.
Title: Single Image Dehazing Based on Adaptive Histogram Equalization and Linearization of Gamma Correction
APCC-19. Full paper@Proc. IEEE.
The visibility of outdoor images is usually limited due to haze, dust, smoke, and other particles in the air. Limited visibility can cause many difficulties for transport, rescue, oceanography, and other activities; hence, image dehazing is very necessary. In this paper, we propose a single image dehazing method based on a combination of adaptive histogram equalization, the HSV color model, and linearization of gamma correction. In the experiments, we test the proposed method on hazy images from the TAU dataset. To assess dehazing quality, we use the NIQE metric and compare against other dehazing methods. The results confirm that the proposed method dehazes effectively and can compete with other state-of-the-art dehazing methods.
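Gamma correction, one ingredient of the proposed combination, is a simple pointwise power law; a sketch on a single channel (the paper's linearization of gamma correction is more elaborate than this plain form):

```python
def gamma_correct(channel, gamma):
    """Pointwise gamma correction for 8-bit values: out = 255 * (v/255)**gamma.
    gamma < 1 brightens dark, haze-obscured regions; gamma > 1 darkens."""
    return [round(255.0 * (v / 255.0) ** gamma) for v in channel]

# Brightening a dark channel with gamma = 0.5.
dark = [0, 16, 64, 128, 255]
brightened = gamma_correct(dark, 0.5)
```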
Title: A Fast Denoising Algorithm for X-ray Images with Variance Stabilizing Transform
KSE-19. Full paper@Proc. IEEE.
We propose a fast denoising algorithm for X-ray images with variance stabilizing transformations, which are used to transform Poisson noisy images into Gaussian noisy images. This lets us utilize the advantages of a fast denoising algorithm based on the alternating direction method of multipliers (ADMM). In experiments, we evaluate denoising quality with the peak signal-to-noise ratio and structural similarity metrics. Comparative results show that our method outperforms other similar denoising methods.
Title: Persian Classical Music Instrument Recognition (PCMIR) Using a Novel Persian Music Database
ICCKE-19. Full paper@Proc. IEEE.
Audio signal classification is an important field in pattern recognition and signal processing. Classification of musical instruments is a branch of audio signal classification and poses unique challenges due to the diversity of available instruments. Automatic expert systems could assist or be used as a replacement for humans. The aim of this work is to classify Persian musical instruments using a combination of features extracted from the audio signal. We believe such an automatic system to recognize Persian musical instruments could be very useful in an educational context as well as in art universities. Features like Mel-Frequency Cepstrum Coefficients (MFCCs), Spectral Roll-off, Spectral Centroid, Zero Crossing Rate, and Entropy Energy are employed and work well for this purpose. These features are extracted from audio signals in our novel database, which contains audio samples for 7 Persian musical instrument classes: Ney, Tar, Santur, Kamancheh, Tonbak, Ud, and Setar. For feature selection, a fuzzy entropy measure is employed, and classification is performed by a multi-layer neural network (MLNN). It should be mentioned that this is one of the first studies on Persian musical instrument classification. The validation confusion matrix reports true positive and false negative rates along with the numbers of true and false observations. The acquired results are promising and satisfactory.
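Of the listed features, the zero crossing rate is the simplest to sketch: the fraction of consecutive sample pairs that change sign:

```python
def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

# A toy frame that alternates sign every two samples.
frame = [1, 1, -1, -1, 1, 1, -1, -1, 1]
zcr = zero_crossing_rate(frame)
```

Higher-pitched or noisier sources cross zero more often, which is why the ZCR helps separate, say, a percussive Tonbak stroke from a sustained Ney tone.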
Title: Adaptive Thresholding Skin Lesion Segmentation with Gabor Filters and Principal Component Analysis
RICE-19. Full paper@Springer AISC.
In this article, we study and propose an adaptive thresholding segmentation method for dermoscopic images with Gabor filters and Principal Component Analysis. Gabor filters are used for extracting statistical features of the image, and Principal Component Analysis is applied to transform the features to various bases. In experiments, we run tests on the ISIC dataset. Segmentation results are assessed with the Dice and Jaccard similarities. We also compare the proposed method with other similar methods to demonstrate its effectiveness.
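The Dice and Jaccard similarities used for assessment compare the overlap of two binary masks; a minimal sketch with masks given as sets of foreground pixel coordinates:

```python
def dice(a, b):
    """Dice similarity of two binary masks (sets of foreground pixels)."""
    inter = len(a & b)
    return 2.0 * inter / (len(a) + len(b)) if (a or b) else 1.0

def jaccard(a, b):
    """Jaccard similarity (intersection over union) of two binary masks."""
    union = len(a | b)
    return len(a & b) / union if union else 1.0

# Predicted vs ground-truth lesion masks over a toy 2x2 grid.
pred = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 1), (1, 0), (1, 1)}
```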
Title: New Features for Eye-tracking Systems: Preliminary Results
DOI Bookmark: 10.1109/IACS.2019.8809129
Due to their large number of applications, eye-tracking systems have gained attention recently. In this work, we propose 4 new features to support the feature most used by these systems, the location (x, y). These features are based on the white areas in the four corners of the sclera; the ratio of the white area (after segmentation) to the corner area is used as the feature from each corner. To evaluate the new features, we designed a simple eye-tracking system using a simple webcam, where the users' faces and eyes are detected, which allows extracting the traditional and the new features. The system was evaluated on 10 subjects, who looked at 5 objects on the screen. The experimental results using several machine learning algorithms show that the new features are user dependent and therefore cannot be used (in their current form) in a multiuser eye-tracking system. However, the new features might support the traditional features in a better single-user eye-tracking system, where the accuracy results were in the range of 0.90 to 0.98.
Title: Image Inpainting Method based on Mixed Median
Image inpainting is an interesting image processing problem with a wide range of applications in science, engineering, medicine, and other fields. Inpainting is the process of filling the corrupted or missing parts of images, so it can be used in various cases, such as restoration of corrupted images, removal of unnecessary objects to improve the quality of postprocessing tasks, image zooming, etc. In this paper, we propose an image inpainting method based on a mixed median. The mixed median proposed here is a combination of two kinds of median of pixel intensities: the median of uncorrupted pixels in a clipping window and the median of the most repetitive pixel values. In the experiments, we run tests on an open dataset with given masks that generate the corrupted regions, which makes it possible to assess the inpainting quality with the peak signal-to-noise ratio and structural similarity metrics. We also compare the proposed method with the harmonic inpainting method to demonstrate its effectiveness.
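A sketch of the two medians being combined; how exactly they are mixed is the paper's contribution, so averaging them below is only an illustrative assumption:

```python
def median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0

def mixed_median(window, corrupted):
    """Combine the median of uncorrupted pixels in the window with the
    median of the most repetitive (most frequent) uncorrupted values.
    Averaging the two medians is an illustrative mixing rule only."""
    good = [p for p in window if p not in corrupted]
    m_uncorrupted = median(good)
    top = max(good.count(v) for v in set(good))
    m_repetitive = median([v for v in set(good) if good.count(v) == top])
    return (m_uncorrupted + m_repetitive) / 2.0

# A 3x3 clipping window, flattened; 0 and 255 mark corrupted pixels.
window = [12, 12, 255, 14, 12, 0, 18, 14, 12]
fill = mixed_median(window, corrupted={0, 255})
```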
Title: Wireless Capsule Endoscopy Image Enhancement with a Human Visual System Consistent Model
Wireless capsule endoscopy (WCE) is used to visualize the gastrointestinal (GI) tract without much pain to the patient. This near-light optical imaging system is painless, as the capsule moves through the GI tract with peristaltic movements and wirelessly transmits color video data to a receiver worn by the patient. However, due to the near-light imaging model and the use of burst light-emitting diode (LED) lighting to save battery power, the obtained images have uneven illumination and dark regions. We utilize a human visual system inspired enhancement model that relies on a feature-linking neural network exploiting the precise timing of neuron spikes. Experimental results on a variety of WCE images indicate that we obtain better enhancements, with better structure differentiation, than related models, and quantitative comparisons confirm that the quality of our restored images is higher as well.
Title: Adaptive Thresholding Segmentation Method for Skin Lesion with Normalized Color Channels of NTSC and YCbCr
Medical image segmentation is an important problem in medical imaging, because it is a necessary stage for many postprocessing tasks that improve the accuracy of diagnosis and treatment, and it plays a vital role in skin cancer detection. In this paper, we propose two adaptive thresholding methods to segment skin lesions in dermoscopic images. The proposed methods are developed based on the normalization of color channels of the NTSC and YCbCr color models. To assess segmentation quality, we use the Dice and Jaccard similarities. Experiments are conducted on a popular skin lesion dataset, the ISIC 2017 challenge. We also compare with other similar skin lesion segmentation methods to demonstrate the effectiveness of the proposed methods.
Title: Blood Vessels Segmentation Method for Retinal Fundus Images based on Adaptive Principal Curvatures and Image Derivative Operators
PSBB-19. Full paper@Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci.
DOI Bookmark: 10.5194/isprs-archives-XLII-2-W12-211-2019
Diabetes is a common disease in modern life. According to WHO data, in 2018, 8.3% of the adult population had diabetes, and many countries around the world have spent considerable funds and effort to treat this disease. One of the most dangerous complications that diabetes can cause is blood vessel lesions, which can occur in organs, limbs, eyes, etc. In this paper, we propose an adaptive principal curvature and three blood vessel segmentation methods for retinal fundus images based on the adaptive principal curvature and image derivative operators: the central difference, the Sobel operator, and the Prewitt operator. These methods are useful to assess the lesion level of the blood vessels of the eyes so that doctors can specify a suitable treatment regimen, and they can be extended to the segmentation of blood vessels of other organs and other parts of the human body. In experiments, we implement the proposed methods and compare their segmentation results on the DRIVE dataset. Segmentation quality is assessed with the Sorensen-Dice similarity, the Jaccard similarity, and the contour matching score against a ground truth segmented manually by a human.
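The Sobel and Prewitt operators mentioned above estimate the image gradient with small convolution kernels; a sketch evaluating the gradient magnitude at one pixel of a step edge (a crude stand-in for a vessel boundary):

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]

def gradient_magnitude(image, i, j, kernel_x):
    """Gradient magnitude at pixel (i, j): kernel_x gives the horizontal
    derivative, its transpose the vertical one."""
    gx = gy = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            v = image[i + di][j + dj]
            gx += kernel_x[di + 1][dj + 1] * v
            gy += kernel_x[dj + 1][di + 1] * v   # transposed kernel
    return math.hypot(gx, gy)

# A vertical step edge between intensity 10 and 90.
img = [[10, 10, 90],
       [10, 10, 90],
       [10, 10, 90]]
sobel_mag = gradient_magnitude(img, 1, 1, SOBEL_X)
prewitt_mag = gradient_magnitude(img, 1, 1, PREWITT_X)
```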
Title: A Skin Lesion Segmentation Method for Dermoscopic Images Based on Adaptive Thresholding with Normalization of Color Models
DOI Bookmark: 10.1109/ICEEE2019.2019.00030
In medical image processing, the skin lesion segmentation problem plays a vital role, because better extraction of skin lesion features improves skin lesion classification, so that imaging diagnosis systems can detect skin cancer early. Early treatment is crucial, especially for melanoma, one of the most dangerous forms of skin cancer. In this paper, we propose two adaptive methods to estimate the global threshold used for skin lesion segmentation, based on normalization of the RGB and XYZ color models. Skin lesion segmentation based on the proposed methods gives better results than the Otsu segmentation method applied to the grayscale image. This comparison is assessed with popular metrics for image segmentation, such as the Dice and Jaccard scores. Experiments are run on the well-known ISIC dataset.
Title: Distorted Image Reconstruction Method with Trimmed Median
SigTelCom-19. Full paper@IEEE.
DOI Bookmark: 10.1109/SIGTELCOM.2019.8696138
Distorted image reconstruction is an interesting problem with many practical applications. Distorted images are usually corrupted by scratches, dust, noise, human actions, and/or environmental factors. In this paper, we propose a distorted image reconstruction method based on the trimmed median. This method is effective for images corrupted by scratches caused by humans, the environment, etc. In the experiments, we generate masks to simulate the scratches and apply them to the original reference images to obtain the distorted images, which makes it possible to assess the image quality after recovery with the peak signal-to-noise ratio and structural similarity metrics. We also compare the proposed method with the harmonic method.
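The peak signal-to-noise ratio used for assessment is computed from the mean squared error against the reference image; a minimal sketch on flattened pixel lists:

```python
import math

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-sized images,
    given here as flattened pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, restored)) / len(reference)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

reference = [100, 120, 140, 160]
recovered = [101, 119, 141, 159]   # off by one everywhere -> MSE = 1
score = psnr(reference, recovered)
```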
Title: An Improved BPDF Filter for High Density Salt and Pepper Denoising
DOI Bookmark: 10.1109/RIVF.2019.8713669
The BPDF (based on pixel density filter) is an effective filter for removing salt-and-pepper noise, but it can only work at low and medium noise levels. In this paper, we propose an improved version of the BPDF to remove high-density salt-and-pepper noise. The proposed method works effectively at high and very high noise levels (above 90%). In the experiments, we compare the denoising results of the proposed method with those of the BPDF and DAMF filters to demonstrate its effectiveness. The denoising quality is assessed with the peak signal-to-noise ratio and structural similarity metrics.
Automatic fruit weight estimation is essential in today's automated fruit and agriculture industry. To increase the weighing efficiency of fruits and to decrease the use of various tools that require dedicated human effort, robust and easy-to-use automatic machine vision based systems are required. Such systems can further reduce human errors and willful manipulation at point-of-sale counters in traditional shops, as well as decrease the overall cost of manual systems. Despite a wealth of works on automatic fruit inspection systems with image processing techniques, not many adopt consumer depth cameras like the Microsoft Kinect. In this work, we study the feasibility of an automatic fruit weight estimation method for Sweet Lemons (Citrus limetta), Sweet Peppers (Capsicum annuum), and Tomatoes (Solanum lycopersicum) based on their color (RGB) and depth images. Given the lack of public datasets for this research direction, we create a dedicated database that consists of color plus depth information from Microsoft Kinect V.2 sensors, with 50 samples for each of the three fruits. Our novel method is evaluated using quality metrics such as the mean absolute error (MAE), mean squared error (MSE), and root mean squared error (RMSE), calculated at three different distances from the sensor, namely 0.8, 1.0, and 1.3 meters. One of the main goals of this paper is to study whether such a depth sensor based weighing method can remove the need for cumbersome industrial machines. Our proposed method is a non-learning-based system; hence it is fast and could potentially be used in real-time automatic inspection systems. Our feasibility study indicates that we obtain satisfactory accuracy in weighing these fruits, with potential for other fruits.
Title: Automatic Initial Boundary Generation Methods based on Edge Detectors for the Level Set Function of the Chan-Vese Segmentation Model and Applications in Biomedical Image Processing
FICTA-18. Full paper@Springer AISC.
DOI Bookmark: 10.1007/978-981-13-9920-6_18
Image segmentation is an important problem in image processing with a wide range of applications in medicine, biomedicine, and other fields of science and engineering. Among the non-learning-based approaches, techniques based on partial differential equations and the calculus of variations have attracted considerable attention and achieved notable results. Among the variational models, the Chan-Vese variational segmentation is a well-known model for solving the image segmentation problem. Level set methods are highly accurate methods for solving this model and do not depend on the edges; however, their performance depends heavily on the level set function and its initial boundary. In this paper, we propose automatic initial boundary generation methods based on the Sobel, Prewitt, Roberts, and Canny edge detectors. In the experiments, we show that among the four proposed initial boundary generation methods, the one based on the Canny edge detector yields the highest segmentation performance. Combining the Canny-based initial boundary generation method with the Chan-Vese model, we segment biomedical images. Experimental results indicate that we obtain improved segmentation results, and we compare the different edge detectors in terms of performance.
Title: Total Variation L1 Fidelity Salt-and-Pepper Denoising with Adaptive Regularization Parameter
DOI Bookmark: 10.1109/NICS.2018.8606870
Total variation (TV) is an effective tool for image denoising and many other image processing problems. For the denoising problem, it is desirable to create automatic image processing methods based on parameter estimation for the corresponding models. For TV image denoising, most methods focus on the TV-L2 norm; there is very little work on parameter estimation for denoising models based on the TV-L1 norm, although the TV-L1 denoising model is well suited to salt-and-pepper noise. In this paper, we propose a parameter estimation method based on the characteristics of salt-and-pepper noise. This method is especially effective for images without very high contrast and with high noise levels. We compare against other salt-and-pepper denoising methods, such as the TV-L1 method and the BPDF method, to demonstrate the effectiveness of the proposed parameter estimation method for the adaptive TV-L1 denoising model.
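The abstract's idea, estimating the parameter from characteristics of salt-and-pepper noise, can be illustrated by counting extreme-valued pixels; the mapping from noise density to regularization parameter below is a made-up placeholder, not the paper's estimator:

```python
def estimate_noise_density(pixels):
    """Fraction of pixels at the extreme grey levels 0 or 255, a simple proxy
    for the salt-and-pepper noise level (valid when the clean image has no
    saturated pixels, i.e. no very high contrast)."""
    extreme = sum(1 for p in pixels if p in (0, 255))
    return extreme / len(pixels)

def regularization_parameter(density, base=1.0):
    """Hypothetical mapping: noisier images get a stronger TV-L1 weight.
    The paper derives its estimator differently; this is a placeholder."""
    return base * density / (1.0 - density) if density < 1.0 else float("inf")

pixels = [0, 255, 128, 90, 0, 255, 64, 255]
density = estimate_noise_density(pixels)
```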
Title: Regularization Parameter Selection in Image Restoration with Inverse Gradient: Single Scale or Multiscale?
DOI Bookmark: 10.1109/CCE.2018.8465720
Regularization methods are effective in solving ill-posed vision problems such as image denoising and restoration. These methods typically involve a smoothness/regularization term (prior) and a data term (fidelity). The importance of the regularization parameter that weights the smoothness prior is well known in the image processing literature. In this work, we consider a particular class of adaptive regularization terms that depend on the inverse gradient of the image. A pre-smoothing operation with a Gaussian kernel is performed when computing the inverse gradient based adaptive regularization term at a fixed scale. In general, however, digital images contain objects of varying sizes, so multiscale regularization can improve edge-preserving restoration. We study here a comparison of single scale versus multiscale inverse gradient regularization parameter selection in image restoration, with both quadratic and total variation regularization priors. Our experimental results on standard test images indicate that the multiscale strategy improves restoration quality both in terms of noise reduction and structure preservation. This assertion is supported by error metrics such as the peak signal-to-noise ratio and structural similarity.
Title: On Selecting the Appropriate Scale in Image Selective Smoothing by Nonlinear Diffusion
DOI Bookmark: 10.1109/CCE.2018.8465764
Image denoising and selective smoothing are important research problems in image processing and computer vision. Partial differential equation (PDE) model filters have been widely utilized due to their robust anisotropic diffusion properties that preserve edges. Spatial regularization via Gaussian low-pass filtering is used in well-posed anisotropic diffusion PDEs for image restoration and involves a crucial scale parameter. In this work, we provide a scale selection approach that obtains improved selective smoothing with nonlinear diffusion. Experimental results indicate the promise of this strategy on a variety of synthetic and real noisy images. Further, compared to other diffusion PDE models, the proposed technique improves the quality of the final denoised images in terms of higher peak signal to noise ratio and structural similarity.
Title: Glioma Subtypes Clustering Method using Histopathological Image Analysis
DOI Bookmark: 10.1109/ICIEV.2018.8641031
Evaluation of surgical biopsy and postmortem tissue specimens for the diagnosis and understanding of human disease is a critical component of biomedical studies. Glioma is one of the severe brain tumors, and histopathological tissue images can provide unique insights into identifying and grading disease stages. However, subtypes of Glioma have not been clearly specified so far. In this study, we attempt to specify subtypes of low-grade glioma (LGG) using significant nuclei-related features. We employed CellProfiler to implement our nuclei segmentation method. As a result, we discovered three subtypes of LGG, using MORPHEUS to create the heatmap.
Title: Development of a Web based Image Annotation Tool for Lung Immunofluorescent Confocal Images
ISASE-MAICS-18. Full paper@J-STAGE.
DOI Bookmark: 10.5057/isase.2018-C000036
A molecular atlas of the human lung is important for informing basic mechanisms and treatments of lung diseases, and imaging data provide the foundation upon which to build the lung atlas. Analyzing immunofluorescent confocal images requires annotations describing precise anatomical structures. However, the growing number of images makes manual annotation impractical. Thus, this study aims to develop an automatic annotation system combining automatic region detection and automatic structure classification modules. As an important first step toward this aim, we developed an efficient annotation data collection tool whose collected data will be used to develop the automatic annotation system for the lung atlas. We describe the details of our annotation tool, which is web based and includes user access control.
Title: Image Restoration with Total Variation and Iterative Regularization Parameter Estimation
DOI Bookmark: 10.1145/3155133.3155191
Regularization techniques are widely used for solving ill-posed image processing problems, in particular image noise removal. Total variation (TV) regularization is one of the foremost edge preserving methods for noise removal from images and can overcome the over-smoothing effects of classical Tikhonov regularization. An important aspect of this approach is the regularization parameter, which needs to be set appropriately to obtain optimal restoration results. In this work, we utilize a fast split Bregman based implementation of TV regularization for denoising, along with iterative parameter estimation from local image information. Experimental results on a variety of noisy images indicate the promise of our TV regularization with iterative parameter estimation from local variance, and comparison with related schemes shows better edge preservation and robust noise removal.
Title: Near-Light Perspective Shape from Shading for 3D Visualizations in Endoscopy Systems
DOI Bookmark: 10.1109/BIBM.2017.8218031
A near-light perspective shape from shading (SfS) technique applied to endoscopy for 3D visualizations of gastrointestinal tract regions is presented. Utilizing an extensible reflectance model, we study a robust Huber regularization function based variational SfS model. A balancing parameter is used for weighting the irradiance and smoothness/regularization terms. Experimental results on different endoscopy systems show that we obtain 3D visualizations without the shrinkage and dilation artifacts on mucosa tissues associated with past SfS models.
Title: Improving the Generalization of Disease Stage Classification with Deep CNN for Glioma Histopathological Images
DOI Bookmark: 10.1109/BIBM.2017.8217831
In the field of histopathology, computer-assisted diagnosis systems are important in obtaining patient-specific diagnoses for various diseases and help define precision medicine. Therefore, many studies on automatic analysis methods for digital pathology images have been reported. Glioma is one of the severe brain tumors, and histopathological tissue images can provide unique insights into identifying and grading its disease stages. However, the number of tissue samples to be examined is enormous and burdens pathologists because of the tedious manual evaluation required for efficient decision-making and diagnosis. Therefore, there is a strong demand for quick, automatic analysis. In this study, we consider feature extraction and disease stage classification for Glioma images using automatic image analysis methods with deep learning techniques. We devise a custom-made deep convolutional neural network (CNN) for disease stage classification and apply it to histopathology image data available in The Cancer Genome Atlas for brain glioma.
Title: 3D Workflow for Segmentation and Interactive Visualization in Brain MR Images using Multiphase Active Contours
DOI Bookmark: 10.1109/BIBM.2017.8217780
In this paper, we propose a 3D segmentation and interactive visualization workflow. The segmentation implementation uses a globally convex multiphase active contours without edges model. This algorithm has been proven to be initialization independent due to its globally convex formulation, and better than other approaches due to robustness to image variations and adaptive energy functionals. Following segmentation, the workflow includes a flexible 3D visualization application that can handle very large volumes using multi-resolution hierarchical data formats. We also designed a custom fragment shader capable of meaningfully fusing data from three different volumes (a segmented label volume, a per-voxel mean value volume, and a skull-stripped volume) for effective visualization without modifying the segmented results. Giving researchers access to a whole end-to-end pipeline, from 3D segmentation to custom real-time interactive 3D visualization, is, in our opinion, a powerful tool focused on an analyst/expert centric workflow.
Title: Microvasculature Segmentation of Arterioles Using Deep CNN
ICIP-17. Full paper@IEEE. Slides at SigPort.
DOI Bookmark: 10.1109/ICIP.2017.8296347
Segmenting microvascular structures is an important requirement in understanding angioadaptation, by which vascular networks remodel their morphological structures. Accurate segmentation for separating microvasculature structures is important in quantifying the remodeling process. In this work, we utilize a deep convolutional neural network (CNN) framework for obtaining robust segmentations of microvasculature from epifluorescence microscopy imagery of mice dura mater. Due to the inhomogeneous staining of the microvasculature, different binding properties of vessels under fluorescence dye, uneven contrast, and low texture content, traditional vessel segmentation approaches obtain sub-optimal accuracy. We consider a deep CNN for preserving small vessel segments and handling the challenges posed by the epifluorescence microscopy imaging modality. Experimental results on ovariectomized (OVX, ovary removed) mice dura mater epifluorescence microscopy images show that the proposed modified CNN framework obtains a highest accuracy of 99%, better than other vessel segmentation methods.
Title: Glioblastoma Multiforme Tissue Histopathology Images based Disease Stage Classification with Deep CNN
DOI Bookmark: 10.1109/ICIEV.2017.8338558
Recently, many feature extraction methods for histopathology images have been reported for automatic quantitative analysis. Glioblastoma multiforme (GBM) is one of the severe brain tumors, and histopathological tissue images can provide unique insights into identifying and grading disease stages. However, the number of tissue samples to be examined is enormous and burdens pathologists because of the tedious manual evaluation traditionally required. In this study, we consider feature extraction and disease stage classification for brain tumor histopathology images using automatic image analysis methods. In particular, we utilized automatic feature extraction and labeling for histopathology imagery data given by The Cancer Genome Atlas (TCGA) and checked the classification accuracy of disease stages in GBM tissue images using a deep Convolutional Neural Network (CNN). Experimental results indicate promise in automatic disease stage classification, and a high level of accuracy was obtained for the tested image data.
Title: Disease Stage Classification for Glioblastoma Multiforme Histopathological Images using Deep Convolutional Neural Network
IFSA-SCIS-17. Full paper@IFSA.
This paper discusses the classification accuracy of evaluating the disease stage of Glioma histopathological images. In the field of histopathology, computer-assisted diagnosis (CAD) systems are important in obtaining patient-specific diagnoses for various diseases and help define precision medicine. Glioblastoma multiforme (GBM) is one of the main brain tumors, and histopathological tissue images can provide unique insights into identifying and grading disease stages. However, the number of tissue samples to be examined is enormous and burdens pathologists because of the tedious manual evaluation required. In some cases, pathologists diagnose these images during surgery, which necessitates an instant decision-making process; therefore, there is a demand for quick analysis. In addition, the criteria of evaluation depend heavily on the experience of each pathologist. In this study, the authors consider feature extraction and disease stage classification for brain tumor histopathological images using automatic image analysis methods. In particular, we utilized automatic feature extraction and labeling for histopathological image data given by TCGA and checked the classification accuracy using a Convolutional Neural Network (CNN), one of the popular deep learning techniques increasingly applied to various image analysis problems.
Title: Victory Sign Biometric for Terrorists Identification: Preliminary Results
ICICS-17. Full paper@IEEE. MIT Technology Review.
DOI Bookmark: 10.1109/IACS.2017.7921968
When the face and all other body parts are covered, sometimes the only evidence available to identify a person is their hand geometry, and not even the whole hand: only two fingers (the index and the middle finger) shown while making the victory sign, as seen in many terrorist videos. This paper investigates for the first time a way to identify persons, particularly terrorists, from their victory sign. We created a new database for this purpose using a mobile phone camera, imaging the victory signs of 50 different persons over two sessions. Simple measurements of the fingers, in addition to Hu moments of the finger areas, were used to extract the geometric features of the visible part of the hand after segmentation. The experimental results using the KNN classifier with three different distance metrics were encouraging for most of the recorded persons, with about 40% to 93% total identification accuracy depending on the features, distance metric, and K used.
Title: HEp-2 Cell Classification and Segmentation Using Motif Texture Patterns and Spatial Features with Random Forests
DOI Bookmark: 10.1109/ICPR.2016.7899614
Human epithelial (HEp-2) cell specimens are obtained from indirect immunofluorescence (IIF) imaging for the diagnosis and management of autoimmune diseases. Analysis of HEp-2 cells is important, and in this work we consider automatic cell segmentation and classification using spatial and texture pattern features and random forest classifiers. In this paper, we summarize our efforts in the classification and segmentation tasks proposed in the ICPR 2016 contest. For cell level staining pattern classification (Task 1), we utilized texture features such as rotational invariant co-occurrence (RIC) versions of the well-known local binary pattern (LBP), median binary pattern (MBP), joint adaptive median binary pattern (JAMBP), and motif co-occurrence matrix (MCM), along with other optimized features. We report classification results utilizing different classifiers such as the k-nearest neighbors (kNN), support vector machine (SVM), and random forest (RF). We obtained the best mean class accuracy of 94.29% for six cell classes with RIC-LBP combined with a RIC variant of MCM. For specimen level staining pattern classification (Task 2), we utilize a combination of RIC-LBP with an RF classifier and obtain 80% mean class accuracy (MCA) for seven classes. For cell segmentation (Task 4), we use our optimized multiscale spatial feature bank along with an RF classifier for pixelwise labeling to achieve an F-measure of 84.26% over 1008 images.
Title: Video Haze Removal and Poisson Blending Based Mini-mosaics for Wide Area Motion Imagery
DOI Bookmark: 10.1109/AIPR.2016.8010552
In this work we consider three different haze removal approaches to dehaze videos, with applications to wide area motion imagery (WAMI). Of the three, we mainly focus on the dark channel haze removal method. Most haze-free non-sky outdoor images contain at least one color channel with average intensity close to zero; this channel is called the dark channel and provides a good estimate of the haze transmission, which is used to remove haze from the image. By utilizing neighborhood consistency in the temporal direction of videos, we apply the haze removal algorithm sequentially. Once the haze-free images are produced, we register them to generate mini-mosaics and apply Poisson blending to remove seam effects from the mini-mosaics. Additionally, to avoid dehazing each image, we also create the mini-mosaic first and then apply dehazing on the mini-mosaic only. Experimental results on synthetic image sequences and WAMI indicate that we obtain good haze-free and mosaiced results with both approaches.
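The dark channel itself is simple to compute: a per-pixel minimum over the color channels followed by a minimum filter over a local patch. A minimal sketch on nested lists of RGB tuples (the transmission and recovery steps of the full dehazing pipeline are omitted):

```python
def dark_channel(image, patch=3):
    """Dark channel prior: per-pixel min over the RGB channels, then a
    min over a local patch; low values indicate haze-free regions,
    while hazy regions lift the dark channel toward the airlight."""
    h, w = len(image), len(image[0])
    min_rgb = [[min(image[y][x]) for x in range(w)] for y in range(h)]
    r = patch // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(
                min_rgb[yy][xx]
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1)))
    return out
```

In the standard dark channel method, the transmission estimate is then roughly `1 - omega * dark / airlight`, which is what allows haze to be subtracted from each frame.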
Title: Assisted Ground Truth Generation using Interactive Segmentation on a Visualization and Annotation Tool.
DOI Bookmark: 10.1109/AIPR.2016.8010603
User driven interactive image segmentation provides a flexible framework for obtaining supervised object detection in digital images. Firefly is a rich interactive web-based tool that helps researchers from multiple domains work collaboratively on ground truth generation, annotation, and tracking of events/objects in images and videos on a remote (secure) archive. It is based on a client-server model written in Adobe Flex and PHP, using a MySQL database for storing the annotations. We recently proposed a novel interactive segmentation approach based on user seed points with spline interpolation. In this work, we integrate the elastic spline based segmentation method with Firefly. Firefly provides automatic analysis of segmentation results by executing Matlab scripts in the background. We provide use cases in segmenting natural and biomedical imagery, which requires minimal effort and no storage on the user side.
Title: CSANG: Continuous Scale Anisotropic Gaussians for Robust Linear Structure Extraction
DOI Bookmark: 10.1109/AIPR.2016.8010551
Robust estimation of linear structures such as edges and junction features in digital images is an important problem. In this paper, we use an adaptive robust structure tensor (ARST) method in which the local adaptation process uses a spatially varying adaptive Gaussian kernel initialized with the total least-squares structure tensor solution. An iterative scheme is designed in which the size, orientation, and weights of the Gaussian kernel are adaptively changed at each iteration step. This adaptation, with continuous scale anisotropic Gaussian kernel changes for local orientation estimation, helps us obtain robust edge and junction features. We consider an efficient graphical processing unit (GPU) implementation that obtained a 30x speed improvement over a traditional central processing unit (CPU) based implementation. Experimental results on noisy synthetic and natural images indicate that we obtain robust edge detections, and further comparison with other edge detectors shows that our approach obtains better localization and accuracy under noisy conditions.
Title: Confocal Vessel Structure Segmentation with Optimized Feature Bank and Random Forests
DOI Bookmark: 10.1109/AIPR.2016.8010580
In this paper, we consider confocal microscopy based vessel segmentation with optimized features and random forest classification. By utilizing vessel-specific features tuned to capture curvilinear structures, such as the multiscale Frobenius norm of the Hessian eigenvalues and Laplacian of Gaussians, we obtain better segmentation results in challenging imaging conditions. We obtain binary segmentations using a random forest classifier trained on physiologist-marked ground-truth. Experimental results on mice dura mater confocal microscopy vessel segmentations indicate that we obtain better results compared to global segmentation approaches.
Title: A Study on Nuclei Segmentation, Feature Extraction and Disease Stage Classification for Human Brain Histopathological Images
KES-16. Full paper@Procedia Computer Science.
DOI Bookmark: 10.1016/j.procs.2016.08.164
Computer aided diagnosis (CAD) systems are important in obtaining precision medicine and patient driven solutions for various diseases. Glioblastoma multiforme (GBM) is one of the main brain tumors, and histopathological tissue images can provide unique insights into identifying and grading disease stages. In this study, we consider a nuclei segmentation method, feature extraction, and disease stage classification for brain tumor histopathological images using automatic image analysis methods. In particular, we utilized automatic nuclei segmentation and labeling for histopathology image data obtained from The Cancer Genome Atlas (TCGA), checked the significance of feature descriptors using the K-S test, and measured classification accuracy using support vector machines (SVM) and Random Forests (RF). Our results indicate classification accuracies of 98.6% and 99.8% with Object-Level features, and 82.1% and 86.1% with Spatial Arrangement features, respectively.
Title: Multiquadric Spline-based Interactive Segmentation of Vascular Networks
DOI Bookmark: 10.1109/EMBC.2016.7592074
Commonly used drawing tools for interactive image segmentation and labeling include active contours or boundaries, scribbles, rectangles, and other shapes. Thin vessel shapes in images of vascular networks are difficult to segment using automatic or interactive methods. This paper introduces the novel use of a sparse set of user-defined seed points (supervised labels) for precisely, quickly, and robustly segmenting complex biomedical images. A multiquadric spline-based binary classifier is proposed as a unique approach for interactive segmentation, using color values and the locations of the seed points as features. Epifluorescence imagery of the dura mater microvasculature is difficult to segment for quantitative applications due to challenging tissue preparation, imaging conditions, and thin, faint structures. Experimental results based on twenty epifluorescence images illustrate the benefits of using a set of seed points to obtain fast and accurate interactive segmentation compared to four interactive and automatic segmentation approaches.
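As a minimal sketch of the multiquadric idea (a toy 2D feature space rather than the paper's color-plus-location features), a radial basis interpolant through labeled seed points yields a function whose sign classifies new pixels:

```python
import math

def multiquadric_fit(points, values, c=1.0):
    """Fit weights w so that f(x) = sum_j w_j * sqrt(|x - x_j|^2 + c^2)
    interpolates the labeled seed points, via Gaussian elimination
    with partial pivoting on the (dense) multiquadric kernel matrix."""
    n = len(points)
    phi = lambda a, b: math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)) + c * c)
    A = [[phi(points[i], points[j]) for j in range(n)] + [values[i]]
         for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        w[r] = (A[r][n] - sum(A[r][k] * w[k] for k in range(r + 1, n))) / A[r][r]
    return lambda x: sum(wi * phi(x, pj) for wi, pj in zip(w, points))

# Seed points labeled +1 (vessel) / -1 (background); sign of f classifies.
f = multiquadric_fit([(0.0, 0.0), (1.0, 1.0)], [1.0, -1.0])
```

The interpolation constraint means every seed point is classified exactly, which is the appeal of spline classifiers for sparse, user-supplied labels.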
Title: Random Forests for Dura Mater Microvasculature Segmentation Using Epifluorescence Images
EMBS-16. Full paper@IEEE. (EMBS Student Paper Competition Finalist)
DOI Bookmark: 10.1109/EMBC.2016.7591336
Automatic segmentation of microvascular structures is a critical step in quantitatively characterizing vessel remodeling and other physiological changes in the dura mater and other tissues. We developed a supervised random forest (RF) classifier for segmenting thin vessel structures using multiscale features based on the Hessian, oriented second derivatives, Laplacian of Gaussian, and line features. The latter multiscale line detector feature helps detect and connect faint vessel structures that would otherwise be missed. Experimental results on epifluorescence imagery show that the RF approach produces foreground vessel regions that are almost 20 and 25 percent better than Niblack and Otsu threshold-based segmentations, respectively.
Title: A Study on Feature Extraction and Disease Stage Classification for Glioma Pathology Images
FUZZ-IEEE-16. Full paper@IEEE.
DOI Bookmark: 10.1109/FUZZ-IEEE.2016.7737958
Computer aided diagnosis (CAD) systems are important in obtaining precision medicine and patient driven solutions for various diseases. Glioblastoma multiforme (GBM) is one of the main brain tumors, and histopathological tissue images can provide unique insights into identifying and grading disease stages. In this work, we consider feature extraction and disease stage classification for brain tumor histopathological images using automatic image analysis methods. In particular, we utilized automatic nuclei segmentation and labeling for histopathology image data obtained from The Cancer Genome Atlas (TCGA) and checked the classification accuracy using support vector machines (SVM) and Random Forests (RF). Our results indicate classification accuracies of 98.9% and 99.6%, respectively.
Title: Mixed Noise Removal Using Hybrid Fourth Order Mean Curvature Motion
SIRS-15. Full paper@Springer AISC.
DOI Bookmark: 10.1007/978-3-319-28658-7_53
Image restoration is one of the fundamental problems in digital image processing. Although there exists a wide variety of methods for removing additive Gaussian noise, relatively few works tackle the problem of removing mixed noise from images. In this work we utilize a new hybrid partial differential equation (PDE) model for mixed noise corrupted images. Using a combination of mean curvature motion (MMC) and fourth order diffusion (FOD) PDEs, we study a hybrid method to deal with mixtures of Gaussian and impulse noise. The MMC-FOD hybrid model is implemented using an efficient essentially non-dissipative (ENoD) scheme for the MMC first, to eliminate the impulse noise with no dissipation. The FOD component is implemented using an explicit finite difference scheme. Experimental results indicate that our scheme obtains optimal denoising in mixed noise scenarios and outperforms related schemes in terms of signal to noise ratio improvement and structural similarity.
Title: Adaptive Nonlocal Filtering for Brain MRI Restoration
SIRS-15. Full paper@Springer AISC.
DOI Bookmark: 10.1007/978-3-319-28658-7_48
Brain magnetic resonance imaging (MRI) plays a crucial role in neuroscience and medical diagnosis. Denoising brain MRI images is an important pre-processing step required in many automatic computer aided-diagnosis systems in neuroscience. Recently, nonlocal means (NLM) filters and their variants, which are widely used for Gaussian noise removal in digital image processing, have been adapted to handle the Rician noise that occurs in MRI. One of the crucial ingredients for successful image filtering with NLM is the patch similarity. In this work we consider the use of a fuzzy Gaussian mixture model (FGMM) for determining the patch similarity in NLM, instead of the usual Euclidean distance. Experimental results at different noise levels on synthetic and brain MRI images are given to highlight the advantage of the proposed approach. Compared with other image filtering methods, our scheme obtains better results in terms of peak signal to noise ratio and structure preservation.
Title: Automatic Mucosa Detection in Video Capsule Endoscopy with Adaptive Thresholding
ICCIDM-15. Full paper@Springer SIST.
DOI Bookmark: 10.1007/978-81-322-2734-2_10
Video capsule endoscopy (VCE) is a revolutionary imaging technique that requires computer-aided diagnosis (CAD) systems to aid the experts in making clinically relevant decisions. In this work, we consider an automatic tissue detection method that uses adaptive entropy thresholding for better separation of the mucosa, which lines the colon wall, from the lumen, the hollow of the gastrointestinal tract. A comparison with other thresholding methods such as Niblack, Bernsen, Otsu, and Sauvola, as well as active contours, is undertaken. Experimental results indicate that our method performs better than the others in terms of segmentation accuracy on various VCE videos.
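Entropy thresholding in the style of Kapur's classical method (shown here as a generic illustration; the paper's adaptive variant is not specified in the abstract) picks the gray level that maximizes the combined entropy of the two resulting classes:

```python
import math

def entropy_threshold(hist):
    """Kapur-style entropy threshold: choose t maximizing the sum of
    the Shannon entropies of the below- and above-threshold gray-level
    distributions, each renormalized by its class probability."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(hist)):
        w0 = sum(p[:t])
        w1 = 1.0 - w0
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# A bimodal 8-bin histogram: the chosen threshold falls in the gap.
t = entropy_threshold([30, 30, 0, 0, 0, 20, 20, 20])
```

For a well-separated bimodal histogram like the one above, any threshold inside the empty gap maximizes the criterion, so `t` lands between the two modes.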
Title: Vascularization Features for Polyp Localization in Capsule Endoscopy
DOI Bookmark: 10.1109/BIBM.2015.7359946
Polyps in the gastrointestinal (GI) tract can be precursors to cancer, and detecting them early is important in determining their malignancy. Video capsule endoscopy is a useful imaging technology that provides visualization of the GI tract without discomfort to the patient. We consider polyp localization from capsule endoscopy imagery using vascularization features. Using texture features computed from the principal curvatures of the image surface, together with multiscale directional vesselness stamping, we obtain localization of polyps in a given video frame. We present preliminary results for malignant and benign polyps from capsule endoscopy images.
Title: Cell Nuclei Segmentation in Glioma Histopathology Images with Color Decomposition based Active Contours
DOI Bookmark: 10.1109/BIBM.2015.7359944
This work discusses the performance of a color decomposition based active contour model for segmenting cell nuclei in glioma histopathology. By combining nuclear staining information obtained from color decomposition with fast variational active contours, we obtain unsupervised segmentation of nuclei in histopathological images. Experimental results show promise when compared with different state of the art techniques.
Title: Feature Extraction and Disease Stage Classification for Glioma Histopathology Images
Healthcom-15. Full paper@IEEE.
DOI Bookmark: 10.1109/HealthCom.2015.7454574
This paper discusses the performance of feature descriptors for disease stage evaluation of Glioma images. In the field of histopathology, many evaluation methods for tissue images have been reported. However, pathologists have to analyze and evaluate many tissue images manually, and the criteria of evaluation depend heavily on each pathologist's experience and intuition. Against this background, studies on computational pathology using computer vision have been reported. The previously proposed feature descriptors, however, were applied to specific diseases only, and it is unknown whether they are effective for other tissues. This paper applies the feature descriptors defined by previous studies to Glioma images and investigates their effectiveness using a statistical method. We also discuss a method to distinguish low-grade from high-grade Glioma images using the significant descriptors. In the experiments, more than 98% of the Glioma images were classified correctly.
Advances in image and video processing algorithms and the availability of computational resources have paved the way for real-time medical decision support systems to become a reality. Video capsule endoscopy is a direct imaging method for gastrointestinal regions and produces large scale color video data. Fuzzification of color spaces can improve the contextual description based tasks required in medical decision support. We consider abnormality detection in video capsule endoscopy using fuzzy sets and logic theory on different colorspaces. Applications in bleeding detection and polyp vascularization retrieval are given as examples of the methodology considered here, and preliminary results indicate promising retrieval performance.
Title: Multiscale Anisotropic Filtering of Fluorescence Microscopy for Denoising Microvasculature
DOI Bookmark: 10.1109/ISBI.2015.7163930
Fluorescence microscopy images are plagued by noise, and removing it without blurring vascular structures is an important step in automatic image analysis. The application of interest here is to accurately and automatically extract the structural components of the microvascular system from images acquired by fluorescence microscopy. A robust denoising process is necessary in order to extract accurate biological information. For this purpose, we consider a multiscale tensor anisotropic diffusion model which progressively switches the amount of smoothing and preserves vessel boundaries accurately. Based on a coherency enhancing flow with a planar confidence measure fused with 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal from the background membrane. Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves the denoising accuracy of different tensor diffusion methods to obtain better microvasculature segmentation.
Title: Color Image Restoration with Fuzzy Gaussian Mixture Model Driven Nonlocal Filter
AIST-15. Full paper@Springer CCIS 542.
DOI Bookmark: 10.1007/978-3-319-26123-2_13
Color image denoising is one of the classical image processing problems, and various techniques have been explored over the years. Recently, the nonlocal means (NLM) filter has proven to obtain good results for denoising Gaussian noise corrupted digital images using a weighted mean among similar patches. In this paper, we consider a fuzzy Gaussian mixture model (GMM) based NLM method for removing mixed Gaussian and impulse noise. By computing an automatic homogeneity map, we identify impulse noise locations and utilize an adaptive patch size. Experimental results on mixed noise affected color images show that our scheme performs better than NLM, anisotropic diffusion, and GMM-NLM over different noise levels. Comparisons with respect to structural similarity, color image difference, and peak signal to noise ratio error metrics are undertaken, and our scheme performs well overall without generating color artifacts.
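The baseline NLM weighting that this work builds on can be sketched in 1D (classical Euclidean patch distance, fixed patch size; the paper replaces the distance with a fuzzy GMM similarity and adapts the patch size):

```python
import math

def nlm_denoise(signal, patch=1, search=5, h=10.0):
    """1D nonlocal means: each sample becomes a weighted average of
    samples in a search window whose surrounding patches look similar;
    weights are exp(-d2/h^2), where d2 is the mean squared patch
    difference and h controls the filtering strength."""
    n = len(signal)
    def get_patch(i):
        return [signal[min(max(i + d, 0), n - 1)]
                for d in range(-patch, patch + 1)]
    out = []
    for i in range(n):
        pi = get_patch(i)
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            pj = get_patch(j)
            d2 = sum((a - b) ** 2 for a, b in zip(pi, pj)) / len(pi)
            w = math.exp(-d2 / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

Because the output at each position is a convex combination of input samples, a constant signal passes through unchanged, and no new extreme values can be created.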
Title: Automatic Image Segmentation for Video Capsule Endoscopy
CIHD-14. Full paper@Springer CIMI.
DOI Bookmark: 10.1007/978-981-287-260-9_7
Video capsule endoscopy (VCE) has proven to be a pain-free imaging technique for the gastrointestinal (GI) tract and provides a continuous stream of color imagery. Due to the amount of images captured, automatic computer-aided diagnostic (CAD) methods are required to reduce the burden on gastroenterologists. In this work, we propose a fast and efficient method for obtaining segmentations of VCE images automatically, without manual supervision. We utilize an efficient active contour without edges model that accounts for topological changes of the mucosal surface as the capsule moves through the GI tract. Comparison with related image segmentation methods indicates that we obtain better results in terms of agreement with expert ground-truth boundary markings.
Title: Automatic Contrast Enhancement for Wireless Capsule Endoscopy Videos with Spectral Optimal Contrast-tone Mapping
ICCIDM-14. Full paper@Springer SIST.
DOI Bookmark: 10.1007/978-81-322-2205-7_23
Wireless capsule endoscopy (WCE) is a revolutionary imaging method for visualizing the gastrointestinal tract in patients. Each patient exam creates hours of large-scale color video data, and automatic computer-aided diagnosis (CAD) methods are important for alleviating the strain on expert gastroenterologists. In this work we consider an automatic contrast enhancement method for WCE videos, using an extension of the recently proposed optimal contrast-tone mapping (OCTM) to color images. By transforming each RGB frame of the endoscopy video to the spectral color space La*b* and applying OCTM on the intensity channel alone, we obtain our spectral OCTM (SOCTM) approach. Experimental results comparing histogram equalization, anisotropic diffusion and the original OCTM show that our enhancement works well without creating saturation artifacts in real WCE imagery.
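The channel-splitting idea can be illustrated with a small sketch: enhance only the brightness channel and add the color offsets back afterwards. Here plain histogram equalization stands in for OCTM, and a crude luma/offset split stands in for the La*b* conversion; both are simplifying assumptions, not the paper's method.

```python
import numpy as np

def enhance_luminance_only(rgb):
    """Contrast-enhance only the brightness channel, leaving the per-pixel
    color offsets untouched, so enhancement does not shift hues."""
    rgb = rgb.astype(float)
    luma = rgb.mean(axis=2)            # crude brightness channel
    chroma = rgb - luma[..., None]     # per-pixel color offsets
    # histogram-equalize the brightness channel onto [0, 255]
    hist, bins = np.histogram(luma.ravel(), bins=256, range=(0, 255))
    cdf = hist.cumsum().astype(float)
    cdf = 255.0 * (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    luma_eq = np.interp(luma.ravel(), bins[:-1], cdf).reshape(luma.shape)
    # recombine enhanced brightness with the original color offsets
    return np.clip(luma_eq[..., None] + chroma, 0, 255)
```

Any tone-mapping operator (such as OCTM) can be substituted for the equalization step without changing the surrounding split-and-recombine structure.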
Title: Image Inpainting with Modified F-transform
SEMCCO-14. Full paper@Springer LNCS.
DOI Bookmark: 10.1007/978-3-319-20294-5_73
Restoring damaged images is an important problem in image processing and has been studied for applications such as inpainting missing regions and art restoration. In this work, we consider a modified fuzzy transform (F-transform) for restoration of damage such as holes and scratches. By utilizing weights calculated from known image regions using the local variance of patches, we modify the classical F-transform to handle the missing regions effectively, with edge preservation and local smoothness. Comparisons with interpolation (nearest neighbor, bilinear) and modern inpainting (Navier-Stokes, fast-marching) methods illustrate that our modified F-transform obtains better results.
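For reference, the plain (unmodified) 1D F-transform with triangular basis functions can be sketched as below; the paper's variance-based weights for missing regions are omitted.

```python
import numpy as np

def f_transform_1d(f, n_nodes):
    """Direct and inverse 1D fuzzy (F-) transform. Triangular basis
    functions on uniform nodes form a partition of unity on [0, 1];
    the direct transform averages the signal under each basis function,
    and the inverse transform blends those averages back."""
    N = len(f)
    x = np.linspace(0, 1, N)
    nodes = np.linspace(0, 1, n_nodes)
    h = nodes[1] - nodes[0]
    # triangular membership of each sample in each basis function
    A = np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)
    F = (A @ f) / A.sum(axis=1)               # direct F-transform components
    # inverse F-transform (normalized for numerical safety)
    return (A.T @ F) / np.maximum(A.sum(axis=0), 1e-12)
```

A constant signal is reproduced exactly; in general the reconstruction is a smoothed approximation whose resolution is controlled by the number of nodes.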
Title: Image Restoration with Fuzzy Coefficient Driven Anisotropic Diffusion
SEMCCO-14. Full paper@Springer LNCS.
DOI Bookmark: 10.1007/978-3-319-20294-5_13
Nonlinear anisotropic diffusion is widely used in image processing and computer vision for various problems. One of the basic and important problems is that of restoring noisy images, and diffusion filters are an important class of denoising methods. In this work, we propose a fuzzy diffusion coefficient which takes into account local pixel variability for better denoising and selective smoothing of edges. By using smoothed gradients along with the fuzzy diffusion coefficient function we obtain edge preserving restoration of noisy images. Experimental results on standard test images and real medical data illustrate that the proposed fuzzy diffusion improves over traditional filters in terms of structural similarity and signal to noise ratio.
Title: Elastic Body Spline based Image Segmentation
DOI Bookmark: 10.1109/ICIP.2014.7025888
Elastic body splines (EBS), belonging to the family of 3D splines, were recently introduced to capture tissue deformations within a physical model-based approach for non-rigid biomedical image registration. EBS model the displacement of points in a 3D homogeneous isotropic elastic body subject to forces. We propose a novel extension of elastic body splines to learning-driven figure-ground segmentation. The task of interactive image segmentation, with user provided foreground-background labeled seeds or samples, is formulated as learning an interpolating pixel classification function that is then used to assign labels to all unlabeled pixels in the image. The spline function we choose to model the supervised pixel classifier is the Gaussian elastic body spline (GEBS), which can use sparse scribbles from the user and has a closed form solution enabling a fast on-line implementation. Experimental results demonstrate the applicability of the GEBS approach for image segmentation. The GEBS method for interactive foreground image labeling shows promise and outperforms a previous approach using the thin-plate spline model.
Camera pose estimation has been explored for the past few decades but still remains an active topic with the prevalence of new sensors and platforms. Among the existing pose estimation methods, Bundle Adjustment (BA) based approaches are robust, providing reasonable results even when only partial information is available. BA refers to simultaneously refining the pose of N cameras and the 3D structure of scene points subject to a set of projective constraints such that an appropriate error measure is minimized. Normally, in BA, after extracting salient features and establishing correspondences, an estimate of the camera rotation and translation, together known as the camera pose, is obtained using either fundamental or homography matrix estimation. These initial estimates are then used in a triangulation step where corresponding features are geometrically fused to obtain an initial estimate for the 3D point cloud reconstruction of the scene structure. The crucial part of BA is the optimization and refinement steps, given the initial estimates. Unlike general BA utilized in other computer vision tasks, where there is often no sensor metadata, in BA for Wide Area Motion Imagery (WAMI) noisy camera pose measurements from on-board sensors are available. This enables us to develop efficient streamlined BA algorithms exploiting sensor and platform geometries and flight paths. We show that the fundamental matrix or homography transformation estimation step can be bypassed, but errors in the metadata due to noisy sensor measurements and adverse operating environments must be taken into account. In this paper, we analyze the effects of measurement noise in position (from GPS) and rotation (from IMU) sensors on BA results, in terms of accuracy and robustness of the recovered camera parameters, using a simulation testbed. We also investigate how matching errors in a sequence of corresponding features used to perform 3D triangulation can affect overall precision. The impact on the robustness of camera pose estimation for N-view BA in the context of large scale WAMI-based 3D reconstruction is discussed.
Title: Multi-focus Image Fusion Using Epifluorescence Microscopy for Robust Vascular Segmentation
DOI Bookmark: 10.1109/EMBC.2014.6944682
Automatic segmentation of three-dimensional microvascular structures is needed for quantifying morphological changes to blood vessels during development, disease and treatment processes. Single focus two-dimensional epifluorescent imagery leads to unsatisfactory segmentations due to multiple out-of-focus vessel regions that have blurred edge structures and lack detail. Additional segmentation challenges include varying contrast levels due to diffusivity of the lectin stain, leakage out of vessels and fine morphological vessel structure. We propose an approach for vessel segmentation that combines multi-focus image fusion with robust adaptive filtering. The robust adaptive filtering scheme handles noise without destroying small structures, while multi-focus image fusion considerably improves segmentation quality by deblurring out-of-focus regions through incorporating 3D structure information from multiple focus steps. Experiments using epifluorescence images of mice dura mater show an average of 30.4% improvement compared to single focus microvasculature segmentation.
Title: Feature Fusion and Label Propagation for Textured Object Video Segmentation
SPIE DSS-14. Full paper@SPIE.
DOI Bookmark: 10.1117/12.2052983
We study an efficient texture segmentation model for multichannel videos using a local feature fitting based active contours scheme. We propose a flexible motion segmentation approach using features computed from texture and intensity components in a globally convex continuous optimization and fusion framework. A fast numerical implementation is described using an efficient dual minimization formulation, and experimental results on synthetic and real color videos indicate the superior performance of the proposed method compared to related approaches. The novel contributions include the use of local feature density functions in the context of a luminance-chromaticity decomposition combined with a globally convex active contour variational method to capture texture variations for video object segmentation.
Title: Automatic Contrast Parameter Estimation in Anisotropic Diffusion for Image Restoration
AIST-14. Full paper@Springer CCIS 436.
DOI Bookmark: 10.1007/978-3-319-12580-0_20
Anisotropic diffusion is widely used in image processing for edge preserving filtering and image smoothing tasks. One important class of such models is that of Perona and Malik (PM), who used a gradient based diffusion to drive smoothing along edges and not across them. The contrast parameter used in the PM method needs to be carefully chosen to obtain optimal denoising results. Here we consider a local histogram based cumulative distribution approach for selecting this parameter in a data adaptive way, so as to avoid manual tuning. We use a spatial smoothing based diffusion coefficient along with adaptive contrast parameter estimation for obtaining better edge maps. Moreover, experimental results indicate that this adaptive scheme performs well for a variety of noisy images, and comparison results show that we obtain better peak signal to noise ratio and structural similarity scores than with fixed constant parameter values.
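A simplified, global version of the idea (choosing the PM contrast parameter from the cumulative distribution of gradient magnitudes rather than by hand) might look like this. The paper builds its estimate from local histograms; the 90% quantile used below is a common heuristic, not the paper's exact rule.

```python
import numpy as np

def adaptive_contrast_parameter(img, quantile=0.9):
    """Pick the PM contrast parameter K as a quantile of the gradient
    magnitude distribution, so K adapts to the image content."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return float(np.quantile(mag, quantile))

def perona_malik_step(img, K, dt=0.2):
    """One explicit PM update with the exponential diffusion coefficient
    g(|grad u|) = exp(-(|grad u| / K)^2)."""
    u = img.astype(float)
    gy, gx = np.gradient(u)
    g = np.exp(-((np.hypot(gx, gy) / max(K, 1e-12)) ** 2))
    # divergence of g * grad u, via component-wise gradients
    dgy, _ = np.gradient(g * gy)
    _, dgx = np.gradient(g * gx)
    return u + dt * (dgx + dgy)
```

On a flat image the gradient is zero everywhere, so the diffusion flux vanishes and the image is left unchanged, as expected of an edge-driven filter.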
Title: Feature Preserving Anisotropic Diffusion for Image Restoration
NCVPRIPG-13. Full paper@IEEE.
DOI Bookmark: 10.1109/NCVPRIPG.2013.6776250
Anisotropic diffusion based schemes are widely used in image smoothing and noise removal. Typically, the partial differential equation (PDE) used is based on computing image gradients or isotropically smoothed version of the gradient image. To improve the denoising capability of such nonlinear anisotropic diffusion schemes, we introduce a multi-direction based discretization along with a selection strategy for choosing the best direction of possible edge pixels. This strategy avoids the directionality based bias which can over-smooth features that are not aligned with the coordinate axis. The proposed hybrid discretization scheme helps in preserving multi-scale features present in the images via selective smoothing of the PDE. Experimental results indicate such an adaptive modification provides improved restoration results on noisy images.
Title: Robust Periocular Recognition by Fusing Local to Holistic Sparse Representations
DOI Bookmark: 10.1145/2523514.2523540
Sparse representations have been advocated as a relevant advance in biometrics research. In this paper a new algorithm for fusion at the data level of sparse representations is proposed, each one obtained from image patches. The main novelties are two-fold: 1) a dictionary fusion scheme is formalised, using L1 minimization with the gradient projection method; 2) the proposed representation and classification method does not require the non-overlapping condition of image patches from which individual dictionaries are obtained. In the experiments, we focused on the recognition of periocular images and obtained independent dictionaries for the eye, eyebrow and skin regions, which were subsequently fused. Results obtained on the publicly available UBIRIS.v2 data set show consistent improvements in recognition effectiveness when compared to state-of-the-art related representation and classification techniques.
We show that an adaptive robust filtering based image segmentation model can be used for microvasculature network detection and quantitative characterization in epifluorescence-based high resolution images. Inhomogeneous fluorescence contrast, due to the variable binding properties of the lectin marker and leakage, hampers accurate vessel segmentation. We use a robust adaptive filtering approach to remove noise and reduce inhomogeneities without destroying small scale vascular structures. An adaptive variance based thresholding method combined with morphological filtering yields an effective detection and segmentation of the vascular network suitable for medial axis estimation. Quantitative parameters of the microvascular network geometry, including curvature, tortuosity, branch segments and branch angles, are computed using post-segmentation medial axis tracing. Experiments using epifluorescence-based high resolution images of porcine and murine microvasculature demonstrate the effectiveness of the proposed approach for quantifying morphological properties of vascular networks.
Title: Multichannel Texture Image Segmentation using Weighted Feature Fitting based Variational Active Contours
DOI Bookmark: 10.1145/2425333.2425411
We study an efficient texture image segmentation model for multichannel images using a local feature fitting based active contours scheme. Using a chromaticity-brightness decomposition, we propose a flexible segmentation approach using multi-channel texture and intensity in a globally convex continuous optimization framework. We make use of local feature histogram based weights with the smoothed gradients from the brightness channel and localized fitting for the chromaticity channels. A fast numerical implementation is described using an efficient dual minimization formulation and experimental results on synthetic and real color images indicate the superior performance of the proposed method compared to related approaches. The novel contributions include the use of local feature density functions in the context of a luminance-chromaticity decomposition combined with a globally convex active contour variational method to capture texture variations for image segmentation.
Title: Segmentation of Breast Cancer Tissue Microarrays for Computer-Aided Diagnosis in Pathology
IEEE HIC-12. Full paper@IEEE.
Computer-Aided Diagnosis (CAD) systems for pathologists can act as an intelligent digital assistant supporting automated grading and morphometric-based discovery of tissue features that are important in cancer diagnosis and patient prognosis. Automated image segmentation is an essential component of computer-based grading in CAD. We describe a novel tissue segmentation algorithm using local feature-based active contours in a globally convex formulation. Preliminary results using the Stanford Tissue MicroArray database show promising stromal/epithelial superpixel segmentation.
Title: Texture Image Segmentation with Smooth Gradients and Local Information
CompIMAGE-12. Full paper@Taylor & Francis.
DOI Bookmark: 10.1201/b12753-28
We study a region based segmentation scheme for images with textures based on the gradient information weighted by local image intensity histograms. It relies on the Chan and Vese model without any edge-detectors, and incorporates a new input term, defined by the product of the smoothed gradient and a local histogram of pixel intensity measure of the input image. Segmentation of images with texture objects is performed by effectively differentiating regions displaying different textural information via the local histogram features. A fast numerical scheme based on the dual formulation of the energy minimization is considered. The performance of the proposed scheme is tested on different natural images which contain texture objects.
Title: Mucosal Region Detection and 3D Reconstruction in Wireless Capsule Endoscopy Videos Using Active Contours
DOI Bookmark: 10.1109/EMBC.2012.6346847
Wireless capsule endoscopy (WCE) provides an inner view of the human digestive system. The inner tubular like structure of the intestinal tract consists of two major regions: the lumen, the intermediate region where the capsule moves, and the mucosa, the membrane lining the lumen cavities. We study the use of the Split Bregman version of the extended active contour model of Chan and Vese for segmenting mucosal regions in WCE videos. Utilizing this segmentation we obtain a 3D reconstruction of the mucosal tissues using a near source perspective shape-from-shading (SfS) technique. Numerical results indicate that the active contour based segmentation provides better segmentations compared to previous methods and in turn gives better 3D reconstructions of mucosal regions.
Title: A Segmentation Model and Application to Endoscopic Images
ICIAR-12. Full paper@Springer LNCS 7325.
DOI Bookmark: 10.1007/978-3-642-31298-4_20
In this paper a variational segmentation model is proposed. It is a generalization of the Chan and Vese model, for the scalar and vector-valued cases. It incorporates extra terms, depending on the image gradient, and aims at approximating the smoothed image gradient norm, inside and outside the segmentation curve, by mean constant values. As a result, a flexible model is obtained. It segments, more accurately, any object displaying many oscillations in its interior. In effect, an external contour of the object, as a whole, is achieved, together with internal contours, inside the object. For determining the approximate solution a Levenberg-Marquardt Newton-type optimization method is applied to the finite element discretization of the model. Experiments on in vivo medical endoscopic images (displaying aberrant colonic crypt foci) illustrate the efficacy of this model.
Title: Weighted Laplacian Differences Based Multispectral Anisotropic Diffusion
DOI Bookmark: 10.1109/IGARSS.2011.6050119
We study a multichannel version of nonlinear diffusion PDE which is used to restore noisy multispectral images. Weighted coupling of interchannel edges is done by utilizing fast total variation for each channel. Anisotropic intrachannel smoothing is included to denoise and preserve edges. Numerical results on noisy multispectral images show the advantage of the proposed hybrid approach.
Wireless capsule endoscopy (WCE) provides an inner view of the human intestinal system, and helps physicians to identify abnormalities. Usually the WCE produces a huge amount of data (which, in terms of video recordings and per patient, is about 8 hours and approximately 55000 frames of RGB color data). This means the examiner needs to sift through each video for a long time. Digital image processing techniques can be used effectively to reduce this burden, by making use of automatic algorithms. The inner tubular like structure of the human large intestinal tract consists of two major regions. One is the lumen, the intermediate region where the capsule moves. The other part is the mucosa, which is a mucus-secreting membrane lining the lumen cavities. Abnormal tissues and lesions, like ulcers and polyps, are usually seen as exterior parts of the mucosa, and the lumen is filled with intestinal juices of the human digestive system. Here we apply a variational model for segmenting mucosa regions in WCE images and videos. A fast implementation using the Split-Bregman method is also applied to segment videos in real time. Results on WCE images and videos show that the approach provides reasonable segmentations in an efficient manner.
A variational segmentation model for images with texture is proposed. It relies on the Chan and Vese model, and incorporates an extended structure tensor of the image, defined by a coupling of the image with its first-order spatial derivatives. The average components of the extended structure tensor are also utilized in the model, by means of their gradients. Moreover, a pre-smoothing of this gradient and the structure tensor is performed, which makes the computations robust to noise. As a result, an effective segmentation model is obtained, which distinguishes and segments objects with texture information. For the numerical approximation of this model, a finite element discretization is used, and a Levenberg-Marquardt-Newton type optimization method is applied. An application of the proposed scheme is to segment colonic polyps from human intestinal system images, captured by wireless capsule endoscopy. We present experiments on some synthetic and wireless capsule endoscopic textured images, for evaluation and validation of the proposed model.
Title: Color Image Segmentation Based on Vectorial Multiscale Diffusion with Inter-scale Linking
PReMI-09. Full paper@Springer LNCS 5909.
DOI Bookmark: 10.1007/978-3-642-11164-8_55
We propose a segmentation scheme for digital color images using vectorial multiscale anisotropic diffusion. By integrating the edge information, diffusion based schemes can remove noise effectively and create a fine to coarse set of images known as a scale-space. Segmentation is performed by effectively tracking edges in an inter-scale manner across this scale space family of images. The regions are connected according to color coherency, and the scale relation along the time axis of the family is taken into account for the final segmentation result. Fast total variation diffusion and anisotropic diffusion facilitate denoising and create homogeneous regions separated by strong edges. They provide a road map for further segmentation with persistent edges and flat regions. The scheme is general in the sense that other anisotropic diffusion schemes can be incorporated depending upon the requirement. Numerical simulations show the advantage of the proposed scheme on noisy color images.
Title: Ringing Artifact Reduction in Blind Image Deblurring and Denoising Problems by Regularization Methods
DOI Bookmark: 10.1109/ICAPR.2009.57
Image deblurring and denoising are the main steps in early vision problems. A common problem in deblurring is the ringing artifacts created by trying to restore the unknown point spread function (PSF). The random noise present makes this task even harder. Variational blind deconvolution methods add a smoothness term for the PSF as well as for the unknown image. These methods can amplify the outliers corresponding to noisy pixels. To remedy these problems we propose the addition of a first order reaction term which penalizes the deviation in gradients. This reduces ringing artifacts in blind image deconvolution. Numerical results show the effectiveness of this additional term in various blind and semi-blind image deblurring and denoising problems.
To mimic human vision is the ultimate goal of computer vision. Early vision refers to problems such as image denoising, selective smoothing and segmentation for partitioning the image into meaningful objects. Partial differential equations (PDEs) are fast becoming a versatile tool for solving such computer vision problems. In particular, early vision problems can be solved effectively and faster by PDE based schemes than by standard techniques presently available. In this paper we present such nonlinear schemes from an image restoration and segmentation point of view. We review and indicate a connection between such multiscale models and adaptive neighborhood, statistical, and geometrical approaches. Finally, a brief overview of state-of-the-art trends in this area is given. We hope to highlight, through this short survey, how ‘classical’ mathematical analysis can help ‘modern’ subjects like computer vision.
Title: Edge Detectors Based Anisotropic Diffusion for Enhancement of Digital Images
ICVGIP-08. (Best paper award). Full paper@IEEE.
DOI Bookmark: 10.1109/ICVGIP.2008.68
Using edge detection techniques, we propose a new enhancement scheme for noisy digital images. It uses an inhomogeneous anisotropic diffusion scheme via the edge indicator provided by well known edge detection methods. The addition of a fidelity term enables the proposed scheme to remove noise while preserving edges. The method is general in the sense that it can be incorporated into any of the nonlinear anisotropic diffusion methods. Numerical results show the promise of this hybrid technique on real and noisy images.
Nonlinear diffusion schemes for image processing were introduced by Perona and Malik (IEEE Trans. Pattern Anal. Mach. Intell., 12(7), 1990) two decades back. Based on image gradients, a special diffusion coefficient function drives the process in an edge preserving, and even enhancing, way. This PDE based approach combines forward diffusion for flat regions and a type of inverse diffusion for edges and features. In this paper we study this famous PDE from the perspective of the diffusion coefficient functions, which are key ingredients for effective results. They act as an edge indicator function to diffuse the image selectively and also introduce the nonlinearity in the PDE. Wrongly classified pixels in one iteration of the PDE can disrupt the restoration process and affect later iterations, and using only gradients for edge estimation can lead to an over-locality problem. To avoid these drawbacks, modifications have been proposed in recent times, such as spatial regularization, adaptive schemes and coupled PDEs. For comparing these approaches we take a real image with known classified edge pixels. The methods are numerically implemented and the results reported. These results show the effectiveness of using such modifications and explain the importance of choosing a proper diffusion function.
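The two classical Perona-Malik diffusion coefficient functions discussed here are easy to state; both equal 1 in flat regions (full diffusion) and decay as the gradient magnitude s grows, but at different rates.

```python
import numpy as np

def g_exp(s, K):
    """PM coefficient 1: exp(-(s/K)^2). Decays quickly, so it favors
    preserving high-contrast edges."""
    return np.exp(-((s / K) ** 2))

def g_frac(s, K):
    """PM coefficient 2: 1 / (1 + (s/K)^2). Decays more slowly, so it
    favors wide homogeneous regions over strong edges."""
    return 1.0 / (1.0 + (s / K) ** 2)
```

For gradients above the contrast parameter K the exponential coefficient falls off much faster than the rational one, which is exactly the trade-off between edge preservation and region smoothing discussed in the abstract.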
In this paper we review some well known robust statistics methods applied to early computer vision problems such as denoising and selective smoothing. Digital images are corrupted by noise and pose a real challenge in identifying the objects for further pattern recognition processes. By using a discontinuity adaptive measure function, robust M-estimators control the smoothing selectively and remove outliers corresponding to noisy pixels. Classical functions like Huber’s minimax and Tukey’s bisquare functions are used and their performance is evaluated on real and noisy images. A related M-estimator function is proposed which has an advantage over other functions in terms of image enhancement and good performance under severe noise levels. A fast optimization using a gradient descent scheme is used to restore images effectively. The proposed energy function combines the advantages of smooth quadratic and total variation type regularizing functions and is theoretically well-posed. This facilitates the optimization in finding the global minimizer and makes the algorithm stable. We conclude the paper indicating possible future directions in this field.
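The Huber and Tukey weight functions mentioned above can be written in their standard forms, with the usual tuning constants from the robust statistics literature:

```python
import numpy as np

def huber_weight(r, c=1.345):
    """Huber minimax weight: weight 1 (quadratic penalty) for small
    residuals, down-weighted linearly beyond the threshold c."""
    r = np.abs(r)
    return np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))

def tukey_weight(r, c=4.685):
    """Tukey bisquare weight: smoothly redescends to exactly zero for
    residuals beyond c, fully rejecting gross outliers."""
    w = (1.0 - (r / c) ** 2) ** 2
    return np.where(np.abs(r) <= c, w, 0.0)
```

The key behavioral difference visible here is that Huber never fully discards an observation, while Tukey's bisquare assigns zero weight to extreme outliers, which is what makes it attractive under severe noise.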
Title: Controlled Inverse Diffusion models for Image Restoration and Enhancement
DOI Bookmark: 10.1109/ICETET.2008.69
A new class of variational-PDE based models is proposed for edge preserving image enhancement. Using the equivalence relation between regularization and PDEs we propose a general class of models which restore noisy images with selective enhancement. It contains a controlled inverse diffusion term which sharpens the image. Experiments show the effectiveness of the model with real and MRI images.