EDGE EXTRACTION OF IMAGES BY RECONSTRUCTION USING WAVELET DECOMPOSITION DETAILS AT DIFFERENT RESOLUTION LEVELS

Author(s): L. Feng, C. Y. Suen, Y. Y. Tang, L. H. Yang

This paper describes a novel method for edge feature detection of document images based on wavelet decomposition and reconstruction. By applying wavelet decomposition, a document image is converted into a wavelet representation, i.e. the image is decomposed into a set of wavelet approximation coefficients and wavelet detail coefficients. The approximation is then discarded, and edge extraction is carried out by wavelet reconstruction from the detail coefficients alone. Since frequency overlap occurs between the wavelet approximation and the wavelet details, a multiresolution edge extraction based on an iterative reconstruction procedure is developed to improve the quality of the reconstructed edges in this case. Combining these multiresolution edges yields clear final edges of the document images. The multiresolution reconstruction procedure follows a coarse-to-fine search strategy. The edge feature extraction is accompanied by an energy distribution estimation, from which the number of wavelet decomposition levels is adaptively controlled. Compared with the wavelet transform scheme, our method incurs no redundant operations; therefore, its computational time and memory requirements are lower than those of the wavelet transform.
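The core idea (decompose, zero out the approximation, reconstruct only the details) can be sketched with a hand-rolled one-level 2-D Haar transform; this is a minimal illustration, not the paper's specific wavelet or its iterative multiresolution procedure:

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar decomposition into approximation (LL)
    and detail (LH, HL, HH) sub-bands; image sides assumed even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    """Exact inverse of haar_decompose."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def wavelet_edges(img):
    """Discard the approximation, keep the details, reconstruct:
    the result responds only to local intensity changes (edges)."""
    ll, lh, hl, hh = haar_decompose(img)
    return haar_reconstruct(np.zeros_like(ll), lh, hl, hh)
```

On a constant region the reconstruction is exactly zero; wherever intensity changes, the detail coefficients survive and mark an edge.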

2019, Vol 8 (S1), pp. 50-53
Author(s): N. P. Revathy, S. Janarthanam, S. Sukumaran

Document images are increasingly common and are made available over the internet for information retrieval. Retrieval from document images is a more difficult task than retrieval from digital text, and edge detection is an important step in document image retrieval. Edge detection refers to the process of finding sharp discontinuities of characters in document images. Because single edge-detection methods suffer from weak gradients and missing edges, this work combines global and local edge detection to extract edges. The global edge detection obtains the overall edges using an improved adaptive smoothing filter algorithm based on the Canny operator. This combination increases detection efficiency and reduces computational time. In addition, the proposed algorithm has been tested in a real-time document retrieval system to detect edges in an unstructured environment and generate 2D maps. These maps contain the starting and destination points in addition to the current positions of the objects. The proposed work enhances the search capability of document retrieval, moving it towards the optimal solution, and its capability is verified in terms of detection efficiency.
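The idea of combining a global threshold (whole-image statistics) with local, per-block thresholds can be sketched as follows. This is a simplified stand-in: the Sobel gradient replaces the paper's adaptive smoothing filter and Canny operator, and the threshold rules (mean plus a multiple of the standard deviation) are illustrative assumptions:

```python
import numpy as np

def sobel_gradient(img):
    """Gradient magnitude via the 3x3 Sobel kernels (edge-padded)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.hypot(gx, gy)

def combined_edges(img, k_global=1.5, block=8, k_local=1.0):
    """Union of a global threshold and per-block local thresholds,
    so edges with weak gradients that are still locally salient survive."""
    g = sobel_gradient(img)
    global_mask = g > g.mean() + k_global * g.std()
    local_mask = np.zeros_like(g, dtype=bool)
    for i in range(0, g.shape[0], block):
        for j in range(0, g.shape[1], block):
            w = g[i:i + block, j:j + block]
            local_mask[i:i + block, j:j + block] = w > w.mean() + k_local * w.std()
    return global_mask | local_mask
```

The union of the two masks is what lets the local pass recover weak edges that the single global threshold would miss.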


2020, Vol 64 (3), pp. 30401-1-30401-14
Author(s): Chih-Hsien Hsia, Ting-Yu Lin, Jen-Shiun Chiang

Abstract In recent years, the preservation of handwritten historical documents and scripts archived as digitized images has gradually been emphasized. However, depending on the thickness of the paper selected for printing or writing, the content of the back page is likely to seep into the front page. In order to solve this, a cost-efficient document image system is proposed. In this system, the authors use the Adaptive Directional Lifting-Based Discrete Wavelet Transform to transform image data from the spatial domain to the frequency domain and process the high- and low-frequency sub-bands separately. For the low frequencies, the authors use a local threshold to remove most background information. For the high frequencies, they use a modified Least Mean Square training algorithm to produce a unique weighted mask, which is convolved with each frequency sub-band. Afterward, the Inverse Adaptive Directional Lifting-Based Discrete Wavelet Transform is performed to reconstruct the four sub-band images into a resulting image of the original size. Finally, a global binarization method, Otsu's method, is applied to transform the grayscale image into a binary image as the output. The results show that the difference in operation time between a personal computer (PC) and a Raspberry Pi is small. Therefore, the proposed cost-efficient document image system running on the Raspberry Pi embedded platform has the same performance and obtains the same results as on a PC.
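The final step of the pipeline, Otsu's global binarization, is a standard algorithm and can be implemented directly: pick the gray level that maximizes the between-class variance of the foreground/background split. A minimal NumPy sketch (assuming 8-bit integer input):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximizing between-class
    variance, computed from the 256-bin histogram of an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability
    mu = np.cumsum(prob * np.arange(256))    # class-0 cumulative mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0     # empty classes contribute nothing
    return int(np.argmax(sigma_b))

def binarize(gray):
    """Threshold the grayscale image into a binary 0/1 image."""
    t = otsu_threshold(gray)
    return (gray > t).astype(np.uint8)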


Author(s): Tu Huynh-Kha, Thuong Le-Tien, Synh Ha, Khoa Huynh-Van

This research work develops a new method to detect image forgery by combining the wavelet transform and modified Zernike moments (MZMs), in which the features are defined from more pixels than in traditional Zernike moments. The tested image is first converted to grayscale, and a one-level Discrete Wavelet Transform (DWT) is applied to halve the image size in both dimensions. The approximation sub-band (LL), which is used for processing, is then divided into overlapping blocks, and modified Zernike moments are calculated in each block as feature vectors. The more pixels are considered, the richer the extracted features. Lexicographical sorting and correlation-coefficient computation on the feature vectors are the next steps to find similar blocks. The purpose of applying the DWT to reduce the dimension of the image before using Zernike moments with updated coefficients is to improve the computational time and increase detection accuracy. Copied or duplicated parts are detected as traces of copy-move forgery based on a threshold on the correlation coefficients and confirmed by a Euclidean-distance constraint. Comparison results between the proposed method and related ones demonstrate the feasibility and efficiency of the proposed algorithm.
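The block-matching stage (overlapping blocks, lexicographic sorting, correlation threshold, distance constraint) can be sketched as below. Raw pixel values stand in for the paper's modified Zernike moments, and the thresholds are illustrative assumptions, so this only demonstrates the matching machinery, not the actual feature:

```python
import numpy as np

def block_features(ll, bsize=4):
    """Slide an overlapping window over the (sub-band) image and flatten
    each block into a feature vector; stand-in for modified Zernike moments."""
    feats, coords = [], []
    h, w = ll.shape
    for i in range(h - bsize + 1):
        for j in range(w - bsize + 1):
            feats.append(ll[i:i + bsize, j:j + bsize].ravel())
            coords.append((i, j))
    return np.array(feats), coords

def find_duplicates(ll, bsize=4, corr_thresh=0.999, min_dist=4):
    """Sort feature vectors lexicographically, compare only neighbours
    in sorted order, and report highly correlated block pairs that are
    far enough apart (Euclidean-distance constraint)."""
    feats, coords = block_features(ll, bsize)
    order = np.lexsort(feats.T[::-1])       # lexicographic row sort
    pairs = []
    for a, b in zip(order[:-1], order[1:]):
        (i1, j1), (i2, j2) = coords[a], coords[b]
        if (i1 - i2) ** 2 + (j1 - j2) ** 2 < min_dist ** 2:
            continue                         # too close: likely smooth texture
        c = np.corrcoef(feats[a], feats[b])[0, 1]
        if c > corr_thresh:
            pairs.append((coords[a], coords[b]))
    return pairs
```

Sorting makes near-identical blocks adjacent, so only neighbouring pairs need the (expensive) correlation test rather than all block pairs.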


2021, Vol 2021 (1)
Author(s): Wei Xiong, Lei Zhou, Ling Yue, Lirong Li, Song Wang

Abstract Binarization plays an important role in document analysis and recognition (DAR) systems. In this paper, we present our winning algorithm in the ICFHR 2018 competition on handwritten document image binarization (H-DIBCO 2018), which is based on background estimation and energy minimization. First, we adopt mathematical morphological operations to estimate and compensate for the document background, using a disk-shaped structuring element whose radius is computed by the minimum entropy-based stroke width transform (SWT). Second, we perform Laplacian energy-based segmentation on the compensated document images. Finally, we apply post-processing to preserve text stroke connectivity and eliminate isolated noise. Experimental results indicate that the proposed method outperforms other state-of-the-art techniques on several publicly available benchmark datasets.
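The background-estimation step can be illustrated with a morphological closing using a disk structuring element: a disk wider than the stroke width wipes out dark text, leaving only the page background, which is then subtracted out. This naive NumPy sketch uses a fixed radius rather than the paper's SWT-derived one:

```python
import numpy as np

def disk(radius):
    """Boolean disk-shaped structuring element."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def grey_op(img, selem, op):
    """Naive grey-scale dilation (op=np.max) or erosion (op=np.min)."""
    r = selem.shape[0] // 2
    pad = np.pad(img.astype(float), r, mode="edge")
    ys, xs = np.nonzero(selem)
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = op(pad[i + ys, j + xs])
    return out

def estimate_background(img, radius=3):
    """Closing (dilation then erosion) with a disk larger than the
    stroke width removes dark text, leaving the background estimate."""
    d = grey_op(img, disk(radius), np.max)
    return grey_op(d, disk(radius), np.min)

def compensate(img, radius=3):
    """Subtract the image from its estimated background, so dark
    strokes become large positive values on a flat zero page."""
    return estimate_background(img, radius) - img
```

In practice one would use a vectorized morphology routine; the loops here just make the window operation explicit.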


Author(s): Alla Levina, Sergey Taranov

The theory of the wavelet transform is a powerful tool for image and video processing, and the mathematical concepts of the wavelet transform and filter banks have been studied carefully in many works. This work presents a new construction of linear and robust codes based on wavelet decomposition and its application to ADV612 chips. We present a model of the error-coding scheme that detects errors in the ADV612 chips with high probability, and we show that the developed protection scheme drastically improves the resistance of ADV612 chips to malfunctions and errors.
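The general error-detection mechanism of a linear code is syndrome checking: a received word is valid exactly when its syndrome under the parity-check matrix is zero. The sketch below uses the (7,4) Hamming code over GF(2) purely as a toy stand-in; the paper's wavelet-based code construction is different:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (toy example only).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(word):
    """Syndrome s = H @ w over GF(2); s == 0 for every valid codeword."""
    return H.dot(word) % 2

def has_error(word):
    """An injected fault is detected whenever the syndrome is non-zero."""
    return bool(syndrome(word).any())
```

Robust codes extend this idea so that the fraction of undetected error patterns is small for every fixed error, not just for random ones.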


2012, Vol 562-564, pp. 1394-1397
Author(s): Yu Hua Dong, Hai Chun Ning

This paper proposes a method combining the wavelet transform with Singular Value Decomposition (SVD) and studies the elimination of abnormal data in trajectory measurements. After wavelet decomposition of the observed data, the approximation and detail components are combined to reconstruct the phase space. The singular-entropy increment criterion is applied to the observation matrix input to the SVD in order to select the singular values, and the original signal is then reconstructed by the inverse SVD transform. This method overcomes the distortion problem at the data ends in phase-space reconstruction with a Hankel matrix. Because the phase space reconstructed from the wavelet-decomposition components is orthogonal, the method further improves the accuracy of SVD-based noise reduction and anomaly detection. The results of processing experimental data show the effectiveness of the proposed method.
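The SVD denoising step can be sketched on its own: embed the signal in a trajectory (Hankel) matrix, truncate the singular values, and map back by anti-diagonal averaging. This is the classical Hankel/SVD scheme the paper improves upon; the fixed rank used here replaces the paper's singular-entropy increment criterion:

```python
import numpy as np

def hankel_matrix(x, rows):
    """Embed a 1-D signal into a Hankel (trajectory) matrix."""
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

def svd_denoise(x, rows=10, rank=2):
    """Keep only the largest singular values of the trajectory matrix,
    then average over anti-diagonals to return to a 1-D signal."""
    H = hankel_matrix(np.asarray(x, float), rows)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s[rank:] = 0.0                       # truncate the noise subspace
    Hr = (U * s) @ Vt
    n = len(x)
    out = np.zeros(n); count = np.zeros(n)
    for i in range(Hr.shape[0]):         # anti-diagonal averaging
        for j in range(Hr.shape[1]):
            out[i + j] += Hr[i, j]; count[i + j] += 1
    return out / count
```

A noisy sinusoid has a rank-2 trajectory matrix, so truncating to rank 2 suppresses most of the noise energy.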


Author(s): Yung-Kuan Chan, Tung-Shou Chen, Yu-An Ho

With the rapid progress of digital image technology, the management of duplicate document images has also received wide attention. This paper therefore proposes a duplicate Chinese document image retrieval (DCDIR) system, which uses, as the feature of a character image block, the ratio of the number of black pixels to the number of white pixels on scan-line segments within the block. Experimental results indicate that the system can indeed effectively and quickly retrieve the desired duplicate Chinese document image from a database.
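The black-to-white pixel ratio feature can be sketched directly. The number and placement of scan lines are illustrative assumptions (the paper does not specify them here); the block is assumed binary, with 1 for black ink and 0 for white paper:

```python
import numpy as np

def scanline_ratios(block, n_lines=4):
    """Feature of a character image block: on each of n_lines evenly
    spaced horizontal scan lines, the ratio of black pixels to white
    pixels (block is binary: 1 = black ink, 0 = white paper)."""
    rows = np.linspace(0, block.shape[0] - 1, n_lines).astype(int)
    ratios = []
    for r in rows:
        black = int(block[r].sum())
        white = block.shape[1] - black
        ratios.append(black / white if white else float(black))
    return ratios
```

Two scans of the same character produce similar ratio vectors, so comparing these short vectors is much cheaper than comparing the pixel blocks themselves.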


2015, pp. 1295-1318
Author(s): Robert Keefer, Nikolaos Bourbakis

Page layout analysis and the creation of an XML document from a document image are useful for many applications, including the preservation of archived documents, robust electronic access to printed documents, and access to print materials by the visually impaired. In this paper, the authors describe a document image processing pipeline comprising techniques for identifying article headings and the related body text, aggregating the body text with the headings, and creating an XML document. The pipeline was developed to process multiple document images captured by the head-mounted cameras of a reading device for the visually impaired. Both automatic and manual variants of the pipeline processed a sample of 25 newspaper document images. By comparing the automatic and manual processes, we show that overall our approach generates high-quality XML-encoded documents for use in further processing, such as text-to-speech for the visually impaired.
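The final aggregation step, turning (heading, body text) pairs into an XML document, can be sketched with the standard library; the element names here are illustrative, not the paper's actual schema:

```python
import xml.etree.ElementTree as ET

def articles_to_xml(articles):
    """Aggregate (heading, body_paragraphs) pairs, as produced by a
    layout-analysis stage, into a simple XML document string."""
    root = ET.Element("page")
    for heading, paragraphs in articles:
        art = ET.SubElement(root, "article")
        ET.SubElement(art, "heading").text = heading
        body = ET.SubElement(art, "body")
        for p in paragraphs:
            ET.SubElement(body, "p").text = p
    return ET.tostring(root, encoding="unicode")
```

A downstream text-to-speech stage can then walk the `article` elements in reading order, speaking each heading before its body.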

