Capturing Data from Displays by Machine Vision
With the emergence of eHealth, the demand for keeping digital personal health records is rising quickly. Many current health-assessment devices present values to the user with no means of saving the data digitally. This paper presents a method that translates the numeric displays of such devices directly into digital records using machine vision. A wireless machine vision system is designed to image the display, and a recognition algorithm based on SIFT (Scale Invariant Feature Transform) is developed to read the numerals from the captured images. First, a local camera captures an image of the display and transfers it wirelessly to a remote computer, which converts the image to gray-scale and binary form for further processing. Next, the computer applies the watershed segmentation algorithm to divide the image into regions containing individual digits. Finally, SIFT features are extracted from the segmented regions in sequence and matched, one by one, against the SIFT features of the ten standard digits 0 to 9 to recognize the numbers shown on the device's display. The proposed approach obtains the data directly from the display quickly and accurately, with high tolerance to environmental conditions: numeral recognition achieves over 99.2% accuracy, and an image is processed in less than one second. The method has been applied in the E-health Station, a physiological-parameter measuring system that integrates a variety of commercial instruments, such as an OMRON digital thermometer, oximeter, sphygmomanometer, glucometer, and fat monitor, to provide a more complete physiological health assessment.
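The sketch below illustrates the described pipeline (gray-scale conversion, binarization, watershed segmentation, and SIFT matching against digit templates) in Python with OpenCV. It is a minimal approximation, not the authors' implementation: the template file names, the Otsu binarization, the distance-transform-seeded watershed recipe, the area threshold, and the ratio-test vote used to score each candidate digit are all illustrative assumptions.

```python
# Minimal sketch of the display-reading pipeline described in the abstract.
# All file names, thresholds, and the match-scoring rule are assumptions.
import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def load_digit_templates():
    """SIFT descriptors of the ten reference digits 0-9 (template images assumed)."""
    templates = {}
    for d in range(10):
        img = cv2.imread(f"digit_{d}.png", cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        templates[d] = desc
    return templates

def segment_digits(display_bgr):
    """Gray-scale, binarize, and split the display image into digit regions."""
    gray = cv2.cvtColor(display_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Marker-based watershed: sure foreground from the distance transform,
    # sure background from dilation; the band in between is left unknown (0).
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.4 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
    unknown = cv2.subtract(sure_bg, sure_fg)
    n, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1            # background seed becomes 1, digits 2..n
    markers[unknown == 255] = 0      # unknown band resolved by the watershed
    markers = cv2.watershed(display_bgr, markers)
    regions = []
    for label in range(2, n + 1):
        ys, xs = np.where(markers == label)
        if len(xs) < 50:             # discard noise specks (assumed threshold)
            continue
        patch = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        regions.append((xs.min(), patch))
    regions.sort(key=lambda r: r[0]) # left-to-right reading order
    return [r[1] for r in regions]

def recognize(display_bgr, templates):
    """Return the digits read from a captured display image."""
    digits = []
    for patch in segment_digits(display_bgr):
        _, desc = sift.detectAndCompute(patch, None)
        if desc is None:
            continue
        best_digit, best_score = None, -1
        for d, tmpl_desc in templates.items():
            if tmpl_desc is None:
                continue
            # Score each reference digit by its number of good ratio-test matches.
            pairs = matcher.knnMatch(desc, tmpl_desc, k=2)
            good = [p[0] for p in pairs
                    if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
            if len(good) > best_score:
                best_digit, best_score = d, len(good)
        digits.append(best_digit)
    return digits
```

In this sketch the camera and wireless transfer are assumed to have already delivered `display_bgr` to the remote computer; only the image-processing and recognition stages are shown.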