A Real-Time Microscopic PIV System Using Frame-Straddling High-Frame-Rate Vision

2013 ◽  
Vol 25 (4) ◽  
pp. 586-595 ◽  
Author(s):  
Motofumi Kobatake ◽  
Tadayoshi Aoyama ◽  
Takeshi Takaki ◽  
Idaku Ishii

In this paper, we propose a novel concept of real-time microscopic particle image velocimetry (PIV) for apparently high-speed microchannel flows in lab-on-a-chip (LOC) devices. We introduce a frame-straddling dual-camera high-speed vision system that synchronizes two camera inputs for the same camera view with a submicrosecond time delay. To improve the upper and lower limits of measurable velocity in microchannel flow observation, we designed an improved gradient-based optical flow algorithm that adaptively selects the pair of images with the optimal frame-straddling time between the two camera inputs, based on the amplitude of the estimated optical flow. This avoids the large inter-frame image displacements that often cause serious errors in optical flow estimation. Our method is implemented in software on a frame-straddling dual-camera high-speed vision platform that captures real-time video, processes 512 × 512 pixel images at 2000 fps for the two camera heads, and controls the frame-straddling time delay between them from 0 to 0.25 ms in 9.9 ns steps. Our microscopic PIV system with frame-straddling dual-camera high-speed vision estimates the velocity distribution of high-speed microchannel flow at 1 × 10^8 pixels/s or more. Results of experiments on real microscopic flows in microchannels thousands of micrometers wide on LOCs verify the performance of the real-time microscopic PIV system we developed.
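
As a rough illustration of the adaptive pair-selection rule described above, the following sketch estimates the flow amplitude with a single whole-image gradient-based (Lucas–Kanade-style) least-squares solve and picks a straddling delay that keeps the inter-frame displacement near a target value. The delay bounds follow the hardware figures quoted in the abstract; `target_disp` and the global solve are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def gradient_flow_amplitude(img0, img1, dt):
    """Mean flow speed (pixels/s) between two frames dt seconds apart,
    via one global least-squares gradient (Lucas-Kanade-style) solve."""
    img0 = img0.astype(np.float64)
    img1 = img1.astype(np.float64)
    Ix = np.gradient(img0, axis=1).ravel()   # spatial gradients
    Iy = np.gradient(img0, axis=0).ravel()
    It = (img1 - img0).ravel()               # temporal gradient
    A = np.stack([Ix, Iy], axis=1)
    u, v = np.linalg.lstsq(A, -It, rcond=None)[0]  # pixels per pair
    return np.hypot(u, v) / dt                     # pixels per second

def select_straddle_delay(img0, img1, dt, d_min=9.9e-9, d_max=0.25e-3,
                          target_disp=2.0):
    """Choose a straddling delay that keeps displacement near target_disp
    pixels, clamped to the 0.25 ms range / 9.9 ns step of the hardware.
    target_disp is a hypothetical tuning parameter."""
    speed = gradient_flow_amplitude(img0, img1, dt)
    if speed == 0:
        return d_max
    return float(np.clip(target_disp / speed, d_min, d_max))
```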

2011 ◽  
Vol 23 (1) ◽  
pp. 53-65 ◽  
Author(s):  
Yao-Dong Wang ◽ 
Idaku Ishii ◽  
Takeshi Takaki ◽  
Kenji Tajima ◽  
...  

This paper introduces a high-speed vision system called IDP Express, which can execute real-time image processing and High-Frame-Rate (HFR) video recording simultaneously. In IDP Express, 512 × 512 pixel images from two camera heads, along with the results processed on a dedicated FPGA (Field-Programmable Gate Array) board, are transferred to standard PC memory at a rate of 1000 fps or more. Owing to this simultaneous HFR video processing and recording, IDP Express can be used as an intelligent video-logging system for long-term analysis of high-speed phenomena. In this paper, a real-time abnormal-behavior detection algorithm was implemented on IDP Express to capture HFR videos of the crucial moments of unpredictable abnormal behaviors in high-speed periodic motions. Several experiments were performed on a high-speed slider machine operating repetitively at 15 Hz, and videos of the abnormal behaviors were automatically recorded to verify the effectiveness of our intelligent HFR video-logging system.
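
The abstract does not spell out the detection algorithm; below is a minimal sketch of one plausible scheme for flagging abnormal frames in a periodic motion, assuming a phase-binned mean-reference model (the bin count and threshold are hypothetical). At 2000 fps and a 15 Hz machine cycle, one period spans roughly 133 frames.

```python
import numpy as np

class PeriodicAnomalyDetector:
    """Learn a mean reference image per phase bin over training periods,
    then flag frames that deviate too far from the same-phase reference.
    A sketch only; not the paper's actual algorithm."""

    def __init__(self, frames_per_period, threshold=12.0):
        self.n = frames_per_period          # e.g., ~133 at 2000 fps, 15 Hz
        self.threshold = threshold          # hypothetical deviation limit
        self.ref_sum = None
        self.counts = np.zeros(frames_per_period)

    def train(self, frame, index):
        phase = index % self.n
        f = frame.astype(np.float64)
        if self.ref_sum is None:
            self.ref_sum = np.zeros((self.n,) + f.shape)
        self.ref_sum[phase] += f
        self.counts[phase] += 1

    def is_abnormal(self, frame, index):
        phase = index % self.n
        ref = self.ref_sum[phase] / self.counts[phase]
        err = np.mean(np.abs(frame.astype(np.float64) - ref))
        return err > self.threshold   # True would trigger HFR recording
```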


2012 ◽  
Vol 24 (4) ◽  
pp. 686-698 ◽  
Author(s):  
Lei Chen ◽  
Hua Yang ◽  
Takeshi Takaki ◽  
Idaku Ishii

In this paper, we propose a novel method for accurate real-time optical flow estimation of both high-speed and low-speed moving objects based on High-Frame-Rate (HFR) videos. We introduce a multiframe-straddling function to select several pairs of images with different frame intervals from an HFR image sequence, even when the estimated optical flow must be output at standard video rates (NTSC at 30 fps or PAL at 25 fps). The multiframe-straddling function can remarkably improve the measurable range of velocities in optical flow estimation without heavy computation, by adaptively selecting a small frame interval for high-speed objects and a large frame interval for low-speed objects. On the basis of the relationship between the frame intervals and the accuracies of the optical flows estimated by the Lucas–Kanade method, we devise a method to determine multiple frame intervals for optical flow estimation and to select an optimal interval from among them according to the amplitude of the estimated optical flow. Our method was implemented in software on a high-speed vision platform, IDP Express. The estimated optical flows were accurately output at 40 ms intervals in real time using three pairs of 512 × 512 images; these pairs were selected by frame-straddling a 2000-fps video with intervals of 0.5, 1.5, and 5 ms. Several experiments on high-speed movements verified that our method remarkably improves the measurable range of velocities in optical flow estimation, compared with optical flows estimated from 25-fps videos using the Lucas–Kanade method.
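
A hedged sketch of the interval-selection idea, using OpenCV's pyramidal Lucas–Kanade tracker over detected corners: estimate the median displacement for each candidate interval and keep the largest interval whose displacement stays inside a workable band. The band limits and the median statistic are illustrative assumptions.

```python
import cv2
import numpy as np

# The paper's experiment straddles a 2000-fps video at these intervals.
INTERVALS_MS = [0.5, 1.5, 5.0]

def flow_amplitude(img0, img1):
    """Median Lucas-Kanade flow magnitude (pixels) over tracked corners."""
    pts = cv2.goodFeaturesToTrack(img0, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts, None)
    ok = status.ravel() == 1
    if not ok.any():
        return 0.0
    return float(np.median(np.linalg.norm((nxt - pts)[ok], axis=2)))

def select_interval(frames_by_interval, lo=1.0, hi=8.0):
    """Keep the largest interval whose displacement lies in [lo, hi] px:
    small intervals suit fast motion, large intervals slow motion.
    frames_by_interval maps interval_ms -> (img0, img1)."""
    best = min(INTERVALS_MS)
    for ms in sorted(INTERVALS_MS):
        if lo <= flow_amplitude(*frames_by_interval[ms]) <= hi:
            best = ms
    return best
```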


2018 ◽  
Vol 30 (1) ◽  
pp. 117-127
Author(s):  
Xianwu Jiang ◽  
Qingyi Gu ◽  
Tadayoshi Aoyama ◽  
Takeshi Takaki ◽  
Idaku Ishii ◽  
...  

In this study, we develop a real-time high-frame-rate vision system with frame-by-frame automatic exposure (AE) control that can simultaneously synthesize multiple images with different exposure times into a high-dynamic-range (HDR) image, for scenarios with dynamic changes in illumination. By accelerating the video capture and processing for time-division multithread AE control to the millisecond level, the proposed system can virtually function as multiple AE cameras with different exposure times. The system captures color HDR images of 512 × 512 pixels in real time at 500 fps by synthesizing four 8-bit color images with different exposure times from consecutive frames, captured at 2 ms intervals, with pixel-level parallel processing accelerated by a GPU (Graphics Processing Unit) board. Several experimental results for scenarios with large changes in illumination confirm the performance of the proposed system for real-time HDR imaging.
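
For intuition, here is a minimal sketch of multi-exposure HDR fusion with a triangular ("hat") weighting that favors well-exposed pixels; the exposure times and the weighting function are illustrative assumptions, not the paper's GPU implementation.

```python
import numpy as np

def synthesize_hdr(images, exposures_ms):
    """Fuse 8-bit images taken at different exposure times into an HDR
    radiance map: each pixel's radiance estimate (value / exposure time)
    is averaged with a hat weight peaking at mid-gray."""
    num, den = 0.0, 0.0
    for img, t in zip(images, exposures_ms):
        z = img.astype(np.float64)
        w = 1.0 - np.abs(z - 127.5) / 127.5 + 1e-6   # hat weight, > 0
        num += w * (z / t)
        den += w
    return num / den

# Usage with four consecutive 500-fps frames and hypothetical exposures:
# hdr = synthesize_hdr([f0, f1, f2, f3], [0.125, 0.25, 0.5, 1.0])
```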


2005 ◽  
Vol 17 (2) ◽  
pp. 121-129 ◽  
Author(s):  
Yoshihiro Watanabe ◽  
Takashi Komuro ◽  
Shingo Kagami ◽  
Masatoshi Ishikawa

Real-time image processing at high frame rates can play an important role in various visual measurement tasks. Such processing can be realized with a high-speed vision system that images at high frame rates and runs appropriate algorithms at high speed. We introduce a vision chip for high-speed vision and propose a multi-target tracking algorithm that exploits its unique features. We describe two visual measurement applications: target counting and rotation measurement. Both achieve excellent measurement precision and high flexibility because of the achievable high-frame-rate visual observation. Experimental results show the advantages of vision chips over conventional vision systems.
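
The abstract does not detail the tracking algorithm; as one common approach on such vision chips, the sketch below uses the self-windowing idea, where each target's previous region, slightly dilated, is intersected with the current binary image. This is an assumption here, relying on the small frame-to-frame motion of high-frame-rate imaging.

```python
import numpy as np
from scipy import ndimage

def track_targets(prev_labels, curr_binary, search_radius=2):
    """Re-find each labeled target by intersecting its dilated previous
    region with the current binary image (self-window idea; a sketch,
    not the chip's exact algorithm)."""
    struct = np.ones((2 * search_radius + 1,) * 2, dtype=bool)
    new_labels = np.zeros_like(prev_labels)
    for tid in range(1, prev_labels.max() + 1):
        window = ndimage.binary_dilation(prev_labels == tid,
                                         structure=struct)
        new_labels[window & curr_binary.astype(bool)] = tid
    return new_labels

def count_targets(binary_image):
    """Target counting by connected-component labeling."""
    _, n = ndimage.label(binary_image)
    return n
```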


Author(s):  
Chauncey F. Graetzel ◽  
Steven N. Fry ◽  
Felix Beyeler ◽  
Yu Sun ◽  
Bradley J. Nelson

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5368
Author(s):  
Atul Sharma ◽  
Sushil Raut ◽  
Kohei Shimasaki ◽  
Taku Senoo ◽  
Idaku Ishii

This study develops a projector–camera-based visible light communication (VLC) system for real-time broadband video streaming, in which a high-frame-rate (HFR) projector encodes and projects a color input video sequence as binary image patterns modulated at thousands of frames per second, and an HFR vision system captures and decodes these binary patterns back into the input color video sequence with real-time video processing. To maximize utilization of the HFR projector's high-throughput transmission ability, we introduce a projector–camera VLC protocol in which a multi-level color video sequence is binary-modulated with a Gray code for encoding and decoding, instead of pure-binary-code modulation. Gray code encoding is introduced to address the ambiguity caused by mismatched pixel alignments along intensity gradients between the projector and the vision system. Our proposed VLC system consists of an HFR projector, which can project 590 × 1060 binary images at 1041 fps via HDMI streaming, and a monochrome HFR camera system, which can capture and process 12-bit 512 × 512 images in real time at 3125 fps; the system can simultaneously decode and reconstruct 24-bit RGB video sequences at 31 fps, including an error-correction process. The effectiveness of the proposed VLC system was verified through several experiments streaming offline and live video sequences.
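
The binary-reflected Gray code at the heart of the protocol is standard: adjacent intensity levels differ in exactly one bit, so a small pixel misalignment along a gradient corrupts at most one bit plane. A minimal sketch:

```python
def to_gray(value: int) -> int:
    """Binary-reflected Gray code: neighboring values differ in one bit."""
    return value ^ (value >> 1)

def from_gray(gray: int) -> int:
    """Invert the Gray code by XOR-folding the shifted value."""
    value = 0
    while gray:
        value ^= gray
        gray >>= 1
    return value

# Each 8-bit color plane would be Gray-coded before being sliced into
# the binary patterns the HFR projector displays, then decoded after
# capture; the round trip is lossless.
assert all(from_gray(to_gray(v)) == v for v in range(256))
```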


Author(s):  
Satoshi Hoshino ◽  
Kyohei Niimura

Mobile robots equipped with camera sensors are required to perceive humans and their actions for safe autonomous navigation. For simultaneous human detection and action recognition, the real-time performance of the robot vision is an important issue. In this paper, we propose a robot vision system in which the original images captured by a camera sensor are described by optical flow; these flow images are then used as inputs for human and action classification. For these image inputs, two classifiers based on convolutional neural networks are developed. Moreover, we describe a novel detector (a local search window) for clipping partial images around the target human from the original image. Since the camera sensor moves together with the robot, the camera movement influences the optical flow calculated in the image; we address this by further modifying the optical flow to compensate for changes caused by the camera movement. Through experiments, we show that the robot vision system can detect humans and recognize their actions in real time. Furthermore, we show that a moving robot can achieve human detection and action recognition by modifying the optical flow.
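
A minimal sketch of the camera-motion correction idea: estimate dense flow, take the median vector over the frame as the component induced by the robot's own movement, and subtract it so the residual flow highlights independently moving people. Farneback flow is used here as a stand-in (the paper does not name its flow method), and median-as-global-motion is an assumption.

```python
import cv2
import numpy as np

def ego_compensated_flow(prev_gray, curr_gray):
    """Dense optical flow minus the frame-wide median vector, taken as
    an estimate of the flow induced by the moving camera."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    camera_motion = np.median(flow.reshape(-1, 2), axis=0)
    return flow - camera_motion   # residual flow of moving targets
```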


2015 ◽  
Vol 27 (1) ◽  
pp. 12-23 ◽  
Author(s):  
Qingyi Gu ◽  
Sushil Raut ◽  
Ken-ichi Okumura ◽  
Tadayoshi Aoyama ◽  
...  

[Figure: Synthesized panoramic images]
In this paper, we propose a real-time image mosaicing system that uses a high-frame-rate video sequence. Our proposed system can mosaic 512 × 512 color images captured at 500 fps into a single synthesized panoramic image in real time by stitching the images based on their estimated frame-to-frame changes in displacement and orientation. In the system, feature point extraction is accelerated by a parallel processing circuit module for Harris corner detection, and hundreds of selected feature points in the current frame can be simultaneously matched with those in their neighboring ranges in the previous frame, on the assumption that frame-to-frame image displacement becomes small in high-speed vision. The efficacy of our system for feature-based real-time image mosaicing at 500 fps was verified by implementing it on a field-programmable gate array (FPGA)-based high-speed vision platform and conducting several experiments: (1) capturing an indoor scene using a camera mounted on a fast-moving two-degree-of-freedom active vision system, and (2) capturing an outdoor scene using a hand-held camera moved rapidly and periodically by hand.
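
A sketch of the high-frame-rate matching shortcut the abstract describes: because consecutive 500-fps frames move only a few pixels, each Harris corner in the current frame is matched by patch SSD only within a small neighborhood of the same location in the previous frame, and the median match vector approximates the frame-to-frame displacement. The cost function, point selection, and window sizes here are illustrative assumptions.

```python
import cv2
import numpy as np

def frame_displacement(prev, curr, radius=8, patch=5, n_points=200):
    """Median (dx, dy) of small-window SSD matches at Harris corners."""
    resp = cv2.cornerHarris(curr.astype(np.float32), 2, 3, 0.04)
    ys, xs = np.unravel_index(np.argsort(resp.ravel())[-n_points:],
                              resp.shape)
    h = patch // 2
    shifts = []
    for y, x in zip(ys, xs):
        if not (radius + h <= y < curr.shape[0] - radius - h and
                radius + h <= x < curr.shape[1] - radius - h):
            continue                      # skip corners near the border
        tmpl = curr[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
        best, best_dxy = None, (0, 0)
        for dy in range(-radius, radius + 1):     # exhaustive local search
            for dx in range(-radius, radius + 1):
                cand = prev[y + dy - h:y + dy + h + 1,
                            x + dx - h:x + dx + h + 1].astype(np.float64)
                ssd = float(np.sum((tmpl - cand) ** 2))
                if best is None or ssd < best:
                    best, best_dxy = ssd, (dx, dy)
        shifts.append(best_dxy)
    if not shifts:
        return (0.0, 0.0)
    return tuple(np.median(np.array(shifts), axis=0))
```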


2021 ◽  
Author(s):  
Jamin Islam

For the purpose of autonomous satellite grasping, a high-speed, low-cost stereo vision system with high accuracy is required. Such a system must be able to detect an object and estimate its range. Hardware solutions are often chosen over software solutions, which tend to be too slow for high-frame-rate applications. Designs using field-programmable gate arrays (FPGAs) provide flexibility and are cost-effective compared with solutions of similar performance (i.e., Application-Specific Integrated Circuits). This thesis presents the architecture and implementation of a high-frame-rate stereo vision system based on an FPGA platform. The system acquires stereo images, performs stereo rectification, and generates disparity estimates at frame rates close to 100 fps; on a large enough FPGA, it can process 200 fps. The implementation presents novelties in performance and in the choice of the algorithm implemented. It achieves superior performance to existing systems that estimate scene depth, and it demonstrates accuracy equivalent to software implementations of the dynamic-programming maximum-likelihood stereo correspondence algorithm.
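
For reference, a compact sketch of dynamic-programming maximum-likelihood stereo correspondence on a single rectified scanline, in the spirit of the classic formulation the thesis compares against; the squared-difference cost and occlusion penalty value are illustrative choices.

```python
import numpy as np

def dp_scanline_disparity(left_row, right_row, occ_cost=20.0):
    """DP over (left, right) pixel pairings on one rectified scanline,
    with a fixed occlusion penalty; backtracking yields disparities."""
    n, m = len(left_row), len(right_row)
    C = np.full((n + 1, m + 1), np.inf)
    C[0, :] = occ_cost * np.arange(m + 1)
    C[:, 0] = occ_cost * np.arange(n + 1)
    move = np.zeros((n + 1, m + 1), dtype=np.uint8)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = C[i - 1, j - 1] + \
                (float(left_row[i - 1]) - float(right_row[j - 1])) ** 2
            C[i, j], move[i, j] = min((match, 0),
                                      (C[i - 1, j] + occ_cost, 1),
                                      (C[i, j - 1] + occ_cost, 2))
    disp = np.zeros(n)
    i, j = n, m
    while i > 0 and j > 0:                 # backtrack the optimal path
        if move[i, j] == 0:
            disp[i - 1] = i - j            # matched: record disparity
            i, j = i - 1, j - 1
        elif move[i, j] == 1:
            i -= 1                         # left pixel occluded
        else:
            j -= 1                         # right pixel occluded
    return disp
```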


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Sushil Raut ◽  
Kohei Shimasaki ◽  
Sanjay Singh ◽  
Takeshi Takaki ◽  
Idaku Ishii

In this study, a novel real-time video stabilization system using a high-frame-rate (HFR) jitter sensing device is demonstrated, realizing a computationally efficient technique of digital video stabilization for high-resolution image sequences. The system consists of a high-speed camera that extracts and tracks feature points in gray-level 512 × 496 image sequences at 1000 fps and a high-resolution CMOS camera that captures 2048 × 2048 image sequences; the two are hybridized to achieve real-time stabilization. The high-speed camera functions as a real-time HFR jitter sensing device that measures the apparent jitter movement of the system through two forms of computational acceleration: (1) feature point extraction with a parallel processing circuit module for Harris corner detection, and (2) matching hundreds of feature points in the current frame to those in neighboring ranges in the previous frame, under the small frame-to-frame displacement assumption of high-speed vision. The proposed hybrid-camera system can digitally stabilize the 2048 × 2048 images captured by the high-resolution CMOS camera by compensating for the sensed jitter displacement in real time, for display to human eyes on a computer monitor. Experiments were conducted to demonstrate the effectiveness of hybrid-camera-based digital video stabilization: (a) verification with the system moved in the pan direction in front of a checkered pattern, (b) stabilization while shooting a photographic pattern as the system moved with a mixed motion of jitter and constant low velocity in the pan direction, and (c) stabilization while shooting a real-world outdoor scene as an operator holding the hand-held hybrid-camera module walked on stairs.
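
A minimal sketch of the compensation step under a pure-translation jitter model (an assumption): the displacement sensed at 1000 fps on the low-resolution camera is scaled to the high-resolution image and applied as an opposite shift. The scale factors follow from the resolutions quoted above.

```python
import cv2
import numpy as np

# Resolution ratios between the 2048 x 2048 CMOS camera and the
# 512 x 496 high-speed jitter sensor.
SCALE_X = 2048 / 512
SCALE_Y = 2048 / 496

def stabilize_frame(hires_frame, jitter_dx, jitter_dy):
    """Shift the high-resolution frame opposite to the sensed jitter
    displacement (given in jitter-sensor pixels)."""
    M = np.float32([[1, 0, -jitter_dx * SCALE_X],
                    [0, 1, -jitter_dy * SCALE_Y]])
    h, w = hires_frame.shape[:2]
    return cv2.warpAffine(hires_frame, M, (w, h))
```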

