TIME-DOMAIN INTERPOLATION ON GRAPHICS PROCESSING UNIT
The signal-processing speed of spectral-domain optical coherence tomography (SD-OCT) has become a bottleneck in many medical applications. Recently, a time-domain interpolation method was proposed. Compared with the commonly used zero-padding interpolation method, it achieves a better signal-to-noise ratio (SNR) while greatly reducing the SD-OCT signal processing time. Moreover, each resampled point is computed from only a few data points and coefficients within a cutoff window, so many interpolations can be carried out simultaneously, making the method well suited to parallel computing. Using a graphics processing unit (GPU) and the compute unified device architecture (CUDA) programming model, time-domain interpolation can be accelerated significantly. Throughputs of more than 250,000, 200,000, and 160,000 A-lines per second are achieved for 2,048-pixel A-lines when the cutoff length is L = 11, L = 21, and L = 31, respectively. A frame of SD-OCT data (400 A-lines × 2,048 pixels per line) was acquired and processed on the GPU in real time. The results show that the SD-OCT signal processing for one frame can be completed in 6.223 ms when the cutoff length is L = 21, which is much faster than processing on a central processing unit (CPU). Real-time signal processing of the acquired data is thus realized.
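The cutoff-window idea described above can be illustrated with a minimal sketch. The kernel choice here (a Hamming-windowed truncated sinc) and the function name are illustrative assumptions, not the paper's exact coefficients; the point is that each output sample depends on only L input samples, so every output can be computed independently, which is what maps well onto GPU threads.

```python
import numpy as np

def time_domain_interp(signal, src_pos, L=21):
    """Resample `signal` (given on the uniform index grid 0..N-1) at the
    fractional positions `src_pos` using a truncated, Hamming-windowed
    sinc kernel of length L (the 'cutoff window').

    Illustrative sketch only: the paper's actual interpolation
    coefficients may differ.
    """
    N = len(signal)
    half = L // 2
    out = np.zeros(len(src_pos))
    for i, p in enumerate(src_pos):
        k0 = int(np.floor(p))
        # Only the L nearest neighbours contribute, so each output sample
        # is independent of the others -- this is what makes the method
        # easy to parallelise (one GPU thread per resampled point).
        for m in range(k0 - half, k0 - half + L):
            if 0 <= m < N:
                d = p - m  # fractional distance to this tap
                # windowed-sinc weight (Hamming taper over the window)
                w = np.sinc(d) * (0.54 + 0.46 * np.cos(np.pi * d / half))
                out[i] += signal[m] * w
    return out
```

On a GPU, the outer loop over `src_pos` would become the thread grid, with the per-A-line coefficients precomputed once and reused, since the wavelength-to-wavenumber mapping is fixed by the spectrometer calibration.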