A fast butterfly algorithm for generalized Radon transforms

Geophysics ◽  
2013 ◽  
Vol 78 (4) ◽  
pp. U41-U51 ◽  
Author(s):  
Jingwei Hu ◽  
Sergey Fomel ◽  
Laurent Demanet ◽  
Lexing Ying

Generalized Radon transforms, such as the hyperbolic Radon transform, cannot be implemented as efficiently in the frequency domain as convolutions, thus limiting their use in seismic data processing. We have devised a fast butterfly algorithm for the hyperbolic Radon transform. The basic idea is to reformulate the transform as an oscillatory integral operator and to construct a blockwise low-rank approximation of the kernel function. The overall structure follows the Fourier integral operator butterfly algorithm. For 2D data, the algorithm runs in complexity [Formula: see text], where [Formula: see text] depends on the maximum frequency and offset in the data set and the range of parameters (intercept time and slowness) in the model space. From a series of studies, we found that this algorithm can be significantly more efficient than the conventional time-domain integration.
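For orientation, the following is a minimal sketch of the conventional time-domain integration that the abstract uses as its baseline: the hyperbolic Radon transform stacks data along the hyperbolas t = sqrt(tau^2 + (p*h)^2), indexed by intercept time tau and slowness p. The function name, nearest-sample interpolation, and array layout are illustrative assumptions, not the authors' implementation, and the butterfly factorization itself is not shown.

```python
import numpy as np

def hyperbolic_radon_stack(data, t, h, tau, p):
    """Conventional time-domain hyperbolic Radon transform (stacking form).

    data : 2-D array of shape (len(t), len(h)), a gather d(t, h)
    t    : time samples (uniformly spaced)
    h    : offsets
    tau  : intercept times (model axis)
    p    : slownesses (model axis)

    Returns m(tau, p) obtained by summing the data along the hyperbolas
    t = sqrt(tau**2 + (p * h)**2), with nearest-sample interpolation.
    """
    dt = t[1] - t[0]
    m = np.zeros((len(tau), len(p)))
    offsets = np.arange(len(h))
    for i, tau_i in enumerate(tau):
        for j, p_j in enumerate(p):
            # travel time along the hyperbola at every offset
            t_hyp = np.sqrt(tau_i**2 + (p_j * h)**2)
            idx = np.round((t_hyp - t[0]) / dt).astype(int)
            valid = (idx >= 0) & (idx < len(t))
            m[i, j] = data[idx[valid], offsets[valid]].sum()
    return m
```

Evaluating every (tau, p) pair against every offset makes the cost proportional to the model size times the number of offsets, which is the scaling that the blockwise low-rank (butterfly) factorization of the oscillatory kernel is designed to beat.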

Geophysics ◽  
1989 ◽  
Vol 54 (10) ◽  
pp. 1318-1325 ◽  
Author(s):  
Virgil Bardan

2‐D seismic data are usually sampled and processed in a rectangular grid, for which sampling requirements are generally derived from the usual 1‐D viewpoint. For a 2‐D seismic data set, the band region (the region of the Fourier plane in which the amplitude spectrum exceeds some very small number) can be approximated by a domain bounded by two triangles. Considering the particular shape of this band region, I use 2‐D sampling theory to obtain results applicable to seismic data processing. The 2‐D viewpoint leads naturally to weaker sampling requirements than does the 1‐D viewpoint; i.e., fewer sample points are needed to represent data with the same degree of accuracy. The sampling of 2‐D seismic data and of their Radon transform, first on a parallelogram grid and then on a triangular grid, is introduced. The triangular sampling grid is optimal in these cases, since it requires the minimum number of sample points, equal to half the number required by a parallelogram or rectangular grid. The sampling of 2‐D seismic data in a triangular grid is illustrated by examples of synthetic and field seismic sections. The properties of parallelogram grid sampling impose an additional sampling requirement on the 2‐D seismic data in order to evaluate their Radon transform numerically; i.e., the maximum value of the spatial sampling interval must be half of that required by the sampling theorem.
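To make the factor-of-two saving concrete, here is a minimal sketch that realizes the triangular grid as a quincunx (checkerboard) subsampling of the original rectangular grid; the helper name and the zero-filling convention are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

def quincunx_subsample(section):
    """Keep the samples of a rectangularly sampled section that fall on a
    checkerboard (quincunx) pattern, i.e. where (time index + trace index)
    is even. Half of the samples are retained (up to rounding), mirroring
    the factor-of-two saving of the triangular grid discussed above; the
    discarded samples are zero-filled here only for display purposes.
    """
    nt, nx = section.shape
    mask = (np.add.outer(np.arange(nt), np.arange(nx)) % 2) == 0
    return np.where(mask, section, 0.0), mask

# Hypothetical usage:
# subsampled, mask = quincunx_subsample(seismic_section)
# Reconstruction would interpolate the zeroed samples from the retained ones,
# which is valid when the spectrum fits the two-triangle band region above.
```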


10.2196/20597 ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. e20597
Author(s):  
Ki-Hun Kim ◽  
Kwang-Jae Kim

Background: A lifelogs-based wellness index (LWI) is a function for calculating wellness scores based on health behavior lifelogs (eg, daily walking steps and sleep times collected via a smartwatch). A wellness score intuitively shows the users of smart wellness services the overall condition of their health behaviors. LWI development includes estimation (ie, estimating coefficients in LWI with data). A panel data set comprising health behavior lifelogs allows LWI estimation to control for unobserved variables, thereby resulting in less bias. However, these data sets typically have missing data due to events that occur in daily life (eg, smart devices stop collecting data when batteries are depleted), which can introduce biases into LWI coefficients. Thus, the appropriate choice of method to handle missing data is important for reducing biases in LWI estimation with panel data. However, there is a lack of research in this area.

Objective: This study aims to identify a suitable missing-data handling method for LWI estimation with panel data.

Methods: Listwise deletion, mean imputation, expectation maximization–based multiple imputation, predictive-mean matching–based multiple imputation, k-nearest neighbors–based imputation, and low-rank approximation–based imputation were comparatively evaluated by simulating an existing case of LWI development. A panel data set comprising health behavior lifelogs of 41 college students over 4 weeks was transformed into a reference data set without any missing data. Then, 200 simulated data sets were generated by randomly introducing missing data at proportions from 1% to 80%. The missing-data handling methods were each applied to transform the simulated data sets into complete data sets, and coefficients in a linear LWI were estimated for each complete data set. For each proportion and each method, a bias measure was calculated by comparing the estimated coefficient values with the values estimated from the reference data set.

Results: The methods performed differently depending on the proportion of missing data. For proportions of 1% to 30%, low-rank approximation–based imputation, predictive-mean matching–based multiple imputation, and expectation maximization–based multiple imputation were superior. For proportions of 31% to 60%, low-rank approximation–based imputation and predictive-mean matching–based multiple imputation performed best. Above 60%, only low-rank approximation–based imputation performed acceptably.

Conclusions: Low-rank approximation–based imputation was the best of the 6 missing-data handling methods regardless of the proportion of missing data. This superiority is generalizable to other panel data sets comprising health behavior lifelogs given their verified low-rank nature, for which low-rank approximation–based imputation is known to perform effectively. This result will guide missing-data handling toward reducing coefficient biases in new development cases of linear LWIs with panel data.
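As an illustration of the winning method, the following is a minimal sketch of low-rank approximation–based imputation, assuming a simple iterative truncated-SVD ("hard-impute") scheme; the abstract does not specify the exact algorithm, so the function name, target rank, and stopping rule below are illustrative assumptions.

```python
import numpy as np

def low_rank_impute(X, rank=2, n_iter=100, tol=1e-6):
    """Fill missing entries (NaN) of a panel-data matrix X by repeatedly
    fitting a rank-`rank` truncated SVD to the current fill and copying its
    values into the missing cells (a basic 'hard-impute' variant of
    low-rank matrix completion). Observed entries are never overwritten.
    """
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0, keepdims=True)
    col_means = np.where(np.isnan(col_means), 0.0, col_means)  # all-NaN columns
    filled = np.where(missing, col_means, X)                   # initial fill
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]       # rank-r model
        updated = np.where(missing, approx, X)                 # refill gaps only
        if np.linalg.norm(updated - filled) <= tol * np.linalg.norm(filled):
            return updated
        filled = updated
    return filled
```

In a panel-data setting such as the one above, the rows would be person-days and the columns the lifelog variables; the low-rank structure of such matrices is what makes this kind of completion effective even at high missing-data proportions.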


2008 ◽  
Vol 20 (11) ◽  
pp. 2839-2861 ◽  
Author(s):  
Dit-Yan Yeung ◽  
Hong Chang ◽  
Guang Dai

In recent years, metric learning in the semisupervised setting has attracted considerable research interest. One type of semisupervised metric learning uses supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, the low-rank approximation scheme naturally leads to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
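The abstract does not name the particular low-rank scheme, so the sketch below uses the standard Nyström approximation as one way to obtain a low-rank kernel feature map with a natural out-of-sample extension; the kernel choice, landmark selection, and function names are illustrative assumptions, not the letter's method.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def nystrom_features(X, landmarks, gamma=1.0):
    """Nystrom low-rank approximation of the kernel matrix.

    Returns a feature map Z such that Z @ Z.T approximates K(X, X). Applying
    the same map to unseen points gives an out-of-sample embedding, so a
    linear (Mahalanobis) metric learned on Z acts as a nonlinear metric on
    the original inputs and extends beyond the training set.
    """
    W = rbf_kernel(landmarks, landmarks, gamma)   # m x m kernel among landmarks
    C = rbf_kernel(X, landmarks, gamma)           # n x m cross-kernel
    evals, evecs = np.linalg.eigh(W)
    evals = np.clip(evals, 1e-12, None)           # guard against round-off negatives
    W_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    return C @ W_inv_sqrt                         # n x m low-rank features
```

A hypothetical call such as Z = nystrom_features(X, X[np.random.choice(len(X), 200, replace=False)]) reduces an n-by-n kernel problem to n-by-200 features; the landmark count trades approximation accuracy for scalability.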


2020 ◽  
Vol 14 (12) ◽  
pp. 2791-2798
Author(s):  
Xiaoqun Qiu ◽  
Zhen Chen ◽  
Saifullah Adnan ◽  
Hongwei He

2020 ◽  
Vol 6 ◽  
pp. 922-933
Author(s):  
M. Amine Hadj-Youcef ◽  
Francois Orieux ◽  
Alain Abergel ◽  
Aurelia Fraysse
