f-x adaptive seismic-trace interpolation

Geophysics ◽  
2009 ◽  
Vol 74 (1) ◽  
pp. V9-V16 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Mauricio D. Sacchi

We use exponentially weighted recursive least squares to estimate adaptive prediction filters for frequency-space (f-x) seismic interpolation. Adaptive prediction filters can model signals where the dominant wavenumbers vary in space. This concept leads to an f-x interpolation method that does not require windowing strategies for optimal results. In other words, adaptive prediction filters can be used to interpolate waveforms that have spatially variant dips. The interpolation method’s performance depends on two parameters: filter length and forgetting factor. We pay particular attention to selection of the forgetting factor because it controls the algorithm’s adaptability to changes in local dip. Finally, we use synthetic- and real-data examples to illustrate the performance of the proposed adaptive f-x interpolation method.
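
The core recursion behind such adaptive prediction filters is the standard exponentially weighted RLS update. Below is a minimal sketch of that recursion for a single complex-valued spatial sequence (one frequency slice); the function name, filter length, and default forgetting factor are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ewrls_prediction_filter(d, filter_length=5, lam=0.98, delta=1e-3):
    """Exponentially weighted RLS estimation of an adaptive (complex) prediction
    filter along the spatial axis of one frequency slice.

    d             : 1-D complex array of samples along the spatial axis
    filter_length : number of prediction-filter coefficients
    lam           : forgetting factor (0 < lam <= 1); smaller values adapt
                    faster to changes in local dip
    delta         : regularization used to initialize the inverse correlation matrix
    """
    L = filter_length
    w = np.zeros(L, dtype=complex)               # prediction-filter coefficients
    P = np.eye(L, dtype=complex) / delta         # inverse correlation matrix
    pred = np.zeros(len(d), dtype=complex)       # one-step predictions

    for n in range(L, len(d)):
        x = d[n - L:n][::-1]                     # most recent L samples, newest first
        k = P @ x / (lam + np.conj(x) @ P @ x)   # gain vector
        pred[n] = np.conj(w) @ x                 # a-priori prediction
        e = d[n] - pred[n]                       # prediction error
        w = w + k * np.conj(e)                   # coefficient update
        P = (P - np.outer(k, np.conj(x) @ P)) / lam
    return w, pred
```

Smaller values of `lam` discount old samples more quickly, which is what allows the filter to track spatially varying dips; `lam = 1` reduces to ordinary, non-adaptive recursive least squares.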

Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. A1-A5 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Mauricio Sacchi

We tested a strategy for beyond-alias interpolation of seismic data using Cadzow reconstruction. The strategy enables Cadzow reconstruction to be used for interpolation of regularly sampled seismic records. First, in the frequency-space (f-x) domain, we generated a Hankel matrix from the spatial samples of the low frequencies. To perform interpolation at a given frequency, the spatial samples were interlaced with zero samples and another Hankel matrix was generated from the zero-interlaced data. Next, the rank-reduced eigendecomposition of the Hankel matrix at low frequencies was used for beyond-alias preconditioning of the Hankel matrix at a given frequency. Finally, antidiagonal averaging of the conditioned Hankel matrix produced the final interpolated data. In addition, the multidimensional extension of the proposed algorithm was explained. The proposed method provides a unifying thread between reduced-rank Cadzow reconstruction and beyond-alias f-x prediction-error interpolation. Synthetic and real data examples were provided to examine the performance of the proposed interpolation method.
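
The building blocks these Cadzow-type schemes share are Hankel embedding, rank reduction by truncated SVD, and antidiagonal averaging. The sketch below shows those three steps for a single frequency slice under assumed names and defaults; it omits the beyond-alias preconditioning with the low-frequency Hankel matrix that distinguishes the proposed method.

```python
import numpy as np

def hankel_matrix(x, L):
    """Build an L x (len(x) - L + 1) Hankel matrix from a 1-D complex signal."""
    N = len(x)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

def rank_reduce(H, rank):
    """Truncated SVD: keep only the `rank` largest singular values."""
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vh[:rank, :]

def antidiagonal_average(H, N):
    """Recover a length-N signal by averaging the antidiagonals of H."""
    L, M = H.shape
    x = np.zeros(N, dtype=H.dtype)
    count = np.zeros(N)
    for i in range(L):
        for j in range(M):
            x[i + j] += H[i, j]
            count[i + j] += 1
    return x / count

def cadzow_slice(x, L=None, rank=3):
    """One pass of Cadzow (rank-reduction) filtering on a single frequency slice."""
    if L is None:
        L = len(x) // 2 + 1
    H = hankel_matrix(x, L)
    return antidiagonal_average(rank_reduce(H, rank), len(x))
```

In the full method, the rank-reduced eigendecomposition obtained at the low frequencies would be used to condition `H` at the target frequency before the antidiagonal averaging step.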


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 62
Author(s):  
Zhengwei Liu ◽  
Fukang Zhu

Thinning operators play an important role in the analysis of integer-valued autoregressive models, the most widely used being binomial thinning. Inspired by the theory of extended Pascal triangles, a new thinning operator, called extended binomial thinning, is introduced; it contains binomial thinning as a special case. Compared with the binomial thinning operator, the extended binomial thinning operator has two parameters and is more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced, which can accurately and flexibly capture the dispersion features of count time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. We also obtain the asymptotic properties of the two-step CLS estimator. Finally, three overdispersed or underdispersed real data sets are considered to illustrate the superior performance of the proposed model.
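
For reference, the classical binomial thinning operator that the extended operator generalizes is easy to simulate: α ∘ X is a sum of X independent Bernoulli(α) variables. The sketch below shows binomial thinning and a plain INAR(1) recursion built on it; the two-parameter extended binomial operator itself is defined in the paper and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def binomial_thinning(alpha, x):
    """Classical binomial thinning: alpha ∘ x is a sum of x Bernoulli(alpha) variables."""
    return rng.binomial(x, alpha)

def simulate_inar1(alpha, n, innovation=lambda: rng.poisson(1.0), burn_in=100):
    """Simulate an INAR(1) process X_t = alpha ∘ X_{t-1} + eps_t using the
    classical binomial thinning operator (the special case that the extended
    binomial operator generalizes)."""
    x = 0
    path = []
    for t in range(n + burn_in):
        x = binomial_thinning(alpha, x) + innovation()
        if t >= burn_in:
            path.append(x)
    return np.array(path)

# Example: a length-500 count series with thinning parameter 0.4 and Poisson(1) innovations.
series = simulate_inar1(alpha=0.4, n=500)
```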


Symmetry ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 1114
Author(s):  
Guillermo Martínez-Flórez ◽  
Roger Tovar-Falón ◽  
María Martínez-Guerra

This paper introduces a new family of distributions for modelling censored multimodal data. The model extends the widely known tobit model by introducing two parameters that control the shape and the asymmetry of the distribution. Basic properties of this new family of distributions are studied in detail, and a model for censored positive data is also considered. Parameter estimation is addressed via the maximum likelihood method; the score functions and the elements of the observed information matrix are given. Finally, three applications to real data sets are reported to illustrate the developed methodology.
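
As a point of reference for the likelihood machinery involved, here is a minimal sketch of maximum-likelihood estimation for the classical tobit model (normal errors, left-censored at zero) that the proposed family extends; the additional shape and asymmetry parameters are not included, and the function names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, y, X, censor_point=0.0):
    """Negative log-likelihood of the classical (normal) tobit model,
    left-censored at `censor_point`."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                      # keep sigma positive
    mu = X @ beta
    censored = y <= censor_point
    ll_unc = norm.logpdf(y[~censored], loc=mu[~censored], scale=sigma)
    ll_cen = norm.logcdf((censor_point - mu[censored]) / sigma)
    return -(ll_unc.sum() + ll_cen.sum())

def fit_tobit(y, X):
    """Maximum-likelihood fit; returns beta estimates and sigma."""
    p = X.shape[1]
    start = np.zeros(p + 1)                        # beta = 0, log_sigma = 0
    res = minimize(tobit_negloglik, start, args=(y, X), method="BFGS")
    return res.x[:p], np.exp(res.x[-1])
```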


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 170
Author(s):  
Michal Holčapek ◽  
Nicole Škorupová ◽  
Martin Štěpnička

The article develops further directions stemming from the arithmetic of extensional fuzzy numbers. It presents the existing knowledge of the relationship between this arithmetic and the proposed orderings of extensional fuzzy numbers, the so-called S-orderings, and investigates distinct properties of such orderings. This investigation of S-orderings feeds directly into the concept of the S-function, a natural extension of the notion of a function that uses extensional fuzzy numbers both in its arguments and in its results. One of the immediate applications is fuzzy interpolation. The article provides readers with a basic fuzzy interpolation method, an investigation of its properties, and an illustrative experimental example on real data. The goal of the paper is, however, much deeper than presenting a single fuzzy interpolation method: it points toward a wide variety of fuzzy interpolation methods, and other analytical methods, stemming from the concept of the S-function and from the arithmetic of extensional fuzzy numbers in general.
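
To make the idea of interpolating fuzzy-valued data concrete, here is a deliberately simplified sketch that uses ordinary triangular fuzzy numbers and alpha-cut interval arithmetic rather than the extensional fuzzy numbers, their arithmetic, or the S-orderings on which the article's method is built; all names are illustrative.

```python
import numpy as np

def tri_alpha_cut(a, b, c, alpha):
    """Alpha-cut [lower, upper] of a triangular fuzzy number (a, b, c)."""
    return np.array([a + alpha * (b - a), c - alpha * (c - b)])

def fuzzy_linear_interp(x, xp, fuzzy_fp, alphas=np.linspace(0.0, 1.0, 11)):
    """Piecewise-linear interpolation of fuzzy-valued data.

    xp       : crisp, increasing sample positions
    fuzzy_fp : triangular fuzzy values (a, b, c) attached to those positions
    Returns, for each alpha level, the interval obtained by interpolating the
    lower and upper alpha-cut endpoints separately.
    """
    cuts = []
    for alpha in alphas:
        lo = [tri_alpha_cut(*f, alpha)[0] for f in fuzzy_fp]
        hi = [tri_alpha_cut(*f, alpha)[1] for f in fuzzy_fp]
        cuts.append((alpha, np.interp(x, xp, lo), np.interp(x, xp, hi)))
    return cuts

# Example: fuzzy measurements at positions 0, 1, 2, interpolated at x = 1.5.
result = fuzzy_linear_interp(1.5, [0, 1, 2],
                             [(0.8, 1.0, 1.3), (1.9, 2.0, 2.2), (2.7, 3.0, 3.1)])
```

Interpolating the lower and upper endpoints of each alpha-cut separately is sound here because the linear interpolation weights are nonnegative and sum to one.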


2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Jia-Rou Liu ◽  
Po-Hsiu Kuo ◽  
Hung Hung

Large-p-small-n datasets are commonly encountered in modern biomedical studies. To detect differences between two groups, conventional methods often fail because of instability in estimating variances in the t-test and a high proportion of tied values in AUC (area under the receiver operating characteristic curve) estimates. The significance analysis of microarrays (SAM) may also be unsatisfactory, since its performance is sensitive to the tuning parameter, whose selection is not straightforward. In this work, we propose a robust rerank approach to overcome these difficulties. In particular, we obtain a rank-based statistic for each feature based on the concept of “rank-over-variable.” Techniques of “random subset” and “rerank” are then iteratively applied to rank features, and the leading features are selected for further study. The proposed rerank approach is especially applicable to large-p-small-n datasets. Moreover, it is insensitive to the selection of tuning parameters, which is an appealing property for practical implementation. Simulation studies and real data analysis of pooling-based genome-wide association (GWA) studies demonstrate the usefulness of our method.
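
The general pattern of scoring features with a rank-based statistic over random subsets of samples and then reranking can be sketched as follows. This is a simplified, hypothetical variant built on the Mann-Whitney (Wilcoxon rank-sum) statistic; the paper's rank-over-variable statistic and rerank rule differ.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

def rerank_features(X1, X2, n_iter=100, subset_frac=0.7):
    """Rank features of two groups (samples x features) by repeatedly scoring a
    random subset of samples with a rank-based two-sample statistic and
    accumulating the resulting feature ranks (a simplified stand-in for the
    random-subset / rerank scheme)."""
    p = X1.shape[1]
    rank_sum = np.zeros(p)
    for _ in range(n_iter):
        idx1 = rng.choice(len(X1), int(subset_frac * len(X1)), replace=False)
        idx2 = rng.choice(len(X2), int(subset_frac * len(X2)), replace=False)
        stats = np.array([
            mannwhitneyu(X1[idx1, j], X2[idx2, j]).statistic for j in range(p)
        ])
        # Rank features by how far the statistic is from its null expectation.
        null_mean = len(idx1) * len(idx2) / 2.0
        rank_sum += np.argsort(np.argsort(-np.abs(stats - null_mean)))
    return np.argsort(rank_sum)   # most consistently extreme features first
```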


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Yiwen Zhang ◽  
Yuanyuan Zhou ◽  
Xing Guo ◽  
Jintao Wu ◽  
Qiang He ◽  
...  

The K-means algorithm is one of the ten classic algorithms in the area of data mining and has been studied by researchers in numerous fields for a long time. However, the number of clusters k in the K-means algorithm is not always easy to determine, and the selection of the initial centers is vulnerable to outliers. This paper proposes an improved K-means clustering algorithm called the covering K-means algorithm (C-K-means). The C-K-means algorithm not only acquires efficient and accurate clustering results but also self-adaptively provides a reasonable number of clusters based on the data features. It includes two phases: the initialization of the covering algorithm (CA) and the Lloyd iteration of K-means. The first phase executes the CA, which self-organizes and recognizes the number of clusters k based on the similarities in the data; it requires neither the number of clusters to be prespecified nor the initial centers to be manually selected. Therefore, it has a “blind” feature, that is, k is not preselected. The second phase performs the Lloyd iteration based on the results of the first phase. The C-K-means algorithm combines the advantages of CA and K-means. Experiments are carried out on the Spark platform, and the results verify the good scalability of the C-K-means algorithm, which can effectively solve the problem of large-scale data clustering. Extensive experiments on real data sets show that the C-K-means algorithm outperforms existing algorithms in both accuracy and efficiency under sequential and parallel conditions.
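
The two-phase structure, a covering-style pass that discovers k and the initial centers followed by standard Lloyd iterations, can be sketched as below. The covering pass shown is a simple radius-based heuristic chosen for illustration; the paper's covering algorithm (CA) is defined differently.

```python
import numpy as np

def covering_init(X, radius):
    """Simple covering pass: every point farther than `radius` from all existing
    centers starts a new cluster, so k emerges from the data instead of being
    prespecified (a simplified stand-in for the paper's covering algorithm)."""
    centers = [X[0]]
    for x in X[1:]:
        if min(np.linalg.norm(x - c) for c in centers) > radius:
            centers.append(x)
    return np.array(centers)

def lloyd(X, centers, n_iter=100, tol=1e-6):
    """Standard Lloyd iteration of K-means starting from the covering centers."""
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([
            X[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
            for k in range(len(centers))
        ])
        if np.linalg.norm(new_centers - centers) < tol:
            break
        centers = new_centers
    return labels, centers

# Usage: k is discovered by the covering pass, then refined by Lloyd iterations.
# labels, centers = lloyd(X, covering_init(X, radius=1.0))  # X: (n_samples, n_features)
```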


2020 ◽  
Author(s):  
Yaxue Ren ◽  
Fucai Liu ◽  
Jingfeng Lv ◽  
Aiwen Meng ◽  
Yintang Wen

The division of the fuzzy space is very important in the identification of the premise parameters, and the Gaussian membership function is commonly applied to the premise fuzzy sets. However, the two parameters of the Gaussian membership function, the center and the width, are not easy to determine. In this paper, a novel optimal identification method for the T-S fuzzy model is presented that optimizes these two parameters using the fuzzy c-means (FCM) and particle swarm optimization (PSO) algorithms. First, the FCM algorithm is used to determine the Gaussian centers as a rough adjustment. Then, with the centers fixed, the PSO algorithm is used to optimize the remaining adjustable parameter, the width of the Gaussian membership function, as a fine-tuning step, thereby completing the identification of the premise parameters of the fuzzy model. In addition, the recursive least squares (RLS) algorithm is used to identify the consequent (conclusion) parameters. Finally, the effectiveness of this method for T-S fuzzy model identification is verified by simulation examples, and the proposed method achieves higher identification accuracy than other identification methods.
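
A minimal sketch of the width-tuning stage is given below: a Gaussian membership function whose centers are assumed to be already fixed (e.g. by FCM), plus a basic PSO loop that searches for widths minimizing a caller-supplied model error. The FCM and RLS steps are not shown, and all names and defaults are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_mf(x, center, width):
    """Gaussian membership function used for the premise fuzzy sets."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def pso_optimize_widths(objective, n_widths, n_particles=30, n_iter=200,
                        bounds=(0.05, 5.0), w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization of the membership-function widths.

    `objective(widths)` should return the model error for a candidate width
    vector; its definition (e.g. output error of the T-S model after estimating
    the consequents) is left to the caller."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, n_widths))      # candidate width vectors
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest
```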


2021 ◽  
Author(s):  
Nivedita Nivedita ◽  
John D. Aitchison ◽  
Nitin S. Baliga

Drug resistance is a major problem in the treatment of microbial infections and cancers. There is growing evidence that a transient drug-tolerant state may precede and potentiate the emergence of drug resistance. Therefore, understanding the mechanisms leading to tolerance is critical for combating drug resistance and for developing effective therapeutic strategies. Through laboratory evolution of yeast, we recently demonstrated that adaptive prediction (AP), a strategy employed by organisms to anticipate and prepare for a future stressful environment, can emerge within 100 generations by linking the response triggered by a neutral cue (caffeine) to a mechanism of protection against a lethal agent (5-FOA). Here, we demonstrate that mutations selected across multiple laboratory-evolved lines linked the neutral cue response to core genes of autophagy. Across these evolved lines, conditional activation of autophagy through AP conferred tolerance and potentiated the subsequent selection of mutations in genes specific to overcoming the toxicity of 5-FOA. We propose a model to explain how the extensive genome-wide genetic interactions of autophagy facilitate the emergence of AP over short evolutionary timescales and potentiate the selection of resistance-conferring mutations.

