Tuning and comparing spatial normalization methods

2004 ◽  
Vol 8 (3) ◽  
pp. 311-323 ◽  
Author(s):  
S ROBBINS
Author(s):  
V.S. Smith ◽  
L.G. Shapiro ◽  
D. Hanlon ◽  
R.F. Martin ◽  
J.F. Brinkley ◽  
...  

2020 ◽  
Author(s):  
Hengda He ◽  
Qolamreza R. Razlighi

Abstract As the size of neuroimaging cohorts increases to address key questions in cognitive neuroscience, cognitive aging, and neurodegenerative disease research, the accuracy of spatial normalization, an essential pre-processing step, becomes extremely important in the neuroimaging processing pipeline. Existing spatial normalization methods have poor accuracy, particularly when dealing with the highly convoluted human cerebral cortex and when brain morphology is severely altered (e.g. in clinical and aging populations). To address this shortcoming, we propose, implement, and evaluate a novel landmark-guided, region-based spatial normalization technique that takes advantage of existing surface-based human brain parcellations to automatically identify and match regional landmarks. To simplify the non-linear whole-brain registration, the identified landmarks of each region and their counterparts are registered independently with large diffeomorphic (topology-preserving) deformations via geodesic shooting. The regional diffeomorphic warping fields are then combined by an inverse distance weighted interpolation technique to obtain a smooth global warping field for the whole brain. To ensure that the final warping field is diffeomorphic, we use forward and reverse maps simultaneously, with symmetric constraints that yield bijectivity. We evaluated our proposed method using both simulated and real (structural and functional) human brain images. Our evaluation shows that the method enhances structural correspondence up to around 86%, a 67% improvement over the existing state-of-the-art method. This improvement also increases the sensitivity and specificity of functional imaging studies by about 17%, reducing the required number of subjects and the associated costs.
We conclude that our proposed method can effectively replace existing substandard spatial normalization methods, meeting the demands of large cohorts and the need to investigate clinical and aging populations.
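The inverse distance weighted combination of regional warping fields described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the centroid-based distance, the function name, and the array shapes are all assumptions for demonstration.

```python
import numpy as np

def blend_warp_fields(regional_fields, region_centers, voxel_coords,
                      power=2.0, eps=1e-8):
    """Combine per-region displacement fields into one smooth global
    warping field via inverse distance weighting (IDW).

    regional_fields: list of (n_voxels, 3) displacement arrays, one per region.
    region_centers:  (n_regions, 3) array of region centroids.
    voxel_coords:    (n_voxels, 3) array of voxel coordinates.
    """
    weights = []
    for center in region_centers:
        dist = np.linalg.norm(voxel_coords - center, axis=1)
        weights.append(1.0 / (dist ** power + eps))   # closer region -> larger weight
    w = np.stack(weights)                             # (n_regions, n_voxels)
    w /= w.sum(axis=0, keepdims=True)                 # weights sum to 1 per voxel
    # Weighted sum of the regional displacement vectors at every voxel
    return np.einsum('rv,rvd->vd', w, np.stack(regional_fields))
```

Because the weights vary smoothly with distance and sum to one at every voxel, the blended field transitions smoothly between regions; the paper's additional symmetric forward/reverse constraints (not sketched here) are what guarantee the final field remains diffeomorphic.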


Metabolites ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 8
Author(s):  
Michiel Bongaerts ◽  
Ramon Bonte ◽  
Serwet Demirdas ◽  
Edwin H. Jacobs ◽  
Esmee Oussoren ◽  
...  

Untargeted metabolomics is an emerging technology in the laboratory diagnosis of inborn errors of metabolism (IEM). Analysis of a large number of reference samples is crucial for correcting variations in metabolite concentrations that result from factors such as diet, age, and gender, in order to judge whether metabolite levels are abnormal. However, a large number of reference samples requires the use of out-of-batch samples, which is hampered by the semi-quantitative nature of untargeted metabolomics data, i.e., technical variation between batches. Methods to merge and accurately normalize data from multiple batches are urgently needed. Based on six metrics, we compared existing normalization methods on their ability to reduce batch effects across nine independently processed batches. Many of these showed marginal performance, which motivated us to develop Metchalizer, a normalization method that uses 10 stable isotope-labeled internal standards and a mixed effect model. In addition, we propose a regression model with age and sex as covariates, fitted on reference samples obtained from all nine batches. Metchalizer applied to log-transformed data showed the most promising performance in batch effect removal as well as in the detection of 195 known biomarkers across 49 IEM patient samples, and performed at least as well as an approach utilizing 15 within-batch reference samples. Furthermore, our regression model indicates that 6.5–37% of the considered features show significant age-dependent variation. Our comprehensive comparison of normalization methods shows that the Log-Metchalizer approach enables the use of out-of-batch reference samples to establish clinically relevant reference values for metabolite concentrations. These findings open up the possibility of using large-scale out-of-batch reference samples in a clinical setting, increasing throughput and detection accuracy.
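The core idea of internal-standard-based normalization can be illustrated with a simplified sketch: scale each sample by a factor derived from its stable isotope-labeled internal standards, then log-transform. Note that Metchalizer itself fits a mixed effect model; the geometric-mean scale factor below, along with the function name and array shapes, is an assumption made purely for illustration.

```python
import numpy as np

def normalize_with_internal_standards(intensities, is_intensities):
    """Scale each sample by a correction factor derived from its
    internal-standard intensities, then log-transform.

    intensities:    (n_samples, n_features) raw feature intensities.
    is_intensities: (n_samples, n_is) internal-standard intensities.
    Returns log-normalized intensities.
    """
    # Per-sample scale factor: geometric mean of the internal standards.
    # Internal standards are spiked at fixed amounts, so differences in
    # their measured intensity reflect technical (batch) variation.
    scale = np.exp(np.log(is_intensities).mean(axis=1, keepdims=True))
    return np.log(intensities / scale)
```

With this scheme, two samples whose intensities differ only by a constant technical factor are mapped to identical normalized values, which is precisely the property needed to compare out-of-batch reference samples.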


Author(s):  
Sofiane Zeghoud ◽  
Saba Ghazanfar Ali ◽  
Egemen Ertugrul ◽  
Aouaidjia Kamel ◽  
Bin Sheng ◽  
...  

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Li Tong ◽  
Po-Yen Wu ◽  
John H. Phan ◽  
Hamid R. Hassazadeh ◽  
...  

Abstract To use next-generation sequencing technologies such as RNA-seq for medical and health applications, choosing proper analysis methods for biomarker identification remains a critical challenge for most users. The US Food and Drug Administration (FDA) has led the Sequencing Quality Control (SEQC) project to conduct a comprehensive investigation of 278 representative RNA-seq data analysis pipelines consisting of 13 sequence mapping, three quantification, and seven normalization methods. In this article, we focus on the joint effects of RNA-seq pipeline components on gene expression estimation as well as on the downstream prediction of disease outcomes. First, we developed and applied three metrics (i.e., accuracy, precision, and reliability) to quantitatively evaluate each pipeline’s performance on gene expression estimation. We then investigated the correlation between the proposed metrics and downstream prediction performance using two real-world cancer datasets (i.e., the SEQC neuroblastoma dataset and the NIH/NCI TCGA lung adenocarcinoma dataset). We found that RNA-seq pipeline components jointly and significantly impacted the accuracy of gene expression estimation, and that this impact extended to the downstream prediction of cancer outcomes. Specifically, RNA-seq pipelines that produced more accurate, precise, and reliable gene expression estimates tended to perform better in predicting disease outcome. Finally, we provide scenario-based guidelines for using these three metrics to select sensible RNA-seq pipelines, improving the accuracy, precision, and reliability of gene expression estimation and, in turn, the downstream gene expression-based prediction of disease outcome.
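The three evaluation criteria named in the abstract can be illustrated with toy statistics for a single gene measured across technical replicates. The exact SEQC definitions differ; the simple bias, standard deviation, and signal-to-noise ratio below are assumptions chosen to make the three concepts concrete, and the function name is hypothetical.

```python
import numpy as np

def expression_metrics(replicate_estimates, reference_value):
    """Toy versions of three evaluation criteria for one gene:
    accuracy    - absolute bias of the mean estimate vs. a reference
                  (e.g. a qPCR measurement of the same gene)
    precision   - standard deviation across technical replicates
    reliability - signal-to-noise style ratio (mean / spread)
    """
    estimates = np.asarray(replicate_estimates, dtype=float)
    mean_est = estimates.mean()
    accuracy = abs(mean_est - reference_value)      # lower is better
    precision = estimates.std(ddof=1)               # lower is better
    reliability = abs(mean_est) / (precision + 1e-12)  # higher is better
    return accuracy, precision, reliability
```

In this framing, the paper's central finding is that pipelines scoring well on all three quantities for expression estimation also tend to yield better downstream outcome prediction.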


2012 ◽  
Vol 382 (1-2) ◽  
pp. 211-215 ◽  
Author(s):  
Morgan A. Marks ◽  
Yolanda Eby ◽  
Roslyn Howard ◽  
Patti E. Gravitt

NeuroImage ◽  
2007 ◽  
Vol 37 (3) ◽  
pp. 866-875 ◽  
Author(s):  
Jenny Crinion ◽  
John Ashburner ◽  
Alex Leff ◽  
Matthew Brett ◽  
Cathy Price ◽  
...  
