Learned SPARCOM: Unfolded Deep Super-Resolution Microscopy

2020 ◽  
Author(s):  
Gili Dardikman-Yoffe ◽  
Yonina C. Eldar

Abstract: The use of photo-activated fluorescent molecules to create long sequences of low emitter-density diffraction-limited images enables high-precision emitter localization. However, this is achieved at the cost of lengthy imaging times, limiting temporal resolution. In recent years, a variety of approaches have been suggested to reduce imaging times, ranging from classical optimization and statistical algorithms to deep learning methods. Classical methods often rely on prior knowledge of the optical system, require heuristic adjustment of parameters, or fail to reach adequate performance. Deep learning methods proposed to date tend to suffer from poor generalization outside the specific distribution they were trained on and require learning many parameters. They also tend to produce black-box solutions that are hard to interpret. In this paper, we suggest combining a recent high-performing classical method, SPARCOM, with model-based deep learning, using the algorithm-unfolding approach, which relies on an iterative algorithm to design a compact neural network that incorporates domain knowledge. We show that the resulting network, Learned SPARCOM (LSPARCOM), requires far fewer layers and parameters and can be trained on a single field of view. Nonetheless, it yields results comparable or superior to those obtained by SPARCOM, with no heuristic parameter determination or explicit knowledge of the point spread function, and generalizes better than standard deep learning techniques. It even produces a high-quality reconstruction from as few as 25 frames. This is due to a significantly smaller network, which also contributes to fast performance: a 5× improvement in execution time relative to SPARCOM, and a full order-of-magnitude improvement relative to a leading competing deep learning method (Deep-STORM) when implemented serially.
Our results show that we can obtain super-resolution imaging from a small number of high emitter density frames without knowledge of the optical system and across different test sets. Thus, we believe LSPARCOM will find broad use in single molecule localization microscopy of biological structures, and pave the way to interpretable, efficient live-cell imaging in a broad range of settings.
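The algorithm-unfolding idea behind LSPARCOM can be illustrated with a minimal LISTA-style sketch in Python (NumPy): a fixed number of ISTA iterations for sparse recovery is unrolled into "layers". In a learned network the per-layer weights and thresholds would be trained from data; here they are fixed by the classical recipe. This sketch illustrates the unfolding principle only, not the actual LSPARCOM architecture, and the problem sizes are illustrative.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the L1 norm: shrink each entry toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(A, y, n_layers=20, alpha=0.1):
    """Run a fixed number of ISTA iterations ("layers") for min ||Ax-y||^2 + alpha||x||_1.

    In a learned (LISTA-style) network, W1, W2 and theta would be trained
    per layer; here they are set by the classical ISTA recipe.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    W1 = A.T / L                             # input weight
    W2 = np.eye(A.shape[1]) - A.T @ A / L    # recurrent weight
    theta = alpha / L                        # per-layer threshold
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W2 @ x + W1 @ y, theta)
    return x
```

In the learned variant, truncating to a handful of layers and training `W1`, `W2`, and `theta` end-to-end is what yields the compact, fast networks described in the abstract.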

Optica ◽  
2018 ◽  
Vol 5 (4) ◽  
pp. 458 ◽  
Author(s):  
Elias Nehme ◽  
Lucien E. Weiss ◽  
Tomer Michaeli ◽  
Yoav Shechtman

2020 ◽  
Author(s):  
Anish Mukherjee

The quality of super-resolution images largely depends on the performance of the emitter localization algorithm used to localize point sources. This article presents an overview of the various techniques used to localize point sources in single-molecule localization microscopy and compares their performance, helping readers select a localization technique for their application. It also surveys the emergence of deep learning methods, which are becoming popular at various stages of the single-molecule localization pipeline. State-of-the-art deep learning approaches are compared with traditional approaches, and the trade-offs involved in selecting a localization algorithm are discussed.
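As a concrete example of the simplest class of localization technique covered by such overviews, an intensity-weighted centroid estimator can be sketched in a few lines of Python (NumPy); practical pipelines typically refine this with Gaussian or maximum-likelihood fitting. The grid size and PSF width below are illustrative choices, not values from the article.

```python
import numpy as np

def gaussian_spot(shape, x0, y0, sigma=1.5, amp=100.0):
    # Render an ideal 2D Gaussian PSF spot centered at (x0, y0).
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

def centroid_localize(img):
    # Intensity-weighted centroid after crude background subtraction.
    img = img - img.min()
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (xx * img).sum() / total, (yy * img).sum() / total
```

The centroid is fast but biased under noise and truncation, which is exactly the trade-off against slower fitting-based and deep learning localizers that such comparisons quantify.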


2020 ◽  
Vol 10 (18) ◽  
pp. 6580 ◽  
Author(s):  
Alket Cecaj ◽  
Marco Lippi ◽  
Marco Mamei ◽  
Franco Zambonelli

Accurately forecasting how crowds of people are distributed in urban areas during daily activities is of key importance for the smart-city vision and related applications. In this work we forecast crowd density and distribution in an urban area by analyzing an aggregated mobile phone dataset. By comparing the forecasting performance of statistical and deep learning methods on the aggregated mobile data, we show that each class of methods has its advantages and disadvantages depending on the forecasting scenario. For our time-series forecasting problem, however, deep learning methods are preferable in terms of simplicity and immediacy of use, since they do not require time-consuming model selection for each cell. Deep learning approaches are also appropriate when the goal is to reduce the maximum forecasting error. Statistical methods, by contrast, provide more precise forecasts, but they require domain knowledge of the data and computationally expensive techniques to select the best parameters.
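For readers unfamiliar with the baseline end of this trade-off, the Python (NumPy) sketch below shows the kind of simple statistical baseline such comparisons start from: a seasonal-naive forecast scored by mean absolute error. The season length and horizon are illustrative assumptions (hourly data with a daily cycle), not values from the paper.

```python
import numpy as np

def seasonal_naive_forecast(series, season=24, horizon=24):
    # Predict each future step with the value one season earlier --
    # a standard no-training baseline for periodic crowd-density data.
    series = np.asarray(series, dtype=float)
    return np.array([series[len(series) - season + (h % season)] for h in range(horizon)])

def mae(y_true, y_pred):
    # Mean absolute error between observed and forecast values.
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
```

Any statistical or deep learning model worth deploying per cell should beat this baseline; its appeal is that it needs no model selection at all.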


2020 ◽  
Vol 32 (2) ◽  
pp. 025105 ◽  
Author(s):  
Bo Liu ◽  
Jiupeng Tang ◽  
Haibo Huang ◽  
Xi-Yun Lu

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Huanyu Liu ◽  
Jiaqi Liu ◽  
Junbao Li ◽  
Jeng-Shyang Pan ◽  
Xiaqiong Yu

Magnetic resonance imaging (MRI) is widely used in the detection and diagnosis of diseases. High-resolution MR images help doctors locate lesions and diagnose diseases. However, acquiring high-resolution MR images requires high magnetic field intensity and long scanning times, which cause patient discomfort and easily introduce motion artifacts, degrading image quality. Hardware-based improvements in imaging resolution have therefore reached their limit. Motivated by this, a unified framework based on deep learning super resolution is proposed to transfer state-of-the-art deep learning methods from natural images to MRI super resolution. Compared with traditional image super-resolution methods, deep learning super-resolution methods have stronger feature extraction and characterization abilities, can learn prior knowledge from a large number of samples, and yield more stable and higher-quality reconstructions. We propose a unified framework for deep learning-based MRI super resolution that incorporates five current deep learning methods with the best super-resolution performance. In addition, a high/low-resolution MR image dataset with scales of ×2, ×3, and ×4 was constructed, covering four body parts: skull, knee, breast, and head and neck. Experimental results show that the proposed unified framework of deep learning super resolution reconstructs the data better than traditional methods and provides a standard dataset and experimental benchmark for applying deep learning super resolution to MR images.
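Benchmarks like the one described pair a degraded low-resolution image with the original and score reconstructions, commonly by PSNR. The minimal Python (NumPy) sketch below shows an average-pool degradation, a nearest-neighbour upscaling baseline, and the PSNR metric; the paper's actual degradation model, scales, and metrics may differ, so treat these as generic stand-ins.

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    # Peak signal-to-noise ratio in dB; higher is better.
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

def downsample(img, scale):
    # Average-pool downsampling to synthesize a low-resolution pair.
    h, w = img.shape
    img = img[:h - h % scale, :w - w % scale]
    return img.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def upsample_nearest(img, scale):
    # Nearest-neighbour upscaling: the weakest baseline any SR method should beat.
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)
```

A deep SR model plugs in where `upsample_nearest` is, and its PSNR against the held-out high-resolution image is what the benchmark reports.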


2020 ◽  
Vol 897 (2) ◽  
pp. L32
Author(s):  
Sumiaya Rahman ◽  
Yong-Jae Moon ◽  
Eunsu Park ◽  
Ashraf Siddique ◽  
Il-Hyun Cho ◽  
...  

2021 ◽  
Author(s):  
Yaoxian Lv ◽  
Lei Cai ◽  
Jingyang Gao

Abstract Background: Single-molecule real-time (SMRT) sequencing data are characterized by long reads and high read depth. Compared with next-generation sequencing (NGS), SMRT sequencing data can reveal more structural variations (SVs) and have greater advantages for variant calling. However, SMRT sequencing data contain substantial sequencing errors and noise, which make calling SVs from the sequencing data inaccurate. Most existing tools are unable to overcome these sequencing errors when detecting genomic deletions. Methods and results: In this investigation, we propose a new method for calling deletions from SMRT sequencing data, called MaxDEL. MaxDEL effectively overcomes the noise in SMRT sequencing data and integrates new machine learning and deep learning technologies. First, it uses a machine learning method to calibrate the deletion regions from a variant call format (VCF) file. Second, MaxDEL develops a novel feature visualization method that converts variant features to images and uses these images to accurately call deletions with a convolutional neural network (CNN). The results show that MaxDEL achieves better accuracy and recall for calling variants than existing methods on both real and simulated data. Conclusions: We propose a method (MaxDEL) for calling deletion variants that effectively utilizes both machine learning and deep learning methods. We tested it on different SMRT datasets and evaluated its effectiveness. The results show that machine learning and deep learning methods have great potential for calling deletion variants.
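The feature-visualization step, converting per-position variant features into an image a CNN can classify, can be sketched generically in Python (NumPy). The encoding below (read depth as a bar-chart channel, mapping quality as a second channel) is a hypothetical illustration of the idea; MaxDEL's actual feature set and image layout are defined in the paper, and the function and parameter names here are invented for this sketch.

```python
import numpy as np

def features_to_image(depth, mapq, height=32):
    """Encode per-position read depth and mapping quality around a
    candidate deletion as a 2-channel image for a CNN.

    Hypothetical encoding for illustration only, not MaxDEL's actual scheme.
    """
    depth = np.asarray(depth, dtype=float)
    mapq = np.asarray(mapq, dtype=float)
    img = np.zeros((2, height, len(depth)))
    # Channel 0: depth rendered as one bar-chart column per position,
    # so a deletion shows up as a visible dip in coverage.
    bars = np.clip((depth / max(depth.max(), 1.0) * height).astype(int), 0, height)
    for j, b in enumerate(bars):
        img[0, height - b:, j] = 1.0
    # Channel 1: mapping quality (0-60) normalized and broadcast down each column.
    img[1] = mapq / 60.0
    return img
```

A CNN then classifies each candidate region's image as a true deletion or a sequencing-noise artifact, which is the general shape of the pipeline the abstract describes.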

