segmentation quality
Recently Published Documents


TOTAL DOCUMENTS

142
(FIVE YEARS 37)

H-INDEX

15
(FIVE YEARS 2)

2022 ◽  
pp. 016173462110688
Author(s):  
Aleksandra Wilczewska ◽  
Szymon Cygan ◽  
Jakub Żmigrodzki

Although two-dimensional speckle tracking echocardiography has gained a strong position among medical diagnostic techniques in cardiology, it still requires further development to improve its repeatability and reliability. Few works have attempted to incorporate left-ventricle segmentation results into the displacement and strain estimation process to improve its performance. We propose using mask information as an additional penalty in elastic-image-registration-based displacement estimation. This approach was studied on short-axis-view synthetic echocardiographic data segmented with an active contour method. The obtained masks were distorted to different degrees, using different methods, to assess the influence of segmentation quality on the displacement and strain estimation process. The results of displacement and circumferential strain estimation show that, even though the method depends on mask quality, the potential loss of accuracy due to poor segmentation is much lower than the potential accuracy gain in cases where the segmentation performs well.
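The mask-penalty idea can be sketched as a registration cost with one extra term. The function name, the sum-of-squared-differences similarity term, and the weight `alpha` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def registration_cost(fixed, moving_warped, fixed_mask, moving_mask_warped, alpha=0.1):
    """Image-similarity term plus a mask-agreement penalty.

    The penalty grows when the warped segmentation mask disagrees with the
    fixed-image mask, steering the estimated displacements toward solutions
    consistent with the left-ventricle segmentation. `alpha` is a
    hypothetical weighting of the penalty term.
    """
    ssd = np.mean((fixed - moving_warped) ** 2)
    mask_penalty = np.mean((fixed_mask.astype(float) - moving_mask_warped.astype(float)) ** 2)
    return ssd + alpha * mask_penalty
```

An optimizer would minimize this cost over candidate displacement fields; a poor mask only perturbs the penalty term, which is consistent with the paper's observation that the method degrades gracefully with mask quality.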


2021 ◽  
Vol 14 (1) ◽  
pp. 23
Author(s):  
Yiping Gong ◽  
Fan Zhang ◽  
Xiangyang Jia ◽  
Zhu Mao ◽  
Xianfeng Huang ◽  
...  

Although great success has been achieved in instance segmentation, accurately segmenting instances remains difficult, especially at object edges. The problem is more pronounced for instance segmentation in remote sensing imagery due to diverse scales, variable illumination, small objects, and complex backgrounds. We find that most current instance segmentation networks do not consider the segmentation difficulty of different instances or of different regions within an instance. In this paper, we study this problem and propose an ensemble method to segment instances from remote sensing images that enhances both hard-to-segment instances and instance edges. First, we apply a pixel-level Dice metric that reliably describes the segmentation quality of each instance to achieve online hard instance learning; instances with low Dice values are studied with emphasis. Second, we generate a penalty map based on an analysis of boundary shapes that not only enhances object edges but also discriminatively strengthens edges of different shapes. That is, different areas of an object, such as internal areas, flat edges, and sharp edges, are distinguished and discriminatively weighted. Finally, hard-to-segment instance learning and the shape-penalty map are integrated for precise instance segmentation. To evaluate the effectiveness and generalization ability of the proposed method, we train the classic instance segmentation network Mask R-CNN and conduct experiments on two different types of remote sensing datasets: the iSAID-Reduce100 and JKGW_WHU datasets, which have extremely different feature distributions and spatial resolutions. Comprehensive experimental results show that the proposed method improved segmentation results by 2.78% and 1.77% in mask AP on the iSAID-Reduce100 and JKGW_WHU datasets, respectively. We also test other state-of-the-art (SOTA) methods that focus on inaccurate edges; experiments demonstrate that our method outperforms them.
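The pixel-level Dice metric and the online hard-instance selection it enables can be sketched as follows. The function names and the 0.7 hardness threshold are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Pixel-level Dice coefficient between two binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def hard_instances(pred_masks, gt_masks, threshold=0.7):
    """Indices of instances whose Dice falls below `threshold`, i.e. the
    hard-to-segment instances to emphasise during training."""
    return [i for i, (p, g) in enumerate(zip(pred_masks, gt_masks))
            if dice(p, g) < threshold]
```

During training, the selected indices would receive a larger loss weight, so instances with low Dice values are "studied with emphasis" as described above.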


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7521
Author(s):  
Agnieszka Stankiewicz ◽  
Tomasz Marciniak ◽  
Adam Dabrowski ◽  
Marcin Stopa ◽  
Elzbieta Marciniak ◽  
...  

This paper proposes an efficient segmentation of the preretinal area between the inner limiting membrane (ILM) and the posterior cortical vitreous (PCV) of the human eye in images obtained with optical coherence tomography (OCT). The research was carried out using a database of three-dimensional OCT imaging scans obtained with the Optovue RTVue XR Avanti device. Various types of neural networks (UNet, Attention UNet, ReLayNet, LFUNet) were tested for semantic segmentation; their effectiveness was assessed using the Dice coefficient and compared to graph-theory techniques. Improvement in segmentation efficiency was achieved through the use of relative distance maps. We also show that selecting a larger kernel size for convolutional layers can improve segmentation quality, depending on the neural network model. In the case of the PCV, we obtain an effectiveness reaching up to 96.35%. The proposed solution can be widely used to diagnose vitreomacular traction changes, which is not yet available in scientific or commercial OCT imaging solutions.
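A relative distance map of the kind used above can be sketched per image column of a B-scan: each pixel is assigned its normalized position between two reference boundaries. The choice of boundaries and the clipping to [0, 1] are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def relative_distance_map(height, top, bottom):
    """Per-column relative distance map for a B-scan with `height` rows.

    `top` and `bottom` are 1-D arrays giving, for every image column, the
    row index of two reference boundaries (e.g. a rough layer estimate and
    the image bottom). Each pixel gets its position between the boundaries,
    normalised to [0, 1]; values outside the band are clipped. Such a map
    can be stacked as an extra input channel for the segmentation network.
    """
    rows = np.arange(height)[:, None]              # (H, 1) row indices
    span = np.maximum(bottom - top, 1)             # avoid division by zero
    rdm = (rows - top[None, :]) / span[None, :]
    return np.clip(rdm, 0.0, 1.0)
```

Feeding this map alongside the intensity image gives the network an explicit spatial prior, which is one plausible reading of why the distance maps improve efficiency.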


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6884
Author(s):  
Roman Dębski ◽  
Rafał Dreżewski

Sensor data streams often represent signals/trajectories that are twice differentiable (e.g., to give continuous velocity and acceleration), and this property must be reflected in their segmentation. An adaptive streaming algorithm for this problem is presented. It is based on a greedy look-ahead strategy and built on the concept of a cubic splinelet. A characteristic feature of the proposed algorithm is real-time simultaneous segmentation, smoothing, and compression of data streams. Segmentation quality is measured in terms of the signal approximation accuracy and the corresponding compression ratio. The numerical results show relatively high compression ratios (from 135 to 208, i.e., compressed stream sizes up to 208 times smaller than the input) combined with approximation errors comparable to those obtained from the state-of-the-art global reference algorithm. The proposed algorithm can be applied in various domains, including online compression and/or smoothing of data streams coming from sensors, real-time IoT analytics, and embedded time-series databases.
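The greedy look-ahead strategy can be sketched as follows, with a per-segment least-squares cubic polynomial standing in for the paper's cubic splinelet (which additionally enforces smoothness across segment joints). The tolerance, minimum segment length, and the "4 coefficients per segment" compression accounting are hypothetical simplifications:

```python
import numpy as np

def fit_splinelet(t, y):
    """Least-squares cubic fit over one segment (an illustrative stand-in
    for a cubic splinelet) plus its maximum absolute approximation error."""
    coeffs = np.polyfit(t, y, deg=3)
    err = np.max(np.abs(np.polyval(coeffs, t) - y))
    return coeffs, err

def greedy_segment(t, y, tol, min_len=8):
    """Greedy look-ahead segmentation: extend each segment sample by sample
    while the cubic fit stays within `tol`; emit the segment otherwise.
    Returns the (start, end, coeffs) list and the achieved compression
    ratio (raw samples vs. 4 stored coefficients per segment)."""
    segments, start = [], 0
    while start < len(t):
        end = min(start + min_len, len(t))
        coeffs, _ = fit_splinelet(t[start:end], y[start:end])
        while end < len(t):
            c, err = fit_splinelet(t[start:end + 1], y[start:end + 1])
            if err > tol:
                break
            coeffs, end = c, end + 1
        segments.append((start, end, coeffs))
        start = end
    ratio = len(y) / (4.0 * len(segments))  # 4 cubic coefficients per segment
    return segments, ratio
```

A stream that is exactly cubic collapses into a single segment, and smooth streams with occasional curvature changes yield the long segments behind the high compression ratios reported above.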


Author(s):  
Sebastian Nowak ◽  
Maike Theis ◽  
Barbara D. Wichtmann ◽  
Anton Faron ◽  
Matthias F. Froelich ◽  
...  

Abstract
Objectives To develop a pipeline for automated body composition analysis and skeletal muscle assessment with integrated quality control for large-scale application in opportunistic imaging.
Methods First, a convolutional neural network for extraction of a single slice at the L3/L4 lumbar level was developed on CT scans of 240 patients using the nnU-Net framework. Second, a 2D competitive dense fully convolutional U-Net for segmentation of visceral and subcutaneous adipose tissue (VAT, SAT) and skeletal muscle (SM), with subsequent determination of the fatty muscle fraction (FMF), was developed on single CT slices of 1143 patients. For both steps, automated quality control was integrated: a logistic regression model classifying the presence of L3/L4 and a linear regression model predicting segmentation quality in terms of the Dice score. To evaluate the performance of the entire pipeline end-to-end, body composition metrics and FMF were compared to manual analyses of 364 patients from two centers.
Results Excellent results were observed for slice extraction (z-deviation = 2.46 ± 6.20 mm) and segmentation (Dice score for SM = 0.95 ± 0.04, VAT = 0.98 ± 0.02, SAT = 0.97 ± 0.04) on the dual-center test set, excluding cases with artifacts due to metallic implants. No data were excluded for the end-to-end performance analyses. With a restrictive setting of the integrated segmentation quality control, 39 of 364 patients were excluded, including 8 cases with metallic implants. This setting ensured high agreement between manual and fully automated analyses, with mean relative area deviations of ΔSM = 3.3 ± 4.1%, ΔVAT = 3.0 ± 4.7%, ΔSAT = 2.7 ± 4.3%, and ΔFMF = 4.3 ± 4.4%.
Conclusions This study presents an end-to-end automated deep learning pipeline for large-scale opportunistic assessment of body composition metrics and sarcopenia biomarkers in clinical routine.
Key Points
• Body composition metrics and skeletal muscle quality can be opportunistically determined from routine abdominal CT scans.
• A pipeline consisting of two convolutional neural networks allows an end-to-end automated analysis.
• Machine-learning-based quality control ensures high agreement between manual and automatic analysis.
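The gating role of the Dice-predicting quality-control model can be sketched as follows. The feature vector, the weights, and the 0.9 cut-off are hypothetical stand-ins for a model fitted on annotated data, not the paper's actual regression:

```python
import numpy as np

def predicted_dice(features, weights, bias):
    """Linear-regression surrogate for segmentation quality: predicts the
    Dice score of an automatic segmentation from simple mask features.
    `weights` and `bias` stand in for coefficients fitted on labelled data."""
    return float(np.dot(features, weights) + bias)

def qc_gate(feature_rows, weights, bias, min_dice=0.9):
    """Restrictive quality-control setting: keep only cases whose predicted
    Dice is at least `min_dice`; return (kept indices, excluded indices)."""
    keep, drop = [], []
    for i, f in enumerate(feature_rows):
        (keep if predicted_dice(f, weights, bias) >= min_dice else drop).append(i)
    return keep, drop
```

Raising `min_dice` makes the gate more restrictive, which mirrors the trade-off above: a few cases (including implant artifacts) are excluded in exchange for high agreement with manual analysis on the rest.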


2021 ◽  
Author(s):  
Haoran Chen ◽  
Robert F. Murphy

Abstract Cell segmentation is a cornerstone of many bioimage informatics studies. Inaccurate segmentation introduces computational error in downstream cellular analyses. Evaluating segmentation results is thus a necessary step both for developing segmentation methods and for choosing the most appropriate one for a given kind of tissue or image. The evaluation process has typically involved comparing segmentations to those generated by humans, which can be expensive and subject to unknown bias. We present here an approach that evaluates cell segmentation methods without relying on comparison to human-generated results. For this, we defined a number of segmentation quality metrics that can be applied to multichannel fluorescence images. We calculated these metrics for 11 previously described segmentation methods applied to datasets from 5 multiplexed microscope modalities covering 5 tissues. Using principal component analysis to combine the metrics, we defined an overall cell segmentation quality score and ranked the segmentation methods. A Reproducible Research Archive containing all data and code will be made available upon publication at http://hubmap.scs.cmu.edu.
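Combining several quality metrics into one score via principal component analysis can be sketched as below. The z-scoring of columns, the SVD-based projection, and the sign convention are simplifications assumed for illustration, not the paper's exact procedure:

```python
import numpy as np

def overall_quality_scores(metrics):
    """Combine per-method quality metrics into a single score.

    `metrics` is an (n_methods, n_metrics) array, each column a quality
    metric oriented so that larger is better. Columns are z-scored, then
    projected onto the first principal component (computed via SVD); the
    component's sign is chosen so that a higher score means higher quality.
    """
    X = (metrics - metrics.mean(axis=0)) / (metrics.std(axis=0) + 1e-12)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pc1 = vt[0]
    if pc1.sum() < 0:            # orient so "better metrics -> higher score"
        pc1 = -pc1
    return X @ pc1
```

Ranking the methods then reduces to sorting the returned scores, which is the essence of the overall cell segmentation quality score described above.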


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5482
Author(s):  
Ahmed Sharafeldeen ◽  
Mohamed Elsharkawy ◽  
Norah Saleh Alghamdi ◽  
Ahmed Soliman ◽  
Ayman El-Baz

A new segmentation technique is introduced for delineating the lung region in 3D computed tomography (CT) images. To accurately model the distribution of Hounsfield-scale values within both the chest and lung regions, a new probabilistic model is developed that depends on a linear combination of Gaussians (LCG). Moreover, we modified the conventional expectation-maximization (EM) algorithm to run sequentially, estimating both the dominant Gaussian components (one for the lung region and one for the chest region) and the subdominant Gaussian components, which are used to refine the final estimated joint density. To estimate the marginal densities from the mixed density, a modified k-means clustering approach is employed to classify the subdominant Gaussian components, determining which belong to the lung and which to the chest. The initial LCG-based segmentation is then refined by imposing 3D morphological constraints based on a 3D Markov–Gibbs random field (MGRF) with analytically estimated potentials. The proposed approach was tested on CT data from 32 coronavirus disease 2019 (COVID-19) patients. Segmentation quality was quantitatively evaluated using four metrics: Dice similarity coefficient (DSC), overlap coefficient, 95th-percentile bidirectional Hausdorff distance (BHD), and absolute lung volume difference (ALVD), achieving 95.67±1.83%, 91.76±3.29%, 4.86±5.01, and 2.93±2.39, respectively. The reported results show that the proposed approach can accurately segment healthy lung tissue as well as pathological lung tissue affected by COVID-19, outperforming four current state-of-the-art deep learning-based lung segmentation approaches.
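The core of the dominant-component estimation can be illustrated with a tiny 1-D EM for a two-component Gaussian mixture over Hounsfield values, one (dark) lung mode and one (bright) chest mode. This is a simplified stand-in for the paper's sequential LCG estimation, which also fits subdominant components; the initialization and iteration count are arbitrary choices:

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Plain EM for a two-component 1-D Gaussian mixture.

    Returns mixture weights, means, and standard deviations. Initialising
    the means at the sample extremes is a simple heuristic that works for
    well-separated modes such as lung vs. chest Hounsfield values.
    """
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        d = (x[:, None] - mu) / sigma
        p = pi * np.exp(-0.5 * d * d) / (sigma * np.sqrt(2 * np.pi))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return pi, mu, sigma
```

A voxel would then be assigned to the component with the higher responsibility, giving the initial labeling that the MGRF step refines.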


2021 ◽  
Vol 12 (3) ◽  
pp. 188-214
Author(s):  
Hamza Abdellahoum ◽  
Abdelmajid Boukra

The image segmentation problem is one of the most studied problems because it arises in many application areas. In this paper, the authors propose new algorithms to resolve two problems, namely cluster detection and center initialization. They opt for statistical methods to automatically determine the number of clusters and for fuzzy set theory to start the algorithm from a near-optimal configuration. They use the image histogram to determine the number of clusters and a cooperative approach involving three metaheuristics, genetic algorithm (GA), firefly algorithm (FA), and biogeography-based optimization (BBO), to detect the cluster centers in the initialization step. The experimental study shows that, first, the proposed solution determines a near-optimal initial set of cluster centers, leading to good image segmentation compared with well-known methods; second, the number of clusters determined automatically by the proposed approach helps improve image segmentation quality.


2021 ◽  
Author(s):  
Anuradha Kar ◽  
Manuel Petit ◽  
Yassin Refahi ◽  
Guillaume Cerutti ◽  
Christophe Godin ◽  
...  

Segmenting three-dimensional microscopy images is essential for understanding phenomena such as morphogenesis, cell division, cellular growth, and genetic expression patterns. Recently, deep learning (DL) pipelines have been developed that claim to provide highly accurate segmentation of cellular images and are increasingly considered the state of the art for image segmentation problems. However, it remains difficult to establish their relative performance, because their diversity and the lack of uniform evaluation strategies make it hard to know how their results compare. In this paper, we first make an inventory of the available DL methods for 3D segmentation. We then implement and quantitatively compare a number of representative DL pipelines, alongside a highly efficient non-DL method named MARS. The DL methods were trained on a common dataset of 3D cellular confocal microscopy images, and their segmentation accuracy was also tested in the presence of different image artefacts. A new method for evaluating segmentation quality was adopted that isolates errors due to under- and over-segmentation. This is complemented with new visualisation strategies that make interactive exploration of segmentation quality possible. Our analysis shows that the DL pipelines have very different levels of accuracy; two of them show high performance and offer clear advantages in terms of adaptability to new data.
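Attributing errors to under- vs. over-segmentation can be sketched by overlap matching between two label images: a ground-truth cell covered by several predicted labels indicates over-segmentation, and a predicted cell spanning several ground-truth cells indicates under-segmentation. The overlap threshold and counting rule are illustrative assumptions, not the paper's exact evaluation method:

```python
import numpy as np

def split_merge_counts(gt, pred, min_overlap=0.25):
    """Count over- and under-segmented cells between two label images.

    `gt` and `pred` are integer label images with background label 0.
    A ground-truth cell overlapped (by at least `min_overlap` of its area)
    by more than one predicted label counts as over-segmented; a predicted
    cell overlapping more than one ground-truth label counts as a merge
    (under-segmentation).
    """
    def covers(a, b):
        # For each label of `a`, the labels of `b` covering >= min_overlap of it.
        out = {}
        for la in np.unique(a):
            if la == 0:
                continue
            region = b[a == la]
            labs = [lb for lb in np.unique(region)
                    if lb != 0 and (region == lb).sum() / region.size >= min_overlap]
            out[la] = labs
        return out
    over = sum(len(v) > 1 for v in covers(gt, pred).values())
    under = sum(len(v) > 1 for v in covers(pred, gt).values())
    return over, under
```

Separating the two error types in this way makes it possible to see whether a pipeline tends to split cells or to merge them, rather than reporting a single aggregate accuracy.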

