SELECTION OF REGION OF INTEREST IN NON-CONTACT MONITORING OF RESPIRATION PARAMETERS USING SEMANTIC SEGMENTATION

Author(s):  
O.K. Bodilovskyi
2021 ◽  
Vol 6 (1) ◽  
pp. e000898
Author(s):  
Andrea Peroni ◽  
Anna Paviotti ◽  
Mauro Campigotto ◽  
Luis Abegão Pinto ◽  
Carlo Alberto Cutolo ◽  
...  

Objective: To develop and test a deep learning (DL) model for semantic segmentation of anatomical layers of the anterior chamber angle (ACA) in digital gonio-photographs.

Methods and analysis: We used a pilot dataset of 274 ACA sector images, annotated by expert ophthalmologists to delineate five anatomical layers: iris root, ciliary body band, scleral spur, trabecular meshwork and cornea. Narrow depth of field and peripheral vignetting prevented clinicians from annotating part of each image with sufficient confidence, introducing a degree of subjectivity and feature correlation into the ground truth. To overcome these limitations, we present a DL model designed and trained to perform two tasks simultaneously: (1) maximise the segmentation accuracy within the annotated region of each frame and (2) identify a region of interest (ROI) based on local image informativeness. Moreover, our calibrated model makes its results interpretable by returning pixel-wise classification uncertainty through Monte Carlo dropout.

Results: The model was trained and validated in a 5-fold cross-validation experiment on ~90% of the available data, achieving ~91% average segmentation accuracy within the annotated part of each ground-truth image of the hold-out test set. An appropriate ROI was successfully identified in all test frames. The uncertainty estimation module correctly located inaccuracies and errors in the segmentation outputs.

Conclusion: The proposed model improves on the only previously published work on gonio-photograph segmentation and may be a valid support for the automatic processing of these images to evaluate local tissue morphology. Uncertainty estimation is expected to facilitate acceptance of this system in clinical settings.
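The Monte Carlo dropout step the abstract describes — running several stochastic forward passes and turning their spread into a pixel-wise uncertainty map — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and array layout are assumptions.

```python
import numpy as np

def mc_dropout_uncertainty(stochastic_probs):
    """Aggregate T stochastic forward passes (dropout left active at
    inference) into a mean prediction and a per-pixel uncertainty map.

    stochastic_probs: array of shape (T, H, W, C) holding softmax
    outputs from T Monte Carlo dropout passes of the same image.
    """
    mean_probs = stochastic_probs.mean(axis=0)            # (H, W, C)
    # Predictive entropy: high where the averaged prediction is spread
    # over several classes, near zero where one class dominates.
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)
    segmentation = mean_probs.argmax(axis=-1)             # (H, W)
    return segmentation, entropy
```

In practice the T passes come from running the trained network with its dropout layers left active at inference time.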


2007 ◽  
Author(s):  
Li Lan ◽  
Maryellen L. Giger ◽  
Joel R. Wilkie ◽  
Tamara J. Vokes ◽  
Weijie Chen ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Gaihua Wang ◽  
Qianyu Zhai

Abstract: Contextual information is a key factor affecting semantic segmentation. Recently, many methods have tried to use the self-attention mechanism to capture more contextual information, but self-attention is computationally expensive. To address this, a novel self-attention network, called FFANet, is designed to capture contextual information efficiently, reducing the amount of computation through strip pooling and linear layers. It proposes a feature fusion (FF) module to calculate an affinity matrix that captures the relationships between pixels. Multiplying the affinity matrix with the feature map then selectively increases the weight of the region of interest. Extensive experiments conducted on public datasets (PASCAL VOC2012, CityScapes) and a remote sensing dataset (DLRSD) achieved mean IoU scores of 74.5%, 70.3%, and 63.9%, respectively. Compared with current typical algorithms, the proposed method achieves excellent performance.
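Strip pooling, the operation FFANet uses to cut the cost of attention, averages features over full-height and full-width strips rather than square windows, so each position is summarized by two 1-D profiles. A minimal sketch — the function name and array layout are illustrative, not from the paper:

```python
import numpy as np

def strip_pool(feature_map):
    """Strip pooling: average over full-width rows and full-height
    columns, then broadcast the two 1-D summaries back over the grid.

    feature_map: array of shape (C, H, W).
    """
    h_strip = feature_map.mean(axis=2, keepdims=True)  # (C, H, 1) row means
    w_strip = feature_map.mean(axis=1, keepdims=True)  # (C, 1, W) column means
    # Broadcasting recombines the strips into a (C, H, W) context map.
    return h_strip + w_strip
```

Pooling along strips costs O(HW) per channel, versus the O((HW)^2) pairwise affinities of dense self-attention, which is the source of the savings the abstract alludes to.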


2020 ◽  
Vol 9 (8) ◽  
pp. 486 ◽  
Author(s):  
Aleksandar Milosavljević

The proliferation of high-resolution remote sensing sensors and platforms imposes the need for effective analysis and automated processing of high volumes of aerial imagery. Recent advances in artificial intelligence (AI), in the form of deep learning (DL) and convolutional neural networks (CNNs), have shown remarkable results in several image-related tasks and have naturally gained the attention of the remote sensing community. In this paper, we focus on specifying a processing pipeline that relies on existing state-of-the-art DL segmentation models to automate building footprint extraction. The proposed pipeline is organized in three stages: image preparation, model implementation and training, and prediction fusion. For the first and third stages, we introduce several techniques that leverage the specifics of remote sensing imagery, while for the selection of the segmentation model, we rely on empirical examination. We present and discuss several experiments conducted on the Inria Aerial Image Labeling Dataset. Our findings confirm that automatic processing of remote sensing imagery using DL semantic segmentation is both possible and can provide applicable results. The proposed pipeline can potentially be transferred to any other remote sensing imagery segmentation task if a corresponding dataset is available.
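A common form of the prediction-fusion stage in remote sensing is to slide a tile over a large image and average the per-pixel predictions of overlapping tiles. The sketch below shows that generic idea only; it is not the paper's specific fusion techniques, and the function name, tiling scheme, and `predict` callable are assumptions.

```python
import numpy as np

def fuse_tile_predictions(image_hw, tile, stride, predict):
    """Slide a square tile over a large image and average overlapping
    per-pixel predictions -- one simple form of prediction fusion.

    predict: callable mapping a (tile, tile[, C]) crop to a (tile, tile)
    array of per-pixel scores (stand-in for a trained segmentation model).
    """
    H, W = image_hw.shape[:2]
    acc = np.zeros((H, W), dtype=np.float64)   # summed scores
    cnt = np.zeros((H, W), dtype=np.float64)   # how many tiles covered each pixel
    for y in range(0, H - tile + 1, stride):
        for x in range(0, W - tile + 1, stride):
            acc[y:y+tile, x:x+tile] += predict(image_hw[y:y+tile, x:x+tile])
            cnt[y:y+tile, x:x+tile] += 1
    return acc / np.maximum(cnt, 1)            # average; uncovered pixels stay 0
```

Averaging overlapping tiles suppresses the border artifacts that CNN segmenters typically produce near crop edges.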


2019 ◽  
Vol 12 (4) ◽  
pp. 417-421 ◽  
Author(s):  
Alexander R Podgorsak ◽  
Ryan A Rava ◽  
Mohammad Mahdi Shiraz Bhurwani ◽  
Anusha R Chandra ◽  
Jason M Davies ◽  
...  

Background: Angiographic parametric imaging (API) is an imaging method that uses digital subtraction angiography (DSA) to characterize contrast media dynamics throughout the vasculature. This requires manual placement of a region of interest over a lesion (eg, an aneurysm sac) by an operator.

Objective: To determine whether a convolutional neural network (CNN) can identify and segment the intracranial aneurysm (IA) sac in a DSA and extract API radiomic features with minimal errors compared with human user results.

Methods: Three hundred and fifty angiographic images of IAs were retrospectively collected. The IAs and surrounding vasculature were manually contoured and the masks fed to a CNN tasked with semantic segmentation. The CNN segmentations were assessed for accuracy using the Dice similarity coefficient (DSC) and Jaccard index (JI). The area under the receiver operating characteristic curve (AUROC) was also computed. API features based on the CNN segmentation were compared with the human user results.

Results: The mean JI was 0.823 (95% CI 0.783 to 0.863) for the IA and 0.737 (95% CI 0.682 to 0.792) for the vasculature. The mean DSC was 0.903 (95% CI 0.867 to 0.937) for the IA and 0.849 (95% CI 0.811 to 0.887) for the vasculature. The mean AUROC was 0.791 (95% CI 0.740 to 0.817) for the IA and 0.715 (95% CI 0.678 to 0.733) for the vasculature. All five API features measured inside the predicted masks were within 18% of those measured inside manually contoured masks.

Conclusions: CNN segmentation of IAs and surrounding vasculature from DSA images is non-inferior to manual contouring of aneurysms and can be used in parametric imaging procedures.
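The two overlap metrics reported above are standard; for binary masks they reduce to a few lines (the function name is illustrative):

```python
import numpy as np

def dice_and_jaccard(pred, truth):
    """Dice similarity coefficient and Jaccard index for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())  # 2|A∩B| / (|A|+|B|)
    jaccard = inter / union                          # |A∩B| / |A∪B|
    return dice, jaccard
```

The two are monotonically related (DSC = 2J/(1+J)), which is why the paper's DSC values sit above the corresponding JI values.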


Diagnostics ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 616
Author(s):  
Sivaramakrishnan Rajaraman ◽  
Les R. Folio ◽  
Jane Dimperio ◽  
Philip O. Alderson ◽  
Sameer K. Antani

Deep learning (DL) has drawn tremendous attention for object localization and recognition in both natural and medical images. U-Net segmentation models have demonstrated superior performance compared to conventional hand-crafted feature-based methods. Medical image modality-specific DL models are better at transferring domain knowledge to a relevant target task than those pretrained on stock photography images. This characteristic helps improve model adaptation, generalization, and class-specific region of interest (ROI) localization. In this study, we train chest X-ray (CXR) modality-specific U-Nets and other state-of-the-art U-Net models for semantic segmentation of tuberculosis (TB)-consistent findings. Automated segmentation of such manifestations could help radiologists reduce errors and supplement decision-making while improving patient care and productivity. Our approach uses the publicly available TBX11K CXR dataset with weak TB annotations, typically provided as bounding boxes, to train a set of U-Net models. Next, we improve the results by augmenting the training data with weak localization, postprocessed into an ROI mask, from a DL classifier trained to classify CXRs as showing normal lungs or suspected TB manifestations. Test data are individually derived from the TBX11K CXR training distribution and other cross-institutional collections, including the Shenzhen TB and Montgomery TB CXR datasets. We observe that our augmented training strategy helped the CXR modality-specific U-Net models achieve superior performance with test data derived from the TBX11K CXR training distribution and cross-institutional collections (p &lt; 0.05). We believe that this is the first study to (i) use CXR modality-specific U-Nets for semantic segmentation of TB-consistent ROIs and (ii) evaluate the segmentation performance while augmenting the training data with weak TB-consistent localizations.
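The weak TB annotations arrive as bounding boxes, which must be postprocessed into ROI masks before a U-Net can train on them. One straightforward rasterization — the function name and (y0, x0, y1, x1) box convention are assumptions, not the paper's code — is:

```python
import numpy as np

def boxes_to_mask(shape_hw, boxes):
    """Rasterize weak bounding-box annotations into a binary ROI mask,
    one simple way to turn box labels into segmentation targets.

    boxes: iterable of (y0, x0, y1, x1) with exclusive lower-right corners.
    """
    mask = np.zeros(shape_hw, dtype=np.uint8)
    for y0, x0, y1, x1 in boxes:
        mask[y0:y1, x0:x1] = 1  # overlapping boxes simply merge
    return mask
```

Such box-derived masks are noisy targets (they include non-lesion pixels inside each box), which is one reason the study augments them with classifier-derived localizations.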


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2838
Author(s):  
Gustavo Calderon-Auza ◽  
Cesar Carrillo-Gomez ◽  
Mariko Nakano ◽  
Karina Toscano-Medina ◽  
Hector Perez-Meana ◽  
...  

This paper proposes a teleophthalmology support system that uses object detection and semantic segmentation algorithms, such as the faster region-based CNN (FR-CNN) and SegNet, built on several CNN architectures (Vgg16, MobileNet, AlexNet, etc.). These are used to segment and analyze the principal anatomical elements: the optic disc (OD), the region of interest (ROI) composed of the macular region, the real retinal region, and the vessels. Unlike conventional retinal image quality assessment systems, the proposed system reports possible reasons for a low-quality image, helping the ophthalmoscope operator and the patient acquire and transmit a better-quality image to a central eye hospital for diagnosis. The proposed system consists of four steps: OD detection, OD quality analysis, obstruction detection in the region of interest (ROI), and vessel segmentation. For OD detection, artefact detection, and vessel segmentation, FR-CNN and SegNet are used, while for OD quality analysis, we use transfer learning. The proposed system achieves accuracies of 0.93 for OD detection, 0.86 for OD image quality, 1.0 for artefact detection, and 0.98 for vessel segmentation. As a global performance metric, the kappa-based agreement score between an ophthalmologist and the proposed system is calculated, and it is higher than the score between the ophthalmologist and a general practitioner.
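The kappa-based agreement score used as the global metric is Cohen's kappa: observed agreement between two raters, corrected for the agreement expected by chance. A minimal sketch (function name is illustrative):

```python
import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa between two raters over the same items:
    kappa = (p_o - p_e) / (1 - p_e)."""
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    labels = np.unique(np.concatenate([a, b]))
    p_o = np.mean(a == b)  # observed agreement
    # Chance agreement: product of each rater's marginal label frequencies.
    p_e = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (p_o - p_e) / (1.0 - p_e)
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance, so "system vs. ophthalmologist exceeds general practitioner vs. ophthalmologist" is a direct comparison on this scale.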


Author(s):  
Andrey M. Kitenko ◽  

The paper explores the possibility of using neural networks to single out target artifacts in different types of documents. Numerous types of neural networks are used for document processing, from text analysis to locating the areas where the desired information may be contained. However, to date, there are no perfect document processing systems that can work autonomously, compensating for the human errors that may appear during work due to stress, fatigue and many other reasons. In this work, the emphasis is on the search for and selection of target artifacts in drawings, under conditions of a small amount of initial data. The proposed method of searching for and highlighting artifacts in an image consists of two main parts: detection, and semantic segmentation of the detected area. The method is based on supervised training on labeled data for two convolutional neural networks. The first convolutional network is used to detect an area containing an artifact; in this work, YoloV4 was taken as the basis. For semantic segmentation, the U-Net architecture is used, with a pretrained Efficientnetb0 as the backbone. By combining these neural networks, good results were achieved, even for the selection of certain handwritten texts, without resorting to neural network models specialized for text recognition. This method can be used to search for and highlight artifacts in large datasets, where the artifacts themselves may differ in shape, color and type, may be located in different places in the image, and may or may not overlap other objects.
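The two-stage detect-then-segment flow described above can be sketched generically: a detector proposes boxes, and a segmentation model refines each crop into a pixel mask that is placed back into a full-size mask. The `detect` and `segment` callables below are stand-ins for the trained YoloV4 and U-Net models; the function name and box convention are assumptions.

```python
import numpy as np

def detect_then_segment(image, detect, segment):
    """Two-stage pipeline: detection proposes artifact boxes, then
    semantic segmentation refines each cropped region.

    detect:  image -> iterable of (y0, x0, y1, x1) boxes
    segment: crop  -> binary mask of the crop's spatial shape
    """
    full_mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for y0, x0, y1, x1 in detect(image):
        crop_mask = segment(image[y0:y1, x0:x1])
        # Merge the refined crop mask back into the full-size mask.
        full_mask[y0:y1, x0:x1] = np.maximum(full_mask[y0:y1, x0:x1], crop_mask)
    return full_mask
```

Restricting segmentation to detected crops is what makes the approach workable with little training data: the U-Net only ever sees small, artifact-centered regions.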


Author(s):  
Shafaf Ibrahim ◽  
Zarith Azuren Noor Azmy ◽  
Nur Nabilah Abu Mangshor ◽  
Nurbaity Sabri ◽  
Ahmad Firdaus Ahmad Fadzil ◽  
...  

Scalp problems may occur due to miscellaneous factors, which include genetics, stress, abuse and hair products. Conventional techniques for scalp and hair treatment involve high operational cost and complicated diagnosis. Besides, it is becoming progressively more important for the payer to investigate the value of new treatment options in the management of a specific scalp problem. As these are generally expensive and inconvenient, there is an increasing need for an affordable and convenient way of monitoring scalp conditions. Thus, this paper presents a study of pre-trained classification of scalp conditions using image processing techniques. Initially, the scalp image goes through pre-processing such as image enhancement and greyscale conversion. Next, three features (color, texture, and shape) are extracted from each input image and stored in a Region of Interest (ROI) table. The values of the pre-trained features are subsequently used as a reference in the classification process. A Support Vector Machine (SVM) is employed to classify three types of scalp conditions: alopecia areata (AA), dandruff, and normal. A total of 120 images of scalp conditions were tested. The classification of scalp conditions showed a good performance of 85% accuracy. The outcome of this study is expected to automatically classify the scalp condition and may assist users in selecting a suitable treatment.
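Color/texture/shape feature extraction of the kind described can take many forms; the toy version below is entirely illustrative (the specific statistics and function name are not from the paper). It combines channel means (color), gradient-magnitude statistics (texture), and a foreground-area ratio (shape) into one vector that an SVM could consume:

```python
import numpy as np

def scalp_features(image_rgb, gray):
    """Toy color/texture/shape feature vector for an SVM classifier.

    image_rgb: (H, W, 3) array; gray: (H, W) greyscale version.
    """
    color = image_rgb.reshape(-1, 3).mean(axis=0)          # per-channel means
    gy, gx = np.gradient(gray.astype(float))
    grad_mag = np.hypot(gx, gy)
    texture = np.array([grad_mag.mean(), grad_mag.std()])  # edge-strength stats
    shape = np.array([(gray > gray.mean()).mean()])        # bright-area ratio
    return np.concatenate([color, texture, shape])         # length-6 vector
```

Rows of such vectors, one per training image, would form the ROI table the paper refers to.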

