Deep Learning Assisted COVID-19 Detection Using Full CT-Scans

Author(s):  
Varan Singh Rohila ◽  
Nitin Gupta ◽  
Amit Kaul ◽  
Deepak Sharma

The ongoing pandemic of COVID-19 has shown the limitations of our current medical institutions. There is a need for research in the field of automated diagnosis to speed up the process while maintaining accuracy and reducing computational requirements. In this work, an automatic diagnosis of COVID-19 infection from CT scans of patients using a deep learning technique is proposed. The proposed model, ReCOV-101, uses full chest CT scans to detect varying degrees of COVID-19 infection and requires less computational power. Moreover, in order to improve detection accuracy, the CT scans were preprocessed by employing segmentation and interpolation. The proposed scheme is based on the residual network, taking advantage of skip connections that allow the model to go deeper. The model was trained on a single enterprise-level GPU so that it can easily be deployed at the edge of the network, reducing the communication with the cloud often required for processing the data. The objective of this work is to demonstrate a less hardware-intensive approach to COVID-19 detection with excellent performance that can be combined with medical equipment and help ease the examination procedure. With the proposed model, an accuracy of 94.9% was achieved.
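
The abstract does not give ReCOV-101's layer-level details, so the following is only a minimal sketch of the general idea under stated assumptions: the segmented, interpolated slices of one scan are stacked as input channels to a standard torchvision ResNet-101 with a two-class head (the slice count, input resolution, and head size are illustrative choices, not values from the paper).

```python
# Minimal sketch (not the authors' code): a ResNet-101 classifier over a
# preprocessed full CT scan, with the resampled slices stacked as channels.
import torch
import torch.nn as nn
from torchvision.models import resnet101

class CovidCTClassifier(nn.Module):
    def __init__(self, num_slices=64, num_classes=2):   # assumed values
        super().__init__()
        self.backbone = resnet101(weights=None)
        # Replace the 3-channel RGB stem so a whole scan fits in one forward pass.
        self.backbone.conv1 = nn.Conv2d(num_slices, 64, kernel_size=7,
                                        stride=2, padding=3, bias=False)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x):                                # x: (batch, num_slices, H, W)
        return self.backbone(x)

model = CovidCTClassifier()
scan = torch.randn(1, 64, 224, 224)                      # one segmented, interpolated scan
print(model(scan).shape)                                 # torch.Size([1, 2])
```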


2020 ◽  
Vol 21 (S6) ◽  
Author(s):  
Jianqiang Li ◽  
Guanghui Fu ◽  
Yueda Chen ◽  
Pengzhi Li ◽  
Bo Liu ◽  
...  

Abstract Background Screening of brain computerised tomography (CT) images is a primary method currently used for the initial detection of patients with brain trauma or other conditions. In recent years, deep learning techniques have shown remarkable advantages in clinical practice, and researchers have attempted to use deep learning methods to detect brain diseases from CT images. Methods often used to detect diseases choose images with visible lesions from full-slice brain CT scans, which need to be labelled by doctors. This is an imprecise approach, because doctors detect brain disease from a full sequence of CT images, and one patient may have multiple concurrent conditions in practice. Such methods cannot take into account the dependencies between slices or the causal relationships among various brain diseases. Moreover, labelling images slice by slice is time-consuming and expensive. Detecting multiple diseases from full-slice brain CT images is, therefore, an important research subject with practical implications. Results In this paper, we propose a model called the slice dependencies learning model (SDLM). It learns image features from a series of variable-length brain CT images and the slice dependencies between different slices in a set of images to predict abnormalities. The model only requires labels for the diseases reflected in the full-slice brain scan. We use the CQ500 dataset to evaluate our proposed model, which contains 1194 full sets of CT scans from a total of 491 subjects. Each set of data from one subject contains scans with one to eight different slice thicknesses and various diseases that are captured in a range of 30 to 396 slices per set. The evaluation results show a precision of 67.57%, a recall of 61.04%, an F1 score of 0.6412, and an area under the receiver operating characteristic curve (AUC) of 0.8934. Conclusion The proposed model is a new architecture that uses a full-slice brain CT scan for multi-label classification, unlike traditional methods which only classify brain images at the slice level. It has great potential for application to multi-label detection problems, especially with regard to brain CT images.
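
The abstract describes SDLM only at a high level, so the sketch below is a hedged stand-in for that idea: a shared 2D CNN encodes each slice, an LSTM reads the variable-length slice sequence to capture inter-slice dependencies, and a sigmoid head produces independent per-disease probabilities for multi-label prediction (the feature sizes and the 14-label head are assumptions).

```python
# Illustrative sketch only: per-slice CNN features, an LSTM over the
# variable-length slice sequence, and a sigmoid multi-label head.
import torch
import torch.nn as nn

class SliceSequenceClassifier(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, num_labels=14):   # assumed sizes
        super().__init__()
        self.encoder = nn.Sequential(                    # shared 2D CNN applied per slice
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)   # inter-slice dependencies
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, slices):                           # slices: (num_slices, 1, H, W)
        feats = self.encoder(slices).unsqueeze(0)        # (1, num_slices, feat_dim)
        _, (h, _) = self.lstm(feats)                     # final hidden state summarises the scan
        return torch.sigmoid(self.head(h[-1]))           # independent per-disease probabilities

scan = torch.randn(120, 1, 128, 128)                     # one study with 120 slices
print(SliceSequenceClassifier()(scan).shape)             # torch.Size([1, 14])
```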


2020 ◽  
Vol 26 (4) ◽  
pp. 3088-3105 ◽  
Author(s):  
Mohamed Abdel-Basset ◽  
Rehab Mohamed ◽  
Mohamed Elhoseny

The rapid spread of the COVID-19 virus around the world poses a real threat to public safety. Some COVID-19 symptoms are similar to those of other viral chest diseases, which makes it challenging to develop models for effective detection of COVID-19 infection. This article advocates a model to differentiate between COVID-19 and four other viral chest diseases in an uncertain environment, using the viruses' primary symptoms and CT scans. The proposed model is based on a plithogenic set, which provides more accurate evaluation results in an uncertain environment. The proposed model employs the best-worst method (BWM) and the technique for order of preference by similarity to ideal solution (TOPSIS). Besides, this study discusses how smart Internet of Things technology can assist medical staff in monitoring the spread of COVID-19. Experimental evaluation of the proposed model was conducted on five different chest diseases. Evaluation results demonstrate the proposed model's effectiveness in detecting COVID-19 in all five cases, achieving a detection accuracy of up to 98%.
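
The plithogenic aggregation itself is beyond what the abstract specifies; as a rough illustration of the ranking step only, here is a plain (crisp) TOPSIS implementation in NumPy, where the decision matrix, BWM-style weights, and benefit/cost flags are made-up placeholders rather than values from the article.

```python
# A plain (crisp) TOPSIS ranking sketch in NumPy; the paper works with
# plithogenic sets and BWM-derived weights, which are not reproduced here.
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns)."""
    norm = matrix / np.linalg.norm(matrix, axis=0)            # vector normalisation
    v = norm * weights                                        # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))   # best value per criterion
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))    # worst value per criterion
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)                       # closeness coefficient

# rows: candidate diseases, columns: symptom/CT criteria (made-up numbers)
scores = topsis(np.array([[7.0, 9.0, 6.0], [4.0, 5.0, 8.0], [6.0, 4.0, 5.0]]),
                weights=np.array([0.5, 0.3, 0.2]),            # e.g. derived via BWM
                benefit=np.array([True, True, False]))
print(scores.argsort()[::-1])                                 # alternatives ranked by closeness
```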


Android malware is one of the most serious threats to the current mobile internet. In this paper, we propose a static analysis model that does not need to understand the source code of Android applications. The main idea is that most malware variants are created using automatic tools and that each malware family has characteristic fingerprint features. After decompiling the Android APK, we mapped the opcodes, sensitive API packages, and high-level risky API functions into the three channels of an RGB image, respectively. Then we used a deep learning technique, the convolutional neural network, to identify an Android application as benign or as malware. Finally, the proposed model succeeds in detecting all 200 Android applications (100 benign applications and 100 malware applications) with an accuracy of over 99%, as shown in the experimental results.
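
As a hedged sketch of the described encoding (not the authors' code), the snippet below packs three integer feature sequences, standing in for opcodes, sensitive API packages, and risky API functions, into the R, G, and B planes of one fixed-size image and feeds it to a small CNN with a benign/malware head; the image size and network depth are illustrative assumptions.

```python
# Hypothetical three-channel encoding: each feature sequence fills one plane
# of an RGB image, which a small CNN then classifies as benign or malware.
import numpy as np
import torch
import torch.nn as nn

def features_to_rgb(opcodes, api_packages, api_functions, side=64):
    """Pad/truncate each integer sequence to side*side values and stack as channels."""
    img = np.zeros((3, side, side), dtype=np.float32)
    for ch, seq in enumerate((opcodes, api_packages, api_functions)):
        flat = np.asarray(seq, dtype=np.float32)[: side * side]
        img[ch].flat[: flat.size] = flat / (flat.max() + 1e-8)   # scale to [0, 1]
    return torch.from_numpy(img)

classifier = nn.Sequential(                     # benign vs. malware head
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2))

rgb = features_to_rgb(np.random.randint(0, 256, 500),   # stand-in opcode sequence
                      np.random.randint(0, 64, 300),    # stand-in API package ids
                      np.random.randint(0, 128, 200))   # stand-in risky API ids
print(classifier(rgb.unsqueeze(0)).shape)               # torch.Size([1, 2])
```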


2021 ◽  
Author(s):  
E. Karthik ◽  
T Sethukarasi

Abstract Sentiment analysis uses different tools and techniques to extract informative data, such as users' opinions or emotions, from their textual feedback. State-of-the-art sentiment analysis techniques have offered lower performance due to their inability to handle both small and large datasets. To overcome this problem, this paper presents a deep learning technique known as Centered Convolutional Restricted Boltzmann Machines (CCRBM) for user behavioral sentiment analysis. However, this deep learning model's performance depends solely on the parameter selection process. To overcome this problem and increase the classification accuracy, a Hybrid Atom Search Arithmetic Optimization (HASAO) algorithm is used in this paper to select the parameters of the CCRBM architecture and offer optimal performance. The initial population quality and exploitation capacity of the Atom Search Optimization (ASO) algorithm are enhanced by hybridizing it with the Arithmetic Optimization (AO) algorithm. To investigate the effectiveness of the proposed HASAO-optimized CCRBM architecture, it is evaluated on four different datasets, namely the Reddit, Twitter, IMDB movie review, and Yelp datasets. The performance of the proposed model is analyzed by comparing it with four baseline models, and accuracy values above 90% on all four datasets demonstrate the efficiency of the proposed technique.
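
HASAO's update equations are not given in the abstract, so the following is only a generic population-based hyperparameter search standing in for the role HASAO plays for the CCRBM; the search space and the placeholder fitness function are invented for illustration.

```python
# Illustrative stand-in only: a generic population-based search over CCRBM-style
# hyperparameters; in the paper, fitness would be the validation accuracy of a
# CCRBM trained with the candidate parameters on the sentiment dataset.
import random

SPACE = {"hidden_units": [64, 128, 256, 512],
         "learning_rate": [1e-3, 5e-3, 1e-2],
         "batch_size": [32, 64, 128]}

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(params):
    # Placeholder objective, purely for demonstration.
    return -abs(params["hidden_units"] - 256) / 512 - params["learning_rate"]

population = [sample() for _ in range(10)]
for _ in range(20):                                    # search iterations
    population.sort(key=fitness, reverse=True)
    elite = population[:3]                             # keep the best candidates
    population = elite + [sample() for _ in range(7)]  # re-sample the rest
print("best hyperparameters:", population[0])
```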


2022 ◽  
Vol 2022 ◽  
pp. 1-11
Author(s):  
Syed Abdul Basit Andrabi ◽  
Abdul Wahid

Machine translation has been an ongoing field of research for the last decades. The main aim of machine translation is to remove the language barrier. Earlier research in this field started with the direct word-to-word replacement of the source language by the target language. Later on, with the advancement in computer and communication technology, there was a paradigm shift to data-driven models such as statistical and neural machine translation approaches. In this paper, we have used a neural network-based deep learning technique for English-to-Urdu translation. A parallel corpus of around 30,923 sentences is used. The corpus contains sentences from an English-Urdu parallel corpus, news, and sentences frequently used in day-to-day life. The corpus contains 542,810 English tokens and 540,924 Urdu tokens, and the proposed system is trained and tested using a 70:30 split. In order to evaluate the efficiency of the proposed system, several automatic evaluation metrics are used, and the model output is also compared with the output from Google Translate. The proposed model has an average BLEU score of 45.83.
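
As a small, hedged illustration of the evaluation setup (not the authors' code), the snippet below applies the 70:30 split to a toy stand-in corpus and scores translations with corpus-level BLEU via sacrebleu; the sentences and the placeholder translate function are fabricated for the example.

```python
# Sketch of a 70:30 split and corpus-level BLEU scoring; model training itself
# is out of scope here, so a trivial placeholder stands in for the NMT system.
import random
import sacrebleu

# Toy stand-in for the 30,923-sentence English-Urdu parallel corpus.
pairs = [(f"sentence {i} in english", f"sentence {i} in urdu") for i in range(100)]
random.shuffle(pairs)
split = int(0.7 * len(pairs))                 # 70:30 train/test criterion
train, test = pairs[:split], pairs[split:]

def translate(english_sentence):
    # Placeholder for the trained seq2seq model's output.
    return english_sentence.replace("english", "urdu")

hypotheses = [translate(src) for src, _ in test]
references = [[tgt for _, tgt in test]]       # one reference stream aligned to hypotheses
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```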


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Zhenbo Lu ◽  
Wei Zhou ◽  
Shixiang Zhang ◽  
Chen Wang

Quick and accurate crash detection is important for saving lives and improving traffic incident management. In this paper, a feature fusion-based deep learning framework was developed for the video-based urban traffic crash detection task, aiming at achieving a balance between detection speed and accuracy with limited computing resources. In this framework, a residual neural network (ResNet) combined with attention modules was proposed to extract crash-related appearance features from urban traffic videos (i.e., a crash appearance feature extractor), which were further fed to a spatiotemporal feature fusion model, Conv-LSTM (Convolutional Long Short-Term Memory), to simultaneously capture appearance (static) and motion (dynamic) crash features. The proposed model was trained on a set of video clips covering 330 crash and 342 noncrash events. In general, the proposed model achieved an accuracy of 87.78% on the testing dataset and an acceptable detection speed (FPS > 30 with a GTX 1060). Thanks to the attention module, the proposed model can capture the localized appearance features of crashes (e.g., vehicle damage and fallen pedestrians) better than conventional convolutional neural networks. The Conv-LSTM module outperformed a conventional LSTM in terms of capturing motion features of crashes, such as roadway congestion and pedestrians gathering after crashes. Compared to a traditional motion-based crash detection model, the proposed model achieved higher detection accuracy. Moreover, it could detect crashes much faster than other feature fusion-based models (e.g., C3D). The results show that the proposed model is a promising video-based urban traffic crash detection algorithm that could be used in practice in the future.
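
The exact architecture is not reproduced here; the sketch below only mirrors the pipeline's shape under assumptions: a small per-frame CNN stands in for the ResNet-plus-attention appearance extractor, a hand-rolled ConvLSTM cell fuses the frame features over time, and a linear head outputs crash/no-crash logits.

```python
# Hedged sketch of the appearance-plus-ConvLSTM pipeline, not the authors' model.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], 1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)   # cell update
        return torch.sigmoid(o) * torch.tanh(c), c                    # new hidden, cell

class CrashDetector(nn.Module):
    def __init__(self, feat_ch=32, hid_ch=32):
        super().__init__()
        self.appearance = nn.Sequential(            # stand-in for ResNet + attention
            nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU())
        self.convlstm = ConvLSTMCell(feat_ch, hid_ch)
        self.head = nn.Linear(hid_ch, 2)

    def forward(self, clip):                        # clip: (T, 3, H, W)
        h = c = None
        for frame in clip:
            f = self.appearance(frame.unsqueeze(0))
            if h is None:
                h = torch.zeros(1, self.convlstm.hid_ch, *f.shape[2:])
                c = torch.zeros_like(h)
            h, c = self.convlstm(f, h, c)
        return self.head(h.mean(dim=(2, 3)))        # pooled final state -> crash logits

print(CrashDetector()(torch.randn(16, 3, 112, 112)).shape)   # torch.Size([1, 2])
```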


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1698 ◽  
Author(s):  
Jia Yin ◽  
Koppaka Ganesh Sai Apuroop ◽  
Yokhesh Krishnasamy Tamilselvam ◽  
Rajesh Elara Mohan ◽  
Balakrishnan Ramalingam ◽  
...  

This work presents a table cleaning and inspection method using a Human Support Robot (HSR) which can operate in a typical food court setting. The HSR is able to perform a cleanliness inspection and also clean the food litter on the table by implementing a deep learning technique and a planner framework. A lightweight Deep Convolutional Neural Network (DCNN) has been proposed to recognize the food litter on top of the table. In addition, a planner framework was proposed for the HSR to accomplish the table cleaning task: it generates the cleaning path according to the detected food litter, and the cleaning action is then carried out. The effectiveness of the food litter detection module is verified with the cleanliness inspection task using the Toyota HSR, and its detection results are verified with standard quality metrics. The experimental results show that the food litter detection module achieves an average detection accuracy of 96%, which makes it suitable for deploying HSR robots to perform the cleanliness inspection and also helps in selecting among the different cleaning modes. Further, the planner was tested through the table cleaning tasks. The experimental results show that the planner generates the cleaning path in real time and that the generated path is optimal, reducing the cleaning time through a grouping-based cleaning action for removing the food litter from the table.
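
The planner's grouping idea can be illustrated with a toy sketch (purely hypothetical, not the paper's planner): detected litter positions are greedily clustered by proximity, and the resulting group centres are visited along a nearest-neighbour cleaning path; the grouping radius and coordinates are made-up values.

```python
# Toy grouping-based cleaning path: cluster nearby litter detections, then
# visit the group centres greedily from the robot's start position.
import numpy as np

def group_litter(points, radius=0.10):
    """Greedy clustering: a point joins the first group centre within `radius` (m)."""
    groups = []
    for p in points:
        for g in groups:
            if np.linalg.norm(p - np.mean(g, axis=0)) < radius:
                g.append(p)
                break
        else:
            groups.append([p])
    return [np.mean(g, axis=0) for g in groups]

def cleaning_path(start, targets):
    """Order group centres by repeatedly moving to the nearest unvisited one."""
    path, pos, remaining = [], start, list(targets)
    while remaining:
        nxt = min(remaining, key=lambda t: np.linalg.norm(t - pos))
        remaining = [t for t in remaining if not np.array_equal(t, nxt)]
        path.append(nxt)
        pos = nxt
    return path

litter = np.array([[0.12, 0.30], [0.15, 0.33], [0.60, 0.10], [0.62, 0.12]])  # table coords (m)
print(cleaning_path(np.array([0.0, 0.0]), group_litter(litter)))
```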


Author(s):  
Minh T Nguyen ◽  
Jin H Huang

Machine fault detection is designed to automatically detect faults or damage in machines. When a machine operates, it produces vibrations and sound signals that can be analyzed to provide information about the status of the machine. This study proposed a method to detect the faults in a machine based on sound analysis using a deep learning technique. The sound signals generated by the machine were obtained and analyzed under different operating conditions. These signals were first pre-processed to eliminate noise, and then the features were extracted as mel-spectrograms so that the convolutional neural network could automatically learn the appropriate features required for classification. Experiments were conducted on three different water pumps during suction from and discharge to the water tank under normal and abnormal operating conditions. The high accuracies in fault detections in both known and unknown machines indicated that the proposed model performed very well in the detection of machine faults.
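
A minimal sketch of that pipeline, under assumptions rather than the study's actual configuration: a (here synthetic) pump recording is converted to a log-mel spectrogram with librosa and classified as normal or faulty by a small CNN.

```python
# Sketch of the described pipeline: sound -> mel-spectrogram -> CNN classifier.
import numpy as np
import librosa
import torch
import torch.nn as nn

sr = 16000
signal = np.random.randn(sr * 2).astype(np.float32)        # 2 s stand-in recording
mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel)                          # dB scale is friendlier for CNNs

classifier = nn.Sequential(                                 # normal vs. faulty head
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2))

x = torch.tensor(log_mel, dtype=torch.float32).unsqueeze(0).unsqueeze(0)  # (1, 1, n_mels, frames)
print(classifier(x).shape)                                  # torch.Size([1, 2])
```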


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Faten Hamed Nahhas ◽  
Helmi Z. M. Shafri ◽  
Maher Ibrahim Sameen ◽  
Biswajeet Pradhan ◽  
Shattri Mansor

This paper reports on a building detection approach based on deep learning (DL) using the fusion of Light Detection and Ranging (LiDAR) data and orthophotos. The proposed method utilized object-based analysis to create objects, a feature-level fusion, an autoencoder-based dimensionality reduction to transform low-level features into compressed features, and a convolutional neural network (CNN) to transform compressed features into high-level features, which were used to classify objects into buildings and background. The proposed architecture was optimized using the grid search method, and its sensitivity to hyperparameters was analyzed and discussed. The proposed model was evaluated on two datasets selected from an urban area with different building types. Results show that the dimensionality reduction by the autoencoder approach from 21 features to 10 features can improve detection accuracy from 86.06% to 86.19% in the working area and from 77.92% to 78.26% in the testing area. The sensitivity analysis also shows that the selection of the hyperparameter values of the model significantly affects detection accuracy. The best hyperparameters of the model are 128 filters in the CNN model, the Adamax optimizer, 10 units in the fully connected layer of the CNN model, a batch size of 8, and a dropout of 0.2. These hyperparameters are critical to improving the generalization capacity of the model. Furthermore, comparison experiments with the support vector machine (SVM) show that the proposed model with or without dimensionality reduction outperforms the SVM models in the working area. However, the SVM model achieves better accuracy in the testing area than the proposed model without dimensionality reduction. This study generally shows that the use of an autoencoder in DL models can improve the accuracy of building recognition in fused LiDAR–orthophoto data.
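
As a rough sketch of the compression step only (assumed layer sizes, and a small MLP classifier standing in for the paper's CNN head): a tiny autoencoder maps the 21 object-level features to 10 compressed features, which are then classified into building versus background.

```python
# Sketch of autoencoder-based dimensionality reduction (21 -> 10 features)
# followed by a simple classifier over the compressed features.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(21, 16), nn.ReLU(), nn.Linear(16, 10))
decoder = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 21))
classifier = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))

features = torch.randn(8, 21)                      # 8 image objects, 21 fused features each
recon = decoder(encoder(features))                 # autoencoder trained with reconstruction loss
recon_loss = nn.functional.mse_loss(recon, features)
logits = classifier(encoder(features).detach())    # compressed features feed the classifier
print(recon_loss.item(), logits.shape)             # scalar loss, torch.Size([8, 2])
```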

