Tongue fissure visualization by using deep learning – an example of the application of artificial intelligence in traditional medicine

2020 ◽  
Author(s):  
Wen-Hsien Chang ◽  
Han-Kuei Wu ◽  
Lun-chien Lo ◽  
William W. L. Hsiao ◽  
Hsueh-Ting Chu ◽  
...  

Abstract
Background: Traditional Chinese medicine (TCM) describes physiological and pathological changes inside and outside the human body through four methods of diagnosis. One of the four, tongue diagnosis, is widely used by TCM physicians, since it allows direct observations that are not subject to discrepancies in the patient's history and, as such, provides clinically important, objective evidence. The clinical significance of tongue features has been explored in both TCM and modern medicine. However, TCM physicians may interpret the features displayed by the same tongue differently, so intra- and inter-observer agreement is relatively low. If an automated interpretation system could be developed, more consistent results could be obtained and learning could be more efficient. This study applied a recently developed deep learning method to classify tongue features and to indicate the regions where the features are located.
Methods: A large number of tongue photographs with labeled fissures were used. Transfer learning was conducted with the ImageNet-pretrained ResNet50 model to determine whether tongue fissures were present in a tongue photograph. Neural network models often lack interpretability, and users cannot understand how the model determines the presence of tongue fissures. Therefore, Gradient-weighted Class Activation Mapping (Grad-CAM) was also applied to mark the tongue features directly on the tongue image.
Results: Only 6 epochs were trained in this study, and no graphics processing units (GPUs) were used. Each epoch took less than 4 minutes to train. The classification accuracy on the test set was approximately 70%. After model training was completed, Grad-CAM was applied to localize the tongue fissures in each image. The neural network model not only determined whether tongue fissures existed, but also showed users the tongue fissure regions.
Conclusions: This study demonstrated how to apply transfer learning with the ImageNet-pretrained ResNet50 model for the identification of tongue fissures and the localization of their regions. The neural network model built in this study provided interpretability and intuitiveness (often lacking in general neural network models) and improved the feasibility of clinical application.
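A minimal sketch (not the authors' released code) of the two building blocks named in the abstract: transfer learning from an ImageNet-pretrained ResNet50 with a new binary "fissure / no fissure" head, and a Grad-CAM heatmap taken from the last convolutional block. The dataset objects (train_ds, val_ds), the Keras layer name, and the image size are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

IMG_SIZE = (224, 224)

# ImageNet-pretrained convolutional base; the original classification head is dropped.
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=IMG_SIZE + (3,))
base.trainable = False                       # transfer learning: freeze pretrained weights

x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
output = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # fissure / no fissure
model = tf.keras.Model(base.input, output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Images are assumed to be preprocessed with tf.keras.applications.resnet50.preprocess_input.
# model.fit(train_ds, validation_data=val_ds, epochs=6)      # hypothetical tf.data datasets

def grad_cam(img_batch, layer_name="conv5_block3_out"):
    """Grad-CAM heatmaps for a batch of preprocessed images of shape (N, 224, 224, 3)."""
    grad_model = tf.keras.Model(model.input,
                                [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, pred = grad_model(img_batch)
        score = pred[:, 0]                                   # sigmoid "fissure" score
    grads = tape.gradient(score, conv_out)                   # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))             # global-average-pool the gradients
    cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()       # normalized; upsample to overlay
```

The heatmaps returned by grad_cam can be resized to the original image size and overlaid on the tongue photograph to mark the fissure regions.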


Author(s):  
A. Saravanan ◽  
J. Jerald ◽  
A. Delphin Carolina Rani

Abstract
The objective of this paper is to develop a new method to model the manufacturing cost–tolerance relationship and to optimize tolerance values together with their manufacturing cost. Cost and tolerance are linked by a complex nonlinear correlation. The approximation capability of a neural network makes it possible to model this correlation, and a genetic algorithm (GA) is integrated with the best neural network model to optimize the tolerance values. The proposed method used three types of neural network models (multilayer perceptron, backpropagation network, and radial basis function). These network models were developed separately for prismatic and rotational parts. For the construction of the network models, part size and tolerance values were used as input neurons, and the reference manufacturing cost was assigned as the output neuron. The production data set was gathered in a workshop and partitioned into three files for training, testing, and validation, respectively. The architecture of the network model was selected based on the best regression coefficient and root-mean-square error value. The best network model was integrated into the GA, and the role of the genetic operators was also studied. Finally, two case studies from the literature were used to validate the proposed method. The new neural-network-based methodology enables design and process-planning engineers to make informed decisions irrespective of their experience.
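A rough sketch of the general idea (not the paper's implementation or data): a neural network is fit as a cost–tolerance surrogate, and a simple genetic algorithm then searches tolerance values that minimize the predicted cost. The synthetic training data, the toy cost law, the bounds, and the GA settings below are illustrative placeholders; a real application would also add penalty terms for tolerance stack-up and quality constraints in the fitness function.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder training data: [part size, tolerance] -> reference manufacturing cost.
X = rng.uniform([10.0, 0.01], [100.0, 0.50], size=(200, 2))
y = 5.0 + 0.02 * X[:, 0] + 1.0 / X[:, 1] + rng.normal(0.0, 0.2, 200)   # toy cost law

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                         random_state=0).fit(X, y)

def fitness(tol, part_size=50.0):
    """Predicted manufacturing cost for a candidate tolerance (lower is better).
    A real application would add penalties for stack-up / quality constraints here."""
    return surrogate.predict([[part_size, tol]])[0]

# Very small real-coded GA: tournament selection, blend crossover, Gaussian mutation.
LOW, HIGH, POP, GENS = 0.01, 0.50, 30, 40
pop = rng.uniform(LOW, HIGH, POP)
for _ in range(GENS):
    costs = np.array([fitness(t) for t in pop])
    parents = np.array([pop[min(rng.integers(0, POP, 2), key=lambda i: costs[i])]
                        for _ in range(POP)])                 # binary tournament
    children = 0.5 * (parents + rng.permutation(parents))     # blend crossover
    children += rng.normal(0.0, 0.01, POP)                    # mutation
    pop = np.clip(children, LOW, HIGH)

best = pop[np.argmin([fitness(t) for t in pop])]
print(f"GA-selected tolerance: {best:.3f}, predicted cost: {fitness(best):.2f}")
```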


2019 ◽  
Vol 8 (4) ◽  
pp. 5023-5031

Forecasting and prediction are based on pattern recognition. For example, a person's energy potential increases through youth and then declines later in life; such patterns can be observed with the help of neural network models such as the radial basis function (RBF) network and the back-propagation (BP) network. Neural network models take many forms, including deep neural networks, feedforward neural networks, recurrent neural networks, convolutional neural networks, and more. Forecasting and prediction involve managing large amounts of data; the data are used to train the models, and in this work neural network models are combined with optimization techniques inspired by biological swarms. Large volumes of data are generated every day in domains such as markets, medicine, education, and the automotive industry, and pattern recognition is needed to predict future outcomes for the benefit of decision-makers. In this work, we use a SOM (self-organizing map), an RBF (radial basis function) network, a DNN (deep neural network), and PGO (plant growth optimization). A total of 27,500 data points were processed. Performance was evaluated using standard measures such as ET, MAE, MSE, RMSE, and MI. The proposed algorithm was implemented in MATLAB. The cascaded neural network classifier combines the SOM and RBF neural network models: the SOM performs clustering, and the RBF network is used for prediction.
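A rough sketch of the cascaded SOM-then-RBF idea (the paper used MATLAB; this is not its code): a self-organizing map first clusters sliding windows of a series, then an RBF-based regressor is fit per cluster to predict the next value. MiniSom and scikit-learn's RBF-kernel SVR are stand-ins for the SOM and the RBF network, and the toy series, window size, and map size are made up.

```python
import numpy as np
from minisom import MiniSom          # pip install minisom
from sklearn.svm import SVR

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)  # toy data

# Sliding windows: use the previous 10 points to predict the next one.
W = 10
X = np.array([series[i:i + W] for i in range(len(series) - W)])
y = series[W:]

# Stage 1: SOM clustering of the input windows (2x2 map -> up to 4 clusters).
som = MiniSom(2, 2, W, sigma=0.8, learning_rate=0.5, random_seed=1)
som.train_random(X, 1000)
cluster = np.array([som.winner(x) for x in X])               # (row, col) per sample

# Stage 2: one RBF-kernel regressor per SOM cluster.
models = {}
for c in set(map(tuple, cluster)):
    mask = np.all(cluster == c, axis=1)
    models[c] = SVR(kernel="rbf").fit(X[mask], y[mask])

def predict(window):
    """Route a window to its SOM cluster, then predict with that cluster's model."""
    c = som.winner(window)
    return models[c].predict(window.reshape(1, -1))[0]

print(predict(X[-1]), "vs actual", y[-1])
```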


Author(s):  
Zilin Bian ◽  
Kaan Ozbay

This study aims to develop a neural network model to predict work zone capacity, including the various uncertainties stemming from traffic and operational conditions. The neural network model is formulated in terms of the number of total lanes, number of open lanes, heavy vehicle percentage, work intensity, and work duration. The data used in this paper were obtained from previous studies published in the open literature. To capture the uncertainty of work zone capacity, this paper applies two recent methods that enable neural network models to generate prediction intervals, determined by the mean work zone capacity and the prediction standard error. The first model is a Bayesian neural network built with the black-box variational inference (BBVI) technique. The second model is a regular artificial neural network with the recently proposed Monte-Carlo dropout technique applied. Both neural network models construct prediction intervals at various confidence levels and provide coverage rates of the actual work zone capacities. The statistical accuracy (MAPE, MAE, MSE, and RMSE) of the models' predicted mean work zone capacity is then compared with that of traditional estimation methods. BBVI produces better statistical results than the other three models. Both neural network models provide a predicted work zone capacity distribution and prediction intervals, whereas the traditional models only provide a single estimate.
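A minimal sketch of the Monte-Carlo dropout idea behind the second model (not the paper's network): dropout is kept active at inference time and the forward pass is repeated to obtain a predictive mean and standard error, from which an interval is built. The feature encoding, layer sizes, placeholder training arrays (X_train, y_train), and the 95% z-value are assumptions.

```python
import numpy as np
import tensorflow as tf

# Inputs: total lanes, open lanes, heavy-vehicle %, work intensity, work duration.
inputs = tf.keras.Input(shape=(5,))
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
x = tf.keras.layers.Dropout(0.2)(x)          # dropout stays on at prediction time
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(1)(x)        # predicted work zone capacity
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, epochs=200, verbose=0)   # placeholder training data

def mc_dropout_interval(x_new, n_samples=200, z=1.96):
    """Repeat stochastic forward passes and return (mean, lower, upper) per row."""
    x_new = tf.convert_to_tensor(x_new, dtype=tf.float32)
    preds = np.stack([model(x_new, training=True).numpy().ravel()
                      for _ in range(n_samples)])
    mean, std = preds.mean(axis=0), preds.std(axis=0)
    return mean, mean - z * std, mean + z * std

# Example query: 3 lanes total, 1 open, 20% heavy vehicles, intensity 2, 8 h duration.
mean, lo, hi = mc_dropout_interval([[3, 1, 20, 2, 8]])
print(f"predicted capacity {mean[0]:.0f}, 95% interval [{lo[0]:.0f}, {hi[0]:.0f}]")
```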


2015 ◽  
Vol 10 (3) ◽  
pp. 325-340 ◽  
Author(s):  
Sujeet Kumar Sharma ◽  
Srikrishna Madhumohan Govindaluri ◽  
Said Gattoufi

Purpose – The purpose of this paper is to investigate the quality determinants influencing the adoption of e-government services in Oman and to compare the performance of multiple regression and neural network models in identifying the significant factors influencing adoption in Oman.
Design/methodology/approach – Primary data concerning service quality determinants and demographic variables were collected using a structured questionnaire survey. The variables selected in the design of the questionnaire were based on an extensive literature review. Factor analysis, multiple linear regression, and neural network models were employed to analyze the data.
Findings – The study found that the quality determinants responsiveness, security, efficiency, and reliability are statistically significant predictors of adoption. The neural network model performed better than the regression model in predicting the adoption of e-government services and was able to characterize the non-linear relationship of the aforementioned predictors with adoption. Further, the neural network model was able to identify demographic variables as significant predictors.
Practical implications – This study highlights the importance of service quality in the adoption of e-government services and suggests that an enhanced focus and investment on improving the quality of the design and delivery of e-government services can have a positive impact on the usage of the services, thereby helping the Government of Oman achieve the governance objectives for which these technologies were employed.
Originality/value – Studies in the area of e-government typically focus either on technology adoption problems or on service quality problems; the role of service quality in adoption is rarely addressed. The research presented in this paper is of great value to the institutions involved in the development of technology-based e-government services in Oman.
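The survey data are not available here, so the sketch below only illustrates the comparison method under stated assumptions: synthetic Likert-style scores for the four significant determinants named in the abstract are generated, and a multiple linear regression and a small neural network are fit to the same predictors and compared on held-out data. All variable names, the synthetic response, and the numbers are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 400
# Columns: responsiveness, security, efficiency, reliability (1-5 Likert scores).
X = rng.integers(1, 6, size=(n, 4)).astype(float)
# Synthetic "adoption intention" with a mild non-linear interaction term.
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] * X[:, 3] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ols = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

print("linear regression R2:", round(r2_score(y_te, ols.predict(X_te)), 3))
print("neural network    R2:", round(r2_score(y_te, mlp.predict(X_te)), 3))
```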


2021 ◽  
Vol 21 ◽  
pp. 330-335
Author(s):  
Maciej Wadas ◽  
Jakub Smołka

This paper presents the results of a performance analysis of the TensorFlow library used for machine learning and deep neural networks. The analysis focuses on comparing the parameters obtained when training a neural network model with the optimization algorithms Adam, Nadam, AdaMax, AdaDelta, and AdaGrad. Special attention is paid to the differences in training efficiency between tasks run on the processor and on the graphics card. For the study, neural network models were created to recognise Polish handwritten characters. The results showed that the most efficient algorithm is AdaMax, while the computer component used during the research affects only the training time of the neural network model.
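A minimal sketch of the comparison method (not the authors' code): the same small network is trained once per optimizer while the training time and test accuracy are recorded. MNIST is used here as a stand-in for the Polish handwritten character set, and the network size, epoch count, and batch size are assumptions.

```python
import time
import tensorflow as tf

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

def build_model():
    # Small dense classifier; the same architecture is reused for every optimizer.
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

optimizers = {"Adam": tf.keras.optimizers.Adam(),
              "Nadam": tf.keras.optimizers.Nadam(),
              "AdaMax": tf.keras.optimizers.Adamax(),
              "AdaDelta": tf.keras.optimizers.Adadelta(),
              "AdaGrad": tf.keras.optimizers.Adagrad()}

for name, opt in optimizers.items():
    model = build_model()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    start = time.perf_counter()
    model.fit(x_tr, y_tr, epochs=3, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_te, y_te, verbose=0)
    print(f"{name:>8}: test accuracy {acc:.3f}, "
          f"train time {time.perf_counter() - start:.1f}s")
```

Whether the run lands on the CPU or the GPU is decided by the TensorFlow installation, which is how the hardware comparison in the paper would be reproduced.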


2020 ◽  
Author(s):  
Yang Liu ◽  
Hansaim Lim ◽  
Lei Xie

Abstract
Motivation: Drug discovery is time-consuming and costly. Machine learning, especially deep learning, shows great potential for accelerating the drug discovery process and reducing its cost. A big challenge in developing robust and generalizable deep learning models for drug design is the lack of a large amount of data with high-quality, balanced labels. To address this challenge, we developed a self-training method, PLANS, that exploits millions of unlabeled chemical compounds as well as partially labeled pharmacological data to improve the performance of neural network models.
Results: We evaluated self-training with PLANS on the cytochrome P450 binding activity prediction task and showed that our method could significantly improve the performance of the neural network model by a large margin. Compared with the baseline deep neural network model, the PLANS-trained neural network model improved accuracy, precision, recall, and F1 score by 13.4%, 12.5%, 8.3%, and 10.3%, respectively. Self-training with PLANS is model-agnostic and can be applied to any deep learning architecture. Thus, PLANS provides a general solution for utilizing unlabeled and partially labeled data to improve predictive modeling for drug discovery.
Availability: The code that implements PLANS is available at https://github.com/XieResearchGroup/PLANS
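A generic pseudo-labeling sketch of the self-training idea (the PLANS-specific procedure is in the linked repository and is not reproduced here): a model trained on the labeled set predicts labels for unlabeled compounds, high-confidence predictions are added as pseudo-labels, and the model is retrained. The random "fingerprint" arrays, the classifier size, and the 0.9 confidence threshold are illustrative placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def self_train(X_lab, y_lab, X_unlab, rounds=3, threshold=0.9):
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    model = None
    for _ in range(rounds):
        model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                              random_state=0).fit(X, y)
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold         # keep confident predictions only
        X = np.vstack([X, pool[confident]])                # grow the labeled set
        y = np.concatenate([y, proba[confident].argmax(axis=1)])
        pool = pool[~confident]                            # shrink the unlabeled pool
    return model

# Toy usage: 200 labeled and 2,000 unlabeled "compound fingerprints" (random here).
rng = np.random.default_rng(0)
X_lab = rng.integers(0, 2, (200, 128)).astype(float)
y_lab = rng.integers(0, 2, 200)
X_unlab = rng.integers(0, 2, (2000, 128)).astype(float)
model = self_train(X_lab, y_lab, X_unlab)
```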


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1514
Author(s):  
Seung-Ho Lim ◽  
WoonSik William Suh ◽  
Jin-Young Kim ◽  
Sang-Young Cho

The optimization of hardware processors and systems for performing deep learning operations such as convolutional neural networks (CNNs) on resource-limited embedded devices is an active area of research. To run an optimized deep neural network model within the limited computational units and memory of an embedded device, it is necessary to quickly apply various configurations of hardware modules to various deep neural network models and find the optimal combination. An Electronic System Level (ESL) simulator based on SystemC is very useful for rapid hardware modeling and verification. In this paper, we designed and implemented a Deep Learning Accelerator (DLA) that performs deep neural network (DNN) operations on a RISC-V Virtual Platform implemented in SystemC, in order to enable rapid and diverse analysis of deep learning operations on an embedded device based on the recently emerging RISC-V processor. The developed RISC-V-based DLA prototype can analyze hardware requirements for a given CNN data set through configuration of the CNN DLA architecture; RISC-V-compiled software can be run on the platform, and a real neural network model such as Darknet can be executed. We ran the Darknet CNN model on the developed DLA prototype and confirmed that computational overhead and inference errors can be analyzed with the prototype by examining the DLA architecture for various data sets.


2021 ◽  
Vol 10 (9) ◽  
pp. 25394-25398
Author(s):  
Chitra Desai

Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge began in 2010. Image classification in computer vision has been further advanced by the advent of transfer learning. Training a model on a huge dataset demands substantial computational resources and adds considerable cost to learning. Transfer learning reduces the cost of learning and helps avoid reinventing the wheel. Several pretrained models, such as VGG16, VGG19, ResNet50, InceptionV3, and EfficientNet, are widely used. This paper demonstrates image classification using the pretrained deep neural network model VGG16, which was trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new deep neural network model is built on top of it for image classification based on a fully connected network. This classifier uses features extracted from the convolutional base model.
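A minimal sketch of the described setup (not the paper's exact code): the VGG16 convolutional base pretrained on ImageNet is frozen and a new fully connected classifier is trained on top of it. The class count, directory path, and head sizes are placeholders.

```python
import tensorflow as tf

NUM_CLASSES = 5              # placeholder: set to the number of target classes
IMG_SIZE = (224, 224)

conv_base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                        input_shape=IMG_SIZE + (3,))
conv_base.trainable = False   # reuse ImageNet features; train only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=IMG_SIZE + (3,)),
    tf.keras.layers.Lambda(tf.keras.applications.vgg16.preprocess_input),
    conv_base,                                        # frozen convolutional base
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),    # new fully connected classifier
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Hypothetical image folder; replace with the actual dataset location.
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "data/train", image_size=IMG_SIZE, label_mode="categorical")
# model.fit(train_ds, epochs=10)
```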

