Racing APUF: A Novel APUF against Machine Learning Attack with High Reliability

Author(s):  
Zheng Li ◽  
Lijia Zhu ◽  
Minghui Huang ◽  
Ziheng Chen ◽  
Shuai Chen ◽  
...  
2021 ◽


Author(s):  
Xinxu Shen ◽  
Troy Houser ◽  
David Victor Smith ◽  
Vishnu P. Murty

The use of naturalistic stimuli, such as narrative movies, is gaining popularity in many fields that characterize memory, affect, and decision-making. Narrative recall paradigms are often used to capture the complexity and richness of memory for naturalistic events. However, scoring narrative recalls is time-consuming and prone to human biases. Here, we show the validity and reliability of using a natural language processing tool, the Universal Sentence Encoder (USE), to automatically score narrative recall. We compared the reliability of scores produced by two independent raters (i.e., hand-scored) with those produced by our automated algorithm and individual raters (i.e., automated) on trial-unique video clips of magic tricks. Study 1 showed that our automated segmentation approaches yielded high reliability and reflected the measures yielded by hand-scoring, and further that the results using USE outperformed those of another popular natural language processing tool, GloVe. In Study 2, we tested whether our automated approach remained valid when testing individuals varying on clinically relevant dimensions that influence episodic memory: age and anxiety. We found that our automated approach was equally reliable across both age and anxiety groups, demonstrating its efficacy for assessing narrative recall in large-scale individual difference analyses. In sum, these findings suggest that machine learning approaches implementing USE are a promising tool for scoring large-scale narrative recalls and performing individual difference analyses in research using naturalistic stimuli.
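The core of such automated scoring is comparing sentence embeddings of a participant's recall against a reference description. A minimal sketch of the idea, using a toy bag-of-words embedding as a stand-in for USE (which would normally be loaded via TensorFlow Hub); the texts and scoring function are illustrative, not the authors' pipeline:

```python
import numpy as np

def embed(text, vocab):
    """Toy bag-of-words embedding, a stand-in for a sentence encoder
    such as the Universal Sentence Encoder (USE)."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def recall_score(recall, reference):
    """Score a narrative recall as the cosine similarity between the
    embeddings of the recalled text and the reference description."""
    vocab = sorted(set(recall.lower().split()) | set(reference.lower().split()))
    a, b = embed(recall, vocab), embed(reference, vocab)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

reference = "the magician hides the coin under the cup"
print(recall_score("the magician hides a coin under the cup", reference))
print(recall_score("a rabbit appears from the hat", reference))
```

With a real sentence encoder, the embeddings capture semantics rather than word overlap, but the cosine-similarity scoring step is the same.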


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Muhammad Waqar ◽  
Hassan Dawood ◽  
Hussain Dawood ◽  
Nadeem Majeed ◽  
Ameen Banjar ◽  
...  

Cardiac disease treatment often involves the acquisition and analysis of vast quantities of digital cardiac data, which can be utilized for various beneficial purposes. Such utilization becomes even more important when dealing with critical conditions like a heart attack, where the patient's life is at stake. Machine learning and deep learning are two well-known techniques for turning raw data into useful insight. Some of the biggest problems arising from their usage are massive resource utilization, extensive data preprocessing, the need for feature engineering, and ensuring the reliability of classification results. The proposed research work presents a cost-effective solution to predict heart attacks with high accuracy and reliability. It uses a UCI dataset to predict heart attacks via various machine learning algorithms without any feature engineering. Moreover, the given dataset has an unequal distribution of positive and negative classes, which can degrade performance. The proposed work therefore uses the synthetic minority oversampling technique (SMOTE) to handle the class imbalance. By discarding the need for feature engineering, which often proves to be a costly process, the system arrives at an efficient solution. The results show that, among all machine learning algorithms, a properly tuned SMOTE-based artificial neural network outperformed all other models and many existing systems. The high reliability of the proposed system ensures that it can be effectively used for heart attack prediction.
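SMOTE balances the classes by interpolating between minority-class samples and their nearest minority-class neighbours. A simplified, self-contained sketch of that interpolation step (production code would normally use `imbalanced-learn`; the data here are synthetic, not the UCI dataset):

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Simplified SMOTE: synthesize n_new minority samples by linear
    interpolation between a random minority sample and one of its
    k nearest minority-class neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]        # skip the sample itself
        j = rng.choice(nbrs)
        lam = rng.random()                   # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Imbalanced toy data: 20 positives would be topped up to match 100 negatives.
rng = np.random.default_rng(42)
X_pos = rng.normal(1.0, 0.2, size=(20, 4))
X_syn = smote(X_pos, n_new=80, rng=rng)
print(X_syn.shape)  # (80, 4)
```

Because each synthetic sample is a convex combination of two real minority samples, the oversampled set stays inside the minority class's feature range.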


Energies ◽  
2021 ◽  
Vol 14 (20) ◽  
pp. 6852
Author(s):  
Grant Buster ◽  
Paul Siratovich ◽  
Nicole Taverna ◽  
Michael Rossol ◽  
Jon Weers ◽  
...  

Geothermal power plants are excellent resources for providing low-carbon electricity generation with high reliability. However, many geothermal power plants could realize significant improvements in operational efficiency from the application of improved modeling software. Increased integration of digital twins into geothermal operations will not only enable engineers to better understand the complex interplay of components in larger systems but will also enable enhanced exploration of the operational space with recent advances in artificial intelligence (AI) and machine learning (ML) tools. Such innovations in geothermal operational analysis have been hindered by several challenges, most notably the difficulty of applying idealized thermodynamic models to imperfect as-built systems subject to constant degradation of nominal performance. This paper presents GOOML: a new framework for Geothermal Operational Optimization with Machine Learning. By taking a hybrid data-driven thermodynamics approach, GOOML is able to accurately model the real-world performance characteristics of as-built geothermal systems. Further, GOOML can be readily integrated into the larger AI and ML ecosystem for true state-of-the-art optimization. This modeling framework has already been applied to several geothermal power plants and has provided reasonably accurate results in all cases. Therefore, we expect that the GOOML framework can be applied to any geothermal power plant around the world.


2020 ◽  
Vol 8 (12) ◽  
pp. 992
Author(s):  
Mengning Wu ◽  
Christos Stefanakos ◽  
Zhen Gao

Short-term wave forecasts are essential for the execution of marine operations. In this paper, an efficient and reliable physics-based machine learning (PBML) model is proposed to realize multi-step-ahead forecasting of wave conditions (e.g., significant wave height Hs and peak wave period Tp). In the model, the primary variables in physics-based wave models (i.e., the wind forcing and the initial wave boundary condition) are taken as inputs. Meanwhile, a machine learning algorithm (an artificial neural network, ANN) is adopted to build an implicit relation between the inputs and the forecasted wave conditions. The computational cost of this data-driven model is much lower than that of the differential-equation-based physical model. A ten-year (2001–2010) dataset with a three-hour time interval at the center of the North Sea was used to assess the model performance in a small domain. The results reveal high reliability for one-day-ahead Hs forecasts, while that for Tp is slightly lower due to the weaker implicit relationships in the data. Overall, the PBML model can be conceived as an efficient tool for multi-step-ahead forecasting of wave conditions, and thus has great potential to further assist decision-making during the execution of marine operations.
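The essence of the PBML idea is learning a direct map from (wind forcing history, initial wave state) to a future Hs value. A minimal sketch of that input-output structure on synthetic data, using an ordinary least-squares fit in place of the paper's ANN purely to keep the example small; the coefficients and noise level are invented, not from the North Sea dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the model inputs: wind speed over the past
# 24 h (8 samples at 3-h resolution) and the initial Hs value.
n = 500
wind = rng.uniform(2, 20, size=(n, 8))     # wind speed history [m/s]
hs0 = rng.uniform(0.5, 4.0, size=(n, 1))   # initial Hs [m]
# Toy "true" relation: future Hs persists and grows with mean wind.
hs_future = 0.6 * hs0[:, 0] + 0.08 * wind.mean(axis=1) + rng.normal(0, 0.1, n)

# Fit the implicit input->output map (the paper trains an ANN here).
X = np.hstack([wind, hs0, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(X, hs_future, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - hs_future) ** 2))
print(round(rmse, 3))
```

For multi-step-ahead forecasting, the same map is applied recursively or trained separately per lead time, with the forecast wind as input.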


2020 ◽  
Vol 9 (2) ◽  
pp. 601-614
Author(s):  
Linda A. Antonucci ◽  
Alessandro Taurino ◽  
Domenico Laera ◽  
Paolo Taurisano ◽  
Jolanda Losole ◽  
...  

2017 ◽  
Vol 2017 ◽  
pp. 1-14 ◽  
Author(s):  
Jan Kleine Deters ◽  
Rasa Zalakeviciute ◽  
Mario Gonzalez ◽  
Yves Rybarczyk

Outdoor air pollution causes millions of premature deaths annually, mostly due to anthropogenic fine particulate matter (PM2.5). Quito, the capital city of Ecuador, is no exception in exceeding healthy levels of pollution. In addition to the impact of urbanization, motorization, and rapid population growth, particulate pollution is modulated by meteorological factors and geophysical characteristics, which complicates the implementation of even the most advanced weather forecasting models. Thus, this paper proposes a machine learning approach, based on six years of meteorological and pollution data, to predict concentrations of PM2.5 from wind (speed and direction) and precipitation levels. The results of the classification model show high reliability in classifying low (<10 µg/m3) versus high (>25 µg/m3) and low versus moderate (10–25 µg/m3) concentrations of PM2.5. A regression analysis suggests that PM2.5 is predicted better as climatic conditions become more extreme (strong winds or high levels of precipitation). The high correlation between estimated and real data in a time series analysis during the wet season confirms this finding. The study demonstrates that statistical models based on machine learning are relevant for predicting PM2.5 concentrations from meteorological data.
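The low-versus-high classification task amounts to learning a decision boundary over wind and precipitation. A self-contained sketch on synthetic data, using a hand-rolled logistic regression; the thresholds, data, and the assumption that calm, dry conditions favour high PM2.5 are illustrative, not taken from the Quito dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: wind and rain disperse particulates, so high
# PM2.5 (>25 ug/m3) is associated here with calm, dry conditions.
n = 400
wind = rng.uniform(0, 10, size=n)      # wind speed [m/s]
rain = rng.uniform(0, 5, size=n)       # precipitation [mm]
high_pm = (wind + 1.5 * rain + rng.normal(0, 1, n) < 6.0)  # True = high PM2.5

# Logistic regression trained by plain gradient descent on
# standardized features plus a bias column.
X = np.column_stack([(wind - wind.mean()) / wind.std(),
                     (rain - rain.mean()) / rain.std(),
                     np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - high_pm) / n

acc = np.mean(((1.0 / (1.0 + np.exp(-X @ w))) > 0.5) == high_pm)
print(round(acc, 2))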


Computers ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 34
Author(s):  
Stefan Bosse ◽  
Dennis Weiss ◽  
Daniel Schmidt

Structural health monitoring (SHM) is a promising technique for in-service inspection of technical structures across a broad field of applications, aiming to reduce maintenance effort as well as overall structural weight. SHM is basically an inverse problem: deriving physical properties such as damage or material inhomogeneity (target features) from sensor data. Models defining the relationship between predictable features and sensors are often required but not available. The main objective of this work is the investigation of model-free distributed machine learning (DML) for damage diagnostics under resource and failure constraints, using multi-instance ensemble and model fusion strategies and featuring improved scaling and stability compared with centralised single-instance approaches. The diagnostic system delivers two features: a binary damage classification (damaged or non-damaged) and an estimate of the spatial damage position in the case of a damaged structure. The proposed damage diagnostics architecture should be usable in low-resource sensor networks with soft real-time capabilities. Two different machine learning methodologies and architectures are evaluated and compared, representing low- and high-resolution sensor processing for low- and high-resolution damage diagnostics: a dedicated supervised-trained low-resource approach and an unsupervised-trained high-resource deep learning approach, respectively. Both architectures use state-based recurrent artificial neural networks that process spatially and time-resolved sensor data from experimental ultrasonic guided-wave measurements of a hybrid-material (carbon fibre laminate) plate with pseudo defects. Finally, both architectures can be fused into a hybrid architecture with improved damage detection accuracy and reliability. An extensive evaluation of the damage predictions by both systems shows high reliability and accuracy of damage detection and localisation, even with the distributed multi-instance architecture, at a resolution on the order of the sensor distance.
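The multi-instance fusion step can be as simple as a majority vote over the per-node classifications. A minimal sketch of such a fusion rule; the vote labels and three-node setup are invented for illustration and are a simple stand-in for the paper's ensemble and model fusion strategies:

```python
from collections import Counter

def fuse(predictions):
    """Majority-vote fusion of per-node damage classifications,
    a simple stand-in for multi-instance ensemble fusion."""
    votes = Counter(predictions)
    return votes.most_common(1)[0][0]

# Three sensor-network nodes each classify the structure independently.
node_votes = ["damaged", "damaged", "non-damaged"]
print(fuse(node_votes))  # damaged
```

More elaborate fusion schemes weight each node's vote by its confidence or by the distance between the sensor and the estimated damage position.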


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Dae-Hong Min ◽  
Hyung-Koo Yoon

Deterministic models have been widely applied in landslide risk assessment (LRA), but they have limitations in obtaining the various geotechnical and hydraulic properties they require. The objective of this study is to propose a new deterministic method based on machine learning (ML) algorithms. Eight crucial variables of LRA are selected with reference to expert opinions, and the output value is set to the safety factor derived by Mohr–Coulomb failure theory for an infinite slope. Linear regression and a neural network based on ML are applied to find the best model between the independent and dependent variables. To increase the reliability of the linear regression and the neural network, the results of several backpropagation methods, including gradient descent, Levenberg–Marquardt (LM), and Bayesian regularization (BR), are compared. An 1800-item dataset is constructed from measured data and artificial data generated by a geostatistical technique, which can provide information about an unknown area based on measured data. The results of the linear regression and the neural network show that the LM and BR backpropagation methods achieve a high coefficient of determination. The important variables are also investigated through random forest (RF) to cope with the large number of input variables. Only four variables, shear strength, soil thickness, elastic modulus, and fines content, demonstrate high reliability for LRA. The results show that it is possible to perform LRA with ML, and that four variables are sufficient when it is difficult to obtain the full set.
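The target variable of the study, the infinite-slope safety factor by Mohr–Coulomb theory, has a closed form: FS = [c + (γ·z·cos²β − u)·tanφ] / (γ·z·sinβ·cosβ), where c is cohesion, φ the friction angle, γ the unit weight, z the soil thickness, β the slope angle, and u the pore pressure. A worked sketch with illustrative parameter values (not taken from the paper's dataset):

```python
import math

def safety_factor(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """Factor of safety of an infinite slope by Mohr-Coulomb theory:
    FS = [c + (gamma*z*cos^2(beta) - u)*tan(phi)] / (gamma*z*sin(beta)*cos(beta))
    c: cohesion [kPa], phi: friction angle [deg], gamma: unit weight
    [kN/m3], z: soil thickness [m], beta: slope angle [deg],
    u: pore water pressure [kPa]."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Dry slope example (illustrative values): FS > 1 indicates stability.
print(round(safety_factor(c=10, phi_deg=30, gamma=18, z=2.0, beta_deg=25), 2))
# -> 1.96
```

Evaluating this formula over the 1800-item dataset yields the labels that the linear-regression and neural-network models are then trained to reproduce.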


2019 ◽  
Vol 10 (1) ◽  
pp. 84 ◽  
Author(s):  
In Hyuk Choi ◽  
Ja Bin Koo ◽  
Jung Wook Woo ◽  
Ju Am Son ◽  
Do Yeon Bae ◽  
...  

Frequency response signals have been used for the non-destructive evaluation of many different structures and for the integrity evaluation of porcelain insulators. However, it is difficult to accurately estimate the integrity of porcelain insulators under various environmental conditions using general frequency response signals alone. Therefore, this study used a method that extracted several features derivable from the frequency response signal and reduced their dimensionality to select features suitable for evaluating the soundness of porcelain insulators. The latest machine learning techniques were used to identify correlations, rather than for basic feature analyses. Two machine learning models were developed using the support vector machine and ensemble methods in MATLAB. Both models showed high reliability in distinguishing between normal and defective porcelain insulators, and they could visualize the distribution of the data by extracting quantitative features and applying machine learning, rather than simply examining the frequency response signal.
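The feature-extraction step reduces each frequency response curve to a few scalars (e.g., resonance frequency, peak amplitude, RMS level) that a classifier can consume. A minimal sketch on synthetic curves; the response shape, the damping constant, and the assumption that a defect lowers the resonance frequency are all illustrative:

```python
import numpy as np

# Frequency grid and a synthetic single-resonance response magnitude;
# the constant 0.01 is a stand-in for a damping term.
freqs = np.linspace(100, 2000, 500)                  # [Hz]

def frf(peak_hz):
    return 1.0 / np.sqrt((1 - (freqs / peak_hz) ** 2) ** 2 + 0.01)

def features(mag):
    """Scalar features derived from a frequency response magnitude."""
    return {
        "peak_freq": float(freqs[np.argmax(mag)]),   # resonance [Hz]
        "peak_amp": float(mag.max()),
        "rms": float(np.sqrt(np.mean(mag ** 2))),
    }

# Hypothetical normal vs defective insulators (shifted resonance).
normal, defective = features(frf(1200)), features(frf(950))
print(normal["peak_freq"], defective["peak_freq"])
```

Feature vectors of this kind are what would then be fed to the SVM or ensemble classifier after dimensionality reduction.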

