Pathways to Consumers’ Minds: Using Machine Learning and Multiple EEG Metrics to Increase Preference Prediction Above and Beyond Traditional Measurements

2018 ◽  
Author(s):  
Adam Hakim ◽  
Shira Klorfeld ◽  
Tal Sela ◽  
Doron Friedman ◽  
Maytal Shabat-Simon ◽  
...  

Abstract A basic aim of marketing research is to predict consumers’ preferences and the success of marketing campaigns in the general population. However, traditional behavioral measurements have various limitations, calling for novel measurements with greater predictive power. In this study, we use neural signals measured with electroencephalography (EEG) to overcome these limitations. We recorded the EEG signals of subjects as they watched commercials for six food products. We introduce a novel approach in which, instead of relying on a single type of EEG measure, we combine several measures and use state-of-the-art machine learning algorithms to predict subjects’ individual future preferences over the products, as well as the commercials’ population success as measured by their YouTube metrics. As a benchmark, we acquired measurements of the commercials’ effectiveness using a standard questionnaire commonly used in marketing research. We reached 68.5% accuracy in discriminating between the most and least preferred items and a lower-than-chance RMSE score when predicting the rank-order preferences of all six products. We also predicted the commercials’ population success better than chance. Most importantly, we demonstrate for the first time that, for all of our predictions, the EEG measurements increased the predictive power of the questionnaires. Our analysis methods and results show great promise for EEG measures as a valuable tool for managers, marketing practitioners, and researchers to predict subjects’ preferences and marketing campaigns’ success.
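The abstract above does not give the authors' exact features or algorithms, but the core idea of combining several EEG-derived measures into one feature vector and feeding them to a classifier can be sketched as follows. The feature names and the synthetic data are invented for illustration; the classifier (an RBF SVM with cross-validated accuracy) is one plausible choice, not necessarily the one used in the study.

```python
# Illustrative sketch (not the authors' pipeline): combine several EEG metrics
# per subject-product observation into one feature vector and train a
# classifier to separate most- from least-preferred items.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 120  # hypothetical number of subject-product observations
# Hypothetical EEG metrics, e.g. frontal alpha asymmetry, N200 amplitude,
# frontal theta power (names chosen for illustration only).
X = rng.normal(size=(n, 3))
# Synthetic preference labels driven by a weighted sum of the metrics.
y = (X @ np.array([1.0, -0.5, 0.8]) + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(scores.mean())
```

Because the synthetic labels carry a strong signal, the sketch reaches well-above-chance accuracy; with real EEG data the margin over chance is of course far smaller.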

2020 ◽  
Vol 14 (2) ◽  
pp. 140-159
Author(s):  
Anthony-Paul Cooper ◽  
Emmanuel Awuni Kolog ◽  
Erkki Sutinen

This article builds on previous research exploring the content of church-related tweets. It does so by examining whether the qualitative thematic coding of such tweets can, in part, be automated through machine learning. It compares three supervised machine learning algorithms to understand how well each performs at a classification task, based on a dataset of human-coded church-related tweets. The study finds that one such algorithm, Naïve Bayes, performs better than the other algorithms considered, returning Precision, Recall and F-measure values that each exceed an acceptable threshold of 70%. This has far-reaching consequences at a time when the high volume of social media data, in this case Twitter data, means that the resource intensity of manual coding approaches can act as a barrier to understanding how the online community interacts with, and talks about, church. The findings presented in this article offer a way forward for scholars of digital theology to better understand the content of online church discourse.
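A minimal sketch of the kind of pipeline the study describes: bag-of-words (TF-IDF) features feeding a Naïve Bayes classifier, scored with the same Precision/Recall/F-measure metrics. The example tweets, labels, and train/test split are invented for illustration and are far smaller than the study's dataset.

```python
# Toy Naive Bayes text classification with precision/recall/F1 scoring.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import precision_recall_fscore_support

train_texts = ["sunday service was uplifting", "great sermon at church today",
               "the choir sang beautifully", "traffic was terrible this morning",
               "watching the game tonight", "new phone arrived today"]
train_labels = ["church", "church", "church", "other", "other", "other"]
test_texts = ["loved the sermon this sunday", "stuck in traffic again"]
test_labels = ["church", "other"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)
pred = model.predict(test_texts)
p, r, f1, _ = precision_recall_fscore_support(
    test_labels, pred, average="macro", zero_division=0)
print(p, r, f1)  # the study's acceptance threshold was 0.70 on each metric
```

With a realistic corpus one would then compare these scores against the 70% threshold the article uses.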


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4618
Author(s):  
Francisco Oliveira ◽  
Miguel Luís ◽  
Susana Sargento

Unmanned Aerial Vehicle (UAV) networks are an emerging technology, useful not only for the military but also for public and civil purposes. Their versatility provides advantages in situations where an existing network cannot support all the requirements of its users, either because of an exceptionally large number of users or because of the failure of one or more ground base stations. Networks of UAVs can reinforce these cellular networks where needed, redirecting traffic to available ground stations. Using machine learning algorithms to predict overloaded traffic areas, we propose a UAV positioning algorithm responsible for determining suitable positions for the UAVs, with the objective of a more balanced redistribution of traffic, to avoid saturated base stations and decrease the number of users without a connection. Tests performed with real data on user connections through base stations show that, under less restrictive network conditions, the algorithm that dynamically places the UAVs performs significantly better than under more restrictive conditions, significantly reducing the number of users without a connection. We also conclude that prediction accuracy is a very important factor, not only in the reduction of users without a connection but also in the number of UAVs deployed.
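The paper's actual positioning algorithm is more involved, but the basic coupling between a load forecast and UAV placement can be illustrated with a greedy toy version: given per-station predicted load from some ML model, repeatedly send the next available UAV to the most overloaded station. All numbers and the fixed per-UAV capacity below are invented.

```python
# Greedy toy placement: each UAV goes to the station with the largest
# predicted overload and absorbs a fixed share of its traffic.
def place_uavs(predicted_load, capacity, n_uavs, uav_capacity):
    """Return a list of station indices, one per deployed UAV."""
    load = list(predicted_load)
    placements = []
    for _ in range(n_uavs):
        overload = [l - c for l, c in zip(load, capacity)]
        worst = max(range(len(load)), key=lambda i: overload[i])
        if overload[worst] <= 0:        # no saturated station left
            break
        placements.append(worst)
        load[worst] -= uav_capacity     # UAV offloads part of the traffic
    return placements

# Hypothetical forecast: station 1 heavily overloaded, station 2 slightly.
print(place_uavs([80, 150, 110], [100, 100, 100], n_uavs=3, uav_capacity=40))
# -> [1, 1, 2]
```

This also makes the paper's last observation concrete: the better the load forecast, the fewer UAVs the loop deploys before every station is back under capacity.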


2021 ◽  
Vol 13 (3) ◽  
pp. 63
Author(s):  
Maghsoud Morshedi ◽  
Josef Noll

Video conferencing services based on the web real-time communication (WebRTC) protocol are growing in popularity among Internet users as multi-platform solutions enabling interactive communication from anywhere, especially during this pandemic era. Meanwhile, Internet service providers (ISPs) have deployed fiber links and customer premises equipment that operate according to recent 802.11ac/ax standards and promise users the ability to establish uninterrupted video conferencing calls with ultra-high-definition video and audio quality. However, the best-effort nature of 802.11 networks and the high variability of wireless medium conditions prevent users from experiencing uninterrupted, high-quality video conferencing. This paper presents a novel approach to estimating the perceived quality of service (PQoS) of video conferencing using only 802.11-specific network performance parameters collected from Wi-Fi access points (APs) on customer premises. This study produced datasets comprising 802.11-specific network performance parameters collected from off-the-shelf Wi-Fi APs operating under the 802.11g/n/ac/ax standards on both the 2.4 and 5 GHz frequency bands to train machine learning algorithms. In this way, we achieved classification accuracies of 92–98% in estimating the level of PQoS of video conferencing services on various Wi-Fi networks. To efficiently troubleshoot wireless issues, we further analyzed the machine learning model to correlate features in the model with the root cause of quality degradation. Thus, ISPs can utilize the approach presented in this study to provide predictable and measurable wireless quality by implementing a non-intrusive quality monitoring approach in the form of edge computing that preserves customers’ privacy while reducing the operational costs of monitoring and data analytics.
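The two steps of the approach, classifying PQoS from AP-side 802.11 statistics and then inspecting the model to hint at the root cause, can be sketched with a random forest. The feature names, the synthetic data, and the use of impurity-based feature importances are all assumptions for illustration; the paper's exact parameters and interpretability method are not given in the abstract.

```python
# Sketch: a random forest maps per-AP 802.11 statistics to a PQoS class,
# and its feature importances point toward the likely cause of degradation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
features = ["retry_rate", "rssi", "channel_util", "phy_rate"]  # hypothetical
X = rng.normal(size=(300, 4))
# Synthetic rule: high retry rate and channel utilisation degrade PQoS.
y = (X[:, 0] + X[:, 2] > 0.5).astype(int)   # 1 = degraded, 0 = good

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranked = sorted(zip(features, clf.feature_importances_), key=lambda t: -t[1])
print(ranked[0][0])   # most influential statistic -> candidate root cause
```

Because the synthetic labels depend only on `retry_rate` and `channel_util`, those two statistics dominate the importance ranking, which is the kind of signal an ISP troubleshooting workflow would act on.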


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Martin Saveski ◽  
Edmond Awad ◽  
Iyad Rahwan ◽  
Manuel Cebrian

Abstract As groups increasingly take over from individual experts in many tasks, it is ever more important to understand the determinants of group success. In this paper, we study the patterns of group success in Escape The Room, a physical adventure game in which a group is tasked with escaping a maze by collectively solving a series of puzzles. We investigate (1) the characteristics of successful groups, and (2) how accurately humans and machines can spot them from a group photo. The relationship between these two questions rests on the hypothesis that the characteristics of successful groups are encoded in features that can be spotted in their photo. We analyze >43K group photos (one photo per group) taken after groups have completed the game, from which all explicit performance-signaling information has been removed. First, we find that groups that are larger, older, and more gender-diverse but less age-diverse are significantly more likely to escape. Second, we compare humans and off-the-shelf machine learning algorithms at predicting whether a group escaped based on the completion photo. We find that individual guesses by humans achieve 58.3% accuracy, better than random but worse than machines, which achieve 71.6% accuracy. When humans are trained to guess by observing only four labeled photos, their accuracy increases to 64%. However, training humans on more labeled examples (eight or twelve) leads to a slight but statistically insignificant improvement in accuracy (67.4%). Humans in the best training condition perform on par with two, but worse than three, of the five machine learning algorithms we evaluated. Our work illustrates the potential and the limitations of machine learning systems in evaluating group performance and identifying success factors from sparse visual cues.


Nafta-Gaz ◽  
2021 ◽  
Vol 77 (5) ◽  
pp. 283-292
Author(s):  
Tomasz Topór

The application of machine learning algorithms in petroleum geology has opened a new chapter in oil and gas exploration. Machine learning algorithms have been successfully used to predict crucial petrophysical properties when characterizing reservoirs. This study applies machine learning to predict permeability under confining stress conditions for samples from tight sandstone formations. The models were constructed using two machine learning algorithms of varying complexity (multiple linear regression [MLR] and random forests [RF]) and trained on a dataset that combined basic well information, basic petrophysical data, and rock type from a visual inspection of the core material. The RF algorithm underwent feature engineering to increase the number of predictors in the models. To check the robustness of the trained models, 10-fold cross-validation was performed. The MLR and RF applications demonstrated that both algorithms can accurately predict permeability under constant confining pressure (R2 of 0.800 vs. 0.834). The RF accuracy was about 3% better than that of the MLR and about 6% better than a linear reference regression (LR) that utilized only porosity. Porosity was the feature most influential on the models’ performance. In the case of RF, depth was also significant in the permeability predictions, which could be evidence of hidden interactions between porosity and depth. A local interpretation revealed features common to the outliers: in both the training and testing sets, they had moderate-to-low porosity (3–10%) and lacked fractures, and in the test set, calcite or quartz cementation also led to poor permeability predictions. The workflow, which utilizes the tidymodels framework, will be further applied in more complex examples to predict spatial petrophysical features from seismic attributes using various machine learning algorithms.
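The MLR-vs-RF comparison with 10-fold cross-validation can be sketched on synthetic data. The porosity and depth ranges, the interaction-bearing target, and the use of scikit-learn instead of the study's tidymodels (R) stack are all assumptions for illustration; the point is only the shape of the comparison, not the study's R2 values.

```python
# Hedged sketch: multiple linear regression vs. random forest, both scored
# with 10-fold cross-validated R^2, predicting (log) permeability from
# porosity and depth on synthetic tight-sandstone-like data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
porosity = rng.uniform(3, 10, size=200)       # %, moderate-to-low range
depth = rng.uniform(2500, 3500, size=200)     # m, hypothetical
# Synthetic target with a porosity-depth interaction plus noise, mimicking
# the "hidden interaction" the study reports.
log_perm = (0.4 * porosity - 0.001 * depth
            + 0.0002 * porosity * depth
            + rng.normal(scale=0.3, size=200))
X = np.column_stack([porosity, depth])

r2_mlr = cross_val_score(LinearRegression(), X, log_perm,
                         cv=10, scoring="r2").mean()
r2_rf = cross_val_score(RandomForestRegressor(random_state=0), X, log_perm,
                        cv=10, scoring="r2").mean()
print(round(r2_mlr, 3), round(r2_rf, 3))
```

In the study, the same comparison on real core data gave R2 of 0.800 (MLR) vs. 0.834 (RF).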


2020 ◽  
Vol 190 (3) ◽  
pp. 342-351
Author(s):  
Munir S Pathan ◽  
S M Pradhan ◽  
T Palani Selvam

Abstract In the present study, machine learning (ML) methods for the identification of abnormal glow curves (GCs) of CaSO4:Dy-based thermoluminescence dosimeters (TLDs) in individual monitoring are presented. The classifier algorithms random forest (RF), artificial neural network (ANN), and support vector machine (SVM) are employed to identify not only an abnormal glow curve but also the type of abnormality. For the first time, the simplest and most computationally efficient algorithm, based on RF, is presented for GC classification. About 4000 GCs are used for the training and validation of the ML algorithms, and the performance of all algorithms is compared using various parameters. Results show a fairly good accuracy of 99.05% for the classification of GCs by the RF algorithm, whereas 96.7% and 96.1% accuracy is achieved using ANN and SVM, respectively. The RF-based classifier is recommended for GC classification as well as for assisting fault determination in the TLD reader system.
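The multi-class setup, classifying a curve as normal or as a particular abnormality type, can be illustrated with synthetic glow curves and a random forest. The curve shapes and the two abnormality classes below ("shifted_peak", "low_signal") are invented stand-ins for the real TLD anomalies; the study's actual features and classes are not given in the abstract.

```python
# Toy GC classification: synthetic glow curves fed raw into a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

def synth_curve(kind):
    t = np.linspace(0, 1, 50)
    curve = np.exp(-((t - 0.5) ** 2) / 0.01)           # nominal TL peak
    if kind == "shifted_peak":                          # anomalous peak position
        curve = np.exp(-((t - 0.65) ** 2) / 0.01)
    elif kind == "low_signal":                          # faded/weak response
        curve = 0.4 * curve
    return curve + rng.normal(scale=0.02, size=t.size)  # readout noise

kinds = ["normal", "shifted_peak", "low_signal"]
X = np.array([synth_curve(k) for k in kinds for _ in range(60)])
y = np.repeat(kinds, 60)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
acc = RandomForestClassifier(random_state=0).fit(Xtr, ytr).score(Xte, yte)
print(acc)
```

Because the synthetic classes differ in deterministic shape, the toy accuracy is near 1.0; the study's 99.05% on ~4000 real GCs is the meaningful benchmark.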


Author(s):  
Omar Zahour ◽  
El Habib Benlahmar ◽  
Ahmed Eddaouim ◽  
Oumaima Hourrane

Academic and vocational guidance is a particularly important issue today, as it strongly determines the chances of successful integration into a labor market that has become increasingly difficult to enter. Families have understood this, as they take an interest, often with concern, in the orientation of their child. In this context, it is very important to consider the interests, trades, skills, and personality of each student in order to make the right decision and build a strong career path. This paper addresses the problem of educational and vocational guidance by providing a comparative study of the results of four machine learning algorithms applied to the automatic classification of school-orientation questions into four categories based on John L. Holland's RIASEC typology. The results of this study show that neural networks perform better than the other three algorithms at automatically classifying these questions. In this sense, our model allows us to automatically generate questions in this domain. Because the algorithms give good results, this model can serve practitioners and researchers in E-Orientation for further research.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Duy Ngoc Nguyen ◽  
Tuoi Thi Phan ◽  
Phuc Do

Abstract Sentiment classification using deep learning algorithms has achieved good results when tested on popular datasets. However, it remains challenging to build a corpus on new topics with which to train machine learning algorithms for sentiment classification with high confidence. This study proposes a method, called knowledge processing and representation based on ontology (KPRO), that embeds knowledge from an ontology of opinion datasets into the word-embedding layer of deep learning algorithms for sentiment classification. Unlike methods that lexically encode the corpus or add information to it, this method enriches the representation of raw data based on expert knowledge captured in the ontology. Once the data carry rich knowledge of the topic, the efficiency of the machine learning algorithms is significantly enhanced, and the method is applicable to embedding knowledge in datasets in other languages. Test results show that deep learning methods achieve considerably higher accuracy when trained on the KPRO-processed dataset than on datasets not processed by this method. This method is therefore a novel approach to improving the accuracy of deep learning algorithms and increasing the reliability of new datasets, making them ready for mining.


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 461
Author(s):  
Mujeeb Ur Rehman ◽  
Arslan Shafique ◽  
Kashif Hesham Khan ◽  
Sohail Khalid ◽  
Abdullah Alhumaidi Alotaibi ◽  
...  

This article presents a non-invasive, sensing-based diagnosis of pneumonia, exploiting a deep learning model coupled with security preservation. Sensing and securing healthcare and medical images, such as the X-rays used to diagnose viral diseases like pneumonia, is a challenging task for researchers. In the past few years, patients’ medical records have been shared using various wireless technologies, and the wirelessly transmitted data are prone to attacks, resulting in the misuse of patients’ medical records. It is therefore important to secure medical data that are in the form of images. The proposed work is divided into two parts. In the first, primary data in the form of images are encrypted using a proposed technique based on chaos and a convolutional neural network (CNN): multiple chaotic maps are combined into a random number generator, and the generated random sequence is used for pixel permutation and substitution. In the second part, a new technique for pneumonia diagnosis using deep learning is proposed, with X-ray images as the dataset. Several physiological features, such as cough, fever, chest pain, flu, low energy, sweating, shaking, chills, shortness of breath, fatigue, loss of appetite, and headache, and statistical features, such as entropy, correlation, contrast, dissimilarity, etc., are extracted from the X-ray images for the pneumonia diagnosis. Moreover, machine learning algorithms such as support vector machines, decision trees, random forests, and naive Bayes are also implemented and compared with the proposed CNN-based model. Furthermore, transfer learning and fine-tuning are incorporated to improve the CNN-based model.
It is found that the CNN performs better than the other machine learning algorithms: the accuracy of the proposed work using naive Bayes and the CNN is 89% and 97%, respectively, which also exceeds the average accuracy of existing schemes (90%). Further, K-fold analysis and voting techniques are incorporated to improve the accuracy of the proposed model. Metrics such as entropy, correlation, contrast, and energy are used to gauge the performance of the proposed encryption technique, while precision, recall, F1 score, and support are used to evaluate the effectiveness of the proposed machine learning-based model for pneumonia diagnosis. The entropy and correlation of the proposed work are 7.999 and 0.0001, respectively, which reflects that the proposed encryption algorithm offers higher security for the digital data. Moreover, a detailed comparison with existing work reveals that both of the proposed models outperform it.
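The pixel-permutation step of the encryption scheme can be sketched with a single logistic map (the paper combines multiple chaotic maps and also performs substitution, which is omitted here). The idea: iterate the map, sort the resulting chaotic sequence, and use the ranking as a pseudo-random permutation of the pixels; the inverse permutation restores the image. The seed `x0` and parameter `r` below are arbitrary illustrative choices.

```python
# Toy chaos-based pixel permutation using a logistic map.
import numpy as np

def logistic_permutation(n, x0=0.37, r=3.99):
    """Derive a length-n permutation from a logistic-map trajectory."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)       # logistic map iteration
        seq[i] = x
    return np.argsort(seq)          # ranking of chaotic values = permutation

image = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy 4x4 "image"
perm = logistic_permutation(image.size)
scrambled = image.ravel()[perm].reshape(image.shape)  # "encrypted" layout

inverse = np.argsort(perm)          # invert the permutation for decryption
restored = scrambled.ravel()[inverse].reshape(image.shape)
print(np.array_equal(restored, image))
```

Permutation alone only relocates pixel values, which is why schemes like the one above pair it with a substitution stage; the paper's entropy (7.999) and correlation (0.0001) figures measure the combined result.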

