Human Activity Recognition of Children with Wearable Devices Using LightGBM Machine Learning

Author(s):  
Gábor Csizmadia ◽  
Krisztina Liszkai-Peres ◽  
Bence Ferdinandy ◽  
Ádám Miklósi ◽  
Veronika Konok

Abstract Human activity recognition (HAR) using machine learning (ML) methods is a relatively new approach for collecting and analyzing large amounts of human behavioral data using special wearable sensors. Our main goal was to find a reliable method that could automatically detect various playful and daily routine activities in children. We defined 40 activities for ML recognition, and we collected activity motion data with wearable smartwatches running the SensKid software. We analyzed the data of 34 children (19 girls, 15 boys; age range: 6.59–8.38 years; median age = 7.47). All children were typically developing first graders from three elementary schools. The activity recognition was a binary classification task which was evaluated with a Light Gradient Boosted Machine (LGBM) learning algorithm, a decision-tree-based method, with 3-fold cross-validation. We used the sliding window technique during signal processing, and we aimed at finding the best window size for the analysis of each behavior element to achieve the most effective settings. Seventeen activities out of 40 were successfully recognized with AUC values above 0.8. The window size had no significant effect. The overall accuracy was 0.95, which is in the top segment of previously published comparable HAR results. In summary, LGBM is a very promising solution for HAR. In line with previous findings, our results provide a firm basis for a more precise and effective recognition system that can make human behavioral analysis faster and more objective.
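The sliding-window segmentation this abstract describes can be sketched as follows: the raw sensor stream is cut into fixed-length, overlapping windows, and per-window summary features are computed. The window size and 50% overlap below are illustrative assumptions (the study searched over window sizes per behavior element), and the resulting feature rows would then be fed to a binary LightGBM classifier (e.g., `lightgbm.LGBMClassifier`) under 3-fold cross-validation, which is omitted here.

```python
# Minimal sliding-window feature extraction sketch (assumed parameters,
# not the paper's exact settings).
from statistics import mean, stdev

def sliding_windows(signal, window_size, step):
    """Yield fixed-length windows over a 1-D signal."""
    for start in range(0, len(signal) - window_size + 1, step):
        yield signal[start:start + window_size]

def window_features(signal, window_size, step):
    """Mean and standard deviation per window - a minimal feature set."""
    return [(mean(w), stdev(w))
            for w in sliding_windows(signal, window_size, step)]

# Example: 10 samples, windows of 4 with 50% overlap (step 2) -> 4 windows.
features = window_features([0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
                           window_size=4, step=2)
```

In practice each window would yield many more features (per-axis statistics, frequency-domain terms) across all three accelerometer axes.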

Author(s):  
Anna Ferrari ◽  
Daniela Micucci ◽  
Marco Mobilio ◽  
Paolo Napoletano

Abstract Human activity recognition (HAR) is a line of research whose goal is to design and develop automatic techniques for recognizing activities of daily living (ADLs) using signals from sensors. HAR is an active research field in response to the ever-increasing need to collect information remotely related to ADLs for diagnostic and therapeutic purposes. Traditionally, HAR used environmental or wearable sensors to acquire signals and relied on traditional machine-learning techniques to classify ADLs. In recent years, HAR has been moving towards the use of both wearable devices (such as smartphones or fitness trackers, since people use them daily and they include reliable inertial sensors) and deep learning techniques (given the encouraging results obtained in the area of computer vision). One of the major challenges related to HAR is population diversity, which makes it difficult for traditional machine-learning algorithms to generalize. Recently, researchers successfully attempted to address the problem by proposing techniques based on personalization combined with traditional machine learning. To date, no effort has been directed at investigating the benefits that personalization can bring to deep learning techniques in the HAR domain. The goal of our research is to verify whether personalization applied to both traditional and deep learning techniques can lead to better performance than classical approaches (i.e., without personalization). The experiments were conducted on three datasets that are extensively used in the literature and that contain metadata related to the subjects. AdaBoost is the technique chosen for traditional machine learning, while a convolutional neural network is the one chosen for deep learning. These techniques have been shown to offer good performance. Personalization considers both the physical characteristics of the subjects and the inertial signals generated by the subjects.
Results suggest that personalization is most effective when applied to traditional machine-learning techniques rather than to deep learning ones. Moreover, results show that deep learning without personalization performs better than any other method evaluated in the paper in those cases where the number of training samples is high and the samples are heterogeneous (i.e., they represent a wider spectrum of the population). This suggests that traditional deep learning can be more effective, provided a large and heterogeneous dataset is available, since it intrinsically models the population diversity in the training process.
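One common way to personalize a traditional classifier on subject metadata, in the spirit of the approach above, is to restrict (or weight) the training set to the subjects most physically similar to the target user before fitting a model such as AdaBoost. The field names and the Euclidean distance below are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical metadata-based subject selection for personalization.
import math

def similarity_rank(target, subjects):
    """Rank subjects by Euclidean distance in (age, height_cm, weight_kg)."""
    def dist(s):
        return math.dist(
            (s["age"], s["height_cm"], s["weight_kg"]),
            (target["age"], target["height_cm"], target["weight_kg"]),
        )
    return sorted(subjects, key=dist)

subjects = [
    {"id": 1, "age": 25, "height_cm": 180, "weight_kg": 80},
    {"id": 2, "age": 60, "height_cm": 160, "weight_kg": 55},
    {"id": 3, "age": 28, "height_cm": 175, "weight_kg": 78},
]
target = {"age": 27, "height_cm": 178, "weight_kg": 79}
nearest = [s["id"] for s in similarity_rank(target, subjects)]
# The k most similar subjects' sensor windows would then form the
# personalized training set for the traditional classifier.
```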


Human activity recognition (HAR) has attracted growing interest in several research communities, given that understanding user activities and behavior helps to deliver proactive and personalized services. Different types of noise in wearable sensor data frequently hamper the human activity recognition classification process. This work designs a three-level hierarchical classification structure, i.e., instance-based, group- and sub-group-based, and subject-based, to detect the daily activity of human body motion among activity groups. In comparison with other well-known classifiers, such as Random Forest Tree, J48, Decision Table, Multilayer Perceptron, Naïve Bayes, OneR, and REPTree (Reduced Error Pruning Tree), thorough experiments on the mHealth dataset (Shimmer2 mHealth data) demonstrate that group-based classification achieves the best classification results, with the Random Forest Tree reaching 99.97%. We trained classifiers to estimate classification accuracy based on subject attributes (gender, age, height, and weight), and applied 10-fold cross-validation throughout. All three classification structures achieved high accuracy values on all three classification tasks.
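The 10-fold cross-validation protocol used above can be sketched as a plain index split: the samples are partitioned into k folds, and each fold serves once as the test set while the remaining folds train the classifier. The contiguous fold assignment below is a simplifying assumption; the experiments themselves used Weka-style classifiers (Random Forest, J48, etc.) inside this loop.

```python
# Minimal k-fold cross-validation index generator (illustrative sketch).
def k_fold_indices(n_samples, k):
    """Return (train, test) index lists for each of k contiguous folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    # Each fold is the test set once; all other folds form the train set.
    return [
        ([i for f in folds[:j] + folds[j + 1:] for i in f], folds[j])
        for j in range(k)
    ]

splits = k_fold_indices(20, k=10)  # 10 folds of 2 samples each
```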


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1208 ◽  
Author(s):  
Sebastian Scheurer ◽  
Salvatore Tedesco ◽  
Kenneth N. Brown ◽  
Brendan O’Flynn

Human activity recognition (HAR) has become an increasingly popular application of machine learning across a range of domains. Typically, the HAR task that a machine learning algorithm is trained for requires separating multiple activities such as walking, running, sitting, and falling from each other. Despite a large body of work on multi-class HAR, and the well-known fact that the performance on a multi-class problem can be significantly affected by how it is decomposed into a set of binary problems, there has been little research into how the choice of multi-class decomposition method affects the performance of HAR systems. This paper presents the first empirical comparison of multi-class decomposition methods in a HAR context by estimating the performance of five machine learning algorithms when used in their multi-class formulation, with four popular multi-class decomposition methods, five expert hierarchies (nested dichotomies constructed from domain knowledge), or an ensemble of expert hierarchies on a 17-class HAR dataset which consists of features extracted from tri-axial accelerometer and gyroscope signals. We further compare performance on two binary classification problems, each based on the topmost dichotomy of an expert hierarchy. The results show that expert hierarchies can indeed compete with one-vs-all, both on the original multi-class problem and on a more general binary classification problem, such as that induced by an expert hierarchy's topmost dichotomy. Finally, we show that an ensemble of expert hierarchies performs better than one-vs-all and comparably to one-vs-one, despite being of lower time and space complexity, on the multi-class problem, and outperforms all other multi-class decomposition methods on the two dichotomous problems.
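An expert hierarchy (nested dichotomy) as described above can be pictured as a tree of binary classifiers: each internal node routes a sample left or right until a leaf activity is reached. The tiny hierarchy and the threshold "classifiers" below are invented for illustration; in the paper each node would be a trained binary model over accelerometer and gyroscope features.

```python
# Illustrative nested-dichotomy prediction (hypothetical hierarchy).
class Node:
    def __init__(self, decide=None, left=None, right=None, label=None):
        self.decide, self.left, self.right, self.label = decide, left, right, label

def predict(node, features):
    """Walk the dichotomy tree until a leaf label is reached."""
    while node.label is None:
        node = node.left if node.decide(features) else node.right
    return node.label

# Topmost dichotomy: stationary vs moving; a second split on the moving side.
tree = Node(
    decide=lambda f: f["accel_var"] < 0.1,      # low variance => stationary
    left=Node(label="sitting"),
    right=Node(
        decide=lambda f: f["accel_var"] < 1.0,  # moderate variance => walking
        left=Node(label="walking"),
        right=Node(label="running"),
    ),
)
activity = predict(tree, {"accel_var": 0.5})
```

An ensemble of such hierarchies would average or vote over the per-tree predictions, which is what the paper compares against one-vs-all and one-vs-one.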


It is becoming essential to monitor the Activities of Daily Living (ADL) of elderly people living alone by keeping track of their day-to-day activities and helping those with serious health issues. In this paper, various machine learning algorithms for human activity recognition are analyzed. Along with this, an extensive study is carried out of the current technologies used in activity recognition. Activity recognition is generally performed on signals generated by sensors: the signals are preprocessed and segmented, features are extracted, and the activity is recognized. The main objective of a human activity recognition system is to explore the limitations of elderly persons living independently and suggest ways of overcoming them. By using different wearable and non-wearable sensors, one can easily monitor human activity and evaluate the data generated through it.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 885 ◽  
Author(s):  
Zhongzheng Fu ◽  
Xinrun He ◽  
Enkai Wang ◽  
Jun Huo ◽  
Jian Huang ◽  
...  

Human activity recognition (HAR) based on wearable devices has attracted increasing attention from researchers with the development of sensor technology in recent years. However, personalized HAR requires high recognition accuracy, and maintaining the model's generalization capability at the same time is a major challenge in this field. This paper designed a compact wireless wearable sensor node, which combines an air pressure sensor and an inertial measurement unit (IMU) to provide multi-modal information for HAR model training. To solve personalized recognition of user activities, we propose a new transfer learning algorithm, which is a joint probability domain adaptive method with improved pseudo-labels (IPL-JPDA). This method adds the improved pseudo-label strategy to the JPDA algorithm to avoid cumulative errors due to inaccurate initial pseudo-labels. In order to verify our equipment and method, we used the newly designed sensor node to collect seven daily activities from seven subjects. Nine different HAR models were trained by traditional machine learning and transfer learning methods. The experimental results show that the multi-modal data improve the accuracy of the HAR system. The IPL-JPDA algorithm proposed in this paper has the best performance among five HAR models, and the average recognition accuracy across subjects is 93.2%.
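The pseudo-label refinement idea behind the method above can be sketched with a generic loop (this is not the authors' IPL-JPDA, only an illustration of why refining pseudo-labels helps): a simple classifier assigns provisional labels to unlabeled target data, its class centroids are re-estimated from those labels, and the labels are iteratively revised so that inaccurate initial pseudo-labels do not accumulate. The 1-D data and nearest-centroid rule are illustrative assumptions.

```python
# Generic iterative pseudo-labelling sketch (hypothetical, 1-D features).
def nearest_centroid(x, centroids):
    """Label x with the class whose centroid is closest."""
    return min(centroids, key=lambda c: abs(x - centroids[c]))

def refine_pseudo_labels(target_xs, centroids, iterations=5):
    """Alternate between labelling the target data and updating centroids."""
    labels = [nearest_centroid(x, centroids) for x in target_xs]
    for _ in range(iterations):
        for c in centroids:
            members = [x for x, l in zip(target_xs, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)  # re-centre class
        labels = [nearest_centroid(x, centroids) for x in target_xs]
    return labels

labels = refine_pseudo_labels(
    [0.1, 0.2, 0.9, 1.1], centroids={"still": 0.0, "moving": 1.5}
)
```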

