Recognizing activities of the elderly using wearable sensors: a comparison of ensemble algorithms based on boosting

Sensor Review ◽  
2019 ◽  
Vol 39 (6) ◽  
pp. 743-751 ◽  
Author(s):  
Yuchuan Wu ◽  
Shengfeng Qi ◽  
Feng Hu ◽  
Shuangbao Ma ◽  
Wen Mao ◽  
...  

Purpose: In human action recognition based on wearable sensors, most previous studies have focused on a single type of sensor and a single classifier. This study aims to use a wearable device based on flexible sensors and a tri-axial accelerometer to collect action data from elderly people, and to classify actions with a statistical modeling approach based on an ensemble algorithm, verifying its validity.

Design/methodology/approach: Nine types of daily actions were collected by the wearable sensor device from a group of elderly volunteers, and the time-domain features of the action sequences were extracted. The dimensionality of the feature vectors was reduced by linear discriminant analysis. An ensemble learning method based on XGBoost was used to build a model of elderly action recognition, and its performance was compared with the recognition rates of other Boosting-based ensemble algorithms and with the accuracy of single-classifier models.

Findings: The effectiveness of the method was validated by three experiments. The results show that XGBoost is able to classify the nine daily actions of the elderly with an average recognition rate of 94.8 per cent, which is superior to single classifiers and to other ensemble algorithms.

Practical implications: The research could have important implications for health care, including the treatment and rehabilitation of the elderly and the prevention of falls.

Originality/value: Instead of using a single type of sensor, this research used a wearable sensor to obtain daily action data of the elderly. The results show that, with the appropriate method, the device can obtain detailed joint-action data at a low cost. Comparing differences in performance, it was concluded that XGBoost is the most suitable algorithm for building a model of elderly action recognition. This method, together with a wearable sensor, can provide key data and accurate feedback to monitor the elderly in their rehabilitation activities.
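The pipeline the abstract describes (time-domain features from accelerometer windows, LDA dimension reduction, then a boosted ensemble) can be sketched as follows. This is a minimal illustration on synthetic data: the window size, feature set and class means are assumptions, and scikit-learn's GradientBoostingClassifier stands in for XGBoost, which the paper actually uses.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier

def time_domain_features(window):
    # window: (n_samples, 3) tri-axial accelerometer segment.
    # A small, common time-domain feature set (assumed, not the paper's exact list).
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

rng = np.random.default_rng(0)
# Synthetic stand-in for action classes (3 shown; the paper uses 9)
X = np.array([time_domain_features(rng.normal(c, 1.0, size=(128, 3)))
              for c in (0.0, 2.0, 4.0) for _ in range(30)])
y = np.repeat([0, 1, 2], 30)

# LDA reduces the feature vectors, then the boosted ensemble classifies them
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
clf = GradientBoostingClassifier().fit(lda.transform(X), y)
train_acc = clf.score(lda.transform(X), y)
```

In practice the model would be evaluated on held-out windows per subject rather than on the training set as here.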

2013 ◽  
Vol 18 (2-3) ◽  
pp. 49-60 ◽  
Author(s):  
Damian Dudziński ◽  
Tomasz Kryjak ◽  
Zbigniew Mikrut

Abstract In this paper a human action recognition algorithm is described which uses background generation with shadow elimination, silhouette description based on simple geometrical features, and a finite state machine for recognizing particular actions. The tests performed indicate that this approach achieves an 81% correct recognition rate while allowing real-time processing of a 360 × 288 video stream.
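A finite state machine of the kind described can be sketched as follows. The states, the bounding-box aspect-ratio feature and the thresholds are all hypothetical, chosen only to illustrate how simple geometric silhouette features drive state transitions.

```python
# Hypothetical transition table: per-frame geometric events move the machine
# between posture states; reaching "lying" from "bending" suggests a fall.
TRANSITIONS = {
    ("standing", "low_ratio"): "bending",
    ("bending", "flat_ratio"): "lying",
    ("standing", "tall_ratio"): "standing",
    ("bending", "tall_ratio"): "standing",
}

def classify_frame(aspect_ratio):
    # Simple geometric silhouette feature: bounding-box height/width ratio
    if aspect_ratio > 1.5:
        return "tall_ratio"
    if aspect_ratio > 0.7:
        return "low_ratio"
    return "flat_ratio"

def recognize(ratios, state="standing"):
    # Feed one aspect ratio per frame; unknown events leave the state unchanged
    for r in ratios:
        state = TRANSITIONS.get((state, classify_frame(r)), state)
    return state

final_state = recognize([2.0, 1.1, 0.5])  # upright, crouching, then flat
```

The real system would derive several geometric features per silhouette and use a richer event alphabet, but the control structure is the same.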


2014 ◽  
Vol 644-650 ◽  
pp. 4162-4166
Author(s):  
Dan Dan Guo ◽  
Xi’an Zhu

An effective human action recognition method based on human skeletal information extracted by the Kinect depth sensor is proposed in this paper. Through a study of the human skeletal structure, node data and human actions, the 3D space coordinates of the skeleton and the angles between nodes of the related actions are collected as action characteristics. First, 3D information on the human skeleton is acquired by the Kinect depth sensor and the cosine of the angle at each relevant node is calculated. Then the skeletal information within a time window prior to the current state is stored in real time. Finally, the relative locations of the skeleton nodes and the variation of the joint-angle cosines over a certain time are analyzed to recognize the motion. Because the algorithm does not require the complicated sample training and recognition processes of traditional methods, it has higher adaptability and practicability. The experimental results indicate that the method achieves a high recognition rate.
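The joint-angle cosine used as an action characteristic can be computed directly from the 3D coordinates of three adjacent skeleton nodes. The joint names and coordinates below are hypothetical, purely to illustrate the calculation.

```python
import numpy as np

def joint_angle_cosine(a, b, c):
    # Cosine of the angle at joint b, formed by the segments b->a and b->c
    a, b, c = map(np.asarray, (a, b, c))
    u, v = a - b, c - b
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3D joint coordinates: a right-angle elbow bend
shoulder, elbow, wrist = [0.0, 1.0, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
cos_elbow = joint_angle_cosine(shoulder, elbow, wrist)  # 0.0, i.e. 90 degrees
```

Tracking how such cosines change over the stored time window is what lets the method distinguish motions without classifier training.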


Author(s):  
MARC BOSCH-JORGE ◽  
ANTONIO-JOSÉ SÁNCHEZ-SALMERÓN ◽  
CARLOS RICOLFE-VIALA

The aim of this work is to present a vision-based human action recognition system adapted to constrained embedded devices, such as smart phones. Vision-based human action recognition is essentially a combination of feature tracking, descriptor extraction and subsequent classification of image representations, here combined with a color-based identification tool to distinguish between multiple human subjects. Simple descriptor sets were evaluated to optimize recognition rate and performance, and two-dimensional (2D) descriptors were found to be effective. Installed on the latest phones, these sets can recognize human actions in videos in less than one second with a success rate of over 82%.
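The color-based identification tool is not specified in detail; one cheap scheme consistent with the description is a coarse per-channel color histogram compared by histogram intersection. The bin count, threshold and pixel values below are assumptions for illustration.

```python
import numpy as np

def color_signature(pixels, bins=4):
    # Coarse per-channel RGB histogram: a cheap color-based identity cue
    hist = [np.histogram(pixels[:, c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def same_subject(sig_a, sig_b, thresh=0.5):
    # Histogram-intersection similarity between two clothing signatures
    return float(np.minimum(sig_a, sig_b).sum()) > thresh

# Hypothetical pixel regions cropped from two tracked subjects
red_shirt = np.full((100, 3), [200, 10, 10])
blue_shirt = np.full((100, 3), [10, 10, 200])
```

Such a signature is cheap enough to recompute per frame on a phone, which fits the embedded constraint the paper targets.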


2014 ◽  
Vol 599-601 ◽  
pp. 1571-1574
Author(s):  
Jia Ding ◽  
Yang Yi ◽  
Ze Min Qiu ◽  
Jun Shi Liu

Human action recognition in videos plays an important role in the fields of computer vision and image understanding. A novel method combining a multi-channel bag of visual words with multiple kernel learning is proposed in this paper. Videos are described by a multi-channel bag of visual words, and a multiple kernel learning classifier is used for action classification, in which each kernel function of the classifier corresponds to one video channel in order to avoid noise interference from the other channels. The proposed approach improves the ability to distinguish easily confused actions. Experiments on the KTH dataset show that the presented method achieves a remarkable average recognition rate, comparable with state-of-the-art methods.
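The per-channel kernel combination can be sketched as below. This is a toy illustration on synthetic data: the chi-square base kernel is a common choice for visual-word histograms but is assumed here, and fixed weights stand in for the kernel weights that multiple kernel learning would actually learn.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Two hypothetical "channels" of bag-of-visual-words histograms per video
X1, X2 = rng.random((40, 16)), rng.random((40, 16))
y = np.repeat([0, 1], 20)
X1[y == 1] += 0.5  # make channel 1 informative, channel 2 pure noise

def chi2_kernel(A, B, gamma=1.0):
    # Exponential chi-square kernel, common for histogram features
    d = ((A[:, None] - B[None]) ** 2 / (A[:, None] + B[None] + 1e-12)).sum(-1)
    return np.exp(-gamma * d)

# One kernel per channel; fixed weights stand in for learned MKL weights
K = 0.8 * chi2_kernel(X1, X1) + 0.2 * chi2_kernel(X2, X2)
clf = SVC(kernel="precomputed").fit(K, y)
score = clf.score(K, y)
```

Down-weighting the noisy channel in the combined kernel is what realizes the paper's goal of suppressing cross-channel noise interference.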


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-12 ◽  
Author(s):  
Xugang Xi ◽  
Wenjun Jiang ◽  
Zhong Lü ◽  
Seyed M. Miran ◽  
Zhi-Zeng Luo

Falls among the elderly are a major health problem. Daily activity monitoring and fall detection using wearable sensors provide an important healthcare service for elderly or frail individuals. We investigated the classification accuracy of daily activity and fall data based on surface electromyography (sEMG) and plantar pressure signals. sEMG and plantar pressure signals were collected, and their features were extracted. Suitable features were selected and combined for posture transition, gait and fall using the Fisher class separability index. A feature-level fusion method, named the global canonical correlation analysis of weighting genetic algorithm, was proposed to reduce dimensionality. Because the number of daily activities is considerably larger than the number of fall activities, Weighted Kernel Fisher Linear Discriminant Analysis (WKFDA) was proposed to classify gait and fall, and Double Parameter Kernel Optimization based on Extreme Learning Machine (DPK-OMELM) was used to classify activities. Results showed that the classification accuracy for posture transition is 100%, and the accuracy of gait and fall classified using WKFDA reaches 98%. For all types of posture transition, gait and fall, sensitivity, specificity and accuracy are over 96%.
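The Fisher class separability index used for feature selection can be sketched as the ratio of between-class to within-class variance for a single feature; features that separate the classes score higher and are kept. The data below are synthetic, purely to show the computation.

```python
import numpy as np

def fisher_score(x, y):
    # Between-class variance over within-class variance for one feature x
    classes = np.unique(y)
    mu = x.mean()
    between = sum((y == c).sum() * (x[y == c].mean() - mu) ** 2
                  for c in classes)
    within = sum(((x[y == c] - x[y == c].mean()) ** 2).sum()
                 for c in classes)
    return between / within

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 100)
good = np.where(y == 0, 0.0, 3.0) + rng.normal(0, 1, 200)  # class-separating
noise = rng.normal(0, 1, 200)                              # uninformative
```

Ranking sEMG and plantar-pressure features by this score and keeping the top ones is the selection step the abstract describes.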


2021 ◽  
Author(s):  
Akila.K

Abstract Background: Human action recognition encompasses the automatic analysis of ongoing events from video and has applications in many fields. Recognizing and understanding human actions from videos remains difficult because of the large variations in human appearance, posture and body size within the same category.

Objective: This paper focuses on a specific issue in human action recognition: inter-class variation.

Approach: To discriminate human actions within a category, a novel approach based on wavelet packet transformation is used for feature extraction. As the focus is on classifying similar actions, non-linearity among the features is analyzed and discriminated by Deterministic Normalized-Linear Discriminant Analysis (DN-LDA). The recognition system relies heavily on the classification stage, and the dynamic feeds are classified at the final stage by a Hidden Markov Model based on a rule set.

Conclusion: Experimental results show that the proposed approach is discriminative for similar human actions and adapts well to inter-class variation.
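Wavelet packet transformation differs from the ordinary wavelet transform in that the detail coefficients are also split at every level, giving a full tree of sub-bands. A minimal Haar-based sketch follows; the paper's wavelet basis, depth and feature definition are not specified, so the per-band energy feature here is an assumption.

```python
import numpy as np

def haar_step(x):
    # One Haar analysis step: approximation and detail halves
    x = x.reshape(-1, 2)
    return (x[:, 0] + x[:, 1]) / np.sqrt(2), (x[:, 0] - x[:, 1]) / np.sqrt(2)

def wavelet_packet(x, depth):
    # Full packet tree: split BOTH approximation and detail at every level
    nodes = [np.asarray(x, float)]
    for _ in range(depth):
        nodes = [half for n in nodes for half in haar_step(n)]
    return nodes

signal = np.array([4.0, 4.0, 4.0, 4.0, 0.0, 0.0, 0.0, 0.0])
bands = wavelet_packet(signal, 2)                  # 4 sub-bands of length 2
features = [float((b ** 2).sum()) for b in bands]  # per-band energies
```

The orthonormal Haar steps preserve total energy, so the band energies sum to the signal energy, which makes them a stable feature vector to pass on to DN-LDA.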


2018 ◽  
Vol 119 (9/10) ◽  
pp. 529-544 ◽  
Author(s):  
Ihab Zaqout ◽  
Mones Al-Hanjori

Purpose: The face recognition problem has a long history and a significant practical perspective as one of the practical applications of pattern recognition theory: automatically localizing the face in an image and, if necessary, identifying the person. Interest in the procedures underlying localization and individual recognition is considerable, given the variety of their practical applications in areas such as security systems, verification, forensic expertise, teleconferencing and computer games. This paper aims to recognize facial images efficiently. An averaged-feature based technique is proposed to reduce the dimensions of the multi-expression facial features. The classifier model is generated using a supervised learning algorithm, a back-propagation neural network (BPNN), implemented in MATLAB R2017. The recognition rate and accuracy of the proposed methodology are comparable with those of other methods, such as principal component analysis and linear discriminant analysis, on the same data sets. In total, 150 face subjects selected from the Olivetti Research Laboratory (ORL) data set yield a 95.6 per cent recognition rate and 85 per cent accuracy, and 165 face subjects from the Yale data set yield a 95.5 per cent recognition rate and 84.4 per cent accuracy.

Design/methodology/approach: An averaged-feature based approach (dimension reduction) with a BPNN (supervised classifier).

Findings: The recognition rate is 95.6 per cent and the recognition accuracy is 85 per cent for the ORL data set, whereas the recognition rate is 95.5 per cent and the recognition accuracy is 84.4 per cent for the Yale data set.

Originality/value: An averaged-feature based method.
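The averaged-feature reduction plus back-propagation classifier can be sketched as follows. The feature dimension, subject count and network size are assumptions, and scikit-learn's MLPClassifier stands in for the paper's MATLAB BPNN.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
# Hypothetical stand-in data: 3 subjects x 5 expressions, 64-dim face features
centers = rng.normal(0, 1, (3, 1, 64))
faces = centers + rng.normal(0, 0.1, (3, 5, 64))   # small expression variation

# Averaged-feature reduction: one mean vector per subject replaces
# the per-expression feature set, shrinking the training input
prototypes = faces.mean(axis=1)                    # (3, 64)
labels = np.arange(3)

# Back-propagation network trained on the averaged features
clf = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                    max_iter=2000, random_state=0).fit(prototypes, labels)
score = clf.score(faces.reshape(-1, 64), np.repeat(labels, 5))
```

Averaging across expressions keeps one representative vector per subject, which is the dimension-reduction idea the abstract credits for the method's efficiency.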

