Gesture Recognition-Based Interaction with Smartwatch and Electric Wheelchair for Assistive Mobility and Navigation

Author(s): Elisha Didam Markus, Teboho Ntsinyi, Eric Monacelli
2016, Vol 3 (2), pp. 1

Author(s): Seong Jeong, HongJun Ju, Hyo-Rim Choi, TaeYong Kim
2015, Vol 135 (8), pp. 944-953

Author(s): Kazuki Tanaka, Hiroyuki Ukida
2020, Vol 79 (1), pp. 47-57

Author(s): O. G. Viunytskyi, A. V. Totsky, Karen O. Egiazarian
2020, Vol 5 (2), pp. 609

Author(s): Segun Aina, Kofoworola V. Sholesi, Aderonke R. Lawal, Samuel D. Okegbile, Adeniran I. Oluwaranti

This paper presents the application of Gaussian blur filters and Support Vector Machine (SVM) techniques for greeting recognition among the Yoruba tribe of Nigeria. Existing efforts have addressed gesture recognition in various settings, but the recognition of tribal greeting postures and gestures in the Nigerian geographical space has not been studied before. Some cultural gestures are not correctly identified even by people of the same tribe, let alone by people from other tribes, which creates a risk of misinterpreted meaning. Other cultural gestures are simply unknown to most people outside a tribe, which can also hinder human interaction; hence there is a need to automate the recognition of Nigerian tribal greeting gestures. This work therefore develops a Gaussian blur-SVM based system capable of recognizing Yoruba greeting postures for men and women. Videos of individuals performing various greeting gestures were collected and processed into image frames. The images were resized, and a Gaussian blur filter was applied to remove noise. A moment-based feature extraction algorithm was then used to extract shape features, which were passed as input to an SVM trained to recognize two Nigerian tribal greeting postures. To confirm the robustness of the system, it was tested with 20%, 25% and 30% of the preprocessed image dataset held out. The SVM achieved a recognition rate of 94%, showing that the proposed method is effective.
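As a rough sketch of the pipeline described above (resize, Gaussian blur, moment-based shape features, SVM), the following Python fragment uses OpenCV's Hu moments as the shape descriptor; the exact moments, blur kernel, and SVM kernel used in the paper are assumptions here:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def extract_features(frame, size=(128, 128)):
    # Grayscale, resize, and denoise with a Gaussian blur, as described above.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, size)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Moment-based shape features: Hu's seven invariant moments,
    # log-scaled so their magnitudes are comparable.
    hu = cv2.HuMoments(cv2.moments(blurred)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

# X: stacked feature vectors from the preprocessed frames; y: posture labels.
# Holding out 20-30% of the data would mirror the evaluation described above.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# print(clf.score(X_test, y_test))
```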


This project concerns a motion-controlled wheelchair for people with disabilities: a motorized wheelchair driven by a headband containing a motion sensor, with an Arduino as the controller. The problem it addresses is that people who cannot walk often find themselves depending on their families or caretakers just to move around the house, and those who are paralysed below the head, possibly without functioning arms, cannot operate a joystick-controlled electric wheelchair. This project solves that problem by using a motion sensor to control the wheelchair, with the aim of building an affordable, low-maintenance, and widely available head-controlled wheelchair.
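The core control logic can be illustrated with a short sketch. The actual system would run as an Arduino program reading the headband's motion sensor; the Python below only shows a plausible head-tilt-to-wheel-speed mapping, and the dead zone and scaling values are assumptions, not the project's parameters:

```python
# Minimal sketch of a head-tilt-to-wheelchair command mapping.
DEAD_ZONE = 10.0  # degrees of tilt ignored, to reject small head movements

def wheel_speeds(pitch_deg: float, roll_deg: float) -> tuple[float, float]:
    """Map head pitch (forward/backward) and roll (left/right tilt)
    to left/right wheel speeds in the range [-1, 1]."""
    forward = 0.0 if abs(pitch_deg) < DEAD_ZONE else max(-1.0, min(1.0, pitch_deg / 45.0))
    turn = 0.0 if abs(roll_deg) < DEAD_ZONE else max(-1.0, min(1.0, roll_deg / 45.0))
    # Differential drive: turning slows one wheel and speeds up the other.
    return forward - turn, forward + turn

# e.g. a 20-degree forward nod with a slight (sub-dead-zone) right tilt:
print(wheel_speeds(20.0, 5.0))  # -> (~0.44, ~0.44): drive straight ahead
```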


2020, Vol 17 (4), pp. 497-506
Author(s): Sunil Patel, Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within and across gesture classes, low resolution, and the fact that gestures are performed with the fingers; these challenges have drawn many researchers to the area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical-flow data, and passes these features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. Optical flow is computed from the Red, Green, Blue (RGB) data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks the frame-to-frame consistency of its visual appearance in the unsegmented input stream. The CTC network finds the most probable frame sequence for a gesture class, and the frame sequence with the highest probability is selected by max decoding. The entire network is trained end-to-end with the CTC loss. We evaluate on the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On VIVA, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms, achieving an accuracy of 86%.
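A condensed PyTorch sketch of the described pipeline (per-frame 2D CNN features, an LSTM producing frame-to-frame probabilities, and CTC loss over the unsegmented stream) might look like the following. Layer sizes, the 19-class count, and the dummy input shapes are illustrative assumptions, and the paper's multimodal fusion and detection branch are omitted:

```python
import torch
import torch.nn as nn

class GestureCTCNet(nn.Module):
    """Sketch: 2D CNN per frame -> LSTM -> per-frame class log-probabilities."""
    def __init__(self, in_channels=4, num_classes=19, hidden=256):
        super().__init__()
        # Per-frame 2D CNN feature extractor (e.g. RGB-D or flow channels).
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes + 1)  # +1 for the CTC blank

    def forward(self, video):                  # video: (B, T, C, H, W)
        b, t, c, h, w = video.shape
        feats = self.cnn(video.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.lstm(feats)               # frame-to-frame features
        return self.fc(out).log_softmax(-1)     # (B, T, classes + 1)

# CTC loss over the unsegmented frame stream, trained end-to-end.
model = GestureCTCNet()
video = torch.randn(2, 16, 4, 64, 64)           # dummy batch: 2 clips, 16 frames
log_probs = model(video).permute(1, 0, 2)       # CTCLoss expects (T, B, C)
targets = torch.tensor([3, 7])                  # one gesture label per clip
loss = nn.CTCLoss(blank=0)(
    log_probs, targets,
    torch.full((2,), 16, dtype=torch.long),     # input (frame) lengths
    torch.tensor([1, 1]),                       # target lengths
)
loss.backward()
```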

