Comparison of the Predictive Models of Human Activity Recognition (HAR) in Smartphones
Abstract
This report compared the performance of five classification algorithms: decision tree, K-Nearest Neighbour (KNN), logistic regression, Support Vector Machine (SVM), and random forest. The dataset comprised smartphone accelerometer and gyroscope readings recorded while participants performed six activities: walking, walking downstairs, walking upstairs, standing, sitting, and lying down. Each algorithm was applied to this dataset and the resulting accuracy rates were compared. KNN and SVM were found to be the most accurate.
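The comparison described above can be sketched with scikit-learn. This is a minimal illustration, not the authors' actual pipeline: synthetic data from `make_classification` stands in for the smartphone accelerometer/gyroscope feature matrix, and the hyperparameters shown are common defaults rather than the settings used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Stand-in for the HAR feature matrix: 6 classes mirror the six activities.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=6, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# The five classifiers compared in the report.
models = {
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(random_state=42),
}

# Fit each model and report its held-out accuracy.
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.3f}")
```

On the real HAR dataset, the feature matrix would instead be loaded from the recorded sensor windows (e.g. the 561-feature vectors of the public UCI HAR dataset), but the fit/score loop is the same.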
KEYWORDS— decision tree, Human Activity Recognition (HAR), K-Nearest Neighbour (KNN), logistic regression, random forest, Support Vector Machine (SVM)
References
C. V. San Buenaventura and N. M. C. Tiglao, "Basic human activity recognition based on sensor fusion in smartphones," in 2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM), 2017, pp. 1182-1185.
A. Géron, Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. O'Reilly Media, 2017.
S. Wan, L. Qi, X. Xu, C. Tong, and Z. Gu, "Deep learning models for real-time human activity recognition with smartphones," Mobile Networks and Applications, vol. 25, pp. 743-755, 2020.
P. Bota, J. Silva, D. Folgado, and H. Gamboa, "A semi-automatic annotation approach for human activity recognition," Sensors, vol. 19, p. 501, 2019.
R. Mutegeki and D. S. Han, "A CNN-LSTM approach to human activity recognition," in 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), 2020, pp. 362-366.
L. Cao, Y. Wang, B. Zhang, Q. Jin, and A. V. Vasilakos, "GCHAR: An efficient Group-based Context-Aware human activity recognition on smartphone," Journal of Parallel and Distributed Computing, vol. 118, pp. 67-80, 2018.
S. Rani, H. Babbar, S. Coleman, A. Singh, and H. M. Aljahdali, "An efficient and lightweight deep learning model for human activity recognition using smartphones," Sensors, vol. 21, p. 3845, 2021.
F. Cruciani, I. Cleland, C. Nugent, P. McCullagh, K. Synnes, and J. Hallberg, "Automatic annotation for human activity recognition in free living using a smartphone," Sensors, vol. 18, p. 2203, 2018.
W. Qi, H. Su, C. Yang, G. Ferrigno, E. De Momi, and A. Aliverti, "A fast and robust deep convolutional neural networks for complex human activity recognition using smartphone," Sensors, vol. 19, p. 3731, 2019.
D. Mukherjee, R. Mondal, P. K. Singh, R. Sarkar, and D. Bhattacharjee, "EnsemConvNet: a deep learning approach for human activity recognition using smartphone sensors for healthcare applications," Multimedia Tools and Applications, vol. 79, pp. 31663-31690, 2020.