computer vision: History, architecture, application, challenges, and future scope," Electronics, vol. 10, no. 20, 2021.

[63] M. A. Khan, M. Mittal, L. M. Goyal, and S. Roy, "A deep survey on supervised learning based human detection and activity classification methods," Multimedia Tools and Applications, vol. 80, pp. 27867–27923, Jul 2021.

[64] W. Chen, Q. Sun, X. Chen, G. Xie, H. Wu, and C. Xu, "Deep learning methods for heart sound classification: A systematic review," Entropy, vol. 23, no. 6, 2021.

[65] B. Xie, H. Liu, R. Alghofaili, Y. Zhang, Y. Jiang, F. D. Lobo, C. Li, W. Li, H. Huang, M. Akdere, and C. Mousas, "A review of virtual reality skill training applications," Frontiers in Virtual Reality, vol. 2, Apr 2021.

[66] C. Lewis and F. C. Harris Jr., "An overview of virtual reality," in Proceedings of the 31st International Conference, vol. 88, pp. 71–81, Nov 2022.

[67] A. Rizzo, S. Koenig, and B. Lange, "Clinical virtual reality: The state of the science," in APA Handbook of Neuropsychology, Volume 2: Neuroscience and Neuromethods, vol. 2, pp. 473–491, 2023.

[68] N. B. Ibrahim, H. H. Zayed, and M. M. Selim, "Advances, challenges, and opportunities in continuous sign language recognition," Journal of Engineering and Applied Sciences, vol. 15, no. 5, pp. 1205–1227, 2020.

[69] J. Wachs, H. Stern, Y. Edan, M. Gillam, C. Feied, M. Smith, and J. Handler, "A hand gesture sterile tool for browsing MRI images in the OR," Journal of the American Medical Informatics Association, vol. 15, pp. 321–323, May 2008.

[70] Z. Hosseinaee, M. Le, K. Bell, and P. H. Reza, "Towards non-contact photoacoustic imaging," Photoacoustics, vol. 20, Dec 2020.

[71] Y. Zhang, S. Q. Xie, H. Wang, and Z. Zhang, "Data analytics in steady-state visual evoked potential-based brain-computer interface: A review," IEEE Sensors Journal, vol. 21, pp. 1124–1138, Aug 2020.

[72] M. B. Shaikh and D. Chai, "RGB-D data-based action recognition: A review," Sensors, vol. 21, Jun 2021.

[73] J. Wan, Y. Zhao, S. Zhou, I. Guyon, S. Escalera, and S. Z. Li, "ChaLearn looking at people RGB-D isolated and continuous datasets for gesture recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 56–64, 2016.

[74] S. Escalera et al., "ChaLearn multi-modal gesture recognition 2013: Grand challenge and workshop summary," in Proceedings of the 15th ACM International Conference on Multimodal Interaction, pp. 365–368, Dec 2013.

[75] P. Molchanov et al., "Online detection and classification of dynamic hand gestures with recurrent 3D convolutional neural network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4207–4215, 2016.

[76] V. Athitsos et al., "The American Sign Language lexicon video dataset," in 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–8, IEEE, Jun 2008.

[77] S. Yuan et al., "BigHand2.2M benchmark: Hand pose dataset and state-of-the-art analysis," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4866–4874, 2017.

[78] E. P. Costa, A. C. Lorena, A. C. Carvalho, and A. A. Freitas, "A review of performance evaluation measures for hierarchical classifiers," in Evaluation Methods for Machine Learning II: Papers from the AAAI-2007 Workshop, AAAI Technical Report WS-07-05, pp. 1–6, 2007.