Vol. 15 No. 2 (2019)

Published: December 31, 2019

Pages: 115-121

Original Article

Indoor Low Cost Assistive Device using 2D SLAM Based on LiDAR for Visually Impaired People

Abstract

Many assistive devices have been developed in recent years to solve the problems that visually impaired (VI) people face in their daily mobility. Most research addresses the obstacle-avoidance or navigation problem, while other work focuses on helping VI people recognize objects in their surrounding environment; however, few systems integrate both navigation and recognition capabilities. To meet these needs, this paper presents an assistive device that provides both capabilities, enabling a VI person to (1) navigate safely from his/her current location (pose) to a desired destination in an unknown environment, and (2) recognize his/her surrounding objects. The proposed system consists of low-cost sensors, a Neato XV-11 LiDAR, an ultrasonic sensor, and a Raspberry Pi camera (CameraPi), mounted on a white cane. Hector SLAM based on the 2D LiDAR constructs a 2D map of the unfamiliar environment, and the A* path-planning algorithm generates an optimal path on the resulting Hector map. In addition, temporary obstacles in front of the VI person are detected by the ultrasonic sensor. A recognition system based on the convolutional neural network (CNN) technique is implemented to predict object classes and to enhance the navigation system. The VI person interacts with the assistive system through an audio module (speech recognition and speech synthesis). The performance of the proposed system has been evaluated in various real-time experiments conducted in indoor scenarios, demonstrating its efficiency.
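As a rough illustration of the A* planning step described in the abstract, the sketch below runs A* on a binary occupancy grid (0 = free, 1 = occupied) with 4-connectivity and a Manhattan-distance heuristic. This is a minimal, hypothetical reconstruction: the paper does not specify here how the Hector SLAM map is thresholded into a grid, the connectivity used, or the cost function, so those choices are assumptions of this sketch.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied).

    Returns the list of (row, col) cells from start to goal,
    or None if no path exists. Assumes 4-connectivity, unit step
    cost, and the Manhattan distance as an admissible heuristic.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance to the goal (never overestimates here).
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            # Reconstruct the path by walking back through parents.
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if g > best_g.get(cur, float("inf")):
            continue  # stale heap entry; a cheaper route was found
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None

# Toy map: the only passage through the wall row is at column 2.
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = astar(grid, (0, 0), (2, 0))
```

In the real device, the goal cell would come from the destination requested over the audio module, and the path would be replanned whenever the ultrasonic sensor reports a temporary obstacle blocking the current route.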
