Vol. 16 No. Special Issue (2020)

Published: June 30, 2020

Pages: 59-64

Conference Article

Enhancing Reading Advancement Using Eye Gaze Tracking

Abstract

This research aims to enhance reading advancement using eye gaze tracking, with the goal of increasing the time readers spend interacting with such devices. Achieving this requires a good understanding of the reading process and of eye gaze tracking systems, as well as of the issues that arise when an eye gaze tracking system is used to follow reading. Several of these issues are common, so the proposed implementation compensates for them. To obtain the best possible results, two main algorithms were implemented: a baseline algorithm and an algorithm that smooths the data. The tracking error rate is calculated from changing points and missed changing points. In [21], a previous implementation on the same data reported a final tracking error rate of 126%. This value seems abnormally high, but it is still informative, as described in [21]. For the present system, the combined algorithms yield a final tracking error rate of 114.6%. The accuracy of eye gaze reading has three main components, normal fixations, regressions, and skipped fixations, and the accuracy achieved is reflected in the tracking error rate obtained. The three main sources of error are calibration drift, the quality of the setup, and the physical characteristics of the eyes. For the tests, the graphical interface displayed text with an average character height of 24 pixels. With the subject seated approximately 60 centimeters from the tracker, a character on the screen subtends an angle of about ±0.88°, which is just above the ±0.5° threshold imposed by the physical characteristics of the eyeball for tracking reading advancement with eye gaze.
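As a rough illustration of the visual-angle figure quoted above, the sketch below recomputes it. The pixel pitch of 0.384 mm is an assumed value (the abstract does not state the monitor's resolution or physical size); it is chosen so that a 24-pixel character viewed from about 60 cm subtends roughly 0.88°, which can then be compared against the ±0.5° limit mentioned in the abstract.

```python
import math

# Minimal sketch, not the authors' implementation.
# Assumption: pixel pitch of 0.384 mm per pixel (not stated in the paper).
def char_visual_angle_deg(char_height_px: float,
                          pixel_pitch_mm: float,
                          viewing_distance_mm: float) -> float:
    """Full visual angle subtended by a character of the given pixel height."""
    char_height_mm = char_height_px * pixel_pitch_mm
    return math.degrees(2 * math.atan(char_height_mm / (2 * viewing_distance_mm)))

angle = char_visual_angle_deg(char_height_px=24,
                              pixel_pitch_mm=0.384,      # assumed value
                              viewing_distance_mm=600.0)  # ~60 cm from the tracker
print(f"character subtends ~{angle:.2f} degrees "
      f"(abstract reports about 0.88 deg vs. a 0.5 deg eyeball limit)")
```

Under this assumption the computed angle matches the reported figure; a different pixel pitch would shift the result proportionally, which is why the setup quality is listed among the main error sources.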

References

  1. D. Beymer and D. M. Russell, "WebGazeAnalyzer: A system for capturing and analyzing web reading behavior using eye gaze," in CHI '05 Extended Abstracts on Human Factors in Computing Systems, pp. 1913-1916, 2005.
  2. R. A. Bolt, "Eyes at the interface," in Proceedings of the 1982 Conference on Human Factors in Computing Systems, pp. 360-362, 1982.
  3. H. F. Chua, J. E. Boland, and R. E. Nisbett, "Cultural variation in eye movements during scene perception," Proceedings of the National Academy of Sciences (PNAS), vol. 102, no. 35, pp. 12629-12633, 2005.
  4. Q. Ji and Z. Zhu, "Eye and gaze tracking for interactive graphic display," in Proc. Second Int. Symp. Smart Graphics, pp. 79-85, 2002.
  5. K. Krafka, A. Khosla, P. Kellnhofer, H. Kannan, S. Bhandarkar, and W. Matusik, "Eye tracking for everyone," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 2176-2184, Jun. 2016.
  6. Q. He, X. Hong, X. Chai, J. Holappa, G. Zhao, X. Chen, and M. Pietikäinen, "OMEG: Oulu multi-pose eye gaze dataset," in Proc. Image Anal., pp. 418-427, 2015.
  7. Q. Huang, A. Veeraraghavan, and A. Sabharwal, "TabletGaze: Dataset and analysis for unconstrained appearance-based gaze estimation in mobile tablets," Mach. Vis. Appl., vol. 28, no. 5, pp. 445-461, 2017.
  8. K. A. Funes Mora, F. Monay, and J.-M. Odobez, "EYEDIAP: A database for the development and evaluation of gaze estimation algorithms from RGB and RGB-D cameras," in Proc. ACM Symp. Eye Tracking Res., pp. 255-258, 2014.
  9. K. A. Funes Mora and J.-M. Odobez, "Person independent 3D gaze estimation from remote RGB-D cameras," in Proc. IEEE Int. Conf. Image Process., pp. 2787-2791, 2013.
  10. T. Schneider, B. Schauerte, and R. Stiefelhagen, "Manifold alignment for person independent appearance-based gaze estimation," in Proc. Int. Conf. Pattern Recognit., pp. 1167-1172, 2014.
  11. E. Wood, T. Baltrusaitis, L.-P. Morency, P. Robinson, and A. Bulling, "Learning an appearance-based gaze estimator from one million synthesised images," in Proc. ACM Symp. Eye Tracking Res., pp. 131-138, 2016.
  12. E. Wood, T. Baltrusaitis, X. Zhang, Y. Sugano, P. Robinson, and A. Bulling, "Rendering of eyes for eye-shape registration and gaze estimation," in Proc. IEEE Int. Conf. Comput. Vis., pp. 3756-3764, 2015.
  13. A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb, "Learning from simulated and unsupervised images through adversarial training," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 2242-2251, 2017.
  14. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 770-778, Jun. 2016.
  15. S. Wyder and P. C. Cattin, "Eye tracker accuracy: Quantitative evaluation of the invisible eye center location," International Journal of Computer Assisted Radiology and Surgery, vol. 13, pp. 1651-1660, 2017.
  16. A. Plopski, J. Orlosky, Y. Itoh, C. Nitschke, K. Kiyokawa, and G. Klinker, "Automated spatial calibration of HMD systems with unconstrained eye-cameras," in Proc. Int. Symp. Mixed Augmented Reality, pp. 94-99, 2016.
  17. Y. Zhang, Z. Qiu, T. Yao, D. Liu, and T. Mei, "Fully convolutional adaptation networks for semantic segmentation," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 6810-6818, 2018.
  18. Y. Sugano, Y. Matsushita, and Y. Sato, "Learning-by-synthesis for appearance-based 3D gaze estimation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 1821-1828, 2014.
  19. X. Zhang, Y. Sugano, M. Fritz, and A. Bulling, "MPIIGaze: Real-world dataset and deep appearance-based gaze estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, pp. 162-175, 2017.
  20. R. Valenti and T. Gevers, "Accurate eye center location and tracking using isophote curvature," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 1-8, 2008.
  21. M. H. Rasmussen and Z.-H. Tan, "Fusing eye-gaze and speech recognition for tracking in an automatic reading tutor: A step in the right direction?," in Proc. SLaTE, France, 2013.
  22. T. Joda, G. O. Gallucci, D. Wismeijer, and N. U. Zitzmann, "Augmented and virtual reality in dental medicine: A systematic review," Computers in Biology and Medicine, vol. 108, pp. 93-100, May 2019.
  23. L. Jensen and F. Konradsen, "A review of the use of virtual reality head-mounted displays in education and training," Education and Information Technologies, vol. 23, pp. 1515-1529, 2017.
  24. K. Fujii, G. Gras, A. Salerno, and G. Yang, "Gaze gesture based human robot interaction for laparoscopic surgery," Medical Image Analysis, vol. 44, pp. 196-214, 2018.