Vol. 14 No. 1 (2018)

Published: June 30, 2018

Pages: 22-29

Original Article

The Effect of Using Projective Cameras on View-Independent Gait Recognition Performance

Abstract

Gait as a biometric can be used to identify subjects at a distance, and it therefore receives great attention from the research community for security and surveillance applications. One of the challenges that affects gait recognition performance is view variation. Much work has been done to tackle this challenge; however, the majority of it assumes that gait silhouettes are captured by affine cameras, where only the height of the silhouettes changes and the difference in viewing angle across one gait cycle is relatively small. In this paper, we analyze the variation in gait recognition performance when using silhouettes from projective cameras and from affine cameras at different distances from the center of the walking path. This is done by using 3D models of walking people in the gallery set and 2D gait silhouettes from independent (single) cameras in the probe set. Different factors that affect matching 3D human models with 2D gait silhouettes from single cameras for view-independent gait recognition are analyzed. In all experiments, we use 258 multi-view sequences belonging to 46 subjects from the Multi-View Soton gait dataset. We evaluate the matching performance for 12 different views using the Gait Energy Image (GEI) as the gait feature. We then analyze the effect of using different camera configurations for 3D model reconstruction, GEIs from cameras with different settings, the upper and lower body parts for recognition, and different GEI resolutions. The results show that low recognition performance is achieved when using gait silhouettes from affine cameras, while even lower recognition performance is obtained when using gait silhouettes from projective cameras.
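As background for the feature named in the abstract: a Gait Energy Image is the per-pixel average of the aligned, size-normalized binary silhouettes over one gait cycle, so brighter pixels mark body regions that stay static across the cycle and grey pixels mark moving limbs. The following is a minimal sketch, assuming the silhouettes have already been segmented, centred, and scaled to a common size; the function names, NumPy usage, and the block-averaging resize are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def gait_energy_image(silhouettes):
        """Per-pixel average of aligned, size-normalized binary silhouettes
        (each an H x W array with values in {0, 1}) over one gait cycle."""
        frames = np.stack([s.astype(np.float32) for s in silhouettes], axis=0)
        return frames.mean(axis=0)  # grey-level GEI with values in [0, 1]

    def downsample_gei(gei, factor):
        """Reduce GEI resolution by block-averaging (any image-resizing
        routine would do); H and W must be divisible by factor."""
        h, w = gei.shape
        return gei.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

For example, calling gait_energy_image on a list of 64 x 64 silhouette frames from one cycle yields a 64 x 64 GEI, and downsample_gei(gei, 2) produces a 32 x 32 version, which is the kind of resolution change whose effect on recognition the experiments examine.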
