
[16] R. Amini, C. Lisetti, and G. Ruiz, "HapFACS 3.0: FACS-based facial expression generator for 3D speaking virtual characters," IEEE Transactions on Affective Computing, vol. 6, no. 4, pp. 348–360, 2015.

[17] R. Ekman, What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). Oxford University Press, USA, 1997.

[18] D. Kumar and D. Sharma, "Enhanced Waters 2D muscle model for facial expression generation," in VISIGRAPP (1: GRAPP), 2019, pp. 262–269.

[19] D. Kumar and J. Vanualailai, "Low bandwidth video streaming using FACS, facial expression and animation techniques," in VISIGRAPP (1: GRAPP), 2016, pp. 226–235.

[20] Y. Zhou and B. E. Shi, "Photorealistic facial expression synthesis by the conditional difference adversarial autoencoder," in 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 2017, pp. 370–376.

[21] A. Pumarola, A. Agudo, A. M. Martinez, A. Sanfeliu, and F. Moreno-Noguer, "GANimation: Anatomically-aware facial animation from a single image," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 818–833.

[22] K. Zhao, W.-S. Chu, and H. Zhang, "Deep region and multi-label learning for facial action unit detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3391–3399.

[23] Z. Liu, D. Liu, and Y. Wu, "Region based adversarial synthesis of facial action units," in International Conference on Multimedia Modeling. Springer, 2020, pp. 514–526.

[24] Z. Liu, J. Dong, C. Zhang, L. Wang, and J. Dang, "Relation modeling with graph convolutional networks for facial action unit detection," in International Conference on Multimedia Modeling. Springer, 2020, pp. 489–501.

[25] T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," arXiv preprint arXiv:1609.02907, 2016.

[26] A. Pakstas, R. Forchheimer, and I. S. Pandzic, MPEG-4 Facial Animation: The Standard, Implementation and Applications. John Wiley & Sons, Inc., 2003.

[27] A. El Rhalibi, C. Carter, S. Cooper, and M. Merabti, "Highly realistic MPEG-4 compliant facial animation with Charisma," in 2011 Proceedings of 20th International Conference on Computer Communications and Networks (ICCCN). IEEE, 2011, pp. 1–6.

[28] S. M. Platt and N. I. Badler, "Animating facial expressions," in Proceedings of the 8th Annual Conference on Computer Graphics and Interactive Techniques, 1981, pp. 245–252.

[29] K. Waters, "A muscle model for animating three-dimensional facial expression," ACM SIGGRAPH Computer Graphics, vol. 21, no. 4, pp. 17–24, 1987.

[30] D. Terzopoulos and K. Waters, "Physically-based facial modelling, analysis, and animation," The Journal of Visualization and Computer Animation, vol. 1, no. 2, pp. 73–80, 1990.

[31] E. Sifakis, I. Neverov, and R. Fedkiw, "Automatic determination of facial muscle activations from sparse motion capture marker data," in ACM SIGGRAPH 2005 Papers, 2005, pp. 417–425.

[32] M. Cong, M. Bao, J. L. E, K. S. Bhat, and R. Fedkiw, "Fully automatic generation of anatomical face simulation models," in Proceedings of the 14th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2015, pp. 175–183.

[33] A. E. Ichim, P. Kadleček, L. Kavan, and M. Pauly, "Phace: Physics-based face modeling and animation," ACM Transactions on Graphics (TOG), vol. 36, no. 4, pp. 1–14, 2017.

[34] W.-C. Ma, Y.-H. Wang, G. Fyffe, B.-Y. Chen, and P. Debevec, "A blendshape model that incorporates physical interaction," Computer Animation and Virtual Worlds, vol. 23, no. 3-4, pp. 235–243, 2012.

[35] Y. Kozlov, D. Bradley, M. Bächer, B. Thomaszewski, T. Beeler, and M. Gross, "Enriching facial blendshape rigs with physical simulation," in Computer Graphics Forum, vol. 36, no. 2. Wiley Online Library, 2017, pp. 75–84.

[36] ISO/IEC 14496-2:1999, Information Technology – Coding of Audio-Visual Objects – Part 2: Visual. ISO, Geneva, Switzerland, 2010.

[37] D. Bennett, "The faces of 'The Polar Express'," in ACM SIGGRAPH 2005 Courses.

[38] L. Williams, "Performance-driven facial animation," in ACM SIGGRAPH 2006 Courses.

[39] D. Bradley, W. Heidrich, T. Popa, and A. Sheffer, "High resolution passive facial performance capture," in ACM SIGGRAPH 2010 Papers, 2010, pp. 1–10.

[40] T. Beeler, F. Hahn, D. Bradley, B. Bickel, P. Beardsley, C. Gotsman, R. W. Sumner, and M. Gross, "High-quality passive facial performance capture using anchor frames," in ACM SIGGRAPH 2011 Papers, 2011, pp. 1–10.

[41] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681–685, 2001.

[42] J. R. Tena, F. De la Torre, and I. Matthews, "Interactive region-based linear 3D face models," in ACM SIGGRAPH 2011 Papers, 2011, pp. 1–10.

[43] C. Cao, D. Bradley, K. Zhou, and T. Beeler, "Real-time high-fidelity facial performance capture," ACM Transactions on Graphics (TOG), vol. 34, no. 4, pp. 1–9, 2015.

[44] E. Sifakis, A. Selle, A. Robinson-Mosher, and R. Fedkiw, "Simulating speech with a physics-based facial muscle model," in Proceedings of the 2006 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2006, pp. 261–270.