Page 38 - IJEEE-2022-Vol18-ISSUE-1

Shakir & Al-Azza

FACS, it lacks the direct correspondence between animation parameters and face muscles. FACS offers a very consistent description of the upper portion of the face, but not of the lower portion, which prevents FACS from becoming the dominant method in the face animation area. MPEG-4 describes 66 low-level FAPs and two high-level FAPs. The low-level FAPs are based on the study of minimal face actions and are closely related to muscle actions. They denote a complete set of basic face actions and therefore permit the representation of most natural face expressions. FACS describes face expressions clearly by combining action units that are based on the facial muscles.
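As an illustration, the combination of action units into expressions described above can be treated as a simple lookup. The AU combinations shown (e.g., AU6 + AU12 for happiness) follow the standard FACS coding; the dictionary and function names are only an illustrative sketch, not part of any surveyed system:

```python
# Minimal sketch: FACS expressions as combinations of action units (AUs).
# AU numbers follow the standard FACS coding (AU6 = cheek raiser,
# AU12 = lip corner puller, etc.); the tables here are illustrative only.
FACS_AU_NAMES = {
    1: "inner brow raiser",
    4: "brow lowerer",
    6: "cheek raiser",
    12: "lip corner puller",
    15: "lip corner depressor",
}

EXPRESSIONS = {
    "happiness": {6, 12},     # AU6 + AU12
    "sadness": {1, 4, 15},    # AU1 + AU4 + AU15
}

def describe(expression: str) -> list:
    """Expand an expression into the names of its constituent AUs."""
    return sorted(FACS_AU_NAMES[au] for au in EXPRESSIONS[expression])

print(describe("happiness"))  # prints ['cheek raiser', 'lip corner puller']
```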
                                                                  represent the human face,” SIGGRAPH’89 Course
Machine learning approaches to face animation solve many of the problems found in traditional methods. If data is available, precise models can be trained to obtain high-quality face animations. However, machine learning needs an enormous amount of data to train precise models, and this type of data is not easily obtainable, or is non-existent, because the approach is new and not yet commonly used. Without sufficient data, models can be imprecise and produce results that fall into the uncanny valley.
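To make the data-driven idea concrete, a trained model can be as simple as a least-squares mapping from input features (e.g., tracked face measurements) to animation parameters. This is only a minimal sketch with synthetic data, not a method from the surveyed papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 100 examples with 8 input features
# (e.g., tracked measurements) and 4 animation parameters.
X = rng.normal(size=(100, 8))
W_true = rng.normal(size=(8, 4))   # hidden ground-truth mapping
Y = X @ W_true                     # target animation parameters

# "Training": fit a linear model by least squares.
W_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)

# With enough clean data, the learned mapping matches the true one.
print(np.allclose(W_fit, W_true))  # prints True
```

With scarce or noisy data the recovered mapping degrades, which is exactly the data-availability problem the paragraph above describes.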

IV. CONCLUSION

Developing a facial animation system involves determining relevant geometric descriptions to represent the face model. The structured facial model should be capable of supporting the animation.
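One common geometric description of this kind is a linear blendshape model: a neutral mesh plus weighted per-vertex offsets for each target expression. A minimal sketch, with made-up vertex data:

```python
import numpy as np

# Neutral face mesh: N vertices, 3 coordinates each (toy data, N = 4).
neutral = np.zeros((4, 3))

# Blendshape deltas: per-vertex offsets toward each target expression.
deltas = {
    "smile": np.array([[0.0, 0.1, 0.0]] * 4),
    "jaw_open": np.array([[0.0, -0.2, 0.0]] * 4),
}

def blend(weights: dict) -> np.ndarray:
    """Linear blendshape model: vertices = neutral + sum_i w_i * delta_i."""
    out = neutral.copy()
    for name, w in weights.items():
        out += w * deltas[name]
    return out

posed = blend({"smile": 1.0})  # full smile pose
```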
                                                                  Translation from German by B. Letzler, vol. 1, 2011.
There are many different approaches to facial animation, and it can be hard to get started with producing facial animation technologies. Since every approach is so distinct, the amount of knowledge that transfers between methods is limited. Most studios have their own face animation pipelines, and the lack of universal standards means there is no common method across the industry. Even when seeking assistance for smaller projects, valuable information is difficult to come by because of the huge number of methods.
                                                                  movement,” Palo Alto, vol. 3, 1978.
In this survey, we discussed and reviewed several approaches used to drive facial animation, as well as state-of-the-art facial animation techniques. For each method, the main ideas and the strengths and weaknesses of the approach are described in detail.
                                                                  smiles? The effect of 5-httlpr on positive emotional
V. FUTURE DIRECTIONS

To considerably advance the field of face animation with machine learning, future work will be required to address the problems that make machine learning inaccessible to developers. Machine learning approaches to facial animation have produced favorable results; however, complications with model creation have prevented their widespread use. Building machine learning models requires a large amount of labeled data, and the data sets essential for training a model are hard and time-consuming to produce. To increase the use of machine learning in face animation, future work will be required to research and produce solutions that permit easier model production.
CONFLICT OF INTEREST

The authors have no relevant conflicts of interest regarding this article.