PhD Research

Assessment of Fun from the Analysis of Facial Expressions

Laboratory of Interactivity and Digital Entertainment Technology (LIDET)
Department of Computer Science of the Institute of Mathematics and Statistics (IME) of the University of São Paulo (USP)

Games are a very old human interest, and video games remain one of the most important forms of modern entertainment. The reason why games are so interesting is that they are fun. Due to its subjectivity and context dependency, however, fun cannot simply be designed. Designers employ best-known practices and techniques to plan for good experiences and then evaluate their design choices iteratively with users in order to improve the product. The evaluation of user experience in games has traditionally been performed by means of observation, questionnaires and interviews, but capturing performance and physiological data for quantitative and qualitative analysis has become very common. Purely quantitative analyses are known to be insufficient to properly characterize fun, and body sensors are still intrusive, increasing the risk of tampering with the experience. In that sense, the analysis of facial expressions seems a good alternative.

Facial expressions are important in human communication, carrying a great deal of contextual and emotional information. In the context of video games, the face is particularly accessible because players keep their eyes on the screen most of the time and hold the face close enough to reduce problems for Computer Vision algorithms. Also, most game-enabled devices already have embedded cameras that can be used to monitor facial expressions as games are played. This work presents a proposal on how to assess fun from a low-cost webcam with the purpose of supporting playtesting. The research intends to verify if (and how) the evaluation of fun can be aided by low-cost webcams. The solution currently under study is based on the detection and tracking of the player's face, followed by the classification of facial expressions in order to identify attention focus and shifts, as well as prototypical emotional peaks. The face tracking is being developed with the Viola-Jones, Active Appearance Model (AAM) and Pose from Orthography and Scaling with Iterations (POSIT) algorithms, and the emotion classifier with a Support Vector Machine (SVM) trained on feature vectors extracted from a set of Gabor filters.
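To make the feature-extraction stage concrete, the sketch below illustrates a minimal Gabor filter bank of the kind commonly used for facial-expression features. It is an illustrative, NumPy-only approximation, not the project's actual implementation: the function names (gabor_kernel, gabor_feature_vector), the bank parameters (kernel size, wavelength, four orientations) and the mean/standard-deviation summarization are assumptions chosen for brevity; a production pipeline would typically rely on OpenCV for both the Viola-Jones face detection and the Gabor filtering.

```python
import numpy as np

def gabor_kernel(ksize=31, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    # Real part of a Gabor filter: a cosine carrier under a Gaussian envelope.
    # Parameter values here are illustrative, not the project's actual settings.
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r ** 2 + (gamma * y_r) ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * x_r / lambd + psi)

def convolve_same(img, kern):
    # FFT-based convolution returning an output the same size as the input,
    # using only NumPy (a library such as OpenCV or SciPy would do this too).
    sh = (img.shape[0] + kern.shape[0] - 1, img.shape[1] + kern.shape[1] - 1)
    out = np.fft.irfft2(np.fft.rfft2(img, sh) * np.fft.rfft2(kern, sh), sh)
    kh, kw = kern.shape
    return out[kh // 2:kh // 2 + img.shape[0], kw // 2:kw // 2 + img.shape[1]]

def gabor_feature_vector(face, n_orientations=4):
    # Filter a grayscale face crop at several orientations and summarize each
    # response by its mean and standard deviation, yielding a compact
    # 2 * n_orientations feature vector suitable as SVM input.
    feats = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        resp = convolve_same(face, gabor_kernel(theta=theta))
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```

In the full pipeline described above, such vectors would be computed over the face region located by the Viola-Jones detector and fed to the SVM classifier of prototypical emotions.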