Robust Features for Facial Expression Recognition (FER) Systems

Sukrit Jaidee

Thailand Mental Health Technology and Innovation Center (MH), Mahidol University, Thailand

The AI-Avatar system helps psychologists assess a participant's emotions. It predicts emotions from a number of inputs, including action units, facial keypoints, body movement, eye movement, and voice, each extracted with a dedicated machine learning model. Action units are used as features because the movements of the facial muscles correspond to the emotions expressed; we apply a machine learning model to detect them, which allows us to determine the participants' emotional expressions. Facial expression analysis is one of the few techniques available for assessing emotions in real time. Movement of the upper body in a forward or backward inclination is another feature that corresponds to the emotions expressed by the participants; to measure this movement, we use pose estimation models. Eye rolling likewise corresponds to the emotions expressed, so we detect it to help assess the participants' moods. The AI-Avatar system also applies an acoustic model to the participants' voices. All of the aforementioned features are then combined to predict the participants' moods as accurately as possible.
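A minimal sketch of the visual part of such a pipeline is shown below, assuming MediaPipe's face mesh and pose models for the facial, iris, and body keypoints. The feature layout and the fusion step are illustrative assumptions for this sketch, not the actual AI-Avatar implementation, and the action-unit and acoustic vectors are hypothetical placeholders.

    # Illustrative sketch only: extracts facial, iris, and upper-body keypoints
    # per frame with MediaPipe, then fuses them with other modality vectors.
    import cv2
    import mediapipe as mp
    import numpy as np

    face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)  # 468 face + 10 iris points
    pose_model = mp.solutions.pose.Pose()                               # body keypoints

    def extract_visual_features(frame_bgr):
        """Return a flat vector of facial, iris, and body keypoints for one frame."""
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        face = face_mesh.process(rgb)
        pose = pose_model.process(rgb)
        feats = []
        if face.multi_face_landmarks:
            # Face-mesh landmarks; with refine_landmarks=True the iris points are included
            feats += [c for lm in face.multi_face_landmarks[0].landmark
                        for c in (lm.x, lm.y, lm.z)]
        if pose.pose_landmarks:
            # Shoulder and torso landmarks capture forward/backward inclination
            feats += [c for lm in pose.pose_landmarks.landmark
                        for c in (lm.x, lm.y, lm.z)]
        return np.array(feats, dtype=np.float32)

    # Fusion step (hypothetical): concatenate per-modality vectors and feed them
    # to a single classifier; au_vector and voice_vector would come from separate
    # action-unit and acoustic models not shown here.
    # fused = np.concatenate([extract_visual_features(frame), au_vector, voice_vector])
    # emotion = classifier.predict(fused[None, :])

Under these assumptions, each modality contributes its own feature vector, and a single downstream classifier operates on the concatenation, which matches the late-fusion design the abstract describes.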

I have been working in image recognition since the end of the Super AI Engineer Season 1 project, where I was part of the Optimizer team. I then joined MU's AI-Care Nonverbal group while preparing for my Ph.D. I have worked on many parts of Facial Expression Recognition (FER) systems, such as robust features, action unit detection, emotion recognition, pose estimation, iris landmark detection, and facial landmark detection. I also participated in the HealthCam startup under the National Innovation Agency (Public Organization), where I worked on action recognition and stroke detection using deep learning. Currently, I am part of the AI-Care Nonverbal group at Mahidol University, where I continue my work in facial expression recognition. I am also part of the EGAT Proventure team, where I apply machine learning and image recognition techniques to provide technologies for business. Lastly, I am part of the EGAT Data Management Group, working on Business Insights and Analytics.

Organization