Sensorimotor learning of facial expressions:

A novel intervention for autism

 

Tanaka, Schultz, Winkielman, Schreibman, Movellan, Bartlett

 

This project addresses the learning of nonverbal behaviors essential for social functioning by using two complementary face-processing technologies developed by two TDLC laboratories: (1) the University of Victoria's Let's Face It! system, a training program shown to effectively improve the face-processing abilities of children with Autism Spectrum Disorders (ASD), and (2) the Computer Expression Recognition Toolbox (CERT), developed at UC San Diego for real-time analysis of facial expressions. The goal of this proposal is to merge these technologies to enable the study of perceptual and motor learning in the recognition and production of dynamic facial expressions. This collaboration emerged from the Temporal Dynamics of Learning Center at UCSD. One long-term goal of the collaboration is to develop and evaluate computer-assisted intervention systems that enhance the social communication skills of children with ASD in an engaging and cost-effective manner. By studying a population with impaired social functioning, we hope to better understand how the perception and production of facial expressions relate to social functioning. In this project we will integrate UCSD's CERT facial expression recognition system with Let's Face It! to design, develop, and pilot tasks that exploit the new forms of interaction afforded by real-time expression recognition. These tasks will enable studies of facial expression learning and dynamics in the typically developing population and will test the learning of facial expression production in children with ASD. These studies will help elucidate the connection between the perception and production systems for facial expression.
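To make the interaction concrete, the sketch below shows the kind of real-time loop such tasks rely on: capture video, detect the face, classify the target expression, and deliver a game reward only when the expression is sustained. This is not CERT or the Let's Face It! code; as a stand-in it uses OpenCV's stock Haar cascades for face and smile detection, and the hold-time threshold and reward message are illustrative assumptions.

# Minimal sketch of a "reward the held smile" loop, in the spirit of
# SmileMaze / Emotion Mirror. NOT the authors' CERT system: a stock
# OpenCV Haar cascade stands in for the expression classifier, and the
# hold threshold and reward text are illustrative assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

TARGET_HOLD_FRAMES = 15   # frames the child must hold the expression (assumed)
held = 0

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    smiling = False
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        # High minNeighbors keeps the smile detector conservative.
        smiles = smile_cascade.detectMultiScale(roi, 1.7, 22)
        smiling = len(smiles) > 0
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # Game logic: the reward fires only after the expression is sustained,
    # which is where the temporal dynamics of production enter.
    held = held + 1 if smiling else 0
    if held >= TARGET_HOLD_FRAMES:
        cv2.putText(frame, "Great smile!", (30, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 3)

    cv2.imshow("expression game (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

In the actual systems, CERT's frame-by-frame expression outputs would replace the Haar smile detector, which is what makes it possible to study the temporal dynamics of expression production rather than merely its presence or absence.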

 

Temporal Dynamics of Learning Center, NSF SBE 0542013

NIH Challenge Grant: Sensorimotor learning of facial expressions: A novel intervention for autism. 9/1/2009-8/31/2011. PI: Bartlett. Co-Is: Tanaka, Schultz, Winkielman, Schreibman, Movellan. Award announcement at NIH.

 

Gordon I, Pierce M, Bartlett M, and Tanaka J (2014). Training Facial Expression Production in Children on the Autism Spectrum. J Autism Dev Disord 44(10), p. 2486-98. Link to Journal.

Susskind J, Bartlett M, Neiman M, Wazny M, Tanaka J, Xu B (2014). Emotion detection: Through the looking glass. Demo, Vision Sciences Society.

Deriso D, Susskind J, Tanaka J, Winkielman P, Herrington J, Schultz R, and Bartlett M (2012). Exploring the Facial Expression Perception-Production Link Using Real-time Automated Facial Expression Recognition. In A Fitzgibbon, S Lazebnik, Y Sato, and C Schmid (Eds), Lecture Notes in Computer Science, Computer Vision – ECCV 2012, Vol 7584, Springer, p. 270-279. Download pdf

 

Deriso D, Susskind J, Krieger L, Bartlett MS (2012). Emotion Mirror: A novel intervention for autism based on real-time expression recognition. Demo, European Conference on Computer Vision. Lecture Notes in Computer Science, Computer Vision – ECCV 2012, Vol 7585, Springer, p. 671-674. Best Demo Award. Download pdf. Screen-capture videos of the demo: Demo_Part1.mov, Demo_Part2.mov, or Demo.m4v

 

Bartlett, M.S. (2010). Emotion simulation and expression understanding: A case for time. Invited commentary on Niedenthal, Mermillod, Maringer, and Hess, The Simulation of Smiles (SIMS) model: Embodied simulation and the meaning of facial expression. Behavioral and Brain Sciences 33(6), p. 435-436. Download pdf

 

Deriso, D., Susskind, J., Tanaka, J., Herrington, J., Schultz, R., & Bartlett, M. (2011). The Emotion Mirror: A Novel Intervention for Facial Expression Production and Perception. Demo, Vision Sciences Society Conference, Naples, FL, May.

 

Gordon, I., Tanaka, J., Pierce, M., & Bartlett, M. (2011). Facial Expression Production and Training. Journal of Vision, Proc. Vision Sciences Society, Naples, Florida. Download pdf

 

Gordon, I., Bartlett, M., & Tanaka, J. (2011). Face expression production and learning: Studies from both physiological and social perspectives. Talk, IEEE Conference on Automatic Face and Gesture Recognition, Workshop on the Psychology of Face and Gesture Recognition.

 

Tanaka, J., Bartlett, M., Movellan, J., Littlewort, G., and Lee-Cultura, S. (2010). Face-Face-Revolution: A game in real-time facial expression recognition. Demo, Vision Sciences Society Conference, Naples, FL, May. Abstract pdf

 

Cockburn, J., Bartlett, M., Tanaka, J., Movellan, J., Pierce, M., and Schultz, R. (2008). SmileMaze: A Tutoring System in Real-Time Facial Expression Perception and Production for Children with Autism Spectrum Disorder. Intl Conference on Automatic Face and Gesture Recognition, Workshop on Facial and Bodily expressions for Control and Adaptation of Games. Download pdf

 

Tanaka, J., Movellan, J., Bartlett, M., Cockburn, J., & Pierce, M. (2008). SmileMaze: A real-time training program in expression recognition. Demo, 8th Annual Meeting of the Vision Sciences Society, Naples, Florida, April 2008.