The goal of the MPLab is to develop systems that perceive and interact with humans in real time using natural communication channels. To this end, we are developing perceptual primitives to detect and track human faces and to recognize facial expressions. We are also developing algorithms that enable robots to learn on their own how to interact with people. Applications include personal robots, perceptive tutoring systems, and systems for clinical assessment, monitoring, and intervention.

  • Introduction to the MPLab (PDF)
  • MPLAB 5 Year Progress Report (PDF)

  • NEWS


    MPLAB in The New York Times, Oct 15, 2013:
    http://www.nytimes.com/2012/09/18/science/a-robot-with-a-delicate-touch.html?_r=3

    CERT (the Computer Expression Recognition Toolbox) featured on CNN. Full story here:
    http://www.cnn.com/2012/04/01/health/mental-health/autism-asperger-diagnoses-profile/index.html?hpt=hp_c1

    A YouTube video of the Reach For Tomorrow activity organized by the MPLab:
    http://www.youtube.com/watch?v=OvNM2D6-ptA&feature=share

    An article describing how Microsoft Research Cambridge was the main force behind the development of the Kinect software:
    http://bit.ly/h4SN9O

    A new paper shows that the brain predicts the consequences of future eye movements:
    http://www.sciencedaily.com/releases/2011/01/110110103737.htm

    An interesting article that discusses Rosalind Picard's company, among other things:
    http://nyti.ms/fVQY00

    An interesting talk by Hugo Larochelle and Geoffrey Hinton on combining foveal views and training a controller to learn where to look.


    Follow the MPLab on Twitter.