The goal of the MPLab is to develop systems that perceive and interact with humans in real time using natural communication channels. To this end we are developing perceptual primitives to detect and track human faces and to recognize facial expressions. We are also developing algorithms for robots that develop and learn to interact with people on their own. Applications include personal robots, perceptive tutoring systems, and systems for clinical assessment, monitoring, and intervention.

  • Introduction to the MPLab (PDF)
  • MPLAB 5 Year Progress Report (PDF)

  • NEWS

Nick@CVPR: Shankar Shivappa is Mohan’s student. He’s at Microsoft for the summer, but he’d be happy to talk to the MPLab about AV fusion in the fall.

    Nick@CVPR: Mohan Trivedi has students working on Audio/Visual Integration. We may want to talk to them about ideas for Einstein.

Nick@CVPR: Jeff Cohn from CMU/Pitt finds that some pain AUs are easier to find with shape (AAM) features, others with appearance (DCT) features.

Nick@CVPR: The Performance Evaluation of Tracking and Surveillance (PETS) Workshop began at Face & Gesture. Each year they publish a new dataset.

Nick@CVPR: Ta et al. used SURF features (similar to SIFT) to track feature points and do recognition on Nokia cell phones.

Nick@CVPR: Piotr Dollar compared 12 methods for pedestrian tracking. The fastest runs at about 0.1 FPS, so he recommends we try MIPOMDP there.

    A reading robot from Waseda University. Note the design reminiscent of RUBI3.

Nick@CVPR: Denzler’s students check whether two cameras are aligned using point-wise correspondence probabilities. Low entropy means aligned.
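The entropy criterion above can be illustrated with a minimal sketch (function name and example distributions are hypothetical, not from Denzler's work): when correspondences between the two views are confident, the probability mass is peaked and the entropy is low; when correspondences are ambiguous, the distribution is near-uniform and the entropy is high.

```python
import numpy as np

def correspondence_entropy(p):
    """Shannon entropy (in nats) of a point-wise correspondence distribution."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()      # normalize to a probability distribution
    nz = p[p > 0]        # treat 0 * log(0) as 0
    return float(-(nz * np.log(nz)).sum())

# Peaked distribution: one strong candidate match (cameras likely aligned)
peaked = [0.94, 0.02, 0.02, 0.02]
# Near-uniform distribution: ambiguous matches (cameras likely misaligned)
flat = [0.25, 0.25, 0.25, 0.25]

print(correspondence_entropy(peaked) < correspondence_entropy(flat))  # True
```

In practice one would aggregate such entropies over many candidate point correspondences between the two camera views and threshold the result.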
