The goal of the MPLab is to develop systems that perceive and interact with humans in real time using natural communication channels. To this end we are developing perceptual primitives to detect and track human faces and to recognize facial expressions. We are also developing algorithms for robots that develop and learn to interact with people on their own. Applications include personal robots, perceptive tutoring systems, and systems for clinical assessment, monitoring, and intervention.

  • Introduction to the MPLab (PDF)
  • MPLAB 5 Year Progress Report (PDF)

  • NEWS

Francisco Lacerda, a professor of phonetics at Stockholm University, is one of two scientists threatened with legal action after the publication of a scientific article condemning the use of lie detectors. The Israeli company Nemesysco, which manufactures the detectors, has written in a letter to the researchers’ publishers that the researchers may be sued for libel if they continue to write on this subject in the future.


    Toyota has patented a driver drowsiness detector. It appears to be based on analysis of the driving itself, not on computer-vision analysis of the driver.

    I have been reading the overview of the Willow Garage Robot Operating System. If it lives up to its promise, it sounds almost perfect for our needs, and we may want to consider trying it for our latest projects.

    It has the following advantages:

    • Cross platform
    • Cross language (currently C++/Python, Java coming “soon”)
    • Publish/Subscribe architecture
    • Nodes talk directly to each other
    • Multiple data passing methods (tcp/udp/shared memory) as appropriate
    • Automatic bookkeeping of all publishers/subscribers
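    The publish/subscribe architecture listed above can be sketched in a few lines. This is a minimal single-process illustration of the pattern, not the ROS API: real ROS nodes register with a master and exchange messages over TCP/UDP, and all names here (`Broker`, `subscribe`, `publish`) are hypothetical.

```python
# Minimal sketch of a publish/subscribe message broker. Publishers and
# subscribers are decoupled: they only share a topic name, never a direct
# reference to each other. (Illustration only -- not the ROS API.)

from collections import defaultdict
from typing import Any, Callable, DefaultDict, List


class Broker:
    """Routes messages from publishers to subscribers by topic name."""

    def __init__(self) -> None:
        # Bookkeeping of all subscribers per topic, loosely analogous
        # to the automatic registration ROS performs for its nodes.
        self._subscribers: DefaultDict[str, List[Callable[[Any], None]]] = \
            defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        """Register a callback to be invoked for every message on `topic`."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        """Deliver `message` to every callback registered on `topic`."""
        for callback in self._subscribers[topic]:
            callback(message)


broker = Broker()
received: List[str] = []

# A "face detector" node subscribes to camera frames...
broker.subscribe("camera/frames", received.append)

# ...and a "camera" node publishes them without knowing who is listening.
broker.publish("camera/frames", "frame-0001")

print(received)  # ['frame-0001']
```

    In ROS the same decoupling is what allows nodes to be written in different languages and swapped out independently, since publishers never hold references to their subscribers.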

    An overview is available:

    Sixth Canadian Conference on Computer and Robot Vision (CRV’09)

    Kelowna, British Columbia, Canada
    May 25-27, 2009

    CRV’09 will be held jointly with the Graphics Interface 2009 (GI) and
    the Artificial Intelligence 2009 (AI) conferences; a single
    registration will permit attendees to attend any talk in the three
    conferences (CRV, GI, AI), which will be scheduled in parallel tracks.

    CRV seeks contributions of complete, original research papers on any
    aspect of computer vision, robot vision, robotics, medical imaging,
    image processing or pattern recognition. CRV provides an excellent
    environment for interdisciplinary interaction as well as for
    networking of students and scientists in computer vision, robotic
    vision, robotics, image understanding and pattern recognition. In
    addition to the regular sessions, there will be three invited
    speakers. Four paper awards will be presented: one for the best
    overall paper, one for the best paper with a student as first author,
    and area awards for the best paper in vision and robotics.

    For more detailed information, please consult the CFP at the following

    Paper Submission Deadline 30 January 2009
    Acceptance/Rejection notification 20 February 2009
    Revised camera-ready papers due 6 March 2009

    Program Co-Chairs
    Frank Ferrie, McGill University
    Mark Fiala, Ryerson University

    More information about the conference can be found at the main site

    Here (MPLab blog) is a video of the technology used to superimpose the yellow line in American football games.

    Ecamm Network has started producing wireless webcams that communicate with your computer via Bluetooth. Resolution is 640×480, battery life is 4 hours, size is 2″×2.5″×0.625″, and the price is $150.

    In the future, I expect these technologies to improve across all dimensions. We may want to keep an eye on them and decide when they become useful for us.

    A new household robot created in Japan is capable of rinsing the dishes in the sink before neatly lining them up in the dishwasher and pressing the start button for the washing cycle.

    The multi-jointed robot arm, created by scientists at the University of Tokyo with the electronics company Panasonic, is one of a series of prototype devices designed to perform household chores.

    Fitted with 18 delicate sensors, the kitchen assistant robot (KAR) is able to grasp delicate china and cutlery in a palm-like device without dropping or breaking them.

    Using its internal camera along with the sensors, the robot is able to determine the shapes and sizes of dirty dishes and utensils placed in the sink before picking them up and loading them in the dishwasher.
