The goal of the MPLab is to develop systems that perceive and interact with humans in real time using natural communication channels. To this end, we are developing perceptual primitives to detect and track human faces and to recognize facial expressions. We are also developing algorithms for robots that develop and learn to interact with people on their own. Applications include personal robots, perceptive tutoring systems, and systems for clinical assessment, monitoring, and intervention.

  • Introduction to the MPLab (PDF)
  • MPLAB 5 Year Progress Report (PDF)

  • NEWS


    MIT Media Laboratory: The Human Speechome Project

    Stepping into a Child’s Shoes 

    http://www.apple.com/science/profiles/mit/ 

    Yes, it’s an article by Apple meant to highlight the use of Macs in science, but it’s interesting all the same.

    Monday April 21, 2008

    Morning (10am)
    - Preparing and turning RUBI on for testing
    - No malfunctions in RUBI

    Afternoon (2pm – 4:30 pm)
    - Introducing RUBI to children
    - Children play and interact with RUBI
    - No malfunctions in RUBI
    - Videotaping OK
    - RUBI Log given to teachers

    Tuesday April 22, 2008
    Morning (10 am)
    - Teacher-directed activities with RUBI
    - Children play with RUBI approx. 30 min.
    - No malfunctions in RUBI
    - Videotaping OK

    Afternoon (3 pm-5 pm)
    - Child-centered playing with RUBI
    - No malfunctions in RUBI
    - Videotaping OK
    - Video clips available

    Wednesday April 23, 2008
    Morning (9:30 am – 10:15am)
    - Teacher-directed activities with RUBI
    - Children play with RUBI approx. 45 min.
    - No malfunctions in RUBI
    - Videotaping OK

    Afternoon
    N/A: RUBI at UCSD for NSF Site visit



    YouTube video on our research using automatic facial expression recognition for intelligent tutoring systems:

    An article from the New York Times:

    http://www.nytimes.com/2008/04/08/science/08tier.html?_r=2&oref=slogin&oref=slogin

    Please refer to the article if my explanation is unclear.

    The basic idea of the article is that the “cognitive dissonance” methodology is flawed in that it ignores the Monty Hall effect, calling people “irrational” when they are really exhibiting Monty-Hall-congruent preferences.

    Let’s say you have three choices that you should value equally. Presented with a choice between red and blue, you choose red arbitrarily. Now I give you a choice between blue and green, and you choose green about 2/3 of the time. Assuming that people follow a probability matching strategy (and there are realistic conditions under which it is an ideal strategy), this means you think green has a 2/3 chance of being better than blue, whereas before you thought each was equally likely to be better than the other. Psychologists call this “rationalizing your initial (arbitrary) rejection.”

    In contrast, the new argument goes: “If blue was worse than red, there’s a 2/3 chance that it’s also worse than green, because green could be better than both red and blue, or worse than red but better than blue. In only 1/3 of outcomes is blue worse than red but better than green.” This can be seen as a variant of the Monty Hall problem, where the odds change conditioned on the first choice.
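
    To make the counting concrete, here is a minimal sketch (mine, not from the article or Dr. Chen) that enumerates the six equally likely true rankings of the three options and conditions on the first observation:

    # Enumerate the 6 equally likely true rankings (best to worst) and
    # condition on "blue was worse than red", as in the argument above.
    from itertools import permutations

    orderings = list(permutations(["red", "green", "blue"]))

    # Keep only rankings consistent with red beating blue.
    consistent = [o for o in orderings if o.index("red") < o.index("blue")]

    # In how many of those is green better than blue?
    green_better = [o for o in consistent if o.index("green") < o.index("blue")]

    print(f"{len(green_better)}/{len(consistent)}")  # -> 2/3

    Under that reading of the first choice, the conditional probability is indeed 2/3.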

    I am not sure whether to be convinced by this new argument, much as I enjoy seeing savvy statistical methods evangelized. It seems to me that if you carried around full Bayesian posteriors, the effect would vanish.

    Dr. Chen’s analysis as it’s presented in the New York Times article seems flawed in a way that unfortunately too many things are flawed: it takes the mode of a probability distribution to be representative of the distribution itself. That is, it assumes that the probability at the mode is 1 and is 0 everywhere else. In the literature this is called a “maximum a posteriori” (MAP) strategy.

    If people were proper statisticians, they would carry around with them a notion of uncertainty. They would say to themselves on picking red over blue, “I think red is better, but I’m completely unsure about that.” Then when faced with the choice of blue and green they would say, “I thought red was better than blue, but I was completely unsure about it, so maybe blue really is better than red.” If red was better than blue, then 2/3 of the time green will be better than blue. If blue was better than red, then 2/3 of the time green will be better than red. Weighing these outcomes by their uncertainty (50/50) gives that green is just as likely as not to be better than blue, not 2/3 as Dr. Chen suggests.
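
    A quick check of that weighting, as a sketch under the same assumption: the arbitrary first pick carries no information, so both worlds keep their prior weight of 1/2.

    # Weight the two worlds ("red > blue" and "blue > red") by the chooser's
    # stated uncertainty (50/50 each) and compute P(green beats blue).
    from itertools import permutations

    orderings = list(permutations(["red", "green", "blue"]))

    def p_green_beats_blue(condition):
        # P(green > blue) among the rankings satisfying `condition`.
        worlds = [o for o in orderings if condition(o)]
        return sum(o.index("green") < o.index("blue") for o in worlds) / len(worlds)

    p_if_red_better = p_green_beats_blue(lambda o: o.index("red") < o.index("blue"))   # 2/3
    p_if_blue_better = p_green_beats_blue(lambda o: o.index("blue") < o.index("red"))  # 1/3

    print(0.5 * p_if_red_better + 0.5 * p_if_blue_better)  # -> 0.5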

    [The math is the same for preferences, so Dr. Chen’s argument for “slight preferences” is not sufficient.]

    Dr. Chen’s argument depends on people *forgetting* information, specifically about the certainty of their previous choices. While this is a good candidate explanation for what’s going on, it’s not evidence for people doing “the right thing.”

    You can verify this yourself in simulations. Randomly assign red, green, and blue the values 1, 2, and 3. Have the simulation ask you whether red is better than blue, and pick whichever you like. Then have the simulation give you a choice between the other two. 50% of the time, you will be wrong. This is in stark contrast to the real Monty Hall problem, where in simulations you will find yourself happily winning 67% of the time.
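
    Here is one non-interactive version of that simulation (a sketch: the human chooser is replaced by a coin flip, which is the point, since the arbitrary pick reveals nothing about the hidden values), with the real Monty Hall game for contrast:

    # Non-interactive version of the simulation described above. The human's
    # arbitrary first pick is modeled as a coin flip; the second choice is
    # between the rejected color and green, and we always take green.
    import random

    TRIALS = 100_000

    wins = 0
    for _ in range(TRIALS):
        values = dict(zip(["red", "green", "blue"], random.sample([1, 2, 3], 3)))
        first_pick = random.choice(["red", "blue"])    # arbitrary; reveals nothing
        rejected = "blue" if first_pick == "red" else "red"
        if values["green"] > values[rejected]:         # take green over the reject
            wins += 1
    print("Green beats the rejected color:", wins / TRIALS)  # ~0.50

    # The real Monty Hall game, by contrast: the host's reveal is new
    # information conditioned on your pick, and switching wins ~2/3.
    mh_wins = 0
    for _ in range(TRIALS):
        doors = [1, 0, 0]                              # one car, two goats
        random.shuffle(doors)
        pick = random.randrange(3)
        opened = next(d for d in range(3) if d != pick and doors[d] == 0)
        switched = next(d for d in range(3) if d not in (pick, opened))
        mh_wins += doors[switched]
    print("Monty Hall, always switching:", mh_wins / TRIALS)  # ~0.67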

    Critically, the Monty Hall problem relies on the revelation of new information (“The car is not behind door number 2”) in the context of the previous choice. The choice itself is not sufficient, and gives no new information. If Monty Hall didn’t show you the goat, you’d be at 50/50. Ask yourself, in this case of cognitive dissonance, what new information is being revealed.

    Note that if, after you chose red over blue, I told you “You were right, red was better than blue,” then the probability of green being better than blue goes up to 2/3. This “You were right” information, conditioned on your previous choice, is the key to making this a Monty Hall style problem. Since people get no such feedback, they don’t know whether they were right or wrong to choose red, and so the analysis shouldn’t apply.
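
    And if we bolt that feedback onto the simulation above, keeping only the trials where “You were right” would have been said, the 2/3 figure reappears (again a sketch of the variant described here, not Dr. Chen’s code):

    # Feedback variant: keep only trials where "you were right, red really was
    # better than blue" would have been said, then ask how often green beats blue.
    import random

    TRIALS = 100_000
    confirmed = 0
    green_wins = 0
    for _ in range(TRIALS):
        values = dict(zip(["red", "green", "blue"], random.sample([1, 2, 3], 3)))
        if values["red"] > values["blue"]:   # feedback: the pick of red was right
            confirmed += 1
            green_wins += values["green"] > values["blue"]
    print(green_wins / confirmed)  # ~0.67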

    A company doing smile detection and intensity rating. Link to the article: http://hosted.ap.org/dynamic/stories/J/JAPAN_SMILE_MEASURE?SITE=WIRE&SECTION=HOME&TEMPLATE=DEFAULT

    Slashdot links to an article here: http://anand.typepad.com/datawocky/2008/03/more-data-usual.html that is relevant to us, and probably good general advice. The idea is, roughly, that having more potentially helpful features is better than having a better algorithm. The article is about how augmenting the Netflix dataset with simple information collected from IMDB leads to top-tier performance with very simple algorithms.

    Workshop illustrating the use of robots in education

