The goal of the MPLab is to develop systems that perceive and interact with humans in real time using natural communication channels. To this end we are developing perceptual primitives to detect and track human faces and to recognize facial expressions. We are also developing algorithms for robots that develop and learn to interact with people on their own. Applications include personal robots, perceptive tutoring systems, and systems for clinical assessment, monitoring, and intervention.

  • Introduction to the MPLab (PDF)
  • MPLAB 5 Year Progress Report (PDF)

  • NEWS


    Google has introduced an in-browser video chat feature for its internet-based mail client, GMail. It seems to work quite simply and effectively across platforms (I’ve used it between a Mac and a PC). It requires installation of browser plug-ins.

    We may want to consider this as a platform for having meetings with collaborators.

    More information here: http://mail.google.com/videochat

    We are beginning a section of the MPLab blog to maintain a knowledge base for the expertise we gain as we adapt our existing Machine Perception solutions to be more compatible with the OpenCV platform.
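    As a first entry for that knowledge base, here is a minimal sketch of one adaptation pattern, with the caveat that the in-house detector name ("mplab_detect") and its (cx, cy, size) output format are placeholders rather than our actual interface:

        # A sketch of one adaptation pattern (assumed, not our shipped API):
        # wrap a hypothetical in-house detector, "mplab_detect", so that it
        # consumes OpenCV images and returns OpenCV-style rectangles.
        import cv2
        import numpy as np

        def opencv_compatible_detect(bgr_image, mplab_detect):
            """Adapt an in-house face detector to OpenCV conventions."""
            # OpenCV delivers 8-bit BGR arrays; many in-house pipelines
            # expect normalized float grayscale.
            gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
            gray = gray.astype(np.float32) / 255.0
            # Hypothetical in-house call returning (cx, cy, size) hypotheses.
            faces = mplab_detect(gray)
            # Convert center/size triples to OpenCV-style (x, y, w, h) boxes.
            return [(int(cx - s / 2), int(cy - s / 2), int(s), int(s))
                    for (cx, cy, s) in faces]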

    ===========================================================================
    Call For Papers: Autonomous Robots – Special Issue on Robot Learning
    ===========================================================================
    Quick Facts
    =========
    Editors: Jan Peters, Max Planck Institute for Biological Cybernetics,
    Andrew Y. Ng, Stanford University
    Journal: Autonomous Robots
    Submission Deadline: November 8, 2008
    Author Notification: March 1, 2009
    Revised Manuscripts: June 1, 2009
    Approximate Publication Date: 4th Quarter, 2009

    Abstract
    ======
    Creating autonomous robots that can learn to act in unpredictable
    environments has been a long-standing goal of robotics, artificial
    intelligence, and the cognitive sciences. In contrast, current
    commercially available industrial and service robots mostly execute
    fixed tasks and exhibit little adaptability. To bridge this gap,
    machine learning offers a myriad of methods, some of which have
    already been applied with great success to robotics problems. Machine
    learning is also likely to play an increasingly important role in
    robotics as we take robots out of research labs and factories and
    into the unstructured environments inhabited by humans and into other
    natural environments.

    To carry out increasingly difficult and diverse sets of tasks, future
    robots will need to make proper use of perceptual stimuli such as
    vision, lidar, proprioceptive sensing and tactile feedback, and
    translate these into appropriate motor commands. In order to close
    this complex loop from perception to action, machine learning will be
    needed in various stages such as scene understanding, sensory-based
    action generation, high-level plan generation, and torque level motor
    control. Among the important problems hidden in these steps are
    robotic perception, perceptuo-action coupling, imitation learning,
    movement decomposition, probabilistic planning, motor primitive
    learning, reinforcement learning, model learning, motor control, and
    many others.

    Driven by high-profile competitions such as RoboCup and the DARPA
    Challenges, as well as the growing number of robot learning research
    programs funded by governments around the world (e.g., FP7-ICT, the
    euCognition initiative, DARPA Legged Locomotion and LAGR programs),
    interest in robot learning has reached an unprecedented high point.
    The interest in machine learning and statistics within robotics has
    increased substantially, and robot applications have also become
    important for motivating new algorithms and formalisms in the machine
    learning community.

    In this Autonomous Robots Special Issue on Robot Learning, we intend
    to outline recent successes in the application of domain-driven
    machine learning methods to robotics. Examples of topics of interest
    include, but are not limited to:
    • learning models of robots, tasks, or environments
    • learning deep hierarchies or levels of representations from sensor
    & motor representations to task abstractions
    • learning plans and control policies by imitation, apprenticeship,
    and reinforcement learning (see the sketch below)
    • finding low-dimensional embeddings of movement as implicit
    generative models
    • integrating learning with control architectures
    • methods for probabilistic inference from multi-modal sensory
    information (e.g., proprioceptive, tactile, vision)
    • structured spatio-temporal representations designed for robot
    learning
    • probabilistic inference in non-linear, non-Gaussian stochastic
    systems (e.g., for planning as well as for optimal or adaptive
    control)
    From several recent workshops, it has become apparent that there is a
    significant body of novel work on these topics. The special issue will
    focus only on high-quality articles based on sound theoretical
    development as well as evaluations on real robot systems.

    Time Line
    ========
    Submission Deadline: November 8, 2008
    Author Notification: March 1, 2009
    Revised Manuscripts: June 1, 2009
    Approximate Publication Date: 4th Quarter, 2009

    Editors
    ======
    Inquiries about this special issue should be sent to one of the editors
    listed below.

    Jan Peters (http://www.jan-peters.net/)
    Senior Research Scientist, Head of the Robot Learning Laboratory
    Department for Machine Learning and Empirical Inference, Max Planck
    Institute for Biological Cybernetics, Tuebingen, Germany

    Andrew Y. Ng (http://ai.stanford.edu/~ang/)
    Assistant Professor
    Department of Computer Science, Stanford University, Palo Alto, USA

    I just read a news item that Picasa is giving users the ability to use creative-commons licenses for their images. The original news story is here (note that Nick in the story’s comments is not me):

    http://lessig.org/blog/2008/09/picasa_web_albums_goes_cc.html

    Perhaps we should explore this as a potential future source of object recognition datasets?

    http://www.cc.gatech.edu/AAAI-SS09-LFH/Home.html

    Sharon Leal and Aldert Vrij, University of Portsmouth
    Published in the Journal of Nonverbal Behavior.

    Though I have not read the paper itself (UCSD does not subscribe to the online version), it received quite good media coverage (e.g., the Telegraph).

    They reported that blink rate changes markedly when people lie, rising to as much as eight times the normal rate just after the lie. They measured blink rate with electronic sensors attached around the eyes.
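    The same quantity could in principle be estimated contact-free from video. Below is a sketch using the eye-aspect-ratio heuristic on tracked eye landmarks; the six-point eye layout and the 0.2 blink threshold are assumptions, not the method used in the paper:

        # Estimate blinks per minute from a per-frame eye-aspect-ratio (EAR)
        # trace. Assumes six landmarks per eye, with the corners at indices
        # 0 and 3, the upper lid at 1-2, and the lower lid at 4-5.
        import numpy as np

        def eye_aspect_ratio(eye):
            """eye: (6, 2) landmark array; EAR drops sharply on eye closure."""
            v1 = np.linalg.norm(eye[1] - eye[5])
            v2 = np.linalg.norm(eye[2] - eye[4])
            h = np.linalg.norm(eye[0] - eye[3])
            return (v1 + v2) / (2.0 * h)

        def blink_rate(ear_series, fps, threshold=0.2):
            """Count closed-to-open transitions; return blinks per minute."""
            closed = np.asarray(ear_series) < threshold
            blinks = np.count_nonzero(closed[:-1] & ~closed[1:])
            return blinks * 60.0 * fps / len(ear_series)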

    BBC created a website where visitors can test their ability to discern genuine versus fake smiles in 20 videos. The test is based on Paul Ekman’s research.

    http://www.bbc.co.uk/science/humanbody/mind/surveys/smiles/index.shtml

    I got 18 out of 20 correct, but I was cheating, since I know the key AU that makes the difference.
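    (Spoiler ahead.) The classic cue in Ekman’s published work is the Duchenne marker: genuine smiles pair AU 12 (lip-corner puller) with AU 6 (cheek raiser). A toy rule over AU intensities might look like the following; the 0-5 intensity scale is an assumption:

        # Toy Duchenne-smile rule over FACS action-unit intensities (0-5
        # scale assumed). A real system would use learned AU detectors.
        def smile_label(aus):
            """aus: dict mapping AU number to intensity on a 0-5 scale."""
            if aus.get(12, 0) < 1:
                return "no smile"
            return "genuine (Duchenne)" if aus.get(6, 0) >= 1 else "posed"

        print(smile_label({12: 3, 6: 2}))  # genuine (Duchenne)
        print(smile_label({12: 3}))        # posed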

    Appeared at SIGGRAPH 2008. This could be used for de-identification of faces in Google Street View.

    They use OMRON face/feature detectors. I think we need to release (at least) our feature detectors to stimulate more creative applications before ours lose their advantage in the market. Is there any way we could get a copy of these commercial detectors and run some benchmarks?
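    In the meantime, a minimal de-identification sketch in the spirit described above: detect faces with OpenCV’s stock Haar cascade and blur them. The cascade choice and the blur parameters are illustrative assumptions, not the OMRON detectors or the SIGGRAPH method:

        # Detect faces and Gaussian-blur them (a de-identification sketch).
        import cv2

        def blur_faces(image):
            cascade = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
                roi = image[y:y + h, x:x + w]
                image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (31, 31), 0)
            return image

        # Usage: cv2.imwrite("out.jpg", blur_faces(cv2.imread("street.jpg")))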

