
Asobo Touch Screen Failed

Author:movellan @ July 31st, 2007 Leave a Comment

At about 2:30 pm today Cynthia visited Room 1 and found that the touch screen had stopped responding. She restarted Asobo and things worked fine. Cynthia said Asobo was very hot at the time. I suspect air flow had been impeded by the DVI-to-VGA converter.

Asobo Adaptive Scheduler Experiment Started

Author:movellan @ July 31st, 2007 2 Comments

Cynthia Taylor is leading our first formal experiment with Asobo. The goal is to evaluate an adaptive scheduler that learns which activities the children like most and presents them so as to maximize interaction with the children.
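As an illustrative aside (the post does not say which algorithm the scheduler uses), one simple way to realize this idea is an epsilon-greedy bandit that treats each activity as an arm and observed engagement time as the reward. Everything below, including the activity names and numbers, is a hypothetical sketch rather than the actual experiment code:

import random

class AdaptiveScheduler:
    # Epsilon-greedy bandit over activities: a rough sketch only.

    def __init__(self, activities, epsilon=0.1):
        self.activities = list(activities)
        self.epsilon = epsilon                          # exploration rate
        self.counts = {a: 0 for a in self.activities}   # times each activity was shown
        self.value = {a: 0.0 for a in self.activities}  # running mean engagement

    def choose(self):
        # Usually pick the activity with the highest estimated engagement,
        # but occasionally explore a random one.
        if random.random() < self.epsilon:
            return random.choice(self.activities)
        return max(self.activities, key=lambda a: self.value[a])

    def update(self, activity, engagement):
        # engagement could be, e.g., seconds of touch-screen interaction observed.
        self.counts[activity] += 1
        n = self.counts[activity]
        self.value[activity] += (engagement - self.value[activity]) / n

# Hypothetical usage with made-up activity names and engagement scores.
scheduler = AdaptiveScheduler(["song", "peekaboo", "dance"])
activity = scheduler.choose()
scheduler.update(activity, engagement=42.0)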

Asobo Adaptive Scheduler Experiment Day 1

Author:movellan @ July 31st, 2007 Leave a Comment

I visited Room 1 at 9:00 am this morning. There were several children sitting around Asobo, but Alicia did not let anybody else play with him. She would embrace him whenever anyone tried to touch the screen.

In the current situation it felt like it would be hard to differentiate an adaptive controller from a random controller. They clearly loved the songs, but other than that it did not seem to matter much which game they saw.

In general Room 1 was a bit on the chaotic side. Beach Boys music was being played quite loudly outside. Several children were crying because their moms had just left, and the teachers had to multitask between taking care of the children and cleaning up the breakfast mess. Asobo was helping to distract some children during this transition period.

Social Interaction 3000 Frames Per Second

Author:movellan @ July 30th, 2007 1 Comment

Check out this article from WIRED magazine on the microadjustments in slow dancing that are visible when captured at 3000 frames per second. So, … what’s the bandwidth of social interaction?
The visual system seems to be designed as a bandpass filter with a bandwidth of about 15 Hz. This suggests that is probably the useful bandwidth for social interaction. Yet again, there may be cool things happening at higher frequencies.
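As a toy illustration of that 15 Hz figure (not anything from the WIRED piece), one can take a hypothetical motion trace sampled at 3000 frames per second and split it into the part below 15 Hz, roughly what a 15 Hz-limited visual channel would retain, and the faster residual that only high-speed capture reveals. The signal and parameters below are made up for the example:

import numpy as np
from scipy.signal import butter, filtfilt

fs = 3000.0                      # capture rate (Hz), as in the slow-dancing footage
t = np.arange(0, 2.0, 1.0 / fs)  # two seconds of samples
# Made-up motion trace: a slow 5 Hz sway plus a small 200 Hz microadjustment.
motion = np.sin(2 * np.pi * 5 * t) + 0.2 * np.sin(2 * np.pi * 200 * t)

# Low-pass at 15 Hz: roughly the component a ~15 Hz visual channel would pass.
b, a = butter(4, 15.0 / (fs / 2), btype="low")
visible = filtfilt(b, a, motion)
fast_residual = motion - visible   # structure only visible at high frame rates

print("RMS of the component above ~15 Hz:", np.std(fast_residual))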

Dashan Gao: Decision Theoretic Visual Salience, Wed. August 1, 10am

Author:nick @ July 30th, 2007 Leave a Comment

At our weekly MPLab meeting this Wednesday, 1 August 2007, at 10am, Dashan Gao will be giving us his talk on “Decision-theoretic visual saliency and its implications for pre-attentive vision” — work that will appear in the proceedings of the IEEE International Conference on Computer Vision (ICCV) in Brazil later this year. Dashan is a student of Nuno Vasconcelos in the Statistical Visual Computing Lab (SVCL) in the Electrical Engineering department.

 

Abstract follows below:

Read more

NYT Article on Sociable Robots

Author:movellan @ July 29th, 2007 Leave a Comment

Here is the Link

NSF Workshop on the Science and Engineering of Learning

Author:movellan @ July 26th, 2007 3 Comments

I just came from a really fun workshop organized by NSF on the Science and Engineering of Learning. Here is a bulleted list of some of the ideas that I found particularly important.

  • An interesting scientific question for the science of learning: why do we sleep, and why is it important for learning?
  • Operating in non-stationary environments
  • Are there universal organizing principles of learning across time scales and spatial scales (e.g., predict the future, infomax)?
  • In Machine Learning we tend to focus on solving one problem at a time. But the brain has to solve many problems, and solving many problems at once may be easier than solving them separately, due to synergies between the problems.
  • In the visual pathway from retina to IT we have a chain of about 10-15 synaptic layers. This may point to the importance of understanding why the brain has chosen “deep” multi-layered learning.
  • We already have robot appliances: the dishwasher, the washing machine, the dryer. The problem is that they solve their task very well by operating in a very controlled environment. We need systems that work in less controlled conditions.
  • Should we focus on general purpose robots, or on robots that may be special purpose but can operate in unconstrained environments?
  • Should we focus on general purpose intelligence or a bag of tricks? Or should we attempt to understand the general principles behind a bag of tricks?
  • Pat Kuhl’s work shows that between 6 and 12 months babies change from “citizens of the world” to “language-specific listeners”. For example, before 6 months Japanese infants can differentiate r/l, but by 12 months they no longer can.

    One critical question is why this happens, and how we can reproduce such effects with technology.

  • Social psychologists talk about the mere presence effect: people do perform better when there is another human present.
  • Howard Nusbaum has some cool work on Synthetic speech learning
  • Machine Learning can help clarify Chomskian “poverty of the stimulus” arguments
  • It is important to define benchmark tasks for autonomous learning systems.
  • There are well known vocabulary standards for different ages. We could use
    them to choose key words and track progress when using social robots as teachers.
  • We will soon be able to fit 100 million transistors in the space of a single neuron.

  • How do we learn to imitate?

Jay McClelland’s developmental model of Baby Vowel Learning

Author:nick @ July 26th, 2007 1 Comment

Article (Scientific American)

Read more
