Asobo Touch Screen Failed
At about 2:30 pm today Cynthia visited Room 1 and found that the touch screen had stopped responding. She restarted Asobo and things worked fine. Cynthia said Asobo was very hot at the time. I suspect air flow had been impeded by the DVI-to-VGA converter.
Asobo Adaptive Scheduler Experiment Started
Cynthia Taylor is leading our first formal experiment with Asobo. The goal is to evaluate an adaptive scheduler that learns which activities the children like most and presents them so as to maximize interaction with the children.
Asobo Adaptive Scheduler Experiment Day 1
I visited Room 1 at 9:00 am this morning. There were several children sitting around Asobo, but Alicia did not let anybody else play with him. She would embrace him whenever anyone tried to touch the screen.
In the current situation it felt like it would be hard to differentiate an adaptive controller from a random controller. They clearly loved the songs, but beyond that it did not seem to matter much which game they saw.
In general, Room 1 was a bit on the chaotic side. Beach Boys music was playing quite loudly outside. Several children were crying because their moms had just left, and the teachers had to multi-task between taking care of the children and cleaning up the breakfast mess. Asobo was helping to distract some children during this transition period.
Social Interaction at 3000 Frames Per Second
Check out this article from WIRED magazine on the microadjustments in slow dancing that are visible when captured at 3000 frames per second. So, … what’s the bandwidth of social interaction?
The visual system seems to be designed as a bandpass filter with a bandwidth of about 15 Hz. This suggests that is probably the useful bandwidth for social interaction. Yet again, there may be cool things happening at higher frequencies.
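The 15 Hz figure is easy to play with numerically. Here is a minimal sketch, assuming a single-pole IIR low-pass as a crude stand-in for that temporal filtering (the sample rate and test frequencies are illustrative choices, not from the article): a 5 Hz signal inside the band passes nearly intact, while a 60 Hz signal is strongly attenuated.

```python
import math

def one_pole_lowpass(signal, fs, fc):
    """Single-pole IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    rc = 1.0 / (2.0 * math.pi * fc)   # analog RC time constant for cutoff fc
    dt = 1.0 / fs
    alpha = dt / (rc + dt)
    y, out = 0.0, []
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

fs = 1000.0   # sample rate in Hz (illustrative)
fc = 15.0     # cutoff matching the ~15 Hz figure from the post
t = [n / fs for n in range(2000)]  # two seconds of samples

slow = [math.sin(2 * math.pi * 5 * ti) for ti in t]    # 5 Hz: inside the band
fast = [math.sin(2 * math.pi * 60 * ti) for ti in t]   # 60 Hz: well above it

# Peak amplitude after the filter settles (skip the first second)
def amp(sig):
    return max(abs(v) for v in sig[1000:])

slow_out = amp(one_pole_lowpass(slow, fs, fc))
fast_out = amp(one_pole_lowpass(fast, fs, fc))

print(slow_out, fast_out)  # the 5 Hz tone passes, the 60 Hz tone is attenuated
```

The theoretical gain of this filter at frequency f is 1/sqrt(1 + (f/fc)^2), so the 5 Hz tone keeps roughly 95% of its amplitude while the 60 Hz tone drops to roughly a quarter.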
Dashan Gao: Decision Theoretic Visual Salience, Wed. August 1, 10am
At our weekly MPLab meeting this Wednesday, 1 August 2007, at 10am, Dashan Gao will be giving us his talk on “Decision-theoretic visual saliency and its implications for pre-attentive vision” — work that will appear in the proceedings of the IEEE International Conference on Computer Vision (ICCV) in Brazil later this year. Dashan is a student of Nuno Vasconcelos in the Statistical Visual Computing Lab (SVCL) in the Electrical Engineering department.
Abstract follows below:
NYT Article on Sociable Robots
NSF Workshop on the Science and Engineering of Learning
I just came from a really fun workshop organized by NSF on the Science and Engineering of Learning. Here is a bulleted list of some ideas that I found particularly important.
“Language-specific listeners.” For example, before 6 months Japanese infants can differentiate r/l, but by 12 months they no longer can.
One critical question is why this happens, and how we can reproduce such effects with technology.
them to choose key words and track progress when using social robots as teachers.
How do we learn to imitate?