
NIPS 2007 Report: FilterBoost

Author:movellan @ December 5th, 2007 3 Comments

  • FilterBoost is a boosting approach designed to learn from a population of examples too large to fit in memory. It combines a filter that selects samples from the population with a boosting-like algorithm (a rough sketch follows). nips2007_0060.pdf
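    To make the filtering idea concrete, here is a minimal Python sketch (not the paper's exact algorithm): `draw_example` and `fit_weak_learner` are assumed hooks, and the rejection probability and weak-learner weight follow the generic logistic/AdaBoost forms.

    ```python
    import math
    import random

    def filter_example(draw_example, H, max_tries=10000):
        """Rejection-sample one example from the (effectively infinite) oracle.

        An example (x, y), y in {-1, +1}, is accepted with probability
        1 / (1 + exp(y * H(x))): examples the current ensemble H already
        classifies confidently are rarely passed on to the booster.
        """
        for _ in range(max_tries):
            x, y = draw_example()
            if random.random() < 1.0 / (1.0 + math.exp(y * H(x))):
                return x, y
        return None  # almost nothing gets through: H is already very accurate

    def filter_boost(draw_example, fit_weak_learner, n_rounds=50, n_filtered=500):
        """Boost weak learners, each trained on a freshly filtered sample."""
        learners = []                              # (weight, hypothesis) pairs
        H = lambda x: sum(a * h(x) for a, h in learners)
        for _ in range(n_rounds):
            batch = [filter_example(draw_example, H) for _ in range(n_filtered)]
            batch = [ex for ex in batch if ex is not None]
            if not batch:
                break
            h = fit_weak_learner(batch)            # weak learner on filtered data
            err = sum(h(x) != y for x, y in batch) / len(batch)
            err = min(max(err, 1e-6), 1.0 - 1e-6)  # guard against log(0)
            learners.append((0.5 * math.log((1 - err) / err), h))
        return H
    ```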

    NIPS 2007: More Monday Night Posters

    Author:paul @ December 4th, 2007 1 Comment

    Multiple Instance Active Learning

    -key idea: we have a set of bags that are either positive or negative. Each bag contains a number of examples. We know that each negative bag contains no positives and that each positive bag contains at least one positive. We assume the bag-level labels are easy to obtain. This work gives several strategies for selecting individual examples in the positive bags to query for labels. These strategies are more or less heuristic, but the results are strong. This is the same setup as the weakly supervised object detection problem. A minimal sketch of the query-selection step follows.
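    As an illustration of the setup rather than the paper's actual strategies, a bare uncertainty-sampling rule for picking which instances in positive bags to query might look like this; `predict_proba` (the current instance-level model) is an assumed hook.

    ```python
    import numpy as np

    def select_queries(bags, bag_labels, predict_proba, budget=10):
        """Choose instances from positive bags whose labels to request.

        bags: list of (n_i, d) arrays of instances; bag_labels[i] is 1 if bag i
        is positive. Negative bags need no queries: every instance in them is
        known to be negative. Here we rank positive-bag instances by how close
        the model's P(y=1 | x) is to 0.5 (plain uncertainty sampling).
        """
        candidates = []
        for b, (bag, label) in enumerate(zip(bags, bag_labels)):
            if label != 1:
                continue
            probs = predict_proba(bag)             # shape (n_i,)
            candidates += [(abs(p - 0.5), b, i) for i, p in enumerate(probs)]
        candidates.sort()                          # most uncertain first
        return [(b, i) for _, b, i in candidates[:budget]]
    ```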

    Learning Monotonic Transforms:

    - Really cool: simultaneously learn an SVM and a monotonic transformation of the features. These monotonic transforms can model saturation effects and other nonlinearities. A possible parameterization is sketched below.
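    One way to parameterize such a transform, as a sketch and not necessarily the authors' construction, is piecewise-linear with non-negative slope increments, which is monotone by construction; training would then alternate between fitting the SVM on transformed features and re-fitting the increments.

    ```python
    import numpy as np

    def monotone_transform(x, knots, deltas):
        """Piecewise-linear, non-decreasing transform of a scalar feature.

        knots: fixed breakpoints; deltas: learned slope increments constrained
        to be >= 0, so the transform cannot decrease. Saturation shows up as
        the later deltas shrinking toward zero.
        """
        assert np.all(deltas >= 0), "non-negative increments keep it monotone"
        return np.maximum(x[..., None] - knots, 0.0) @ deltas
    ```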

    Variational inference for network failures:

    - Interesting application of variational inference. Very similar to the idea of predicting a set of diseases from a set of observed symptoms. The system is an expert system in that it uses a noisy-OR model for the expression of a symptom given a disease, where the noisy-OR parameters are given. Some additional tricks are used, such as putting beta priors on the individual diseases. The noisy-OR likelihood itself is evaluated in the sketch below.
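    For reference, the noisy-OR likelihood such a system rests on is a one-liner; the variable names here are mine, not the poster's.

    ```python
    import numpy as np

    def noisy_or_prob(d, Q, leak):
        """P(symptom_i = 1 | disease vector d) under a noisy-OR model.

        d: (n_diseases,) binary vector of which diseases are present.
        Q[i, j]: probability that disease j alone triggers symptom i.
        leak[i]: probability symptom i appears with no disease at all.
        A symptom stays off only if the leak and every active disease
        all fail to trigger it.
        """
        p_off = (1.0 - leak) * np.prod((1.0 - Q) ** d, axis=1)
        return 1.0 - p_off
    ```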

    Learning Thresholds for Cascades using Backward Pruning (Work with Paul Viola)

    A cool idea for picking thresholds. Train a classifier on all examples. At the end, select all positives that are above a certain threshold and then train a series of cascades. The threshold selected at each level of the cascade should guarantee that none of the positives that would survive to the end are removed. A minimal version of the threshold selection is sketched below.
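    A minimal version of that selection rule, assuming you have logged each positive example's cumulative score after every stage (the array layout is my assumption, not the poster's):

    ```python
    import numpy as np

    def backward_prune_thresholds(stage_scores, final_threshold):
        """Set per-stage cascade thresholds by backward pruning.

        stage_scores: (n_positives, n_stages) cumulative scores of the positive
        examples after each stage of the full classifier. Keep only positives
        whose final score clears final_threshold, then set each stage threshold
        to the minimum partial score among those survivors, guaranteeing that
        no positive that would pass the whole cascade is rejected early.
        """
        survivors = stage_scores[stage_scores[:, -1] >= final_threshold]
        return survivors.min(axis=0)               # one threshold per stage
    ```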

    Audio Tags for music: similar to work at UCSD, except it uses a discriminative instead of a generative framework. Also, they test on a much harder dataset and use the tags to reproduce a notion of artist similarity induced by collaborative filtering. The people who did this work are aware of the work at UCSD.

    Language recognition from phones:

    They put a phone recognizer as a front end to an n-gram model for predicting which language the speech is from (multiclass: e.g., English, Spanish, German, etc.). A pruning algorithm is used to prevent a combinatorial explosion in the number of features. Just thinking out loud, but is this a possible justification for the loss of discrimination of certain phones that are not part of your native language? A toy version of the back end is sketched below.
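    A toy version of the back end, with the phone recognizer and the trained multiclass model abstracted away; the unpruned n-gram counts and crude overlap score below stand in for the paper's pruned feature set and real classifier.

    ```python
    from collections import Counter

    def phone_ngrams(phones, n=3):
        """Bag of phone n-grams from a decoded phone sequence."""
        return Counter(tuple(phones[i:i + n]) for i in range(len(phones) - n + 1))

    def classify_language(phones, language_profiles):
        """Pick the language whose n-gram profile best overlaps the utterance.

        language_profiles: dict mapping language name to a Counter of n-gram
        counts estimated from that language's training speech.
        """
        feats = phone_ngrams(phones)
        score = lambda prof: sum(c * prof.get(g, 0) for g, c in feats.items())
        return max(language_profiles, key=lambda lang: score(language_profiles[lang]))
    ```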

    NIPS 2007: Monday Night Posters

    Author:nick @ December 4th, 2007 Leave a Comment

    • Receding Horizon DDP: Tom Erez gave a version of this talk to Emo’s lab just before Neuroscience. Yuval Tassa said he may be going to work for Emo. It seems like really good work. The setting is a continuous state space. The idea is to use Emo-like methods to start with a trajectory (open-loop policy), iteratively estimate the value function around it, and improve it to a local optimum. Fill the space with a small number of open-loop trajectories, and then for closed-loop control, always figure out which learned trajectory you are near and follow that. They worked with 15- and 30-dimensional continuous spaces, which apparently is quite high. Receding horizon is appropriate where you have an implicit value gradient, or at least where the local value function is in line with the global value function. In practice I would guess that this is quite often the case. This is something I would want to replicate. The application was a swimming world, which we should look into as a test of algorithms, etc.
    • Random Sampling of States in Dynamic Programming: Chris Atkeson is apparently a famous control guy at CMU. Tom Erez was very respectful to him. Atkeson’s poster was about discretizing state space using randomly sampled points and some heuristics about whether to keep the points. The points define a Voronoi diagram, each cell with its own local linear dynamics and quadratic value function estimator (see the first sketch after this list). This is something I would like to replicate. The application was up-to-4-link inverted pendulum and swing-up problems. Atkeson claims to be the only one he knows of with such success on 4-link inverted pendulums. This is another system we should test algorithms on.
    • Bayesian Policy Learning with Trans-Dimensional MCMC: “Trans-dimensional” means probability distributions defined on variable-length random variables through statistics of those random variables. These guys were using probability functions defined on trajectories to do trajectory sampling. It seemed like a fancy way to do policy improvement, without much benefit over other methods for doing the same thing.
    • Optimistic Linear Programming gives Logarithmic Regret for Irreducible MDPs: “Regret” means the difference between the optimal average reward and the obtained average reward. It is a common way to measure the loss incurred by exploring the MDP, and is currently an important topic for RL people. Peter Bartlett showed that a good low-regret policy is to keep confidence intervals on the value function and act as if you will receive the upper bound of your confidence interval (a generic version is sketched after this list). Based on your confidence interval and the amount of time spent acting, there are provable regret bounds.
    • The Epoch-Greedy Algorithm for Multi-armed Bandits with Side Information: The “contextual bandit problem” is a weird MDP where actions affect reward but not state transitions (you float around the world among states X; depending on x you have different reward distributions). Based on the complexity of the learning problem (how well and how simply X predicts reward), things can be proven about an “act randomly (uniform) / act greedily” strategy, based on how many experiences you have in a state (see the epoch-greedy sketch after this list).
    • Learning To Race by Model-Based Reinforcement Learning with Adaptive Abstraction: The Microsoft Applied Games group applied reinforcement learning to a car-driving game. To get it to work, they came up with fancy heuristic ways for discretizing state space on the fly, and combined it with fancy online dynamic programming methods (prioritized sweeping). Eric Wiewiora was very impressed. He said it was one of the first compelling applications of real reinforcement learning that he’d seen. I talked to these guys a bit about applying Emo-style continuous-state-space optimal control to their problem, and they were interested.
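    A tiny sketch of the discretization in Atkeson’s poster, mentioned above: continuous states map to the nearest sampled point, and each cell would own its local linear dynamics and quadratic value estimate (omitted here).

    ```python
    import numpy as np

    def nearest_cell(samples, x):
        """Voronoi cell lookup: index of the sampled state closest to x.

        samples: (n, d) array of randomly sampled states. In the full scheme,
        each index carries a local linear dynamics model and a quadratic
        value function estimator fit around that sample.
        """
        return int(np.argmin(np.linalg.norm(samples - x, axis=1)))
    ```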
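    The optimism principle in Bartlett’s poster, in generic upper-confidence form; the paper derives its index via linear programming, so this square-root bonus is only illustrative.

    ```python
    import math

    def optimistic_action(q_mean, counts, state, actions, t, c=2.0):
        """Act as if each action attains the top of its confidence interval.

        q_mean[(s, a)]: current value estimate; counts[(s, a)]: visit count;
        t: total steps taken so far. The exploration bonus shrinks as an
        action is tried more, which is the mechanism behind logarithmic
        regret bounds.
        """
        def upper_bound(a):
            n = counts.get((state, a), 0)
            if n == 0:
                return float("inf")                # untried actions look best
            return q_mean[(state, a)] + c * math.sqrt(math.log(t) / n)
        return max(actions, key=upper_bound)
    ```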
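    And the shape of the epoch-greedy strategy: fixed epoch lengths below stand in for the complexity-dependent schedule in the paper, and the context-blind greedy learner is a placeholder of mine.

    ```python
    import random

    def epoch_greedy(contexts, pull, n_actions, explore_steps=1, exploit_steps=50):
        """Epoch-greedy contextual bandit loop (shape only).

        Each epoch spends a few uniformly random pulls gathering unbiased
        data, then many greedy pulls exploiting what has been learned.
        pull(ctx, a) plays action a in context ctx and returns the reward.
        """
        data = []                                  # logged (context, action, reward)
        def greedy_action():
            totals = {}
            for _, a, r in data:
                s, n = totals.get(a, (0.0, 0))
                totals[a] = (s + r, n + 1)
            if not totals:
                return 0
            return max(totals, key=lambda a: totals[a][0] / totals[a][1])
        for t, ctx in enumerate(contexts):
            if t % (explore_steps + exploit_steps) < explore_steps:
                a = random.randrange(n_actions)    # exploration: uniform action
                data.append((ctx, a, pull(ctx, a)))
            else:
                pull(ctx, greedy_action())         # exploitation: act greedily
        return data
    ```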

    NIPS 2007: Apprenticeship learning

    Author:ianfasel @ December 4th, 2007 Leave a Comment

    Check out these papers if you get a chance:

    J. Zico Kolter, Pieter Abbeel, Andrew Ng, “Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion.” Advances in Neural Information Processing Systems, 2007.

    Umar Syed, Robert Schapire, “A Game-Theoretic Approach to Apprenticeship Learning.” Advances in Neural Information Processing Systems, 2007.

    NIPS 2007: Awesome Talk on Human Computation

    Author:movellan @ December 3rd, 2007 Leave a Comment

    Luis von Ahn gave a fresh talk about using computer games whose byproduct is that humans label images and sounds for free. Lots of things to learn from what Luis is doing.

  • Here is a Wikipedia link for Luis: http://en.wikipedia.org/wiki/Luis_von_Ahn
  • Here is a link to the ESP game http://www.espgame.org/

    NIPS 2007 Report: Hinton’s Tutorial on Deep Belief Nets

    Author:movellan @ December 3rd, 2007 Leave a Comment

  • Cool relationship between directed sigmoid belief nets and undirected Boltzmann machines. In particular, the learning rule for sigmoid belief nets becomes the Boltzmann machine rule if you use an infinite set of directed layers with tied weights.
  • He keeps emphasizing the form of independence that restricted Boltzmann machines are good at: conditional independence of the hidden units given the observable vector. This reverses the standard ICA approach, where you have unconditional independence, and it connects to Conditional Random Fields. He called these models generative conditional random fields and conditional RBMs.
  • Conditional RBMs from Hinton and Sutskever, designed to deal with dynamical data. Used by Taylor, Roweis, and Hinton (2007) for generating skeletal animation models.
  • NIPS 2007 paper on learning the orientation of a face using deep belief networks combined with Gaussian processes. Here is the paper.

  • Bengio et al. (2007) have a paper on how to extend RBMs to the exponential family.
  • Hinton paper on Stochastic Embedding Using
  • Lots of deep autoencoders pretrained one layer at a time using the Boltzmann machine algorithm and then fine-tuned using backprop (a single RBM training step is sketched after this list).
  • Salakhutdinov and Hinton -> semantic hashing.
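    To pin down the training step that the layer-wise pretraining above relies on, here is a bare contrastive-divergence (CD-1) update for a binary RBM; this is the generic recipe, with batch handling and learning rate chosen for brevity rather than taken from the tutorial.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(W, b_vis, b_hid, v0, lr=0.1):
        """One CD-1 step for a binary RBM on a batch v0 of visible vectors.

        Because the hidden units are conditionally independent given the
        visibles (the property Hinton emphasizes), both conditionals
        factorize and sampling is a single matrix product plus a sigmoid.
        """
        p_h0 = sigmoid(v0 @ W + b_hid)                       # up pass on data
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)   # sample hiddens
        p_v1 = sigmoid(h0 @ W.T + b_vis)                     # reconstruction
        p_h1 = sigmoid(p_v1 @ W + b_hid)                     # up pass on it
        # gradient estimate: data statistics minus reconstruction statistics
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
        b_vis += lr * (v0 - p_v1).mean(axis=0)
        b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    ```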

    NIPS 2007: Leon Bottou tutorial

    Author:nick @ December 3rd, 2007 2 Comments

    Leon Bottou is pushing stochastic gradient descent methods over batch methods for the case in which you want to make use of tons of data. He shows that they can be much quicker without loss of accuracy, especially in the settings people like these days (CRFs, convex optimization, etc.). It may be worth looking into these current gradient methods; a bare-bones version is sketched below.
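    A bare-bones instance of the point, for logistic regression with 0/1 labels: one cheap, noisy per-example update at a time instead of repeated full-batch gradients.

    ```python
    import numpy as np

    def sgd_logistic(X, y, lr=0.1, epochs=1):
        """Plain stochastic gradient descent for logistic regression.

        With abundant data, even a single pass of per-example updates often
        reaches a good solution far faster than batch optimization, which is
        the core of Bottou's argument.
        """
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for i in np.random.permutation(len(y)):
                p = 1.0 / (1.0 + np.exp(-X[i] @ w))
                w += lr * (y[i] - p) * X[i]   # one-example log-likelihood gradient
        return w
    ```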

    NIPS 2007 Report: Poggio’s Tutorial on Visual Processing

    Author:movellan @ December 3rd, 2007 Leave a Comment

  • This paper is useful to justify the multiple-view-based approach to multipose face detection: Logothetis, N. K., Pauls, J., and Poggio, T. (1995). Shape recognition in the inferior temporal cortex of monkeys. Current Biology, 5:552–563. It shows the existence of scale- and location-invariant but pose-dependent object detection neurons in IT.

  • The human brain has as many neurons as 1 million flies.
  • Anterior IT face cells paper: Desimone et al 1984. We should get a copy of this classic paper.
  • Hung, Kreiman, Poggio, and DiCarlo (2005): starting about 90 msec after stimulus presentation, information for object category detection appears in IT neurons. Information peaks by 125 msec.
  • Nature Neuroscience 2007 paper by Anzai, Peng, and Van Essen shows modern data on V2. It may justify the idea of orientation histograms.
  • Fabre-Thorpe has very cool experiments on human rapid recognition of visual categories. Link to her lab.
  • Knoblich, Bouvrie, and Poggio (2007): shunting inhibition model of multidimensional Gaussian tuning.
  • Link to Poggio’s group PAMI 2007 paper on bio-inspired object recognition.
  • Jhuang, Serre, Wolf, and Poggio ICCV paper on recognizing activities. Get a copy.
  • Prefrontal cortex may prime the features that are most important for a particular task. LIP would compute a saliency map.