
NIPS 2007: More Monday Night Posters

Author: paul @ December 4th, 2007

Multiple Instance Active Learning

- Key idea: we have a set of bags, each labeled positive or negative, and each bag contains a set of examples. A negative bag contains no positive examples; a positive bag contains at least one. The assumption is that bag-level labels are cheap to obtain. This work gives several strategies for selecting individual examples within the positive bags to query for instance-level labels; a sketch of one plausible strategy follows below. The strategies are more or less heuristic, but the results are strong. This is the same setup as the weakly supervised object detection problem.
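To make the setup concrete, here is a minimal sketch of one plausible query strategy: train an instance-level classifier by propagating bag labels down to instances, then query the most uncertain instance inside the positive bags. Everything here (the uncertainty heuristic, the scikit-learn classifier) is an illustrative assumption of mine, not the strategies actually proposed in the paper.

```python
# Illustrative sketch only: uncertainty sampling inside positive bags.
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_instance(pos_bags, neg_bags):
    """Pick one instance from the positive bags to query for a label.

    pos_bags, neg_bags: lists of (n_i, d) arrays of instance features.
    Returns (bag_index, instance_index) of the most uncertain instance.
    """
    # Naive instance-level training set: propagate bag labels to instances.
    X = np.vstack(pos_bags + neg_bags)
    y = np.concatenate([np.ones(len(b)) for b in pos_bags] +
                       [np.zeros(len(b)) for b in neg_bags])
    clf = LogisticRegression().fit(X, y)

    # Query the positive-bag instance whose predicted probability is
    # closest to 0.5, i.e. the one the classifier is least sure about.
    best, best_gap = None, np.inf
    for i, bag in enumerate(pos_bags):
        p = clf.predict_proba(bag)[:, 1]
        j = int(np.argmin(np.abs(p - 0.5)))
        if abs(p[j] - 0.5) < best_gap:
            best, best_gap = (i, j), abs(p[j] - 0.5)
    return best
```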

Learning Monotonic Transforms:

- Really cool: simultaneously learn an SVM and a monotonic transformation of the features. These monotonic transforms can model saturation effects and other nonlinearities; a toy parameterization is sketched below.
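One common way to parameterize a learnable monotonic transform is as a nonnegative combination of ramp functions, which is monotone nondecreasing by construction. This is an assumption on my part for illustration, not necessarily the paper's construction.

```python
# Toy sketch: a monotone transform as a nonnegative sum of ramps,
# g(x) = sum_k a_k * max(0, x - b_k) with a_k >= 0.
import numpy as np

class MonotonicTransform:
    def __init__(self, knots):
        self.b = np.asarray(knots)     # fixed ramp locations
        self.a = np.ones_like(self.b)  # nonnegative slopes (learnable)

    def __call__(self, x):
        # Each ramp is nondecreasing and a_k >= 0, so g is nondecreasing.
        x = np.asarray(x)[..., None]
        return (self.a * np.maximum(0.0, x - self.b)).sum(axis=-1)

    def project(self):
        # Keep slopes nonnegative after a gradient step.
        self.a = np.maximum(self.a, 0.0)

g = MonotonicTransform(knots=np.linspace(0, 1, 10))
print(g([0.1, 0.5, 0.9]))  # outputs are monotone in the input
```

In the joint setting one would presumably alternate updates on the classifier weights and on the slopes, projecting the slopes back to the nonnegative orthant after each step.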

Variational inference for network failures:

- An interesting application of variational inference. Very similar to the idea of predicting a set of diseases from a set of observed symptoms. The system is an expert system in the sense that it uses a noisy-OR model for the expression of a symptom given a disease, where the noisy-OR parameters are given. Some additional tricks are used, such as putting beta priors on the individual diseases. The noisy-OR likelihood itself is sketched below.
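For reference, the standard noisy-OR construction: a symptom fires unless every active cause independently fails to trigger it (the parameter names here are mine).

```python
import numpy as np

def noisy_or(active, q, leak=0.01):
    """P(symptom = 1 | binary cause vector `active`).

    active: 0/1 array over causes.
    q: per-cause trigger probabilities, q[i] = P(symptom | only cause i).
    leak: probability the symptom fires with no active cause.
    """
    # Probability that every active cause (and the leak) fails.
    p_all_fail = (1.0 - leak) * np.prod((1.0 - q) ** active)
    return 1.0 - p_all_fail

print(noisy_or(np.array([1, 0, 1]), q=np.array([0.8, 0.5, 0.3])))
```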

Learning Thresholds for Cascades using Backward Pruning (Work with Paul Viola)

A cool idea for picking thresholds. Train a classifier on all examples. At the end, select all positives that score above a certain threshold, then train a series of cascades. The threshold selected at each level of the cascade should guarantee that none of the positives that would survive to the end are removed; see the sketch below.
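A minimal sketch of the threshold rule this implies (my reading, with invented names): once you know which positives should survive the whole cascade, each stage's threshold can simply be the minimum stage score among those survivors.

```python
# Sketch of backward pruning: set each cascade stage's threshold so that
# no positive that should survive the full cascade is ever rejected.
import numpy as np

def backward_prune_thresholds(stage_scores):
    """stage_scores: list of 1-D arrays, one per cascade stage, each
    scoring the positives that should reach the end of the cascade.
    Returns one threshold per stage."""
    # The largest threshold that keeps every surviving positive is the
    # minimum score any of them attains at that stage.
    return [scores.min() for scores in stage_scores]

scores = [np.array([2.1, 0.7, 1.5]), np.array([3.0, 1.2, 2.4])]
print(backward_prune_thresholds(scores))  # [0.7, 1.2]
```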

Audio Tags for music: similar to work at UCSD, except it uses a discriminative instead of a generative framework. They also test on a much harder dataset and use the tags to reproduce a notion of artist similarity induced by collaborative filtering. The people who did this work are aware of the work at UCSD.

Language recognition from phones:

They put a phone recognizer as a front end to an n-gram model for predicting which language the speech is in (multiclass: e.g. English, Spanish, German, etc.). A pruning algorithm is used to prevent combinatorial explosion in the number of features; a toy version of the pipeline is sketched below. Just thinking out loud, but is this a possible justification for the loss of discrimination of certain phones that are not part of your native language?
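A toy version of the phones-to-language pipeline. The phone recognizer front end is assumed away here, and the bigram features, smoothing, and scoring are my own illustrative choices, not the paper's model.

```python
# Toy phone n-gram language ID: count phone bigrams from an (assumed)
# recognizer's output and score each language with smoothed frequencies.
from collections import Counter

def ngrams(phones, n=2):
    return zip(*(phones[i:] for i in range(n)))

# Tiny made-up training data: language -> list of phone sequences.
train = {
    "english": [["dh", "ah", "k", "ae", "t"], ["s", "ih", "t"]],
    "spanish": [["e", "l", "g", "a", "t", "o"], ["s", "o", "l"]],
}

models = {lang: Counter(g for seq in seqs for g in ngrams(seq))
          for lang, seqs in train.items()}

def classify(phones):
    # Sum of add-one-smoothed bigram frequencies; highest score wins.
    def score(lang):
        counts = models[lang]
        total = sum(counts.values()) + 1
        return sum((counts[g] + 1) / total for g in ngrams(phones))
    return max(models, key=score)

print(classify(["e", "l", "s", "o", "l"]))  # -> "spanish"
```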