
Robust Learning from Multiple Experts


In the traditional supervised setting, each instance is associated with a single label, which is assumed to be accurate. In our setting, the true label is unknown. Instead, each instance may carry several labels provided by different experts, and these labels may be highly noisy. For instance, a casual user may assign random labels, while a malicious user may deliberately produce wrong labels. Unfortunately, we have no prior knowledge about the experts and thus no direct way to filter out such troublemakers. A simple and naive strategy is to take the majority vote and treat it as the true label. However, when the troublemakers dominate the vote, the model cannot be expected to learn accurately. Therefore, this line of research explores more sophisticated methods for:

  1. Learning the ground truth label for each instance.
  2. Learning a classifier/regressor that generalizes well on unseen instances.
  3. Learning an expertise model for each expert.
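As a minimal sketch of goals 1 and 3 (not the project's actual model), the following one-coin, Dawid–Skene-style EM jointly estimates a posterior over each instance's true label and an accuracy for each expert, so that adversarial experts are down-weighted (and effectively inverted) instead of corrupting a plain majority vote. The function names and the synthetic experts are our own illustration.

```python
import random

def em_aggregate(labels, n_iter=50):
    """labels: list of per-instance dicts {expert_id: 0/1 label}.
    Returns (posterior P(y=1) per instance, estimated accuracy per expert)."""
    experts = {e for row in labels for e in row}
    acc = {e: 0.7 for e in experts}   # initial guess: everyone mildly reliable
    post = [0.5] * len(labels)
    for _ in range(n_iter):
        # E-step: posterior of the true label given current expert accuracies
        for i, row in enumerate(labels):
            p1 = p0 = 1.0
            for e, l in row.items():
                p1 *= acc[e] if l == 1 else 1 - acc[e]
                p0 *= acc[e] if l == 0 else 1 - acc[e]
            post[i] = p1 / (p1 + p0)
        # M-step: expert accuracy = expected fraction of agreement with truth
        for e in experts:
            num = den = 0.0
            for i, row in enumerate(labels):
                if e in row:
                    num += post[i] if row[e] == 1 else 1 - post[i]
                    den += 1
            acc[e] = num / den
    return post, acc

# Synthetic demo: three decent experts, one random labeler, one adversary.
random.seed(0)
truth = [random.randint(0, 1) for _ in range(300)]
def noisy(y, a):                        # report the true label with prob. a
    return y if random.random() < a else 1 - y
EXPERT_ACC = {"e1": 0.85, "e2": 0.8, "e3": 0.75, "rand": 0.5, "adv": 0.2}
labels = [{e: noisy(y, a) for e, a in EXPERT_ACC.items()} for y in truth]

post, acc = em_aggregate(labels)
em_pred = [int(p > 0.5) for p in post]
maj_pred = [int(sum(row.values()) * 2 > len(row)) for row in labels]
```

Because the adversary's estimated accuracy falls below 0.5, its votes end up contributing with flipped sign in the E-step, which is exactly the information a plain majority vote throws away.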


The results of this work can be adapted to many practical applications, e.g. sensor networks and satellite localization.

Researcher: Huang Xiao


Topics of Interest

  • Learning from multiple experts
  • Developing a crowdsourcing platform

Demonstration

We show a 2-D demonstration of how to combine multiple experts' knowledge to recover a latent ground truth.

[Figure: multiple-experts demo, panels (a)–(e)]

(a) Synthetic data with ground truth; responses from the observers are marked in different colors. On the right, four observers are simulated by monotonic functions, and the shaded area
illustrates the pointwise variance. Note that the fourth, adversarial observer misbehaves, producing counterproductive observations opposite to the ground truth. (b, c, d) Estimated ground
truth on the test set using the baselines SVR-AVG, GPR-AVG, and LOB. (e) Estimated ground truth and observer models obtained with the nonlinear observer model NLOB.
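The setup in panel (a) can be sketched in a few lines: observers are monotonic link functions applied to a latent 1-D ground truth, with one adversarial (sign-flipped) observer, and a naive average of the raw responses (in the spirit of the *-AVG baselines) is pulled far from the truth. The specific signal and link functions below are illustrative choices, not the ones used in the figure.

```python
import math

# Latent ground truth on [0, 1]: a smooth 1-D signal.
xs = [i / 99 for i in range(100)]
truth = [math.sin(2 * math.pi * x) for x in xs]

# Observers = monotonic link functions applied to the ground truth.
# The last one is adversarial: monotonically decreasing, i.e. it
# responds opposite to what it sees (cf. the 4th observer in panel (a)).
observers = [
    lambda y: y,                  # faithful
    lambda y: 0.5 * y + 0.2,      # compressed and shifted
    lambda y: math.tanh(2 * y),   # saturating but monotonic
    lambda y: -y,                 # adversarial (sign-flipped)
]
responses = [[g(t) for t in truth] for g in observers]

# Naive baseline: average the raw responses pointwise.
avg = [sum(col) / len(col) for col in zip(*responses)]

# The adversarial observer cancels a faithful one, so the plain
# average is badly biased at the signal's extremes; models such as
# LOB/NLOB instead learn each observer's link function and invert
# it before combining the responses.
err = max(abs(a - t) for a, t in zip(avg, truth))
```

The worst-case error of the average here stays large even with three well-behaved observers, which is the failure mode that motivates learning per-observer models.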