An Introduction to Confident Learning: Finding and Learning with Label Errors in Datasets

Cambridge, MA, November 3, 2019 – Knowledge AI‘s Chief AI Scientist Curtis Northcutt's paper on Confident Learning has just been released. Congratulations to Curtis and his Google and Massachusetts Institute of Technology collaborators for this work in this emerging field!

Confident Learning: Estimating Uncertainty in Dataset Labels

(Authors: Curtis G. Northcutt, Lu Jiang, Isaac L. Chuang)

Paper link: https://arxiv.org/abs/1911.00068

Abstract: Learning exists in the context of data, yet notions of confidence typically focus on model predictions, not label quality. Confident learning (CL) has emerged as an approach for characterizing, identifying, and learning with noisy labels in datasets, based on the principles of pruning noisy data, counting to estimate noise, and ranking examples to train with confidence. Here, we generalize CL, building on the assumption of a classification noise process, to directly estimate the joint distribution between noisy (given) labels and uncorrupted (unknown) labels. This generalized CL, open-sourced as cleanlab, is provably consistent under reasonable conditions, and experimentally performant on ImageNet and CIFAR, outperforming recent approaches, e.g. MentorNet, by 30% or more, when label noise is non-uniform. cleanlab also quantifies ontological class overlap, and can increase model accuracy (e.g. ResNet) by providing clean data for training.
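
Concretely, the "counting" step at the heart of CL builds a confident joint: for each class j, a threshold is set to the mean predicted probability of class j among examples given label j, and an example is counted into a (given label, likely true label) bin when its predicted probability clears that threshold. Below is a minimal numpy sketch of that construction, simplified from the paper; the actual cleanlab implementation additionally handles calibration and edge cases.

import numpy as np

def confident_joint(labels, pred_probs):
    """Count examples into (given label, likely true label) bins."""
    n, num_classes = pred_probs.shape
    # Per-class threshold: average self-confidence over the examples
    # that carry that given label.
    thresholds = np.array([
        pred_probs[labels == j, j].mean() for j in range(num_classes)
    ])
    counts = np.zeros((num_classes, num_classes), dtype=int)
    for i in range(n):
        # Classes whose predicted probability clears their threshold.
        above = np.flatnonzero(pred_probs[i] >= thresholds)
        if above.size == 0:
            continue  # not confidently any class; leave it uncounted
        likely_true = above[np.argmax(pred_probs[i, above])]
        counts[labels[i], likely_true] += 1
    # Normalizing the counts estimates the joint distribution
    # Q(given label, true label); its off-diagonal mass is label noise.
    return counts / counts.sum()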

Learn More About Confident Learning and Its Benefits

Blog Post: https://l7.curtisnorthcutt.com/confident-learning

The blog post elaborates on the released paper, discussing confident learning (CL), an emerging, principled framework for identifying label errors, characterizing label noise, and learning with noisy labels, open-sourced as the cleanlab Python package.

Unlike most machine learning approaches, confident learning requires no hyperparameters: cross-validation is used to obtain predicted probabilities out of sample (see the sketch following this list). Confident learning offers a number of other benefits. CL

  • directly estimates the joint distribution of noisy and true labels
  • works for multi-class datasets
  • finds the label errors (errors are ordered from most likely to least likely)
  • is non-iterative (finding training label errors in ImageNet takes 3 minutes)
  • is theoretically justified (under realistic conditions, it exactly finds label errors and consistently estimates the joint distribution)
  • does not assume uniformly random label noise (an assumption that is often unrealistic in practice)
  • only requires predicted probabilities and noisy labels (any model can be used)
  • does not require any true (guaranteed uncorrupted) labels
  • extends naturally to multi-label datasets
  • is free and open-sourced as the cleanlab Python package for characterizing, finding, and learning with label errors.
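
In practice, the whole pipeline is a few lines of code. Here is a minimal sketch, assuming cleanlab 2.x (whose filter module exposes find_label_issues; earlier releases used a different API), with scikit-learn supplying the out-of-sample predicted probabilities and a toy dataset standing in for real data:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab.filter import find_label_issues

# Toy data with a few labels flipped to simulate annotation noise.
X, labels = make_classification(
    n_samples=1000, n_classes=3, n_informative=6, random_state=0
)
rng = np.random.default_rng(0)
flip = rng.choice(len(labels), size=50, replace=False)
labels[flip] = (labels[flip] + 1) % 3

# Out-of-sample predicted probabilities: every example is scored by a
# model that never saw it during training.
pred_probs = cross_val_predict(
    LogisticRegression(max_iter=1000), X, labels,
    cv=5, method="predict_proba",
)

# Indices of likely label errors, ordered from most to least likely.
issue_indices = find_label_issues(
    labels=labels,
    pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(f"Flagged {len(issue_indices)} suspect labels; top 10: {issue_indices[:10]}")

Note that any classifier with predict_proba could replace the logistic regression here; CL only consumes the predicted probabilities and the noisy labels.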

The theoretical and experimental results emphasize the practical nature of confident learning, e.g. identifying numerous label issues in ImageNet and CIFAR and improving standard ResNet performance by training on a cleaned dataset. Confident learning motivates the need for further understanding of uncertainty estimation in dataset labels, methods to clean training and test sets, and approaches to identify ontological and label issues in datasets.
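
The "train on cleaned data" step is likewise wrapped for you. A sketch using cleanlab's CleanLearning wrapper (the cleanlab 2.x name; the 2019-era package called this LearningWithNoisyLabels), which finds the label issues internally, prunes them, and fits any scikit-learn-compatible model on the remaining data:

from cleanlab.classification import CleanLearning
from sklearn.linear_model import LogisticRegression

# Reusing X and labels from the sketch above.
cl = CleanLearning(LogisticRegression(max_iter=1000))
cl.fit(X, labels)        # detects label issues, then trains on cleaned data
preds = cl.predict(X)    # the wrapped model behaves like any sklearn estimator
print(cl.get_label_issues().head())  # per-example label-issue report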
