A benchmarking framework and dataset for learning to defer in human-AI decision-making.

Journal: Scientific Data
PMID:

Abstract

Learning to Defer (L2D) algorithms improve human-AI collaboration by deferring decisions to human experts when they are likely to be more accurate than the AI model. Such deferral can be crucial in high-stakes tasks like fraud detection, where false negatives can cost victims their life savings. The primary challenge in training and evaluating these systems is the high cost of acquiring expert predictions, which often leads benchmarks to rely on simplistic simulations of expert behavior. We introduce OpenL2D, a framework that generates synthetic experts with adjustable decision-making processes and work-capacity constraints for more realistic L2D testing. Applied to a public fraud detection dataset, OpenL2D produces the financial fraud alert review dataset (FiFAR), which contains predictions from 50 fraud analysts for 30K instances. We show that FiFAR's synthetic experts resemble real experts on metrics such as prediction consistency and inter-expert agreement. Our L2D benchmark reveals that the performance rankings of L2D algorithms vary significantly with the set of available experts, highlighting the need to consider diverse expert behavior when benchmarking L2D methods.
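To illustrate the deferral mechanism described in the abstract, below is a minimal sketch of a confidence-threshold deferral rule with per-expert work-capacity constraints. This is an illustrative assumption, not the paper's algorithm or the OpenL2D API; all names (assign_decisions, p_fraud, capacities, etc.) are hypothetical.

```python
import numpy as np

def assign_decisions(p_fraud, expert_preds, capacities, threshold=0.9):
    """Toy deferral rule (illustrative only): keep the AI decision when the
    model is confident, otherwise defer to an expert with remaining capacity.

    p_fraud     : (n,) model-estimated probability of fraud per instance
    expert_preds: (n, m) simulated expert predictions (0/1)
    capacities  : (m,) number of cases each expert may still review
    Returns final decisions and the decision-maker index (-1 = AI model).
    """
    n, m = expert_preds.shape
    remaining = np.asarray(capacities, dtype=int).copy()
    model_labels = (p_fraud >= 0.5).astype(int)
    confidence = np.maximum(p_fraud, 1.0 - p_fraud)

    decisions = model_labels.copy()
    decider = np.full(n, -1)

    # Handle the least confident cases first, so that limited expert
    # capacity is spent where the model is most likely to be wrong.
    for i in np.argsort(confidence):
        if confidence[i] >= threshold or not (remaining > 0).any():
            continue                          # AI keeps this decision
        j = int(np.argmax(remaining))         # expert with most capacity left
        decisions[i] = expert_preds[i, j]     # defer to that expert
        decider[i] = j
        remaining[j] -= 1
    return decisions, decider
```

In practice, L2D methods learn both the classifier and the deferral policy jointly, and the capacity-aware assignment is what FiFAR's benchmark is designed to evaluate; the fixed threshold and greedy capacity heuristic above are only stand-ins for exposition.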

Authors

  • Jean V Alves
    Feedzai, Coimbra, Portugal. jean.alves@feedzai.com.
  • Diogo Leitão
    Feedzai, Coimbra, Portugal.
  • Sérgio Jesus
    Feedzai, Coimbra, Portugal.
  • Marco O P Sampaio
    Feedzai, Coimbra, Portugal.
  • Javier Liébana
    Feedzai, Coimbra, Portugal.
  • Pedro Saleiro
    Feedzai, Coimbra, Portugal.
  • Mário A T Figueiredo
Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa, Lisboa, Portugal. mario.figueiredo@tecnico.ulisboa.pt.
  • Pedro Bizarro
    Feedzai, Coimbra, Portugal.