Human-centred mechanism design with Democratic AI.

Journal: Nature Human Behaviour

Abstract

Building artificial intelligence (AI) that aligns with human values is an unsolved problem. Here we developed a human-in-the-loop research pipeline called Democratic AI, in which reinforcement learning is used to design a social mechanism that humans prefer by majority. A large group of humans played an online investment game that involved deciding whether to keep a monetary endowment or to share it with others for collective benefit. Shared revenue was returned to players under two different redistribution mechanisms, one designed by the AI and the other by humans. The AI discovered a mechanism that redressed initial wealth imbalance, sanctioned free riders and successfully won the majority vote. By optimizing for human preferences, Democratic AI offers a proof of concept for value-aligned policy innovation.
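The game described in the abstract can be sketched in code. The snippet below is an illustrative assumption, not the paper's actual design: `play_round`, its parameter values, and the two baseline redistribution rules (equal split and contribution-proportional) are hypothetical simplifications used only to show the keep-or-share structure, not the AI-discovered mechanism.

```python
# Illustrative sketch (NOT the paper's mechanism): a one-round
# public-goods investment game with two hypothetical redistribution rules.

def play_round(endowments, contributions, multiplier=1.6, mechanism="equal"):
    """Return each player's final payoff.

    endowments:    initial monetary endowments
    contributions: amount each player shares (0 <= c_i <= endowment_i)
    multiplier:    growth factor applied to the shared pool (assumed value)
    mechanism:     'equal' splits the grown pool evenly;
                   'proportional' returns shares in proportion to contribution.
    """
    pool = multiplier * sum(contributions)
    n = len(endowments)
    if mechanism == "equal":
        shares = [pool / n] * n
    elif mechanism == "proportional":
        total_c = sum(contributions) or 1.0  # avoid division by zero
        shares = [pool * c / total_c for c in contributions]
    else:
        raise ValueError(f"unknown mechanism: {mechanism}")
    # Payoff = endowment kept back, plus one's share of the redistributed pool.
    return [e - c + s for e, c, s in zip(endowments, contributions, shares)]

# Unequal endowments: two rich players and one poor player.
payoffs = play_round([10, 10, 2], [10, 5, 2], mechanism="equal")
```

Under the equal split, a low-endowment player who contributes everything can end up ahead, while a free rider keeps both their endowment and a full share; the paper's point is that a learned mechanism can be tuned (by majority preference) to redress exactly these imbalances.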

Authors

  • Raphael Koster
    DeepMind, London, UK.
  • Jan Balaguer
    DeepMind, London, UK.
  • Andrea Tacchetti
    DeepMind, London, UK.
  • Ari Weinstein
    DeepMind, London, UK.
  • Tina Zhu
    DeepMind, London, UK.
  • Oliver Hauser
    Department of Economics and Institute for Data Science and Artificial Intelligence, University of Exeter, Exeter, UK.
  • Duncan Williams
    Digital Creativity Labs, University of York, York, UK.
  • Lucy Campbell-Gillingham
    DeepMind, London, UK.
  • Phoebe Thacker
    DeepMind, London, UK.
  • Matthew Botvinick
    DeepMind, London, UK. botvinick@google.com.
  • Christopher Summerfield
    DeepMind, 5 New Street Square, London, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK.