STELA: a community-centred approach to norm elicitation for AI alignment.

Journal: Scientific Reports
Published Date:

Abstract

Value alignment, the process of ensuring that artificial intelligence (AI) systems are aligned with human values and goals, is a critical issue in AI research. Existing scholarship has mainly studied how to encode moral values into agents to guide their behaviour. Less attention has been given to the normative questions of whose values and norms AI systems should be aligned with, and how these choices should be made. To tackle these questions, this paper presents the STELA process (SocioTEchnical Language agent Alignment), a methodology resting on sociotechnical traditions of participatory, inclusive, and community-centred processes. For STELA, we conduct a series of deliberative discussions with four historically underrepresented groups in the United States in order to understand their diverse priorities and concerns when interacting with AI systems. The results of our research suggest that community-centred deliberation on the outputs of large language models is a valuable tool for eliciting latent normative perspectives directly from differently situated groups. In addition to having the potential to engender an inclusive process that is robust to the needs of communities, this methodology can provide rich contextual insights for AI alignment.

Authors

  • Stevie Bergman
    Google DeepMind, London, UK.
  • Nahema Marchal
    Google DeepMind, London, UK. nahemamarchal@google.com.
  • John Mellor
    Google DeepMind, London, UK.
  • Shakir Mohamed
    DeepMind, London, UK.
  • Iason Gabriel
    DeepMind, London, UK.
  • William Isaac
    Google DeepMind, London, UK.