Humans program artificial delegates to accurately solve collective-risk dilemmas but lack precision.
Journal:
Proceedings of the National Academy of Sciences of the United States of America
Published Date:
Jun 24, 2025
Abstract
In an era increasingly influenced by autonomous machines, it is only a matter of time before strategic individual decisions that impact collective goods will also be made virtually, through the use of artificial delegates. Through a series of behavioral experiments that combine delegation to autonomous agents with different choice architectures, we pinpoint what may get lost in translation when humans delegate to algorithms. We focus on the collective-risk dilemma, a game in which participants must decide whether or not to contribute to a public good, which must reach a target in order for them to keep their personal endowments. To test the effect of delegation beyond its functionality as a commitment device, participants are asked to play the game a second time with the same group, at which point they are given the chance to reprogram their agents. As our main result, we find that when the action space is constrained, people who delegate contribute more to the public good, even though they have experienced more failure and inequality than people who do not delegate. However, they are not more successful. Failing to reach the target after getting close to it can be attributed to precision errors in the agents' algorithms, which cannot be corrected during the game. Thus, with the digitization, and subsequent limitation, of our interactions, artificial delegates appear to be a solution that helps preserve public goods over many iterations of risky situations. But actual success can only be achieved if humans learn to adjust their agents' algorithms.
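To make the game structure referenced in the abstract concrete, here is a minimal sketch of one round of a collective-risk dilemma. It is not the authors' experimental code; the parameter values (endowment, target, risk of loss, group size) are hypothetical placeholders chosen only for illustration.

```python
# Illustrative sketch of one collective-risk dilemma round (not the authors' code).
# Endowment, target, and risk values below are hypothetical placeholders.

import random

def play_round(contributions, endowment=40, target=120, risk=0.9):
    """Resolve a round given each player's contribution (0 <= c <= endowment).

    If the summed contributions reach the target, everyone keeps whatever
    they did not contribute; otherwise, with probability `risk`, all
    remaining endowments are lost.
    """
    total = sum(contributions)
    remaining = [endowment - c for c in contributions]
    if total >= target:
        return remaining                  # target met: keep leftover endowments
    if random.random() < risk:
        return [0] * len(contributions)   # collective failure: everything is lost
    return remaining                      # lucky escape despite missing the target

# Example: a 6-player group that falls just short of the target (90 < 120),
# illustrating how small precision errors can put everyone's endowment at risk.
print(play_round([20, 20, 20, 10, 10, 10]))
```

In this reading, an artificial delegate would be a fixed contribution rule programmed before play, which explains why small miscalibrations ("precision errors") cannot be corrected once the rounds are underway.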