Title:
Similar failures of consideration arise in human and machine planning
Journal: Cognition
PMID: 40086083
Abstract
Humans are remarkably efficient decision makers, even in "open-ended" problems where the set of possible actions is too large for exhaustive evaluation. Our success relies, in part, on processes for calling to mind the right candidate actions. When these processes fail, the result is a kind of puzzle in which the value of a solution would be obvious once considered, but it never gets considered in the first place. Recently, machine learning (ML) architectures have attained or even exceeded human performance on open-ended decision-making tasks such as playing chess and Go. We ask whether the broad architectural principles underlying ML success in these domains generate consideration failures similar to those observed in humans. We demonstrate a case in which they do, illuminating how humans make open-ended decisions, how this relates to ML approaches to similar problems, and how both architectures lead to characteristic patterns of success and failure.