Toward biologically plausible artificial vision.

Journal: Behavioral and Brain Sciences
PMID:

Abstract

Quilty-Dunn et al. argue that deep convolutional neural networks (DCNNs) optimized for image classification exemplify structural disanalogies to human vision. A different kind of artificial vision - found in reinforcement-learning agents navigating artificial three-dimensional environments - can be expected to be more human-like. Recent work suggests that language-like representations substantially improve these agents' performance, lending some indirect support to the language-of-thought hypothesis (LoTH).

Authors

  • Mason Westfall
    Department of Philosophy, Philosophy-Neuroscience-Psychology Program, Washington University in St. Louis, St. Louis, MO, USA. w.mason@wustl.edu; http://www.masonwestfall.com