Interpretable deep learning for deconvolutional analysis of neural signals.
Journal:
Neuron
PMID:
40081364
Abstract
The widespread adoption of deep learning to model neural activity often relies on "black-box" approaches that lack an interpretable connection between neural activity and network parameters. Here, we propose using algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We introduce our method, deconvolutional unrolled neural learning (DUNL), and demonstrate its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. We uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the heterogeneity of neural responses in the piriform cortex and across the striatum during unstructured, naturalistic experiments. Our work leverages advances in interpretable deep learning to provide a mechanistic understanding of neural activity.
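
To make the core idea concrete, below is a minimal sketch of algorithm unrolling for sparse deconvolution of a 1D signal, in the spirit of the generative model the abstract describes (a signal modeled as sparse event codes convolved with learned kernels). This is not the authors' DUNL implementation: the class name UnrolledCSC, the layer count, step size, sparsity penalty, and the toy data are illustrative assumptions; only the general unrolled-ISTA structure is standard.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class UnrolledCSC(nn.Module):
        """Unrolled ISTA for 1D convolutional sparse coding (illustrative sketch).

        Assumed generative model: y = sum_k d_k * x_k + noise, where the codes
        x_k are sparse in time and the kernels d_k are learned, so each nonzero
        code entry marks an event and its kernel gives the event's response shape.
        """

        def __init__(self, n_kernels=3, kernel_len=25, n_layers=15,
                     step=0.1, sparsity=0.1):
            super().__init__()
            # The kernels are the only learned parameters and are shared across
            # all unrolled iterations, which is what ties network weights to an
            # interpretable generative model.
            self.kernels = nn.Parameter(torch.randn(n_kernels, 1, kernel_len) * 0.1)
            self.n_layers = n_layers
            self.step = step
            self.sparsity = sparsity

        def decode(self, x):
            # Reconstruct the signal from sparse codes: y_hat = D x
            return F.conv_transpose1d(x, self.kernels)

        def encode(self, y):
            # Each "layer" of the network is one proximal-gradient (ISTA) step.
            code_len = y.shape[-1] - self.kernels.shape[-1] + 1
            x = torch.zeros(y.shape[0], self.kernels.shape[0], code_len,
                            device=y.device)
            thr = self.step * self.sparsity
            for _ in range(self.n_layers):
                residual = y - self.decode(x)
                x = x + self.step * F.conv1d(residual, self.kernels)
                x = torch.relu(x - thr)  # nonnegative soft-threshold keeps codes sparse
            return x

        def forward(self, y):
            x = self.encode(y)
            return self.decode(x), x


    # Toy usage: fit kernels to synthetic signals built from sparse events.
    if __name__ == "__main__":
        torch.manual_seed(0)
        true_kernels = torch.randn(3, 1, 25)
        true_codes = (torch.rand(8, 3, 176) < 0.02).float()
        y = F.conv_transpose1d(true_codes, true_kernels)  # stand-in for single-trial signals

        model = UnrolledCSC()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
        for _ in range(200):
            y_hat, x = model(y)
            loss = F.mse_loss(y_hat, y)  # reconstruction loss; sparsity enters via thresholding
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

After training, the rows of x indicate when each event type occurred on a given trial, and the corresponding kernel gives the shape of the single-neuron (or population) response attributed to that event, which is the sense in which the unrolled network's weights are directly interpretable.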