AI Medical Compendium Journal:
Neural computation

Showing 121 to 130 of 203 articles

Learning in Wilson-Cowan Model for Metapopulation.

Neural computation
The Wilson-Cowan model for metapopulation, a neural mass network model, treats different subcortical regions of the brain as connected nodes, with connections representing various types of structural, functional, or effective neuronal connectivity be...
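
For orientation, the sketch below shows the kind of node-level dynamics the classical Wilson-Cowan equations describe, with excitatory/inhibitory populations at each node coupled through a structural connectivity matrix. The parameter values, coupling scheme, and variable names are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def sigmoid(x, a=1.0, theta=4.0):
    # Logistic activation used in the classical Wilson-Cowan equations.
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def simulate_wilson_cowan_network(W, T=500, dt=0.1,
                                  tau_E=1.0, tau_I=2.0,
                                  w_EE=16.0, w_EI=12.0, w_IE=15.0, w_II=3.0,
                                  P=1.25, coupling=0.5, seed=0):
    """Euler integration of N coupled Wilson-Cowan nodes.

    W[i, j] is a (hypothetical) structural connection strength from node j to
    node i; each node carries one excitatory (E) and one inhibitory (I)
    population, and nodes interact through their excitatory activity.
    """
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    E = rng.uniform(0.0, 0.1, size=N)
    I = rng.uniform(0.0, 0.1, size=N)
    history = np.empty((T, N))
    for t in range(T):
        net_input = coupling * W @ E           # long-range excitatory drive
        dE = (-E + sigmoid(w_EE * E - w_EI * I + net_input + P)) / tau_E
        dI = (-I + sigmoid(w_IE * E - w_II * I)) / tau_I
        E, I = E + dt * dE, I + dt * dI
        history[t] = E
    return history

# Example: three nodes connected in a ring.
W = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
activity = simulate_wilson_cowan_network(W)
print(activity[-1])  # excitatory activity per node at the final time step
```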

Active Inference and Intentional Behavior.

Neural computation
Recent advances in theoretical biology suggest that key definitions of basal cognition and sentient behavior may arise as emergent properties of in vitro cell cultures and neuronal networks. Such neuronal networks reorganize activity to demonstrate s...

Toward a Free-Response Paradigm of Decision Making in Spiking Neural Networks.

Neural computation
Spiking neural networks (SNNs) have attracted significant interest in the development of brain-inspired computing systems due to their energy efficiency and similarities to biological information processing. In contrast to continuous-valued artificia...
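
To make the contrast with continuous-valued units concrete, here is a minimal leaky integrate-and-fire neuron in Python; the constants and reset rule are generic textbook choices, not the free-response decision-making model studied in the article.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau_m=20.0, v_rest=0.0,
               v_reset=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential integrates its
    input and emits a binary spike whenever it crosses threshold."""
    v = v_rest
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        v += dt / tau_m * (-(v - v_rest) + i_t)  # leaky integration
        if v >= v_thresh:                        # threshold crossing -> spike
            spikes[t] = 1.0
            v = v_reset                          # reset after the spike
    return spikes

# A constant drive produces a regular spike train; information is carried by
# spike times and counts rather than by a continuous activation value.
spike_train = lif_neuron(np.full(200, 1.5))
print(int(spike_train.sum()), "spikes in 200 time steps")
```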

Improving Recall in Sparse Associative Memories That Use Neurogenesis.

Neural computation
The creation of future low-power neuromorphic solutions requires specialist spiking neural network (SNN) algorithms that are optimized for neuromorphic settings. One such algorithmic challenge is the ability to recall learned patterns from their nois...
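
As background for the recall problem the abstract describes, a minimal Willshaw-style sparse associative memory is sketched below: sparse binary patterns are stored with a clipped Hebbian rule and recalled from a degraded cue by a k-winners-take-all readout. The pattern sizes and readout are illustrative assumptions; the article's neurogenesis mechanism is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, n_patterns = 256, 8, 20          # units, active bits per pattern, stored patterns

# Sparse binary patterns: exactly k active units out of n.
patterns = np.zeros((n_patterns, n), dtype=np.uint8)
for p in patterns:
    p[rng.choice(n, size=k, replace=False)] = 1

# Willshaw-style storage: clipped Hebbian outer product (auto-association).
W = np.clip(patterns.T @ patterns, 0, 1)

def recall(cue, k=k):
    """Sum the input arriving over active cue lines, keep the k most strongly driven units."""
    drive = W @ cue
    out = np.zeros_like(cue)
    out[np.argsort(drive)[-k:]] = 1
    return out

# Noisy cue: drop half of the active bits of a stored pattern.
target = patterns[0]
cue = target.copy()
cue[rng.choice(np.flatnonzero(cue), size=k // 2, replace=False)] = 0
print("overlap with target:", int(recall(cue) @ target), "of", k)
```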

Replay as a Basis for Backpropagation Through Time in the Brain.

Neural computation
How episodic memories are formed in the brain is a continuing puzzle for the neuroscience community. The brain areas that are critical for episodic learning (e.g., the hippocampus) are characterized by recurrent connectivity and generate frequent off...

Relating Human Error-Based Learning to Modern Deep RL Algorithms.

Neural computation
In human error-based learning, the size and direction of a scalar error (i.e., the "directed error") are used to update future actions. Modern deep reinforcement learning (RL) methods perform a similar operation but in terms of scalar rewards. Despit...

Sparse-Coding Variational Autoencoders.

Neural computation
The sparse coding model posits that the visual system has evolved to efficiently code natural stimuli using a sparse set of features from an overcomplete dictionary. The original sparse coding model suffered from two key limitations, however: (1) com...
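
For context, the classical sparse coding problem the abstract refers to combines a reconstruction term with an L1 penalty over an overcomplete dictionary; the sketch below infers the codes with ISTA. The dictionary, penalty weight, and solver are illustrative choices, not the variational autoencoder formulation the article develops.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the L1 penalty.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_codes(x, Phi, lam=0.1, n_steps=200):
    """ISTA inference of sparse codes a minimizing
    0.5 * ||x - Phi a||^2 + lam * ||a||_1 for a fixed, overcomplete dictionary Phi."""
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the smooth term
    a = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        grad = Phi.T @ (Phi @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))       # overcomplete: 256 features for a 64-dim input
Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm dictionary elements
x = Phi[:, rng.choice(256, 5)] @ rng.standard_normal(5)   # signal built from 5 features
a = ista_codes(x, Phi)
print("nonzero coefficients:", int(np.count_nonzero(a)), "of", a.size)
```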

Trainable Reference Spikes Improve Temporal Information Processing of SNNs With Supervised Learning.

Neural computation
Spiking neural networks (SNNs) are next-generation neural networks composed of biologically plausible neurons that communicate through trains of spikes. By modifying their plastic parameters, including weights and time delays, SNNs can be t...
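
To illustrate the plastic parameters the abstract mentions, the sketch below shows a single postsynaptic neuron whose synapses each carry a weight and a discrete transmission delay; both are the kind of quantities an SNN training procedure could adjust. The function and its arguments are hypothetical, and the article's reference-spike mechanism is not shown.

```python
import numpy as np

def delayed_weighted_input(spike_trains, weights, delays):
    """Forward pass of one postsynaptic neuron whose plastic parameters are a
    weight and a (discrete) transmission delay per synapse.

    spike_trains: (n_syn, T) binary array of presynaptic spikes
    weights:      (n_syn,)   synaptic efficacies
    delays:       (n_syn,)   integer delays in time steps
    """
    n_syn, T = spike_trains.shape
    drive = np.zeros(T)
    for s in range(n_syn):
        d = int(delays[s])
        # Spikes arrive d steps later; both weights[s] and delays[s] would be
        # the parameters adjusted during training.
        drive[d:] += weights[s] * spike_trains[s, : T - d]
    return drive

spikes = np.zeros((2, 20)); spikes[0, 3] = 1; spikes[1, 3] = 1
print(delayed_weighted_input(spikes, weights=np.array([0.5, 0.8]),
                             delays=np.array([0, 4])))
```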

Inference on the Macroscopic Dynamics of Spiking Neurons.

Neural computation
The process of inference on networks of spiking neurons is essential to decipher the underlying mechanisms of brain computation and function. In this study, we conduct inference on parameters and dynamics of a mean-field approximation, simplifying th...

Human Eyes-Inspired Recurrent Neural Networks Are More Robust Against Adversarial Noises.

Neural computation
Humans actively observe their visual surroundings by focusing on salient objects and ignoring trivial details. However, computer vision models based on convolutional neural networks (CNNs) often analyze visual input all at once through a single feedforw...