AIMC Topic: Models, Neurological

Showing 71 to 80 of 1162 articles

Schizophrenia recognition based on three-dimensional adaptive graph convolutional neural network.

Scientific reports
Previous deep learning-based brain network research has made significant progress in understanding the pathophysiology of schizophrenia. However, it ignores the three-dimensional spatial characteristics of EEG signals and cannot dynamically learn the...

Hybrid neural networks for continual learning inspired by corticohippocampal circuits.

Nature communications
Current artificial systems suffer from catastrophic forgetting during continual learning, a limitation absent in biological systems. Biological mechanisms leverage the dual representation of specific and generalized memories within corticohippocampal...

Temporal Pavlovian conditioning of a model spiking neural network for discrimination of sequences of short time intervals.

Journal of computational neuroscience
The brain's ability to learn and distinguish rapid sequences of events is essential for timing-dependent tasks, such as those in sports and music. However, the mechanisms underlying this ability remain an active area of research. Here, we present a P...

The calcitron: A simple neuron model that implements many learning rules via the calcium control hypothesis.

PLoS computational biology
Theoretical neuroscientists and machine learning researchers have proposed a variety of learning rules to enable artificial neural networks to effectively perform both supervised and unsupervised learning tasks. It is not always clear, however, how t...

Recurrent neural networks with transient trajectory explain working memory encoding mechanisms.

Communications biology
Whether working memory (WM) is encoded by persistent activity using attractors or by dynamic activity using transient trajectories has been debated for decades in both experimental and modeling studies, and a consensus has not been reached. Even thou...

Low-power artificial neuron networks with enhanced synaptic functionality using dual transistor and dual memristor.

PloS one
Artificial neurons with bio-inspired firing patterns have the potential to significantly improve the performance of neural network computing. The most significant drawback of an artificial neuron circuit is its large energy consumption. Rece...

Self-supervised learning of scale-invariant neural representations of space and time.

Journal of computational neuroscience
Hippocampal representations of space and time seem to share a common coding scheme characterized by neurons with bell-shaped tuning curves called place and time cells. The properties of the tuning curves are consistent with Weber's law, such that, in...

Investigating the intrinsic top-down dynamics of deep generative models.

Scientific reports
Hierarchical generative models can produce data samples based on the statistical structure of their training distribution. This capability can be linked to current theories in computational neuroscience, which propose that spontaneous brain activity ...

Incremental accumulation of linguistic context in artificial and biological neural networks.

Nature communications
Large Language Models (LLMs) have shown success in predicting neural signals associated with narrative processing, but their approach to integrating context over large timescales differs fundamentally from that of the human brain. In this study, we s...

Towards parameter-free attentional spiking neural networks.

Neural networks: the official journal of the International Neural Network Society
Brain-inspired spiking neural networks (SNNs) are increasingly explored for their potential in spatiotemporal information modeling and energy efficiency on emerging neuromorphic hardware. Recent works incorporate attentional modules into SNNs, greatl...