AI Medical Compendium Topic:
Learning

Showing 561 to 570 of 1361 articles

LGLNN: Label Guided Graph Learning-Neural Network for few-shot learning.

Neural Networks: The Official Journal of the International Neural Network Society
Graph Neural Networks (GNNs) have been employed for few-shot learning (FSL) tasks. The aim of GNN-based FSL is to transform the few-shot learning problem into a graph node classification or edge labeling task, which can thus fully explore the relati...
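Since this entry frames few-shot learning as node classification on an episode graph, a minimal NumPy sketch of that general framing follows: support and query samples become graph nodes, similarity defines the edges, and support labels are propagated to the query nodes. The feature dimensions, similarity kernel, and propagation update are assumptions for illustration; LGLNN's label-guided architecture itself is not shown in the excerpt.

```python
import numpy as np

def episode_graph_fsl(support_x, support_y, query_x, n_way, steps=2):
    """Toy GNN-flavoured few-shot classifier: cast one N-way episode as a
    graph and classify query nodes by propagating support labels over edges."""
    x = np.vstack([support_x, query_x]).astype(float)
    x /= np.linalg.norm(x, axis=1, keepdims=True)            # unit-normalise node features
    adj = np.exp(x @ x.T)                                    # dense similarity edges
    adj /= adj.sum(axis=1, keepdims=True)                    # row-normalised adjacency

    labels = np.full((len(x), n_way), 1.0 / n_way)           # unknown queries start uniform
    labels[: len(support_y)] = np.eye(n_way)[support_y]      # one-hot support labels

    for _ in range(steps):                                   # message passing over the graph
        labels = adj @ labels
        labels[: len(support_y)] = np.eye(n_way)[support_y]  # clamp the known labels

    return labels[len(support_y):].argmax(axis=1)            # predicted class per query node

# 2-way, 1-shot toy episode: each query point sits near one support point
rng = np.random.default_rng(0)
support_x = rng.normal(size=(2, 4))
query_x = support_x[[0, 1, 0]] + 0.1 * rng.normal(size=(3, 4))
print(episode_graph_fsl(support_x, np.array([0, 1]), query_x, n_way=2))  # likely [0 1 0]
```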

Tweaking Deep Neural Networks.

IEEE Transactions on Pattern Analysis and Machine Intelligence
Deep neural networks are trained to achieve maximum overall accuracy through a learning process on given training data. It is therefore difficult to tweak them to improve the accuracies of specific problematic classes or classes...

Hierarchical and Self-Attended Sequence Autoencoder.

IEEE Transactions on Pattern Analysis and Machine Intelligence
It is important and challenging to infer stochastic latent semantics for natural language applications. The difficulty in stochastic sequential learning is caused by the posterior collapse in variational inference. The input sequence is disregarded i...
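Because this snippet attributes the difficulty to posterior collapse, a small worked illustration of the variational objective may help: the ELBO trades a reconstruction term against a KL term, and when the KL term is driven to zero the approximate posterior equals the prior, so the latent code carries no information about the input sequence. The Gaussian KL formula below is standard; the concrete numbers are made up for illustration.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)); numbers are illustrative only
recon_loglik = -42.0                                             # assumed reconstruction term
informative = gaussian_kl(np.array([0.8, -0.5]), np.array([-1.0, -0.7]))
collapsed = gaussian_kl(np.zeros(2), np.zeros(2))                # q(z|x) equals the prior exactly

print("KL, informative posterior:", round(float(informative), 3))  # > 0: z encodes the input
print("KL, collapsed posterior:  ", round(float(collapsed), 3))    # = 0: the decoder can ignore z
print("ELBO with collapsed posterior:", recon_loglik - collapsed)
```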

Beneficial Perturbation Network for Designing General Adaptive Artificial Intelligence Systems.

IEEE Transactions on Neural Networks and Learning Systems
The human brain is the gold standard of adaptive learning. It can not only learn and benefit from experience but also adapt to new situations. In contrast, deep neural networks learn only one sophisticated but fixed mapping from inputs to output...

Toward the Optimal Design and FPGA Implementation of Spiking Neural Networks.

IEEE Transactions on Neural Networks and Learning Systems
The performance of a biologically plausible spiking neural network (SNN) largely depends on the model parameters and neural dynamics. This article proposes a parameter optimization scheme for improving the performance of a biologically plausible SNN ...
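As context for "model parameters and neural dynamics": a leaky integrate-and-fire neuron is one common choice of biologically plausible dynamics, and its constants (membrane time constant, threshold, reset value) are the kind of quantities a parameter-optimization scheme would tune. The discretization and values below are illustrative assumptions, not the paper's model or its FPGA mapping.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau_m=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire dynamics; tau_m, v_thresh and
    v_reset are examples of the model parameters such a scheme could tune."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        v += dt / tau_m * (-(v - v_rest) + i_t)    # leaky integration of the input current
        if v >= v_thresh:                          # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset                            # reset the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

# a stronger constant drive produces a higher firing rate
print(lif_neuron(np.full(100, 1.2)).sum(), "spikes at weak drive")
print(lif_neuron(np.full(100, 2.0)).sum(), "spikes at strong drive")
```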

A Plug-in Method for Representation Factorization in Connectionist Models.

IEEE Transactions on Neural Networks and Learning Systems
In this article, we focus on decomposing latent representations in generative adversarial networks or learned feature representations in deep autoencoders into semantically controllable factors in a semisupervised manner, without modifying the origin...

Optimizing Attention for Sequence Modeling via Reinforcement Learning.

IEEE Transactions on Neural Networks and Learning Systems
Attention has been shown to be highly effective for modeling sequences, capturing the more informative parts when learning a deep representation. However, recent studies show that attention values do not always coincide with intuition in tasks such as m...
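For reference, the "attention values" in question are the softmax weights of standard scaled dot-product attention, which determine how much each position contributes to the learned representation. A minimal NumPy version is sketched below with assumed shapes and inputs; the paper's reinforcement-learning-based optimization of these weights is not shown in the excerpt.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Standard attention; the rows of `weights` are the attention values
    whose interpretability the abstract questions."""
    scores = q @ k.T / np.sqrt(k.shape[-1])                  # query/key similarity per position
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # softmax: each row sums to 1
    return weights @ v, weights

rng = np.random.default_rng(1)
seq = rng.normal(size=(5, 8))                                # 5 positions, 8-dim features
out, attn = scaled_dot_product_attention(seq[:1], seq, seq)  # self-attention for the first position
print(np.round(attn, 3))                                     # how much each position contributes
```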

Comparative Convolutional Dynamic Multi-Attention Recommendation Model.

IEEE Transactions on Neural Networks and Learning Systems
Recently, attention mechanisms have been used to help recommender systems grasp user interests more accurately, focusing on users' pivotal interests from a psychology perspective. However, most current studies based on them focus only on part of user...

Scalable Inverse Reinforcement Learning Through Multifidelity Bayesian Optimization.

IEEE Transactions on Neural Networks and Learning Systems
Data in many practical problems are acquired according to decisions or actions made by users or experts to achieve specific goals. For instance, the policies in the minds of biologists during the intervention process in genomics and metagenomics are often...

Micro Learning Support Vector Machine for Pattern Classification: A High-Speed Algorithm.

Computational Intelligence and Neuroscience
Support vector machine theory has by now developed into a very mature system. In this paper, the optimization problem of the original support vector machine is transformed into a direct calculation formula for the separating line, and the model is (...
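The excerpt contrasts a direct calculation formula with the optimization problem solved by the original support vector machine; for reference, that baseline is the regularized hinge-loss minimization, which is normally solved iteratively. The subgradient-descent sketch below illustrates only this baseline, with assumed data and hyperparameters; the paper's high-speed direct formula is not given in the excerpt.

```python
import numpy as np

def linear_svm_subgradient(x, y, lam=0.01, lr=0.1, epochs=200):
    """Baseline linear SVM: minimise (lam/2)||w||^2 + mean hinge loss by
    iterative subgradient descent on the weights w and bias b."""
    w = np.zeros(x.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (x @ w + b)
        mask = margins < 1                                    # points violating the margin
        grad_w = lam * w - (y[mask][:, None] * x[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy linearly separable data with labels in {-1, +1}
rng = np.random.default_rng(2)
x = np.vstack([rng.normal(+2.0, 1.0, (20, 2)), rng.normal(-2.0, 1.0, (20, 2))])
y = np.hstack([np.ones(20), -np.ones(20)])
w, b = linear_svm_subgradient(x, y)
print("training accuracy:", (np.sign(x @ w + b) == y).mean())
```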