AIMC Topic: Learning

Showing 481 to 490 of 1397 articles

Acquisition of chess knowledge in AlphaZero.

Proceedings of the National Academy of Sciences of the United States of America
We analyze the knowledge acquired by AlphaZero, a neural network engine that learns chess solely by playing against itself yet becomes capable of outperforming human chess players. Although the system trains without access to human games or guidance,...

One-Class Convolutional Neural Networks for Water-Level Anomaly Detection.

Sensors (Basel, Switzerland)
Companies that own water storage and distribution systems always strive to enhance their service and efficiently distribute water to different places for various purposes. However, these water systems are likely to face problems ranging f...

Revisiting graph neural networks from hybrid regularized graph signal reconstruction.

Neural networks : the official journal of the International Neural Network Society
Graph neural networks (GNNs) have shown strong graph-structured data processing capabilities. However, most of them are designed around the message-passing mechanism and lack a systematic approach to guide their development. Meanwhile, a uni...
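For orientation, the "message-passing mechanism" this excerpt refers to is the aggregate-then-transform update used by most GNN layers. Below is a minimal NumPy sketch of one generic message-passing layer, given only as background and not as the paper's hybrid regularized reconstruction view; the adjacency matrix A, feature matrix X, and weight matrix W are illustrative placeholders.

```python
import numpy as np

def message_passing_layer(A, X, W):
    """One generic message-passing step: aggregate neighbour features,
    then apply a learned linear transform and a nonlinearity.

    A: (n, n) adjacency matrix, X: (n, d_in) node features,
    W: (d_in, d_out) weight matrix. All names are illustrative.
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # node degrees
    H = (A_hat @ X) / deg                    # mean-aggregate neighbour messages
    return np.maximum(H @ W, 0.0)            # linear update + ReLU

# Tiny usage example: a 3-node path graph with random features and weights.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
print(message_passing_layer(A, X, W).shape)  # (3, 2)
```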

Achieving small-batch accuracy with large-batch scalability via Hessian-aware learning rate adjustment.

Neural networks : the official journal of the International Neural Network Society
We consider synchronous data-parallel neural network training with a fixed large batch size. While the large batch size provides a high degree of parallelism, it degrades the generalization performance due to the low gradient noise scale. We propose ...
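The "gradient noise scale" mentioned in this excerpt is a measurable quantity: a common simple estimator is B ≈ tr(Σ)/‖g‖², where g is the mean gradient over a batch and Σ the per-example gradient covariance; batches that are large relative to B produce low-noise updates. The NumPy sketch below estimates this quantity from synthetic per-example gradients purely as background; it is not the paper's Hessian-aware learning-rate adjustment, which the excerpt does not describe.

```python
import numpy as np

def gradient_noise_scale(per_example_grads):
    """Estimate the 'simple' gradient noise scale B = tr(Sigma) / ||g||^2,
    where g is the mean gradient and Sigma the per-example covariance.
    A batch size well above B gives low-noise gradients, which the
    abstract links to weaker generalization. Inputs are illustrative.
    """
    g = per_example_grads.mean(axis=0)                         # mean gradient
    trace_sigma = per_example_grads.var(axis=0, ddof=1).sum()  # tr(Sigma)
    return trace_sigma / (np.dot(g, g) + 1e-12)

# Synthetic per-example gradients: 256 examples, 1000 parameters.
rng = np.random.default_rng(0)
grads = rng.normal(loc=0.05, scale=1.0, size=(256, 1000))
print(f"estimated noise scale: {gradient_noise_scale(grads):.1f}")
```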

What is the simplest model that can account for high-fidelity imitation?

The Behavioral and brain sciences
What inductive biases must be incorporated into multi-agent artificial intelligence models to get them to capture high-fidelity imitation? We think very little is needed. In the right environments, both instrumental- and ritual-stance imitation can e...

Efficient Perturbation Inference and Expandable Network for continual learning.

Neural networks : the official journal of the International Neural Network Society
Although humans are capable of learning new tasks without forgetting previous ones, most neural networks fail to do so because learning new tasks could override the knowledge acquired from previous data. In this work, we alleviate this issue by propo...
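The overwriting of previously acquired knowledge described above (catastrophic forgetting) is often illustrated with weight-anchoring regularizers. The NumPy toy below penalizes drift from the previous task's weights; it is a generic illustration only, not the paper's Efficient Perturbation Inference and Expandable Network approach, whose details are cut off in this excerpt.

```python
import numpy as np

def anchored_loss_grad(grad_new_task, w, w_prev, lam=1.0):
    """Gradient of L_new(w) + (lam / 2) * ||w - w_prev||^2.

    A generic weight-anchoring penalty (uniform-importance flavour of
    regularization-based continual learning), shown only to illustrate how
    overwriting of previously learned weights can be damped.
    """
    return grad_new_task + lam * (w - w_prev)

# Toy gradient descent: without the anchor, w drifts freely from w_prev;
# with it, updates that overwrite old knowledge are pulled back.
w_prev = np.ones(5)
w = np.ones(5)
grad_new = np.full(5, -2.0)   # pretend gradient from the new task
for _ in range(100):
    w -= 0.1 * anchored_loss_grad(grad_new, w, w_prev, lam=1.0)
print(w)  # settles near w_prev + 2.0 instead of drifting without bound
```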

Factors of Influence for Transfer Learning Across Diverse Appearance Domains and Task Types.

IEEE transactions on pattern analysis and machine intelligence
Transfer learning enables re-using knowledge learned on a source task to help learn a target task. A simple form of transfer learning is common in current state-of-the-art computer vision models, i.e., pre-training a model for image classificatio...
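As a concrete instance of the "simple form of transfer learning" the excerpt mentions (pre-train on a source task, then adapt to a target task), here is a PyTorch sketch that re-uses an ImageNet-pre-trained ResNet-18 from torchvision (assuming torchvision >= 0.13) and fine-tunes only a new classification head; the target class count and the dummy batch are placeholders, not anything from the paper.

```python
import torch
import torch.nn as nn
import torchvision

# Load an ImageNet-pre-trained backbone (assumes torchvision >= 0.13).
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pre-trained feature extractor ...
for param in model.parameters():
    param.requires_grad = False

# ... and replace the classification head for a hypothetical target task.
num_target_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Only the new head is optimized; the source-task knowledge is re-used as-is.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_target_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```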

Ada-LISTA: Learned Solvers Adaptive to Varying Models.

IEEE transactions on pattern analysis and machine intelligence
Neural networks based on the unfolding of iterative solvers, such as LISTA (Learned Iterative Soft-Thresholding Algorithm), are widely used due to their accelerated performance. These networks, trained with a fixed dictionary, are inapplicable in varying model...
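For orientation, the unfolding described here turns each iteration of a sparse-coding solver into a network layer. The NumPy sketch below runs unfolded ISTA with parameters fixed by a single dictionary D, i.e., the fixed-dictionary setting the excerpt says breaks under varying models; in LISTA these per-layer matrices and thresholds would instead be learned. The dictionary and signal sizes are illustrative, and this is not Ada-LISTA's adaptive scheme.

```python
import numpy as np

def soft_threshold(x, theta):
    """Elementwise soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(y, D, lam=0.1, n_layers=16):
    """Run n_layers unfolded ISTA steps for min_x 0.5*||y - Dx||^2 + lam*||x||_1.

    In LISTA the matrices W_e, S and the threshold theta become learned
    parameters of each layer; here they are fixed to their classical ISTA
    values for one dictionary D.
    """
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    W_e = D.T / L                           # input projection
    S = np.eye(D.shape[1]) - (D.T @ D) / L  # recurrent / mixing matrix
    theta = lam / L                         # per-layer threshold
    x = np.zeros(D.shape[1])
    for _ in range(n_layers):               # each iteration = one network layer
        x = soft_threshold(W_e @ y + S @ x, theta)
    return x

# Toy usage: estimate a sparse code from noisy measurements of a random dictionary.
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
x_true = np.zeros(50)
x_true[rng.choice(50, 3, replace=False)] = rng.normal(size=3)
y = D @ x_true + 0.01 * rng.normal(size=20)
print(np.nonzero(np.round(unfolded_ista(y, D, n_layers=200), 2))[0])
```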

A Novel Deep Neural Network Method for HAR-Based Team Training Using Body-Worn Inertial Sensors.

Sensors (Basel, Switzerland)
Human activity recognition (HAR) has become a challenging issue in recent years. In this paper, we propose a novel approach to tackle indistinguishable activity recognition based on human wearable sensors. Generally speaking, vision-based solutions strug...

An emotion analysis in learning environment based on theme-specified drawing by convolutional neural network.

Frontiers in public health
Emotion in the learning process can directly influence the learner's attention, memory, and cognitive activities. Several studies indicate that hand-drawn paintings can reflect the learner's emotional state. However, such an evaluation of emotional...