AI Medical Compendium

Explore the latest research on artificial intelligence and machine learning in medicine.

Showing 71 to 80 of 2841 articles

StoCFL: A stochastically clustered federated learning framework for Non-IID data with dynamic client participation.

Neural networks: the official journal of the International Neural Network Society
Federated learning is a distributed learning framework that takes full advantage of private data samples kept on edge devices. In real-world federated learning systems, these data samples are often decentralized and Non-Independently Identically Dist...
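For context, a minimal sketch of vanilla federated averaging with partial client participation is shown below; it is not the StoCFL clustering algorithm itself, and the linear model, learning rate, and toy Non-IID data are illustrative assumptions.

```python
# Minimal federated-averaging sketch (not StoCFL): each client trains locally
# on its private data and only model weights are aggregated on the server.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=1):
    """One client's local training step for a linear least-squares model (illustrative)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # squared-error gradient
        w -= lr * grad
    return w

def fedavg_round(global_w, active_clients):
    """Average the locally updated weights from the clients that participated this round."""
    updates = [local_update(global_w, X, y) for X, y in active_clients]
    return np.mean(updates, axis=0)

# Hypothetical Non-IID toy data: each client sees a differently biased slice.
rng = np.random.default_rng(0)
clients = [(rng.normal(loc=i, size=(32, 4)), rng.normal(size=32)) for i in range(3)]
w = np.zeros(4)
for _ in range(10):
    # Dynamic participation: only a random subset of clients joins each round.
    active = [clients[i] for i in rng.choice(len(clients), size=2, replace=False)]
    w = fedavg_round(w, active)
```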

Low-Rank Representation with Empirical Kernel Space Embedding of Manifolds.

Neural networks: the official journal of the International Neural Network Society
Low-Rank Representation (LRR) methods integrate low-rank constraints and projection operators to model the mapping from the sample space to low-dimensional manifolds. Nonetheless, existing approaches typically apply Euclidean algorithms directly to m...
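For reference, the classical LRR objective takes the following form (a standard formulation, not necessarily the exact variant extended in this paper), where X is the data matrix used as its own dictionary, the nuclear norm enforces a low-rank coefficient matrix Z, and the l2,1 norm models sample-specific corruption E:

```latex
\min_{Z,E}\; \|Z\|_{*} + \lambda \,\|E\|_{2,1}
\quad \text{s.t.} \quad X = XZ + E
```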

Knowledge-Guided Semantically Consistent Contrastive Learning for sequential recommendation.

Neural networks: the official journal of the International Neural Network Society
Contrastive learning has gained dominance in sequential recommendation due to its ability to derive self-supervised signals for addressing data sparsity problems. However, owing to random augmentations (e.g., crop, mask, and reorder), existing metho...
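The augmentations named above are commonly implemented roughly as follows; this is a generic sketch in the spirit of contrastive sequential recommenders, not the knowledge-guided scheme proposed in the paper, and the item ids, ratios, and mask token are hypothetical.

```python
# Generic random augmentations on a user's interaction sequence: crop, mask, reorder.
import random

MASK_TOKEN = 0  # hypothetical id reserved for masked items

def crop(seq, ratio=0.6):
    """Keep a random contiguous sub-sequence."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    return seq[start:start + n]

def mask(seq, ratio=0.3):
    """Replace a random subset of items with the mask token."""
    idx = set(random.sample(range(len(seq)), int(len(seq) * ratio)))
    return [MASK_TOKEN if i in idx else item for i, item in enumerate(seq)]

def reorder(seq, ratio=0.3):
    """Shuffle a random contiguous sub-sequence."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    segment = seq[start:start + n]
    random.shuffle(segment)
    return seq[:start] + segment + seq[start + n:]

user_history = [12, 7, 33, 5, 19, 42, 8]
view_a, view_b = crop(user_history), reorder(user_history)  # two "positive" views of one user
```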

A rule- and query-guided reinforcement learning for extrapolation reasoning in temporal knowledge graphs.

Neural networks: the official journal of the International Neural Network Society
Extrapolation reasoning in temporal knowledge graphs (TKGs) aims at predicting future facts based on historical data, and finds extensive application in diverse real-world scenarios. Existing TKG reasoning methods primarily focus on capturing the fac...
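As a toy illustration of the extrapolation setting (not of the rule- and query-guided method itself), a TKG can be viewed as a set of timestamped quadruples, and a query about a future time may only be answered from strictly earlier facts; all entity and relation names below are invented.

```python
# A TKG as (subject, relation, object, time) quadruples; extrapolation queries
# condition only on facts observed before the query timestamp.
facts = [
    ("drug_A", "approved_in", "country_X", 2019),
    ("drug_A", "trialled_for", "disease_Y", 2021),
    ("drug_B", "approved_in", "country_X", 2022),
]

def observable_history(query_time, kg):
    """Extrapolation constraint: only facts strictly before the query time may be used."""
    return [fact for fact in kg if fact[3] < query_time]

# Query ("drug_A", "approved_in", ?, 2023): rank candidate objects conditioned
# only on observable_history(2023, facts).
print(observable_history(2023, facts))
```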

SQGE: Support-query prototype guidance and enhancement for few-shot relational triple extraction.

Neural networks: the official journal of the International Neural Network Society
The current few-shot relational triple extraction (FS-RTE) techniques, which rely on prototype networks, have made significant progress. Nevertheless, the scarcity of data in the support set results in both intra-class and inter-class gaps in FS-RTE....
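A minimal prototypical-network step, the backbone such methods build on, might look as follows; the embeddings are random stand-ins and the relation labels are hypothetical.

```python
# Prototypical networks in brief: a class prototype is the mean embedding of its
# support instances, and a query instance is assigned to the nearest prototype.
import numpy as np

def prototypes(support_embeddings, support_labels):
    """One prototype per relation class: the mean of that class's support embeddings."""
    classes = sorted(set(support_labels))
    protos = np.stack([
        support_embeddings[np.array(support_labels) == c].mean(axis=0) for c in classes
    ])
    return classes, protos

def classify(query_embedding, classes, protos):
    """Assign the query to the nearest prototype by Euclidean distance."""
    dists = np.linalg.norm(protos - query_embedding, axis=1)
    return classes[int(np.argmin(dists))]

rng = np.random.default_rng(1)
support = rng.normal(size=(6, 8))   # 3 relations x 2 shots, 8-dim embeddings
labels = ["born_in", "born_in", "works_for", "works_for", "located_in", "located_in"]
cls, protos = prototypes(support, labels)
print(classify(rng.normal(size=8), cls, protos))
```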

Continual learning with Bayesian compression for shared and private latent representations.

Neural networks: the official journal of the International Neural Network Society
This paper proposes a new continual learning method with Bayesian Compression for Shared and Private Latent Representations (BCSPLR), which learns a compact model structure while preserving accuracy. In Shared and Private Latent Representations (...
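A rough sketch of the shared/private split in continual learning is given below; it is not the paper's BCSPLR model, it omits the Bayesian compression that prunes redundant units, and all layer sizes and the task id are illustrative.

```python
# Shared/private continual learning in outline: one encoder is shared across
# tasks, while each task adds a small private branch and its own output head.
import torch
import torch.nn as nn

class SharedPrivateNet(nn.Module):
    def __init__(self, in_dim=784, shared_dim=128, private_dim=32, num_classes=10):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, shared_dim), nn.ReLU())
        self.private = nn.ModuleDict()   # one private branch per task, added on the fly
        self.heads = nn.ModuleDict()
        self.in_dim, self.private_dim, self.num_classes = in_dim, private_dim, num_classes
        self.shared_dim = shared_dim

    def add_task(self, task_id):
        self.private[task_id] = nn.Sequential(
            nn.Linear(self.in_dim, self.private_dim), nn.ReLU())
        self.heads[task_id] = nn.Linear(self.shared_dim + self.private_dim, self.num_classes)

    def forward(self, x, task_id):
        # Concatenate shared and task-private latent representations.
        z = torch.cat([self.shared(x), self.private[task_id](x)], dim=-1)
        return self.heads[task_id](z)

net = SharedPrivateNet()
net.add_task("task_0")
logits = net(torch.randn(4, 784), "task_0")   # shape (4, 10)
```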

Dataset-free weight-initialization on restricted Boltzmann machine.

Neural networks: the official journal of the International Neural Network Society
For feed-forward neural networks, dataset-free weight-initialization methods such as the LeCun, Xavier (or Glorot), and He initializations have been developed. These methods randomly determine the initial values of weight parameters based on specific dist...
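The three initializations named there draw weights from zero-mean distributions whose variance depends only on the layer's fan-in and fan-out; a minimal sketch of the Gaussian variants follows (layer sizes chosen arbitrarily).

```python
# LeCun, Xavier/Glorot, and He initializations for a dense layer with
# fan_in inputs and fan_out outputs (Gaussian variants).
import numpy as np

rng = np.random.default_rng(0)

def lecun_init(fan_in, fan_out):
    # LeCun: variance 1 / fan_in (classically paired with tanh/SELU-style units)
    return rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_out, fan_in))

def xavier_init(fan_in, fan_out):
    # Xavier/Glorot: variance 2 / (fan_in + fan_out), balances forward and backward signal
    return rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_out, fan_in))

def he_init(fan_in, fan_out):
    # He: variance 2 / fan_in, compensates for ReLU zeroing half the activations
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

W = he_init(256, 128)   # weights for a 256 -> 128 ReLU layer
```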

Continual learning of conjugated visual representations through higher-order motion flows.

Neural networks: the official journal of the International Neural Network Society
Learning with neural networks from a continuous stream of visual information presents several challenges due to the non-i.i.d. nature of the data. However, it also offers novel opportunities to develop representations that are consistent with the inf...

PrivCore: Multiplication-activation co-reduction for efficient private inference.

Neural networks: the official journal of the International Neural Network Society
The marriage of deep neural networks (DNNs) and secure 2-party computation (2PC) enables private inference (PI) on encrypted client-side data and server-side models with both privacy and accuracy guarantees, coming at the cost of orders of magnitud...
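To see why multiplications and activations dominate the cost, consider additive secret sharing, a standard 2PC building block (this is not the PrivCore protocol): linear operations with publicly known weights can be evaluated locally on each party's share, while multiplications between secret values and nonlinear activations require expensive interactive protocols. The ring size and toy weights below are assumptions.

```python
# Additive secret sharing over a ring: a vector is split into two random-looking
# shares, and a public linear map is applied to each share independently.
import numpy as np

Q = 2**32                      # ring modulus (assumption)
rng = np.random.default_rng(0)

def share(x):
    """Split an unsigned integer vector into two additive shares mod Q."""
    s0 = rng.integers(0, Q, size=x.shape, dtype=np.uint64)
    s1 = (x - s0) % Q
    return s0, s1

def reveal(s0, s1):
    return (s0 + s1) % Q

x = np.array([3, 141, 59], dtype=np.uint64)               # client's secret input
W = np.array([[1, 2, 0], [0, 1, 5]], dtype=np.uint64)     # toy public weights

x0, x1 = share(x)
y0, y1 = (W @ x0) % Q, (W @ x1) % Q   # each party works only on its own share
assert np.array_equal(reveal(y0, y1), (W @ x) % Q)
```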

DRTN: Dual Relation Transformer Network with feature erasure and contrastive learning for multi-label image classification.

Neural networks: the official journal of the International Neural Network Society
The objective of the multi-label image classification (MLIC) task is to simultaneously identify multiple objects present in an image. Several researchers directly flatten 2D feature maps into 1D grid feature sequences and utilize a Transformer encoder to ...
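The flatten-then-encode pattern described there can be sketched in PyTorch as follows; the backbone output, hyperparameters, and label count are illustrative.

```python
# A CNN feature map (B, C, H, W) becomes a sequence of H*W grid tokens that a
# standard Transformer encoder relates globally before a multi-label head.
import torch
import torch.nn as nn

B, C, H, W = 2, 256, 14, 14
feature_map = torch.randn(B, C, H, W)            # stand-in for a CNN backbone output

tokens = feature_map.flatten(2).transpose(1, 2)  # (B, H*W, C): one token per grid cell

encoder_layer = nn.TransformerEncoderLayer(d_model=C, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
encoded = encoder(tokens)                        # (B, H*W, C)

num_labels = 20
logits = nn.Linear(C, num_labels)(encoded.mean(dim=1))  # simple pooled multi-label head
```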