AI Medical Compendium

Explore the latest research on artificial intelligence and machine learning in medicine.

Pattern Recognition, Automated

Showing 91 to 100 of 1638 articles

TriCAFFNet: A Tri-Cross-Attention Transformer with a Multi-Feature Fusion Network for Facial Expression Recognition.

Sensors (Basel, Switzerland)
In recent years, significant progress has been made in facial expression recognition methods. However, tasks related to facial expression recognition in real environments still require further research. This paper proposes a tri-cross-attention transformer...
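
The abstract cuts off before describing the architecture, but the core ingredient named in the title, cross-attention between feature streams, can be sketched generically. The module below is a minimal PyTorch illustration assuming two token streams with a made-up dimension of 256; it is not the TriCAFFNet design itself.

```python
# Minimal cross-attention sketch (not the TriCAFFNet architecture itself):
# one feature stream queries another, as in generic cross-attention fusion.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats, context_feats):
        # query_feats, context_feats: (batch, tokens, dim) from two feature extractors
        attended, _ = self.attn(query_feats, context_feats, context_feats)
        return self.norm(query_feats + attended)  # residual connection + layer norm

# Example: fuse two hypothetical token streams of equal size
fused = CrossAttentionFusion()(torch.randn(2, 49, 256), torch.randn(2, 49, 256))
```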

Enhanced Hand Gesture Recognition with Surface Electromyogram and Machine Learning.

Sensors (Basel, Switzerland)
This study delves into decoding hand gestures using surface electromyography (EMG) signals collected via a precision Myo-armband sensor, leveraging machine learning algorithms. The research entails rigorous data preprocessing to extract features and ...
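
As a rough sketch of this kind of pipeline, the snippet below computes four standard time-domain sEMG features (mean absolute value, RMS, waveform length, zero crossings) over sliding windows and feeds them to an off-the-shelf classifier. The window length, channel count, gesture count, and random-forest choice are placeholder assumptions, not details taken from the study.

```python
# Sketch of standard time-domain sEMG features over sliding windows, followed by
# an off-the-shelf classifier; the study's exact features and models are not listed here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def emg_window_features(window):
    """window: (n_samples, n_channels) raw sEMG. Returns concatenated per-channel features."""
    mav = np.mean(np.abs(window), axis=0)                       # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))                 # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)        # waveform length
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)  # zero crossings
    return np.concatenate([mav, rms, wl, zc])

# Hypothetical data: 200-sample windows from an 8-channel armband, six gesture classes
windows = np.random.randn(500, 200, 8)
labels = np.random.randint(0, 6, size=500)
X = np.stack([emg_window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```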

CASL: Capturing Activity Semantics Through Location Information for Enhanced Activity Recognition.

IEEE/ACM Transactions on Computational Biology and Bioinformatics
Using portable tools to monitor and identify daily activities has increasingly become a focus of digital healthcare, especially for elderly care. One of the difficulties in this area is the excessive reliance on labeled activity data for corresponding...

Research into the Applications of a Multi-Scale Feature Fusion Model in the Recognition of Abnormal Human Behavior.

Sensors (Basel, Switzerland)
Because population aging in modern society is becoming increasingly severe, accurately and promptly identifying and responding to sudden abnormal behaviors of the elderly has become an urgent and important issue. In the current research on computer...

Effective Emotion Recognition by Learning Discriminative Graph Topologies in EEG Brain Networks.

IEEE Transactions on Neural Networks and Learning Systems
Multichannel electroencephalogram (EEG) is an array signal that represents brain neural networks and can be applied to characterize information propagation patterns for different emotional states. To reveal these inherent spatial graph features and...
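
A minimal sketch of the general idea, learning a channel-level adjacency jointly with a graph convolution for emotion classification, is given below in PyTorch; the channel count, feature dimension, and single-layer design are arbitrary assumptions rather than the paper's model.

```python
# Sketch: a trainable adjacency over EEG channels used inside a simple graph convolution.
import torch
import torch.nn as nn

class LearnedGraphConv(nn.Module):
    def __init__(self, n_channels=62, in_dim=5, hidden=32, n_classes=3):
        super().__init__()
        # Trainable adjacency logits over EEG channel pairs (the learned graph topology)
        self.adj_logits = nn.Parameter(torch.randn(n_channels, n_channels) * 0.01)
        self.proj = nn.Linear(in_dim, hidden)
        self.cls = nn.Linear(n_channels * hidden, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, in_dim), e.g., band powers per channel
        adj = torch.softmax(self.adj_logits, dim=-1)   # row-normalized adjacency
        h = torch.relu(adj @ self.proj(x))             # propagate features over learned edges
        return self.cls(h.flatten(1))

logits = LearnedGraphConv()(torch.randn(8, 62, 5))     # hypothetical 62-channel setup
```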

Enhancing gait recognition by multimodal fusion of mobilenetv1 and xception features via PCA for OaA-SVM classification.

Scientific Reports
Gait recognition has become an increasingly promising area of research in the search for noninvasive and effective methods of person identification. Its potential applications in security systems and medical diagnosis make it an exciting field with...
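
The title itself spells out the pipeline: concatenate deep features from the two backbones, reduce them with PCA, and classify with a one-against-all SVM. The sketch below assumes the backbone features have already been extracted (random placeholder arrays stand in for them) and uses arbitrary dimensionalities and scikit-learn defaults otherwise.

```python
# Sketch of the fusion pipeline named in the title: concatenate MobileNetV1 and Xception
# features, reduce with PCA, classify with a one-against-all SVM. Arrays are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

mobilenet_feats = np.random.randn(300, 1024)   # placeholder MobileNetV1 features
xception_feats = np.random.randn(300, 2048)    # placeholder Xception features
subject_ids = np.random.randint(0, 20, 300)    # placeholder identity labels

X = np.hstack([mobilenet_feats, xception_feats])
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=128),                     # dimensionality chosen arbitrarily here
    OneVsRestClassifier(SVC(kernel="linear")), # one-against-all SVM
)
model.fit(X, subject_ids)
```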

Integration of Tracking, Re-Identification, and Gesture Recognition for Facilitating Human-Robot Interaction.

Sensors (Basel, Switzerland)
Successful human-robot collaboration depends on establishing and sustaining high-quality interaction between humans and robots, making it essential to facilitate human-robot interaction (HRI) effectively. The evolution of robot intelligence now enables...

FMCW Radar Human Action Recognition Based on Asymmetric Convolutional Residual Blocks.

Sensors (Basel, Switzerland)
Human action recognition based on optical and infrared video data is greatly affected by the environment, and feature extraction in traditional machine learning classification methods is complex; therefore, this paper proposes a method for human action...
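
A minimal PyTorch sketch of the building block named in the title, a residual block made of asymmetric (1x3 and 3x1) convolutions, is shown below. The channel count and the micro-Doppler-map input shape are illustrative assumptions, and the paper's exact block layout may differ.

```python
# Sketch of a residual block built from asymmetric (1x3 / 3x1) convolutions.
import torch
import torch.nn as nn

class AsymResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1)),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0)),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.branch(x))  # residual connection

# Hypothetical radar map input: (batch, channels, Doppler bins, time frames)
out = AsymResBlock(16)(torch.randn(4, 16, 64, 128))
```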

Coupled Multimodal Emotional Feature Analysis Based on Broad-Deep Fusion Networks in Human-Robot Interaction.

IEEE Transactions on Neural Networks and Learning Systems
A coupled multimodal emotional feature analysis (CMEFA) method based on broad-deep fusion networks, which divide multimodal emotion recognition into two layers, is proposed. First, facial emotional features and gesture emotional features are extracted...

Versatile Graph Neural Networks Toward Intuitive Human Activity Understanding.

IEEE Transactions on Neural Networks and Learning Systems
Benefiting from the advanced human visual system, humans naturally classify activities and predict motions in a short time. However, most existing computer vision studies consider those two tasks separately, resulting in an insufficient understanding...
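
The abstract's point is joint rather than separate treatment of activity classification and motion prediction. The sketch below illustrates that multi-task layout with a shared encoder and two heads; a plain MLP stands in for the graph neural network, and all sizes (joint count, prediction horizon, class count) are placeholder assumptions.

```python
# Sketch of a shared encoder feeding both an activity classifier and a motion-prediction
# head; a generic multi-task layout, not the paper's specific network.
import torch
import torch.nn as nn

class JointActivityModel(nn.Module):
    def __init__(self, n_joints=25, coord_dim=3, hidden=64, n_classes=10, horizon=10):
        super().__init__()
        self.encoder = nn.Sequential(            # plain MLP standing in for a skeleton GNN
            nn.Linear(n_joints * coord_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.classify = nn.Linear(hidden, n_classes)                      # activity label
        self.predict = nn.Linear(hidden, horizon * n_joints * coord_dim)  # future poses

    def forward(self, pose):
        # pose: (batch, n_joints, coord_dim) for a single frame
        h = self.encoder(pose.flatten(1))
        return self.classify(h), self.predict(h)

logits, future = JointActivityModel()(torch.randn(8, 25, 3))
```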