Improved two-view interactional fuzzy learning based on mutual-rectification and knowledge-mergence.
Journal:
Neural Networks: the official journal of the International Neural Network Society
Published Date:
Sep 1, 2025
Abstract
Nasopharyngeal carcinoma (NPC) is a malignant tumor that originates at the back of the nasal cavity, in the region extending from above the soft palate to the upper larynx. Because the nasopharynx is deeply hidden, a single imaging modality often cannot clarify its complex anatomical adjacencies; in addition, its clinical manifestations vary and carry uncertainty. Although two-view fuzzy classifiers can effectively mine hidden information from the nasopharyngeal region and exhibit good classification performance, the fuzzy reasoning of existing classifiers for predicting nasopharyngeal cancer often fails because one-sided, single-view rules cannot be rectified or reused. Therefore, a novel two-view mutual-rectification and knowledge-mergence Takagi-Sugeno-Kang fuzzy classifier (TVRM-TFC) is proposed here to address the challenge of characterizing organ tissues across imaging modalities. Firstly, the Kullback-Leibler divergence (KLIC) is used to select important features from the various imaging sections (i.e., pieces of knowledge). Secondly, the interpretable zero-order Takagi-Sugeno-Kang (TSK) fuzzy classifier is used as the basic training unit to obtain both satisfactory accuracy and concise linguistic interpretability. Thirdly, from the perspectives of both the imaging modality and the organ, the decision information of the different modalities is mutually rectified, so that the complementary advantages of the views enrich the decision information and thereby increase decision accuracy. Finally, the imaging-modality and organ perspectives are merged to capture decision knowledge: the decision advantages of the different views are organically integrated to compensate for missing information and further refine the decision. The merits of the proposed classifier are demonstrated through comparative experiments on CT and MRI data.
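To make the pipeline concrete, below is a minimal, illustrative Python sketch of the three generic ingredients the abstract names: KL-divergence feature scoring, a zero-order TSK fuzzy classifier, and fusion of two view-level decisions. This is not the authors' TVRM-TFC implementation; the histogram binning, the shared Gaussian width `sigma`, the convex-combination weight `alpha`, and every function name are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: NOT the authors' TVRM-TFC implementation.
import numpy as np

def kl_feature_scores(X, y, bins=10):
    """Score each feature by the KL divergence between its class-conditional
    histograms (binary labels assumed). Higher score = more discriminative."""
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        lo, hi = X[:, j].min(), X[:, j].max()
        if hi <= lo:                       # degenerate constant feature
            hi = lo + 1.0
        edges = np.linspace(lo, hi, bins + 1)
        p, _ = np.histogram(X[y == 0, j], bins=edges)
        q, _ = np.histogram(X[y == 1, j], bins=edges)
        p = (p + 1e-6) / (p + 1e-6).sum()  # smooth to avoid log(0)
        q = (q + 1e-6) / (q + 1e-6).sum()
        scores[j] = np.sum(p * np.log(p / q))
    return scores

class ZeroOrderTSK:
    """Zero-order TSK classifier: each rule has a Gaussian antecedent and a
    constant consequent; the output is the firing-strength-weighted average
    of the consequents."""
    def __init__(self, centers, sigma, consequents):
        self.centers = centers              # (n_rules, n_features) antecedent centers
        self.sigma = sigma                  # shared Gaussian width (assumption)
        self.consequents = consequents      # (n_rules,) constant consequents

    def decision(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        firing = np.exp(-d2 / (2.0 * self.sigma ** 2))    # rule firing strengths
        w = firing / (firing.sum(1, keepdims=True) + 1e-12)
        return w @ self.consequents                       # soft decision score

def fuse_two_views(score_ct, score_mri, alpha=0.5):
    """Toy stand-in for mutual rectification / knowledge mergence: a convex
    combination of the two view-level decision scores (alpha is assumed)."""
    return alpha * score_ct + (1.0 - alpha) * score_mri

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_ct, X_mri = rng.normal(size=(50, 8)), rng.normal(size=(50, 6))
    y = rng.integers(0, 2, size=50)
    keep = np.argsort(kl_feature_scores(X_ct, y))[-4:]    # top-4 CT features
    tsk_ct = ZeroOrderTSK(X_ct[:5, keep], 1.0, rng.normal(size=5))
    tsk_mri = ZeroOrderTSK(X_mri[:5], 1.0, rng.normal(size=5))
    fused = fuse_two_views(tsk_ct.decision(X_ct[:, keep]), tsk_mri.decision(X_mri))
    print((fused > 0).astype(int)[:10])                   # toy fused predictions
```

In the actual method, the mutual-rectification and knowledge-mergence steps are learned rather than a fixed convex combination; the sketch only mirrors the overall data flow described in the abstract: per-view feature selection, per-view zero-order TSK decisions, then fusion of the two views.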