Exploiting instance-label dynamics through reciprocal anchored contrastive learning for few-shot relation extraction.
Journal:
Neural Networks: the official journal of the International Neural Network Society
PMID:
40024050
Abstract
Few-shot Relation Extraction (FSRE) aims to distill relational facts from limited labeled data. The task has recently advanced through the integration of Pre-trained Language Models (PLMs) into a supervised contrastive learning scheme that leverages the interplay between instance and label information. Despite these advances, the potential of extensive instance-label pairs to yield semantically rich representations within this paradigm has yet to be fully harnessed. To bridge this gap, we introduce a Reciprocal Anchored Contrastive Learning framework (RACL) for few-shot relation extraction, built on the premise that instance-label pairs offer distinct yet inherently complementary views of textual semantics. Specifically, RACL employs a symmetric contrastive objective that combines instance-level and label-level contrastive losses, promoting a more integrated and unified representation space. This design captures the nuanced relationships between instance attributes and relational facts while encouraging information sharing across different views of the same relation. Extensive experiments on FSRE benchmark datasets demonstrate the superiority of our approach over state-of-the-art baselines, and further ablation studies in zero-shot and none-of-the-above (NOTA) settings confirm its robustness and adaptability in practical applications.
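To make the "symmetric contrastive objective" concrete, the sketch below shows one plausible form of a reciprocal instance-label loss: an instance-anchored InfoNCE term paired with a label-anchored one over in-batch instance-label pairs. This is a generic illustration of the technique the abstract names, not the paper's exact RACL objective; the function name, temperature value, and in-batch negative sampling scheme are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(instance_emb: torch.Tensor,
                               label_emb: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """Illustrative symmetric (reciprocal) instance-label contrastive loss.

    instance_emb: (N, d) PLM embeddings of N instances.
    label_emb:    (N, d) embeddings of the relation-label descriptions paired
                  with those instances. Row i of each tensor forms a positive
                  pair; all other rows serve as in-batch negatives.
    """
    # Cosine-similarity logits between every instance and every label.
    instance_emb = F.normalize(instance_emb, dim=-1)
    label_emb = F.normalize(label_emb, dim=-1)
    logits = instance_emb @ label_emb.t() / temperature  # (N, N)

    targets = torch.arange(logits.size(0), device=logits.device)
    # Instance-anchored term: each instance must retrieve its own label.
    loss_i2l = F.cross_entropy(logits, targets)
    # Label-anchored term: each label must retrieve its own instance.
    loss_l2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2l + loss_l2i)
```

Averaging the two directional terms is what makes the objective symmetric: neither the instance view nor the label view dominates, so both encoders are pulled toward a shared representation space.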