10-09-2024 | Research
Incorporating Template-Based Contrastive Learning into Cognitively Inspired, Low-Resource Relation Extraction
Published in: Cognitive Computation
Abstract
In this paper, we propose a template-based contrastive learning method for cognitively inspired, low-resource relation extraction (TempCL). Through the use of templates, we limit the model's attention to the semantic information contained in a relation. We then employ a contrastive learning strategy from both group-wise and instance-wise perspectives, leveraging the semantic information shared within the same relation type to achieve a more coherent semantic representation. In particular, the proposed group-wise contrastive learning minimizes the discrepancy between template and original sentences in the same label group and maximizes the difference between those from separate label groups under limited-annotation settings. Experimental results on two public datasets show that our model TempCL
achieves state-of-the-art results for low-resource relation extraction compared with baselines, with relative error reductions ranging from 0.68% to 1.32%. Our model encourages the learned features to align with both the original and template sentences. Using two contrastive losses, we exploit the semantic information shared by sentences (both original and template) that have the same relation type. We demonstrate that our method reduces the noise caused by unrelated tokens and constrains the model's attention to related tokens.
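The abstract does not include code, but the group-wise objective it describes can be illustrated with a minimal sketch. The function below is an assumed InfoNCE-style supervised contrastive loss over paired original/template sentence embeddings; the function name, tensor shapes, and the temperature `tau` are hypothetical choices for illustration, not the authors' published implementation.

```python
import torch
import torch.nn.functional as F

def group_wise_contrastive_loss(orig_emb, temp_emb, labels, tau=0.1):
    """Hypothetical group-wise contrastive loss.

    Pulls original-sentence embeddings toward template embeddings that
    share the same relation label (the "label group") and pushes them
    away from templates of other labels.

    orig_emb: (N, d) encoder outputs for original sentences
    temp_emb: (N, d) encoder outputs for the corresponding templates
    labels:   (N,)   relation-type labels
    """
    orig = F.normalize(orig_emb, dim=-1)
    temp = F.normalize(temp_emb, dim=-1)
    sim = orig @ temp.T / tau                    # (N, N) scaled cosine sims

    # Positives: every template whose relation label matches the anchor's.
    pos_mask = labels.unsqueeze(0) == labels.unsqueeze(1)  # (N, N) bool

    # Log-softmax over each anchor's similarities, averaged over positives.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```

Under this reading, the instance-wise counterpart would simply restrict the positive set to each sentence's own template (an identity `pos_mask`), so the two losses differ only in how widely the shared semantic information within a relation type is exploited.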