10.09.2024 | Research
Incorporating Template-Based Contrastive Learning into Cognitively Inspired, Low-Resource Relation Extraction
Published in: Cognitive Computation
Abstract
In this paper, we propose a template-based contrastive learning method (TempCL). Through the use of templates, we constrain the model's attention to the semantic information contained in a relation. We then employ a contrastive learning strategy from both group-wise and instance-wise perspectives, leveraging the semantic information shared within the same relation type to achieve a more coherent semantic representation. In particular, the proposed group-wise contrastive learning minimizes the discrepancy between template and original sentences in the same label group and maximizes the difference between sentences from separate label groups under limited annotation settings. Experimental results on two public datasets show that our model TempCL
achieves state-of-the-art results for low-resource relation extraction compared to baselines, with relative error reductions ranging from 0.68% to 1.32%. Our model encourages the learned features to align with both the original and template sentences. Using two contrastive losses, we exploit the semantic information shared by original and template sentences that have the same relation type. We demonstrate that our method reduces the noise caused by unrelated tokens and constrains the model's attention to the related tokens.
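To make the two losses concrete, here is a minimal PyTorch sketch, not the authors' released code: the function names, the temperature value, and the batch layout are illustrative assumptions. The group-wise loss is written in the style of supervised contrastive learning over pooled sentence embeddings from both original and template sentences, grouped by relation label; the instance-wise loss is an InfoNCE-style alignment between each original sentence and its own template.

```python
import torch
import torch.nn.functional as F


def group_wise_loss(orig_emb, tmpl_emb, labels, temperature=0.1):
    """Supervised-contrastive-style loss over original and template embeddings.

    Pulls together all embeddings (original or template) that share a relation
    label and pushes apart embeddings from different label groups.
    orig_emb, tmpl_emb: (B, D) sentence embeddings; labels: (B,) relation ids.
    """
    z = F.normalize(torch.cat([orig_emb, tmpl_emb], dim=0), dim=-1)  # (2B, D)
    y = torch.cat([labels, labels], dim=0)                           # (2B,)
    sim = z @ z.t() / temperature                                    # (2B, 2B)

    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask

    # Log-softmax over all non-self pairs, averaged over positive pairs.
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    return -(log_prob * pos_mask).sum(dim=1).div(pos_count).mean()


def instance_wise_loss(orig_emb, tmpl_emb, temperature=0.1):
    """InfoNCE-style loss aligning each original sentence with its template."""
    z1 = F.normalize(orig_emb, dim=-1)
    z2 = F.normalize(tmpl_emb, dim=-1)
    logits = z1 @ z2.t() / temperature  # (B, B); diagonal = matching pairs
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```

In this sketch, the two losses would be combined (e.g., as a weighted sum with the relation-classification loss) during training; every original sentence always has its template counterpart as a positive in the group-wise term, so the loss is well defined even when a relation label occurs only once in a batch.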