2023 | Book

Information Retrieval

28th China Conference, CCIR 2022, Chongqing, China, September 16–18, 2022, Revised Selected Papers

About this book

This book constitutes the refereed proceedings of the 28th China Conference on Information Retrieval, CCIR 2022, held in Chongqing, China, in September 2022. Information retrieval aims to meet people's demand to obtain information on the Internet quickly and accurately.
The 8 full papers presented were carefully reviewed and selected from numerous submissions. The papers cover a wide range of research results in the information retrieval area.

Table of Contents

Frontmatter
A Position-Aware Word-Level and Clause-Level Attention Network for Emotion Cause Recognition
Abstract
Emotion cause recognition is a vital task in natural language processing (NLP), which aims to identify the cause of an emotion expressed in text. Both industry and academia have recognized the importance of the relationship between the emotion word and its context. However, most existing methods ignore the fact that position information is also crucial for detecting the emotion cause. When an emotion word occurs in a clause, its neighboring words and clauses should receive more attention than those at a longer distance. In this paper, we propose a novel framework, the Position-aware Word-level and Clause-level Attention (PWCA) Network, based on bidirectional GRUs. PWCA not only incorporates the position information of the emotion word, but also models the relation between the emotion clause and each candidate clause by leveraging word-level and clause-level attention mechanisms. The experimental results show that our model clearly outperforms other state-of-the-art methods. Through visualization of the attention over words, we validate the observation mentioned above.
Yufeng Diao, Liang Yang, Xiaochao Fan, Hongfei Lin
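
To make the position-aware attention idea concrete, here is a minimal sketch in PyTorch. It is our illustration, not the authors' implementation: the relevance score of each word toward the emotion word is penalized by a distance term, so nearby words receive more attention. The decay rate and shapes are assumptions.

import torch
import torch.nn.functional as F

def position_aware_attention(hidden, emotion_idx, decay=0.1):
    # hidden: (seq_len, dim) word states from a bidirectional GRU
    seq_len, dim = hidden.shape
    scores = hidden @ hidden[emotion_idx] / dim ** 0.5     # relevance to the emotion word
    distance = (torch.arange(seq_len).float() - emotion_idx).abs()
    weights = F.softmax(scores - decay * distance, dim=0)  # nearer words get more attention
    return weights @ hidden                                # attended clause representation

clause_vec = position_aware_attention(torch.randn(12, 64), emotion_idx=5)
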
ID-Agnostic User Behavior Pre-training for Sequential Recommendation
Abstract
Recently, sequential recommendation has emerged as a widely studied topic. Existing studies mainly design effective neural architectures to model user behavior sequences based on item IDs. However, this kind of approach relies heavily on user-item interaction data and neglects the attribute- or characteristic-level correlations among similar items preferred by a user. In light of these issues, we propose IDA-SR, an ID-Agnostic user behavior pre-training approach for Sequential Recommendation. Instead of explicitly learning representations for item IDs, IDA-SR learns item representations directly from rich text information. To bridge the gap between text semantics and sequential user behaviors, we utilize a pre-trained language model as the text encoder and build a pre-training architecture over the sequential user behaviors. In this way, item text can be used directly for sequential recommendation without relying on item IDs. Extensive experiments show that the proposed approach achieves comparable results when using only ID-agnostic item representations, and outperforms baselines by a large margin when fine-tuned with ID information.
Shanlei Mu, Yupeng Hou, Wayne Xin Zhao, Yaliang Li, Bolin Ding
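
The following sketch illustrates the ID-agnostic idea under stated assumptions (the text encoder is a toy stand-in for a pre-trained language model, and all shapes are placeholders, not the paper's architecture): item vectors come from item text rather than an ID embedding table, and a Transformer models the behavior sequence over those text-derived vectors.

import torch
import torch.nn as nn

class IDAgnosticSeqRec(nn.Module):
    def __init__(self, vocab_size=30000, dim=64):
        super().__init__()
        self.text_encoder = nn.EmbeddingBag(vocab_size, dim)  # stand-in for a PTM like BERT
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.seq_encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, item_texts):
        # item_texts: (n_items, n_tokens) token ids of each item's text
        item_vecs = self.text_encoder(item_texts)       # (n_items, dim), no item-ID table
        seq = self.seq_encoder(item_vecs.unsqueeze(0))  # (1, n_items, dim)
        return seq[0, -1]                               # user state after the last item

model = IDAgnosticSeqRec()
user_state = model(torch.randint(0, 30000, (5, 8)))     # 5 interacted items, 8 tokens each
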
Enhance Performance of Ad-hoc Search via Prompt Learning
Abstract
Recently, pre-trained language models (PTMs) have achieved great success on ad hoc search. However, their performance decline in low-resource scenarios demonstrates that the capability of PTMs has not been fully exploited. As a novel paradigm for applying PTMs to downstream tasks, prompt learning is a feasible scheme for boosting a PTM's performance by aligning the pre-training task with the downstream task. This paper investigates the effectiveness of the standard prompt learning paradigm on the ad hoc search task. Based on various PTMs, two types of prompts are tailored to the ad hoc search task. Overall experimental results on the MS MARCO dataset show that our prompt learning method reliably outperforms fine-tuning based methods and a previous prompt learning based model. Experiments conducted in various resource scenarios show the stability of prompt learning: with prompt learning, RoBERTa and T5 outperform BM25 using only 100 training queries, while fine-tuning based methods need more data. Further analysis shows the importance of a uniform task format and of adding continuous tokens to training in our prompt learning method.
Shenghao Yang, Yiqun Liu, Xiaohui Xie, Min Zhang, Shaoping Ma
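
As a hedged illustration of cloze-style prompt learning for relevance ranking (the template wording and label words below are our guesses, not the paper's prompts): the query-document pair is wrapped in a natural-language template, and the PTM's preference for "yes" versus "no" at the mask slot serves as the relevance score.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base").eval()

def relevance_score(query, doc):
    # Wrap the pair in a cloze template and read the PTM's prediction at the mask
    prompt = f"Query: {query} Document: {doc} Is the document relevant? {tok.mask_token}."
    inputs = tok(prompt, return_tensors="pt", truncation=True)
    mask_pos = (inputs.input_ids[0] == tok.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_pos]
    yes_id = tok(" yes", add_special_tokens=False).input_ids[0]
    no_id = tok(" no", add_special_tokens=False).input_ids[0]
    return (logits[yes_id] - logits[no_id]).item()  # higher => more relevant

print(relevance_score("what is bm25", "BM25 is a ranking function used by search engines."))
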
Syntax-Aware Transformer for Sentence Classification
Abstract
Sentence classification is a significant task in natural language processing (NLP) and is applied in many fields. The syntactic and semantic properties of words and phrases often determine the success of sentence classification. Previous approaches based on sequential modeling have largely ignored the explicit syntactic structure of a sentence. In this paper, we propose a Syntax-Aware Transformer (SA-Trans), which integrates syntactic information into the Transformer and obtains sentence embeddings by combining syntactic and semantic information. We evaluate SA-Trans on four benchmark classification datasets (i.e., AG's News, DBpedia, ARP, ARF), and the experimental results show that SA-Trans achieves competitive performance compared to the baseline models. Finally, a case study further demonstrates the importance of syntactic information for the classification task.
Jiajun Shan, Zhiqiang Zhang, Yuwei Zeng, Yuyan Ying, Haiyan Wu, Haiyu Song, Yanhong Chen, Shengchun Deng
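
One common way to inject syntax into self-attention, sketched below as an assumption on our part (the paper's exact integration may differ): the attention logits receive a bias derived from dependency-tree distances, so syntactically close words attend to each other more strongly.

import torch
import torch.nn.functional as F

def syntax_biased_attention(hidden, tree_dist, scale=1.0):
    # hidden: (n, d) word states; tree_dist: (n, n) hop counts in the dependency tree
    d = hidden.size(-1)
    logits = hidden @ hidden.t() / d ** 0.5
    logits = logits - scale * tree_dist       # penalize syntactically distant pairs
    return F.softmax(logits, dim=-1) @ hidden

n, d = 6, 32
dist = torch.randint(0, 4, (n, n)).float()
dist = (dist + dist.t()) / 2                  # toy symmetric tree distances
out = syntax_biased_attention(torch.randn(n, d), dist)
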
Evaluation of Deep Reinforcement Learning Based Stock Trading
Abstract
Stocks are among the most important investment targets. However, it is challenging to manually design a profitable strategy in the highly dynamic and complex stock market. Modern portfolio management usually employs quantitative trading, which utilizes computers to support decision-making or perform automated trading. Deep reinforcement learning (deep RL) is an emerging machine learning technology that can solve multi-step optimal control problems. In this article, we propose a method to model the multi-stock trading process according to reinforcement learning theory and implement trading agents based on two popular actor-critic algorithms: A2C and PPO. We train and evaluate the agents multiple times on two datasets drawn from the Chinese stock market over 2010–2021. The experimental results show that the agents achieve annual return rates that outstrip the baseline by 8.8% and 16.8% on average on the two datasets, respectively. Asset curves and asset distribution charts are plotted to show that the learned policies are reasonable. We also employ a track training strategy, which further enhances the agents' performance by about 7.7% with little extra training time.
Yining Zhang, Zherui Zhang, Hongfei Yan
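
A minimal sketch of how multi-stock trading can be framed as an RL environment (the state layout, action space, and reward below are illustrative assumptions, not the paper's exact formulation): the state holds cash, holdings, and prices; the action is a per-stock trade quantity; the reward is the change in total asset value, which A2C or PPO can then optimize.

import numpy as np

class MultiStockEnv:
    def __init__(self, prices, cash=1e6):
        self.prices = prices  # (n_days, n_stocks) closing prices
        self.cash0 = cash

    def reset(self):
        self.t, self.cash = 0, self.cash0
        self.holdings = np.zeros(self.prices.shape[1])
        return self._state()

    def _state(self):
        return np.concatenate(([self.cash], self.holdings, self.prices[self.t]))

    def _assets(self):
        return self.cash + self.holdings @ self.prices[self.t]

    def step(self, action):  # action: shares to trade per stock (+buy / -sell)
        before = self._assets()
        action = np.clip(action, -self.holdings, None)  # no short selling
        cost = action @ self.prices[self.t]
        if cost <= self.cash:                           # skip trades we cannot afford
            self.cash -= cost
            self.holdings += action
        self.t += 1
        reward = self._assets() - before                # change in total asset value
        done = self.t == len(self.prices) - 1
        return self._state(), reward, done

env = MultiStockEnv(np.abs(np.random.randn(250, 3)) * 100 + 10)
obs = env.reset()
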
InDNI: An Infection Time Independent Method for Diffusion Network Inference
Abstract
Diffusion network inference aims to reveal the message propagation process among users and has attracted much research interest due to the fundamental role it plays in real applications such as rumor-spread forecasting and epidemic control. Most existing methods tackle the task with exact node infection times. However, collecting infection time information is time-consuming and labor-intensive, especially when information flows are large and complex. To address this problem, we propose a new diffusion network inference algorithm that relies only on infection states. The proposed method first encodes several observed states into a node infection matrix and then obtains node embeddings via a variational autoencoder (VAE). Node pairs whose embeddings have the smallest Wasserstein distance are predicted to be connected by propagation edges. Meanwhile, to reduce complexity, a novel clustering-based filtering strategy is designed to select latent propagation edges. Extensive experiments show that the proposed model outperforms state-of-the-art infection-time-independent models while demonstrating comparable performance to infection-time-based models.
Guoxin Chen, Yongqing Wang, Jiangli Shao, Boshen Shi, Huawei Shen, Xueqi Cheng
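
The edge-scoring step can be sketched as follows, based on our reading of the abstract (the closed-form distance below assumes diagonal-Gaussian VAE posteriors; the selection rule is simplified and omits the clustering-based filter): node pairs whose posteriors have the smallest 2-Wasserstein distance are predicted as propagation edges.

import numpy as np

def w2_gaussian(mu1, sigma1, mu2, sigma2):
    # Closed-form squared 2-Wasserstein distance between diagonal Gaussians
    return np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2)

def predict_edges(mu, sigma, k):
    # mu, sigma: (n_nodes, dim) VAE posterior parameters per node
    n = len(mu)
    scored = [(w2_gaussian(mu[i], sigma[i], mu[j], sigma[j]), i, j)
              for i in range(n) for j in range(i + 1, n)]
    return [(i, j) for _, i, j in sorted(scored)[:k]]  # k closest pairs

mu, sigma = np.random.randn(8, 16), np.abs(np.random.randn(8, 16))
edges = predict_edges(mu, sigma, k=5)
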
Beyond Precision: A Study on Recall of Initial Retrieval with Neural Representations
Abstract
Vocabulary mismatch is a central problem in information retrieval (IR): relevant documents may not contain the same (symbolic) terms as the query. Recently, neural representations have shown great success in capturing semantic relatedness, opening new possibilities for alleviating the vocabulary mismatch problem in IR. However, most existing efforts in this direction have been devoted to the re-ranking stage, that is, leveraging neural representations to re-rank a set of candidate documents, which are typically obtained from an initial retrieval stage based on some symbolic index and search scheme (e.g., BM25 over an inverted index). This naturally raises a question: if relevant documents are not found in the initial retrieval stage due to vocabulary mismatch, there is no chance to re-rank them to the top positions later. Therefore, in this paper, we study how to employ neural representations to improve the recall of relevant documents in the initial retrieval stage. Specifically, to meet the efficiency requirements of the initial stage, we introduce a neural index for the neural representations of documents and propose two hybrid search schemes based on both neural and symbolic indices: a parallel search scheme and a sequential search scheme. Our experiments show that both hybrid search schemes improve the recall of the initial retrieval stage with small overhead.
Yan Xiao, Yixing Fan, Ruqing Zhang, Jiafeng Guo
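
A simplified sketch of the parallel hybrid scheme described above (both scoring functions are toy stand-ins for BM25 over an inverted index and ANN search over a neural index): symbolic and neural retrieval run independently, and the union of their candidate sets feeds the re-ranking stage, so documents missed by exact term matching can still be recalled semantically.

import numpy as np

def symbolic_search(query_terms, docs, k):
    # Toy stand-in for BM25 over an inverted index: count matching terms
    scores = [sum(t in d["terms"] for t in query_terms) for d in docs]
    return set(np.argsort(scores)[::-1][:k])

def neural_search(query_vec, docs, k):
    # Toy stand-in for ANN search over a neural index: cosine similarity
    mat = np.stack([d["vec"] for d in docs])
    sims = mat @ query_vec / (np.linalg.norm(mat, axis=1) * np.linalg.norm(query_vec))
    return set(np.argsort(sims)[::-1][:k])

def parallel_hybrid(query_terms, query_vec, docs, k):
    # Merge both candidate sets before re-ranking
    return symbolic_search(query_terms, docs, k) | neural_search(query_vec, docs, k)

docs = [{"terms": {"neural", "index"}, "vec": np.random.randn(32)} for _ in range(100)]
candidates = parallel_hybrid({"neural", "recall"}, np.random.randn(32), docs, k=10)
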
A Learnable Graph Convolutional Neural Network Model for Relation Extraction
Abstract
Relation extraction is the task of extracting the semantic relationship between two named entities in a sentence. The task relies on semantic dependencies relevant to the named entities. Recently, graph convolutional neural networks have shown great potential in supporting this task, wherein dependency trees are usually adopted to learn semantic dependencies between entities. However, these approaches require external toolkits to parse sentences, and such parsers are error-prone. Furthermore, entity relations and parsing structures vary in their semantic expressions, so manually designed rules are required to prune the structure of the dependency trees. This study proposes a novel learnable graph convolutional neural network model (L-GCN) that directly encodes every word of a sentence as a node of a graph neural network. L-GCN then uses a learnable adjacency matrix to encode dependencies between nodes. The model offers the advantage of automatically learning high-order abstract representations of the semantic dependencies between words. Moreover, a fusion module is designed to aggregate the global and local semantic structure information of sentences. The proposed L-GCN is evaluated on the ACE 2005 English dataset and the Chinese Literature Text Corpus. The experimental results confirm the effectiveness of L-GCN in learning the semantic dependencies of a relation instance, and it clearly outperforms previous dependency-tree-based models.
Jinling Xu, Yanping Chen, Yongbin Qin, Ruizhang Huang
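
A minimal sketch of a graph convolution layer with a learnable adjacency matrix, in the spirit of the abstract (layer sizes, the softmax normalization, and the single-layer form are our assumptions): instead of a parsed dependency tree, pairwise word dependencies are parameters learned end-to-end.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableAdjGCNLayer(nn.Module):
    def __init__(self, max_len, dim):
        super().__init__()
        self.adj_logits = nn.Parameter(torch.zeros(max_len, max_len))  # learned dependencies
        self.linear = nn.Linear(dim, dim)

    def forward(self, hidden):
        # hidden: (n, dim) word states for a sentence of n <= max_len words
        n = hidden.size(0)
        adj = torch.softmax(self.adj_logits[:n, :n], dim=-1)  # soft adjacency, no parser
        return F.relu(self.linear(adj @ hidden))              # aggregate neighbors, transform

layer = LearnableAdjGCNLayer(max_len=128, dim=64)
out = layer(torch.randn(20, 64))
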
Backmatter
Metadata
Title
Information Retrieval
Editors
Yi Chang
Xiaofei Zhu
Copyright Year
2023
Electronic ISBN
978-3-031-24755-2
Print ISBN
978-3-031-24754-5
DOI
https://doi.org/10.1007/978-3-031-24755-2