Citation: HE Xuejian, CHEN Anqi, GUO Zhiqiang, WANG Zhiru, CHEN Qun. Question-matching approach based on gradual machine learning[J]. Chinese Journal of Engineering. DOI: 10.13374/j.issn2095-9389.2023.11.05.002

Question-matching approach based on gradual machine learning

    Abstract: Question matching aims to determine whether the intents of two different questions are similar. Recently, with the development of large-scale pretrained language models, state-of-the-art question-matching performance has been achieved by mining the matching information latent in the semantics of question pairs. However, owing to their reliance on the independent and identically distributed (i.i.d.) assumption, the performance of these deep models in real-world scenarios remains limited by the adequacy of labeled training data and by the distribution drift between the target and training data. In this study, we propose a novel gradual machine learning (GML)-based approach for Chinese question matching. Beginning with a set of initially labeled instances, the approach gradually labels the target instances in order of increasing hardness via iterative factor inference on a factor graph. The proposed solution first extracts diverse semantic features from different perspectives and then constructs a factor graph that fuses the extracted features to facilitate gradual learning from easy to hard. In feature modeling, we extract and model two complementary types of features: 1) keyword features based on term frequency-inverse document frequency (TF-IDF), which capture the shallow semantic similarity between two questions; and 2) deep semantic features based on a deep neural network (DNN), which capture the latent semantic similarity between two questions. Keyword features are modeled as unary factors in the factor graph, which define their influence on the matching status of a question pair. The DNN-based features comprise global and local features: a global feature corresponds to a question pair's matching probability as estimated by a DNN model, whereas a local feature corresponds to the semantic similarity between two neighboring question pairs as estimated by their vector representations in the DNN's embedding space. To facilitate gradual inference, the DNN-based global and local features are modeled as unary and binary factors, respectively. Finally, we implement a GML solution for question matching on top of an open-source GML inference engine. We validated the efficacy of the proposed approach through a comparative study on two open-source Chinese benchmark datasets, LCQMC and the BQ corpus. Extensive experiments demonstrate that, compared with pure deep learning models, the proposed solution effectively improves the accuracy of question matching, and its performance advantage generally increases as the amount of labeled training data decreases. Our experiments also show that the performance of the proposed solution is robust with respect to its key algorithmic parameters, indicating its applicability in real-world scenarios. In addition, our work on GML is orthogonal to existing deep learning-based question-matching algorithms, because the proposed solution can easily accommodate and leverage other deep language models.
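To make the feature modeling described above more concrete, the following minimal Python sketch (assuming scikit-learn is available) illustrates how TF-IDF keyword similarity, a DNN-based local similarity between question pairs, and a greedy easy-to-hard labeling loop could look. The function names and the simplified confidence rule are illustrative assumptions, not the paper's released implementation, which performs full factor-graph inference on an open-source GML engine.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def tfidf_keyword_similarity(questions_a, questions_b):
        # Shallow keyword feature: TF-IDF cosine similarity of each aligned pair.
        # For Chinese text, a word segmenter would be supplied via `tokenizer`.
        vectorizer = TfidfVectorizer()
        vectorizer.fit(questions_a + questions_b)
        va = vectorizer.transform(questions_a)
        vb = vectorizer.transform(questions_b)
        return cosine_similarity(va, vb).diagonal()

    def dnn_local_similarity(pair_embeddings, i, j):
        # Local deep feature: similarity of two question *pairs* in a DNN's
        # embedding space; in the factor graph this would link the two pairs'
        # labels as a binary factor.
        return float(cosine_similarity([pair_embeddings[i]], [pair_embeddings[j]])[0, 0])

    def gradual_labeling(match_probabilities, max_steps):
        # Toy stand-in for gradual inference: the global deep feature is a DNN's
        # estimated matching probability per pair (a unary factor); here we simply
        # label the currently most confident pair first, so labeling proceeds from
        # easy to hard. The actual approach re-runs factor-graph inference after
        # each step, fusing keyword (unary), global DNN (unary), and local DNN
        # (binary) factors.
        labels, unlabeled = {}, set(range(len(match_probabilities)))
        for _ in range(min(max_steps, len(unlabeled))):
            easiest = max(unlabeled, key=lambda k: abs(match_probabilities[k] - 0.5))
            labels[easiest] = match_probabilities[easiest] >= 0.5
            unlabeled.remove(easiest)
        return labels

In the full solution, such scores would parameterize factors on a shared factor graph, and scalable inference over that graph, rather than the greedy loop above, would determine each pair's label.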

     
