Abstract:
Question matching aims to determine whether two different questions express similar intents. Recently, with the development of large-scale pretrained deep neural network (DNN) language models, state-of-the-art question-matching performance has been achieved. However, due to the assumption of independent and identically distributed data, the performance of these DNN models in real-world scenarios is limited by the adequacy of the training data and the distribution drift between the target and training data. In this study, we propose a novel gradual machine learning (GML)-based approach for Chinese question matching. Beginning with a set of initially labeled instances, the approach gradually labels the target instances in order of increasing hardness via iterative factor inference on a factor graph. The proposed solution first extracts diverse semantic features from different perspectives and then constructs a factor graph that fuses the extracted features to facilitate gradual learning from easy to hard. In feature modeling, we extract and model two complementary types of features: 1) TF-IDF-based keyword features, which capture the shallow semantic similarity between two questions; and 2) DNN-based deep semantic features, which capture their latent semantic similarity. We model the keyword features as unary factors in the factor graph, where each factor defines the feature's influence on the matching status of a question pair. The DNN-based features comprise global and local features: a global feature corresponds to a question pair's matching probability as estimated by a DNN model, while a local feature corresponds to the semantic similarity between two neighboring question pairs as estimated from their vector representations in the DNN's embedding space. To facilitate gradual inference, we model the DNN-based global and local features as unary and binary factors, respectively. Finally, we implement the GML solution for question matching on top of an open-sourced GML inference engine. We validated the efficacy of the proposed approach through a comparative study on two open-source Chinese benchmark datasets, LCQMC and the BQ corpus. Extensive experiments demonstrate that, compared with pure deep learning models, the proposed solution effectively improves the accuracy of question matching, and its performance advantage generally grows as the amount of labeled training data decreases. Our experiments also demonstrate that the performance of the proposed solution is very robust w.r.t. its key algorithmic parameters, indicating its applicability in real-world scenarios. In addition, our GML solution is orthogonal to existing deep learning-based question-matching algorithms, because it can easily accommodate and leverage other deep language models.
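To make the two feature types named above concrete, the following is a minimal sketch of how a shallow TF-IDF keyword similarity and an embedding-space similarity between question pairs could be computed. It is illustrative only: the `encode` stub produces deterministic pseudo-embeddings and stands in for the paper's actual DNN encoder, and the character n-gram setting is an assumption used here to sidestep Chinese word segmentation, not the paper's feature-extraction recipe.

```python
# Illustrative sketch of the two complementary feature types:
# 1) shallow TF-IDF keyword similarity, 2) embedding-space similarity.
import hashlib
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def keyword_similarity(q1: str, q2: str) -> float:
    """Shallow similarity: cosine over TF-IDF keyword vectors.
    Character n-grams are used so the toy example needs no segmenter."""
    vec = TfidfVectorizer(analyzer="char", ngram_range=(1, 2))
    tfidf = vec.fit_transform([q1, q2])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

def encode(question: str, dim: int = 128) -> np.ndarray:
    """Stand-in for a DNN sentence encoder (e.g. a fine-tuned BERT).
    Returns a deterministic pseudo-embedding for demonstration only."""
    seed = int(hashlib.md5(question.encode("utf-8")).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def deep_local_similarity(pair_a: tuple, pair_b: tuple) -> float:
    """Local feature: similarity between two *question pairs* in the
    embedding space; such similarities act as binary factors linking
    neighboring pair instances in the factor graph."""
    emb_a = np.concatenate([encode(pair_a[0]), encode(pair_a[1])])
    emb_b = np.concatenate([encode(pair_b[0]), encode(pair_b[1])])
    return float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
```

In this framing, `keyword_similarity` would feed the unary keyword factors, while `deep_local_similarity` would feed the binary factors between neighboring pairs; the global feature (a pair's matching probability from a DNN classifier) is omitted here because it is just a model's output score.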
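The easy-to-hard labeling process described above can also be sketched in code. The loop below is a hedged approximation of gradual inference under stated assumptions: `infer_marginals` stands in for one round of factor-graph inference (the open-sourced GML engine's actual API is not reproduced here), and "easiest" is taken to mean the instance whose inferred marginal is farthest from 0.5.

```python
# Sketch of gradual labeling in order of increasing hardness.
def gradual_label(instances, initial_labels, infer_marginals, max_rounds=1000):
    labels = dict(initial_labels)  # seed evidence: easy, confidently labeled pairs
    unlabeled = [i for i in instances if i not in labels]
    for _ in range(max_rounds):
        if not unlabeled:
            break
        # One pass of factor inference over the current evidence,
        # returning {instance: P(match)} for the unlabeled instances.
        marginals = infer_marginals(instances, labels)
        # Pick the *easiest* remaining instance: the one whose marginal
        # is farthest from 0.5, i.e. the highest-confidence inference.
        easiest = max(unlabeled, key=lambda i: abs(marginals[i] - 0.5))
        labels[easiest] = marginals[easiest] >= 0.5
        unlabeled.remove(easiest)
    return labels

if __name__ == "__main__":
    # Toy run with fixed scores in place of real factor inference.
    pairs = ["p1", "p2", "p3", "p4"]
    scores = {"p1": 0.95, "p2": 0.12, "p3": 0.58, "p4": 0.49}
    print(gradual_label(pairs, {}, lambda inst, lab: scores))
    # The confident p1 and p2 are labeled before the ambiguous p3 and p4.
```

The key design point this loop illustrates is that each newly labeled instance becomes evidence for the next round of inference, so confident decisions accumulate before the genuinely hard instances are touched.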