A Siamese adversarial neural network framework was constructed by building on the Siamese neural network and incorporating an adversarial training method. The framework is designed for the training of multilingual, cross-lingual patent text representation models and for retrieval applications, and it uses Contrastive Loss as the training loss function in order to fine-tune the relevant text representation models more effectively. The effectiveness of the Siamese adversarial framework and of the fine-tuned models was validated mainly through multiple comparative experiments on self-built parallel patent corpora: a low-resource-language corpus of Thai and Vietnamese, and a corpus covering seven languages (Thai, Vietnamese, German, French, Japanese, Korean, and Russian).
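As an illustration of the training objective, the following is a minimal PyTorch-style sketch of Contrastive Loss applied to a pair of sentence embeddings produced by the shared Siamese encoder. The margin value, the Euclidean distance, and the 0/1 pair labels are illustrative assumptions, and the adversarial-training component of the framework is not shown.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, label, margin=1.0):
    """Contrastive Loss for a Siamese text-representation model.

    label = 1 for parallel (matching) patent text pairs, 0 for non-parallel pairs.
    The margin of 1.0 is an assumed hyperparameter, not taken from the source.
    """
    dist = F.pairwise_distance(emb_a, emb_b)            # Euclidean distance between the two embeddings
    pos = label * dist.pow(2)                           # pull matching pairs together
    neg = (1 - label) * F.relu(margin - dist).pow(2)    # push non-matching pairs beyond the margin
    return 0.5 * (pos + neg).mean()

# Example: a batch of 4 embedding pairs from the shared encoder (dimension assumed)
emb_a = torch.randn(4, 768)
emb_b = torch.randn(4, 768)
label = torch.tensor([1., 1., 0., 0.])                  # 1 = parallel pair, 0 = non-parallel pair
loss = contrastive_loss(emb_a, emb_b, label)
```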
The evaluation metric is computed on a database that contains both the original minor-language text of each patent and its Chinese translation: each minor-language text representation is used as a query to retrieve the N most similar text representations, and the rank at which the representation of the corresponding Chinese translation appears yields the metric.
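The rank-based retrieval evaluation can be sketched as follows, assuming that embeddings are compared with cosine similarity and stored as NumPy arrays; the function name, the similarity measure, and the 1-based rank convention are assumptions made for illustration.

```python
import numpy as np

def translation_rank(query_embs, db_embs, gold_indices, top_n=10):
    """For each minor-language query embedding, retrieve the top-N nearest
    database embeddings and return the rank of the corresponding
    Chinese-translation embedding, or None if it falls outside the top N."""
    # Normalise so that the dot product equals cosine similarity
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    d = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = q @ d.T                                    # (num_queries, db_size) similarity matrix
    order = np.argsort(-sims, axis=1)[:, :top_n]      # indices of the N most similar texts per query
    ranks = []
    for row, gold in zip(order, gold_indices):
        hits = np.where(row == gold)[0]
        ranks.append(int(hits[0]) + 1 if hits.size else None)  # 1-based rank within top-N
    return ranks
```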