
A Multi-domain Text Classification Method Based on Recurrent Convolution Multi-task Learning

Jinbao XIE, Jiahui LI, Shouqiang KANG, Qingyan WANG, Yujing WANG

Citation: Jinbao XIE, Jiahui LI, Shouqiang KANG, Qingyan WANG, Yujing WANG. A Multi-domain Text Classification Method Based on Recurrent Convolution Multi-task Learning[J]. Journal of Electronics and Information Technology. doi: 10.11999/JEIT200869


doi: 10.11999/JEIT200869
Funds: The Collaborative Intelligent Robot Industry-Education Integration Innovative Application Platform Based on the Industrial Internet (2020CJPT004); The Natural Science Foundation of Heilongjiang Province (LH2019E058); The Open Fund of Hubei Key Laboratory of Intelligent Robot (Wuhan Institute of Technology) (HBIR202004); The Fundamental Research Funds for Universities of Heilongjiang Province (LGYC2018JC027)
Details
    Author biographies:

    Jinbao XIE: male, born in 1980, associate professor. Research interests: natural language processing, artificial intelligence

    Jiahui LI: male, born in 1995, master's student. Research interests: natural language processing, artificial intelligence

    Shouqiang KANG: male, born in 1980, professor. Research interests: intelligent diagnosis, artificial intelligence

    Qingyan WANG: male, born in 1984, associate professor. Research interests: intelligent diagnosis, artificial intelligence, intelligent image processing

    Yujing WANG: female, born in 1983, associate professor. Research interests: intelligent diagnosis, artificial intelligence

    Corresponding author: Jiahui LI, maillijiahui@163.com

  • CLC number: TP391.1


  • Abstract: In text classification, texts from different domains often share similar expressions and are correlated, a property that can be exploited to alleviate the shortage of labeled training data. Joint training via multi-task learning makes use of texts from multiple domains at once, improving both the accuracy and the training speed of the model. This paper proposes a recurrent convolutional multi-task learning model (MTL-LC) for multi-domain text classification. It models the texts of multiple tasks jointly, drawing on the respective strengths of multi-task learning, recurrent neural networks (RNN), and convolutional neural networks (CNN) to capture correlations across domains, model long-term dependencies in text, and extract local text features. In extensive experiments on a multi-domain text classification dataset, the proposed MTL-LC achieves an average classification accuracy of 90.1% across domains, 6.5% higher than its single-task counterpart (STL-LC), and 5.4%, 4%, and 2.8% higher than the popular multi-task learning models FS-MTL (fully-shared multi-task learning), ASP-MTL (adversarial multi-task learning), and IC-MTL (indirect-communication multi-task learning), respectively.
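The abstract describes a shared recurrent-plus-convolutional encoder feeding task-specific classifiers. The sketch below is a minimal, illustrative NumPy forward pass of that shared-encoder / private-head pattern, not the authors' implementation: a plain tanh RNN stands in for the LSTM, a shared 1-D convolution with max-over-time pooling extracts local features, and each domain gets its own softmax head. All names are hypothetical, and the dimensions loosely follow Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, HID, FILTERS, KERNEL = 50, 100, 100, 3  # illustrative dims, cf. Table 1

# --- shared parameters, updated by every task/domain during training ---
W_xh = rng.normal(0, 0.1, (EMB, HID))
W_hh = rng.normal(0, 0.1, (HID, HID))
conv_w = rng.normal(0, 0.1, (KERNEL, HID, FILTERS))

def shared_encoder(x):
    """x: (seq_len, EMB) word vectors -> (FILTERS,) text representation."""
    # Recurrent layer (simple tanh RNN standing in for the paper's LSTM):
    # captures long-range dependencies in the hidden-state sequence.
    h = np.zeros(HID)
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W_xh + h @ W_hh)
        states.append(h)
    H = np.stack(states)                                   # (seq_len, HID)
    # 1-D convolution over hidden states: extracts local n-gram features.
    feats = []
    for i in range(H.shape[0] - KERNEL + 1):
        window = H[i:i + KERNEL]                           # (KERNEL, HID)
        feats.append(np.maximum(np.einsum('kh,khf->f', window, conv_w), 0))
    return np.stack(feats).max(axis=0)                     # max-over-time pool

# --- private (task-specific) softmax heads, one per domain ---
tasks = ['books', 'electronics', 'DVD']
heads = {t: rng.normal(0, 0.1, (FILTERS, 2)) for t in tasks}  # 2-class sentiment

def predict(task, x):
    logits = shared_encoder(x) @ heads[task]
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = rng.normal(size=(20, EMB))  # a fake 20-token review
for t in tasks:
    print(t, predict(t, x).round(3))
```

Training such a model would minimize the summed cross-entropy over all domains, so gradients from every domain update the shared encoder; this sharing is the source of the cross-domain transfer that multi-task learning exploits.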
  • Figure 1  The MTL-LC multi-task learning model

    Figure 2  The shared LSTM layer

    Figure 3  The shared LSTM and CNN layers

    Figure 4  Per-domain classification accuracy of MTL-LC versus STL-LC

    Figure 5  Per-domain classification accuracy of MTL-LC versus MTL-LSTM

    Figure 6  Model accuracy under different numbers of domains

    Table 1  Parameter settings

    Hyperparameter            Candidate values           Selected value
    Hidden state dimension    50/100/128                 100
    Convolution kernel size   1/2/3/4/5                  1/2/3
    Number of filters         50/64/100/128/256          100
    Dropout                   0.3/0.4/0.5/0.6/0.7/0.8    0.7
    Training epochs           10/20/30/40/50             40
    Batch size                8/16/32                    16
    Learning rate             0.1/0.01/0.001/0.0005      0.0005

    Table 2  Comparison with other models (%)

    Task         LSTM   CNN    MTL-DNN  MTL-CNN  FS-MTL  ASP-MTL  IC-MTL  MTL-LC
    books        77.8   79.8   82.2     84.5     82.5    84.0     86.2    89.0
    electronics  79.8   80.3   81.7     83.2     85.7    86.8     88.5    92.8
    DVD          78.0   77.8   84.2     84.0     83.5    85.5     88.0    89.5
    kitchen      81.8   78.5   80.7     83.2     86.0    86.2     88.2    91.5
    apparel      82.8   82.0   85.0     83.7     84.5    87.0     87.5    89.8
    camera       82.5   83.0   86.2     86.0     86.5    89.2     89.0    93.0
    health       83.3   84.3   85.7     87.2     88.0    88.2     89.5    92.0
    music        77.0   77.8   84.7     83.7     81.2    82.5     85.7    88.0
    toys         82.8   80.5   87.7     89.2     84.5    88.0     89.2    91.3
    video        85.5   81.8   85.0     81.5     83.7    84.5     86.0    91.0
    baby         84.8   81.0   88.0     87.7     88.0    88.2     88.7    91.8
    magazines    90.5   86.5   89.5     87.7     92.5    92.2     92.2    93.3
    software     84.0   81.5   85.7     86.5     86.2    87.2     87.2    93.3
    sports       80.8   81.8   83.2     84.0     85.5    85.7     76.7    91.5
    IMDB         76.3   77.5   83.2     86.2     82.5    85.5     86.5    89.0
    MR           70.8   67.0   75.5     74.5     74.7    76.7     78.0    75.3
    Average      81.2   80.1   84.3     84.5     84.7    86.1     87.3    90.1

    Table 3  Accuracy and training-time comparison between MTL-LC and STL-LC

    Method                          STL-LC   MTL-LC
    Average accuracy (%)            83.6     90.1
    Average time per epoch (s)      483.4    270.3

    Table 4  Accuracy of MTL-LC with different convolution kernel sizes

    Kernel size    1     2     3     4     5     (3, 4, 5)  (2, 3, 4)  (1, 2, 3)
    Accuracy (%)   88.6  89.5  89.2  89.2  89.4  89.3       89.5       90.1
Publication history
  • Received: 2020-10-09
  • Revised: 2021-02-03
  • Available online: 2021-03-01
