Image Captioning Model Based on Multi-Level Visual Fusion

ZHOU Dong-ming, ZHANG Can-long, LI Zhi-xin, WANG Zhi-wen

Acta Electronica Sinica ›› 2021, Vol. 49 ›› Issue (7): 1286-1290. DOI: 10.12263/DZXB.20191296

Research Article


Highlights

Traditional methods focus only on entities in the visual policy network and cannot infer the relationships between entities and attributes, while the language policy network suffers from exposure bias and error accumulation. To address these problems, this paper proposes a multi-level visual fusion network model based on reinforcement learning. In the visual policy network, a multi-level neural network module transforms visual features into feature sets of visual knowledge. The fusion network generates the function words that make the generated sentences more fluent and that mediate the interaction between the visual policy network and the language policy network. In the language policy network, a self-critical policy-gradient algorithm based on reinforcement learning optimizes the visual fusion network end to end. Experimental results show that the model performs well on the MS-COCO dataset, raising the CIDEr score on the Karpathy test split from 120.1 to 124.3.
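The end-to-end optimization described above relies on a self-critical policy gradient, in which the reward of the model's own greedy-decoded caption serves as the baseline for sampled captions (the self-critical sequence training idea). The following is a minimal, framework-free sketch of that loss, assuming the caption model supplies per-token log-probabilities and a sentence-level reward such as CIDEr; the function and variable names are illustrative, not the paper's implementation:

```python
def scst_loss(sample_log_probs, sample_reward, greedy_reward):
    """Self-critical policy-gradient loss for one sampled caption.

    The greedy decode's reward acts as a baseline: the gradient of this
    loss with respect to the log-probabilities is -(advantage), so
    sampled captions that beat the greedy caption are reinforced and
    worse ones are suppressed.

    sample_log_probs : per-token log p(w_t | w_<t, image) of the sampled caption
    sample_reward    : scalar reward (e.g. CIDEr) of the sampled caption
    greedy_reward    : scalar reward of the greedy-decoded caption (baseline)
    """
    advantage = sample_reward - greedy_reward
    # Minimizing this loss is gradient ascent on expected reward.
    return -advantage * sum(sample_log_probs)

# Toy per-token log-probabilities for one sampled caption.
log_probs = [-0.5, -1.2, -0.3]
loss_better = scst_loss(log_probs, sample_reward=1.2, greedy_reward=1.0)  # → 0.4
loss_worse = scst_loss(log_probs, sample_reward=0.8, greedy_reward=1.0)   # → -0.4
```

Because the baseline is the model's own greedy output rather than a learned value function, this estimator needs no extra parameters or separate critic network, which is what makes end-to-end optimization of the whole visual fusion network practical.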

Cite This Article

ZHOU Dong-ming, ZHANG Can-long, LI Zhi-xin, WANG Zhi-wen. Image Captioning Model Based on Multi-Level Visual Fusion[J]. Acta Electronica Sinica, 2021, 49(7): 1286-1290. https://doi.org/10.12263/DZXB.20191296
CLC number: TP181


Funding

National Natural Science Foundation of China (61866004)
Guangxi Natural Science Foundation (2018GXNSFDA281009)
Guangxi "Bagui Scholar" Innovative Research Team
Guangxi Key Laboratory of Multi-source Information Mining and Security (20-A-03-01)
Innovation Project of Guangxi Graduate Education (XYCSZ2020071)