Acta Electronica Sinica ›› 2022, Vol. 50 ›› Issue (6): 1319-1330. DOI: 10.12263/DZXB.20210818

Special Topic: Electromagnetic Spectrum Intelligence+


  • About the authors: RAO Ning, male, born in August 1997 in Shangrao, Jiangxi. M.S. candidate at the Information and Navigation College, Air Force Engineering University. Research interests: communication countermeasures, reinforcement learning. E-mail: raoningmabma@163.com
    XU Hua, male, born in April 1976 in Yichang, Hubei. Professor and Ph.D. supervisor at the Information and Navigation College, Air Force Engineering University. Research interests: communication countermeasures, blind signal processing. E-mail: 13720720010@139.com
    JIANG Lei, male, born in June 1974 in Wuxi, Jiangsu. Associate professor and M.S. supervisor at the Information and Navigation College, Air Force Engineering University. Research interests: communication countermeasures, wireless communication technology. E-mail: jleimail@126.com
    SONG Bai-lin, male, born in November 1997 in Shenyang, Liaoning. M.S. candidate at the Information and Navigation College, Air Force Engineering University. Research interests: communication countermeasures, reinforcement learning. E-mail: songbail@126.com
    SHI Yun-hao, male, born in July 1996 in Xianyang, Shaanxi. Ph.D. candidate at the Information and Navigation College, Air Force Engineering University. Research interests: signal recognition, deep learning. E-mail: shiyunhaoai@163.com

Allocation Algorithm of Distributed Cooperative Jamming Power Based on Multi-Agent Deep Reinforcement Learning

RAO Ning, XU Hua, JIANG Lei, SONG Bai-lin, SHI Yun-hao   

  1. Information and Navigation College of Air Force Engineering University,Xi’an,Shaanxi 710077,China
  • Received:2021-06-30 Revised:2021-10-10 Online:2022-06-25 Published:2022-06-25


Abstract:

To solve the jamming power allocation problem in cooperative battlefield communication countermeasures, this paper designs a distributed cooperative jamming power allocation algorithm based on multi-agent deep reinforcement learning. Specifically, the communication jamming power allocation problem is modeled as a fully cooperative multi-agent task. A centralized-training, distributed-decision framework is adopted to mitigate the non-stationary environment and high decision dimensionality of the multi-agent system while also reducing the communication overhead between agents, and a maximum policy entropy criterion is introduced to control each agent's exploration efficiency. Taking the maximization of the cumulative jamming reward and of the jamming policy entropy as the optimization objective accelerates the learning of cooperative strategies among the agents. Simulation results indicate that the proposed distributed algorithm effectively solves the high-dimensional cooperative jamming power allocation problem; compared with the existing centralized allocation algorithm, it learns faster with less volatility, and its jamming efficiency is 16.8% higher under the same conditions.
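The centralized-training, distributed-decision structure described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all names, network shapes, and the linear "networks" are assumptions chosen only to show which inputs each component sees.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM, ACT_DIM = 3, 4, 1  # illustrative sizes

# Each jammer keeps its own small policy (decentralized actor): it maps
# only that agent's LOCAL observation to a transmit-power action, so no
# inter-agent communication is needed at decision time.
actor_weights = [rng.normal(size=(OBS_DIM, ACT_DIM)) for _ in range(N_AGENTS)]

def act(agent_id, local_obs):
    """Distributed decision: uses only the agent's own observation."""
    logits = local_obs @ actor_weights[agent_id]
    return 1.0 / (1.0 + np.exp(-logits))  # power fraction in (0, 1)

# The critic is centralized: during training it sees the JOINT
# observations and actions of all agents, which removes the apparent
# non-stationarity each agent would face if it learned independently.
critic_w = rng.normal(size=(N_AGENTS * (OBS_DIM + ACT_DIM),))

def centralized_q(joint_obs, joint_act):
    x = np.concatenate([np.ravel(joint_obs), np.ravel(joint_act)])
    return float(x @ critic_w)

obs = rng.normal(size=(N_AGENTS, OBS_DIM))
acts = np.array([act(i, obs[i]) for i in range(N_AGENTS)])
q = centralized_q(obs, acts)  # used only at training time, then discarded
```

The design point the sketch makes concrete: the joint-input critic exists only during training; at execution each actor acts on local information alone, which is what keeps decision-time communication overhead low.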

Extended Abstract
To solve the jamming power allocation problem of multi-device cooperative jamming in battlefield communication countermeasure scenarios, this paper designs a distributed cooperative jamming power allocation algorithm based on multi-agent deep reinforcement learning. Specifically, the communication jamming power allocation problem is modeled as a fully cooperative multi-agent task. Combining the respective advantages of centralized learning and independent learning in multi-agent systems, a centralized-training, distributed-decision structure is adopted to mitigate the non-stationary environment, high decision dimensionality, and difficult training convergence of the multi-agent system while also reducing the communication overhead between agents, and a maximum policy entropy criterion is introduced to control each agent's exploration efficiency. Taking the maximization of the cumulative jamming reward and of the jamming policy entropy as the optimization objective accelerates the learning of cooperative strategies among the agents. The reward function jointly accounts for the completion of the overall jamming suppression task and the efficient use of jamming power, so that a reasonable power allocation scheme can be adjusted adaptively under different jamming suppression coefficients. Simulation results indicate that the proposed distributed algorithm effectively solves the high-dimensional cooperative jamming power allocation problem; compared with the existing centralized allocation algorithm, it learns faster with less volatility, and its jamming efficiency is 16.8% higher under the same conditions. An ablation experiment shows that maximum policy entropy further improves exploration efficiency and finds the optimal scheme faster.
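The two ingredients the extended abstract names, a reward combining task completion with power efficiency and an entropy bonus on the policy, might be sketched as below. The SINR threshold model, the coefficient names (kappa, lam, alpha), and the exact functional forms are hypothetical assumptions for illustration, not the paper's notation.

```python
import numpy as np

def jamming_reward(sinr_after, powers, kappa=0.5, p_max=1.0, lam=0.1):
    """Two-part reward sketch: a suppression term (every target link's
    SINR driven at or below the suppression coefficient kappa) plus a
    penalty on total transmit power, so the agents learn to jam
    successfully while spending as little power as possible."""
    suppressed = np.all(sinr_after <= kappa)        # overall task achieved?
    task_term = 1.0 if suppressed else -1.0
    power_term = -lam * np.sum(powers) / (len(powers) * p_max)
    return task_term + power_term

def entropy_bonus(action_log_probs, alpha=0.2):
    """Maximum-policy-entropy term: the objective adds alpha times the
    policy entropy (minus the mean log-probability of sampled actions),
    which keeps each agent exploring instead of collapsing to a
    deterministic allocation too early."""
    return -alpha * float(np.mean(action_log_probs))

# One illustrative step: both links suppressed, moderate power use.
r = jamming_reward(np.array([0.3, 0.4]), np.array([0.6, 0.5]))
```

Varying kappa in such a reward is what lets the same learner adapt its allocation to different suppression requirements, which matches the adaptive behavior the extended abstract claims.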

Key words: communication countermeasures, cooperative power allocation, multi-agent deep reinforcement learning, distributed strategy, maximum policy entropy

CLC Number: