Acta Electronica Sinica ›› 2021, Vol. 49 ›› Issue (9): 1675-1681. DOI: 10.12263/DZXB.20200243

• Research Articles •

Deep Reinforcement Learning Algorithm of Multi-agent Based on SAC

XIAO Shuo1,2, HUANG Zhen-zhen2, ZHANG Guo-peng2, YANG Shu-song3, JIANG Hai-fei2, LI Tian-xu2

  1. Engineering Research Center of Mine Digitalization, Ministry of Education, Xuzhou, Jiangsu 221000, China
  2. School of Computer Science and Technology, China University of Mining & Technology, Xuzhou, Jiangsu 221000, China
  3. Operating Branch, Ningbo Rail Transit Group Co., Ltd., Ningbo, Zhejiang 315000, China
  • Received: 2020-03-09  Revised: 2020-05-14  Online: 2021-10-21  Published: 2021-09-25
  • About the authors:
  • XIAO Shuo (male, born September 1981 in Xuzhou, Jiangsu) is an associate professor and master's supervisor at China University of Mining and Technology. His research interests include artificial intelligence and the Internet of Things. E-mail: sxiao@cumt.edu.cn
    HUANG Zhen-zhen (female, born August 1981 in Xuzhou, Jiangsu) is an associate research librarian at China University of Mining and Technology. Her research interests include computer networks, artificial intelligence, and the Internet of Things. E-mail: huangzhenzhen@cumt.edu.cn
    ZHANG Guo-peng (male, born June 1978 in Xuzhou, Jiangsu) is a professor and doctoral supervisor at China University of Mining and Technology. His research interests include artificial intelligence and the Internet of Things. E-mail: gpzhang@cumt.edu.cn
  • Supported by:
  • National Natural Science Foundation of China (62071470); Xuzhou Science and Technology Project (KC19011)



Abstract:

Because a multi-agent environment changes dynamically and each agent's decisions affect the other agents, single-agent deep reinforcement learning algorithms struggle to remain stable in multi-agent environments. To adapt to such environments, this paper improves the single-agent deep reinforcement learning algorithm Soft Actor-Critic (SAC) under the Centralized Training with Decentralized Execution (CTDE) framework and introduces an inter-agent communication mechanism, yielding the Multi-Agent Soft Actor-Critic (MASAC) algorithm. In MASAC, agents share observations and historical experience, which effectively reduces the impact of environmental non-stationarity on the algorithm. Finally, the performance of MASAC is analyzed experimentally on cooperative and mixed cooperative-competitive tasks; the results show that MASAC is more stable than SAC in multi-agent environments.

Key words: multi-agent environments, centralized training, decentralized execution, multi-agent deep reinforcement learning
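
The abstract describes MASAC only at a high level. As a concrete illustration of the centralized-training, decentralized-execution idea it builds on, the sketch below trains one centralized critic per agent on the joint observations and actions, while each actor acts only on its own observation, using a soft Bellman target of the common form

    $y_i = r_i + \gamma\,\bigl(Q_i(o'_{1:N}, a'_{1:N}) - \alpha \log \pi_i(a'_i \mid o'_i)\bigr)$

This is a minimal Python/PyTorch sketch under stated assumptions (toy dimensions, a single critic per agent, no target networks, a fixed entropy coefficient, and the tanh log-probability correction omitted); it is not the paper's implementation, and the paper's communication mechanism for sharing observations and experience is not reproduced here.

    # Minimal CTDE soft actor-critic sketch for N agents (illustrative only).
    import torch
    import torch.nn as nn

    N, OBS, ACT, GAMMA, ALPHA = 3, 8, 2, 0.95, 0.2  # assumed toy dimensions

    class CentralizedCritic(nn.Module):
        # Q_i(o_1..o_N, a_1..a_N): sees every agent's observation and action
        # during training, which removes the non-stationarity each agent
        # would otherwise face from its teammates' changing policies.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(N * (OBS + ACT), 128),
                                     nn.ReLU(), nn.Linear(128, 1))
        def forward(self, joint_obs, joint_act):
            return self.net(torch.cat([joint_obs, joint_act], -1))

    class LocalActor(nn.Module):
        # pi_i(a_i | o_i): conditions only on the agent's own observation,
        # so execution stays decentralized.
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(OBS, 64), nn.ReLU())
            self.mu = nn.Linear(64, ACT)
            self.log_std = nn.Linear(64, ACT)
        def forward(self, obs):
            h = self.body(obs)
            dist = torch.distributions.Normal(
                self.mu(h), self.log_std(h).clamp(-5, 2).exp())
            a = dist.rsample()  # reparameterization trick, as in SAC
            logp = dist.log_prob(a).sum(-1, keepdim=True)  # tanh correction omitted
            return torch.tanh(a), logp

    actors = [LocalActor() for _ in range(N)]
    critics = [CentralizedCritic() for _ in range(N)]
    opt_a = [torch.optim.Adam(a.parameters(), lr=3e-4) for a in actors]
    opt_c = [torch.optim.Adam(c.parameters(), lr=3e-4) for c in critics]

    def update(batch):
        obs, act, rew, obs2 = batch  # shapes: (B, N, OBS), (B, N, ACT), (B, N, 1), (B, N, OBS)
        B = obs.shape[0]
        joint_obs, joint_act = obs.reshape(B, -1), act.reshape(B, -1)
        with torch.no_grad():
            # Next joint action from the current policies (no target nets here).
            a2, logp2 = zip(*[actors[i](obs2[:, i]) for i in range(N)])
            joint_obs2, joint_act2 = obs2.reshape(B, -1), torch.cat(a2, -1)
        for i in range(N):
            with torch.no_grad():
                # Soft Bellman target: bootstrap value minus the entropy penalty.
                y = rew[:, i] + GAMMA * (critics[i](joint_obs2, joint_act2)
                                         - ALPHA * logp2[i])
            q_loss = ((critics[i](joint_obs, joint_act) - y) ** 2).mean()
            opt_c[i].zero_grad(); q_loss.backward(); opt_c[i].step()
            # Actor update: maximize soft Q of a fresh own action while the
            # other agents' replayed actions are held fixed.
            a_i, logp_i = actors[i](obs[:, i])
            ja = joint_act.clone()
            ja[:, i * ACT:(i + 1) * ACT] = a_i
            pi_loss = (ALPHA * logp_i - critics[i](joint_obs, ja)).mean()
            opt_a[i].zero_grad(); pi_loss.backward(); opt_a[i].step()

    # Smoke test with random transitions.
    update((torch.randn(32, N, OBS), torch.randn(32, N, ACT),
            torch.randn(32, N, 1), torch.randn(32, N, OBS)))

In this arrangement the centralized critics exist only at training time; at execution each agent needs nothing beyond its own observation, which is what "decentralized execution" in CTDE refers to.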
