Acta Electronica Sinica ›› 2014, Vol. 42 ›› Issue (11): 2157-2161. DOI: 10.3969/j.issn.0372-2112.2014.11.005

• Research Articles •

A Novel Fast Sarsa Algorithm Based on Value Function Transfer

FU Qi-ming1, LIU Quan1,2, YOU Shu-hua1, HUANG Wei1, ZHANG Xiao-fang1

  1. Institute of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China;
    2. Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin 130012, China
  • Received:2013-09-12 Revised:2014-03-03 Online:2014-11-25
    • Corresponding author: LIU Quan
    • About the author: FU Qi-ming was born in Huai'an, Jiangsu in 1985. He is a Ph.D. candidate whose research interests include reinforcement learning, Bayesian inference, and genetic algorithms.
    • Supported by: the National Natural Science Foundation of China (No.61103045, No.61303108); the Natural Science Foundation of Jiangsu Province (No.BK2012616); the Natural Science Research Project of Jiangsu Higher Education Institutions (No.13KJB520020); and the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University (No.93K17012K04)

A Novel Fast Sarsa Algorithm Based on Value Function Transfer

FU Qi-ming1, LIU Quan1,2, YOU Shu-hua1, HUANG Wei1, ZHANG Xiao-fang1   

  1. Institute of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China;
    2. Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin 130012, China
  • Received:2013-09-12 Revised:2014-03-03 Online:2014-11-25 Published:2014-11-25

Abstract:

Knowledge transfer is a new research focus in machine learning. Its basic idea is to transfer experiential knowledge from historical tasks to the target task so as to speed up convergence and improve convergence accuracy. To address the slow convergence of classical reinforcement learning algorithms, this paper proposes transferring value function information during learning to reduce the number of samples the algorithm needs to converge. Based on the learning framework of the classical on-policy Sarsa algorithm, combined with a value function transfer method that optimizes the setting of the initial value function, a novel fast Sarsa algorithm based on value function transfer, VFT-Sarsa, is proposed. In the early stage of execution, the algorithm introduces the bisimulation metric to measure the distance between states in the target task and states in a historical task, given that the two tasks share the same state space and action space; it transfers the value function for those similar states that satisfy a given condition, and then learning proceeds with the base algorithm. The VFT-Sarsa algorithm is applied to the Random Walk problem and compared with the classical Sarsa algorithm, Q-learning, and the fast-converging QV algorithm; the experimental results show that the proposed algorithm converges faster while maintaining convergence accuracy.

Key words: reinforcement learning, VFT-Sarsa algorithm, bisimulation metric, value function transfer
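The bisimulation metric mentioned in the abstract measures how behaviorally similar two states are by combining the difference in their immediate rewards with the discounted distance between their successors. As a rough illustration only (not the paper's tasks or exact formulation), here is a minimal fixed-point iteration for a finite MDP with deterministic transitions, an assumption that collapses the Kantorovich term to the distance between successor states; the chain MDP below is hypothetical:

```python
import numpy as np

# Fixed-point iteration for a bisimulation-style metric on a finite,
# deterministic MDP (a simplifying assumption for illustration):
#   d(s, t) = max_a [ |R(s,a) - R(t,a)| + gamma * d(next(s,a), next(t,a)) ]
def bisimulation_metric(R, nxt, gamma=0.9, iters=100):
    n_states, n_actions = R.shape
    d = np.zeros((n_states, n_states))
    for _ in range(iters):
        d_new = np.zeros_like(d)
        for s in range(n_states):
            for t in range(n_states):
                d_new[s, t] = max(
                    abs(R[s, a] - R[t, a]) + gamma * d[nxt[s, a], nxt[t, a]]
                    for a in range(n_actions)
                )
        d = d_new
    return d

# Toy 3-state chain: action 0 stays put, action 1 moves right (clipped).
R = np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # reward only at state 2
nxt = np.array([[0, 1], [1, 2], [2, 2]])
d = bisimulation_metric(R, nxt)
```

States close under this metric (small `d[s, t]`) are the candidates for value function transfer; the threshold used to decide "close enough" is a design choice of the method.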

Abstract:

Knowledge transfer has gradually become a research hot spot in machine learning. It aims to transfer knowledge from historical tasks to the target task in order to speed up convergence and improve algorithm performance. To address the slow convergence of traditional reinforcement learning algorithms, this paper proposes transferring the value function between similar learning tasks that share the same state space and action space, which reduces the number of samples needed in the target task and speeds up convergence. Based on the framework of the on-policy Sarsa algorithm, combined with a value function transfer method, this paper puts forward a novel fast Sarsa algorithm based on value function transfer, VFT-Sarsa. At the beginning of learning, the algorithm uses the bisimulation metric to measure the distance between states in the target task and states in a historical task, on the condition that the tasks share the same state space and action space; it transfers the value function when the distance satisfies a given condition, and then executes the learning algorithm. Finally, the proposed algorithm is applied to the Random Walk problem and compared with the Sarsa, Q-learning, and QV algorithms; the results show that it achieves a faster convergence rate while maintaining good convergence accuracy.

Key words: reinforcement learning, VFT-Sarsa algorithm, bisimulation metric, value function transfer
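To make the transfer-then-learn procedure concrete, the following sketch runs Sarsa on a toy Random Walk, initializes the target task's Q-table from a historical task for states whose distance falls below a threshold, and then continues learning. The environment, threshold, episode counts, and the placeholder distance vector are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Assumed Random Walk for illustration: states 0..4, both ends terminal,
# reward 1 only for stepping right out of state 4, reward 0 elsewhere.
N_STATES, LEFT, RIGHT = 5, 0, 1
rng = np.random.default_rng(0)

def sarsa(q, episodes=2000, alpha=0.1, gamma=1.0, eps=0.1):
    """On-policy Sarsa; q may be pre-initialized by value function transfer."""
    def policy(s):
        if rng.random() < eps:
            return int(rng.integers(2))       # explore
        return int(np.argmax(q[s]))           # exploit
    for _ in range(episodes):
        s = int(rng.integers(N_STATES))       # random start state
        a = policy(s)
        while True:
            s2 = s + 1 if a == RIGHT else s - 1
            if s2 < 0:                        # left terminal: reward 0
                q[s, a] += alpha * (0.0 - q[s, a]); break
            if s2 >= N_STATES:                # right terminal: reward 1
                q[s, a] += alpha * (1.0 - q[s, a]); break
            a2 = policy(s2)
            q[s, a] += alpha * (gamma * q[s2, a2] - q[s, a])
            s, a = s2, a2
    return q

# Historical task: learn Q from scratch.
q_source = sarsa(np.zeros((N_STATES, 2)))

# Value function transfer: copy Q-values for states whose distance to a
# historical counterpart falls below a threshold (here a placeholder
# metric marks every state as similar), then continue learning on the
# target task with far fewer episodes.
distance = np.zeros(N_STATES)
q_target = np.where(distance[:, None] < 0.5, q_source, 0.0)
q_target = sarsa(q_target, episodes=200)
```

The point of the warm start is that states judged similar begin near their historical values instead of zero, so the target task needs fewer samples before the value estimates stabilize.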

CLC number: