Acta Electronica Sinica ›› 2017, Vol. 45 ›› Issue (2): 278-284. DOI: 10.3969/j.issn.0372-2112.2017.02.003

• Research Article •

A Self-Adaptive Cache Management Strategy for the Parallel Computing Framework Spark

BIAN Chen1,2, YU Jiong1, YING Chang-tian1, XIU Wei-rong1

  1. School of Information Science and Engineering, Xinjiang University, Urumqi, Xinjiang 830046, China;
    2. School of Information Engineering, Urumqi Vocational University, Urumqi, Xinjiang 830002, China
  • Received: 2015-09-02 Revised: 2015-11-16 Online: 2017-02-25
    • About the authors:
    • BIAN Chen, male, born in 1981. Ph.D. candidate and CCF member. His research interests include in-memory computing, high-performance computing, and distributed systems. E-mail: bianchen0720@126.com; YU Jiong, male, born in 1964. Professor, Ph.D. supervisor, and CCF senior member. His research interests include grid computing, parallel computing, and distributed systems.
    • Supported by:
    • National Natural Science Foundation of China (No.61262088, No.61462079)

Self-Adaptive Strategy for Cache Management in Spark

BIAN Chen1,2, YU Jiong1, YING Chang-tian1, XIU Wei-rong1   

  1. School of Information Science and Engineering, Xinjiang University, Urumqi, Xinjiang 830046, China;
    2. School of Information Engineering, Urumqi Vocational University, Urumqi, Xinjiang 830002, China
  • Received:2015-09-02 Revised:2015-11-16 Online:2017-02-25 Published:2017-02-25

Abstract:

The parallel computing framework Spark lacks an effective cache selection mechanism and cannot automatically identify and cache highly reused data; its cache replacement algorithm is LRU, whose metric is too coarse, which degrades task execution efficiency. This paper proposes a self-adaptive cache management strategy for Spark (Self-Adaptive Cache Management, SACM), comprising an automatic cache selection algorithm (Selection), a parallel cache cleanup algorithm (Parallel Cache Cleanup, PCC), and a weighted cache replacement algorithm (Lowest Weight Replacement, LWR). The selection algorithm analyzes a job's DAG (Directed Acyclic Graph) structure, identifies reused RDDs, and caches them automatically. The parallel cache cleanup algorithm cleans up valueless RDDs asynchronously to improve cluster memory utilization. The weighted replacement algorithm chooses eviction targets by weight, avoiding the task latency caused by recomputing complex RDDs and safeguarding computation efficiency when resources become the bottleneck. Experiments show that our strategy improves Spark's task execution efficiency and makes effective use of memory resources.
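The abstract does not specify how the Selection algorithm inspects the DAG; as an illustrative sketch only (the function name and the edge-list encoding of RDD lineage are assumptions, not the paper's API), reused RDDs can be found by counting how many downstream operations reference each RDD:

```python
# Hypothetical sketch: pick cache candidates by counting how many child
# operations in a job's DAG reference each RDD. The encoding below is an
# assumption for illustration; it is not the paper's implementation.
from collections import Counter

def select_cache_candidates(dag_edges):
    """dag_edges: list of (parent_rdd, child_rdd) lineage pairs.
    Returns the parents referenced by more than one child, i.e. reused RDDs."""
    refs = Counter(parent for parent, _ in dag_edges)
    return {rdd for rdd, count in refs.items() if count > 1}

# A diamond-shaped DAG: 'filtered' feeds two branches, so it is reused
# and worth caching before either branch is computed.
edges = [("raw", "filtered"),
         ("filtered", "agg_by_key"),
         ("filtered", "joined")]
print(select_cache_candidates(edges))  # {'filtered'}
```

Without such a step, Spark recomputes `filtered` once per branch unless the user calls `persist()` by hand, which is the manual burden the Selection algorithm aims to remove.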

Keywords: parallel computing, cache management strategy, Spark, resilient distributed dataset

Abstract:

As a parallel computation framework, Spark lacks a good strategy for selecting valuable RDDs to cache in limited memory. When memory is full, Spark discards the least recently used RDD while ignoring other factors such as computation cost. This paper proposes a self-adaptive cache management strategy (SACM), which comprises an automatic selection algorithm (Selection), a parallel cache cleanup algorithm (PCC), and a lowest weight replacement algorithm (LWR). The selection algorithm seeks out valuable RDDs and caches their partitions to speed up data-intensive computations. PCC cleans up valueless RDDs asynchronously to improve memory utilization. LWR takes comprehensive account of each RDD's usage frequency, computation cost, and size. Experimental results show that Spark with our selection algorithm runs faster than traditional Spark, the parallel cleanup algorithm improves memory utilization, and LWR performs better under limited memory.
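The abstract names the three factors LWR weighs (usage frequency, computation cost, size) but not the formula that combines them. A minimal sketch, assuming a simple value-per-megabyte product form that is our illustration rather than the paper's actual weighting:

```python
# Hypothetical sketch of a lowest-weight eviction policy in the spirit of
# LWR. The product-form weight below (frequency * cost / size) is an
# assumption for illustration; the paper's exact formula is not given here.
def lwr_evict(cached):
    """cached: dict rdd_name -> (frequency, compute_cost_s, size_mb).
    Returns the RDD whose weight (recompute value per MB held) is lowest."""
    def weight(stats):
        freq, cost, size = stats
        # Rarely used, cheap-to-recompute, large RDDs get the lowest weight,
        # so evicting them frees the most memory at the least recompute cost.
        return freq * cost / size
    return min(cached, key=lambda rdd: weight(cached[rdd]))

cached = {
    "lookup_table": (8, 120.0, 64.0),   # hot and expensive to rebuild: keep
    "raw_scan":     (1, 2.0, 2048.0),   # cold, cheap, huge: evict first
}
print(lwr_evict(cached))  # raw_scan
```

Under plain LRU, whichever RDD was touched least recently would be evicted regardless of how costly it is to recompute; weighting by cost and size is what avoids the recomputation stalls the paper targets.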

Key words: parallel computing, cache management strategy, Spark, resilient distributed datasets
