Abstract: With the rapid development of IoT applications and the explosive growth of terminal devices, the traditional centralized big-data processing mode, cloud computing, can no longer meet users' comprehensive requirements for low latency and low energy consumption. To complement the cloud computing paradigm, the concept of edge computing was proposed and has received great attention, but edge computing is not omnipotent. Given the limited computation and storage resources of a single edge node and the demand for efficient computing services in big-data scenarios, this paper proposes a deep reinforcement learning based cloud-edge collaborative computation offloading mechanism. Specifically, by jointly considering computing resources, bandwidth, and the offloading policy, an optimization problem is formulated to minimize the weighted sum of the execution delay and energy consumption of all user tasks. An asynchronous cloud-edge collaborative deep reinforcement learning algorithm (ACEC-DRL) is proposed to solve this optimization problem. According to the environmental differences among edge nodes in the edge cloud, the algorithm adaptively adjusts the offloading policy to effectively reduce the correlation of samples. At the same time, an asynchronous multi-threaded method is developed to replace the traditional experience replay mechanism, which avoids the high memory overhead of the edge cloud. In addition, a cloud-edge advantage function with better reward feedback for offloading actions is constructed to replace the traditional loss function, enabling ACEC-DRL to obtain the optimal offloading decision more quickly. Finally, extensive simulation results show that the proposed ACEC-DRL algorithm converges quickly and is highly robust, and its optimal offloading policy closely approximates the solution of the greedy algorithm with the lowest computation cost.
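The weighted delay-energy objective summarized above could be sketched as follows; the symbols (a weighting factor λ, per-task delay T_i and energy E_i, offloading decision a_i, and resource/bandwidth allocations f and b) are illustrative assumptions rather than the paper's actual notation:

```latex
\min_{\{a_i\},\, f,\, b} \; \sum_{i=1}^{N} \Big( \lambda \, T_i(a_i, f, b) + (1-\lambda) \, E_i(a_i, f, b) \Big),
\quad a_i \in \{\text{local}, \text{edge}, \text{cloud}\}, \;\; \lambda \in [0, 1]
```

Here N is the number of user tasks, and λ trades off execution delay against energy consumption, matching the "weighted sum" formulation the abstract describes.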