Scheduling (production processes)
Artificial neural network
Deep learning
Artificial intelligence
Real-time computing
Authors
Shreshth Tuli, Shashikant Ilager, Kotagiri Ramamohanarao, Rajkumar Buyya
Identifier
DOI: 10.1109/TMC.2020.3017079
Abstract
The ubiquitous adoption of Internet-of-Things (IoT) based applications has resulted in the emergence of the Fog computing paradigm, which allows seamlessly harnessing both mobile-edge and cloud resources. Efficient scheduling of application tasks in such environments is challenging due to constrained resource capabilities, mobility factors in IoT, resource heterogeneity, network hierarchy, and stochastic behaviors. Existing heuristic-based and Reinforcement Learning approaches lack generalizability and quick adaptability, thus failing to tackle this problem optimally. They are also unable to utilize temporal workload patterns and are suitable only for centralized setups. Thus, we propose an Asynchronous-Advantage-Actor-Critic (A3C) based real-time scheduler for stochastic Edge-Cloud environments allowing decentralized learning concurrently across multiple agents. We use the Residual Recurrent Neural Network (R2N2) architecture to capture a large number of host and task parameters together with temporal patterns to provide efficient scheduling decisions. The proposed model is adaptive and able to tune different hyper-parameters based on the application requirements. We explicate our choice of hyper-parameters through sensitivity analysis. Experiments conducted on a real-world dataset show significant improvements in energy consumption, response time, Service-Level-Agreement violations, and running cost by 14.4%, 7.74%, 31.9%, and 4.64%, respectively, when compared to state-of-the-art algorithms.
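The A3C approach named in the abstract trains its actor against an advantage signal A_t = R_t − V(s_t), where R_t is the bootstrapped n-step discounted return and V(s_t) is the critic's value estimate. A minimal, framework-free sketch of that computation follows; it is illustrative only (rewards, values, and the discount factor below are invented for the example, not taken from the paper):

```python
def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    """Discounted n-step returns R_t = r_t + gamma * R_{t+1},
    bootstrapped from the critic's value of the state after the rollout."""
    returns = []
    R = bootstrap_value
    for r in reversed(rewards):
        R = r + gamma * R
        returns.append(R)
    return list(reversed(returns))

def advantages(returns, values):
    """Advantage A_t = R_t - V(s_t): positive values push the actor
    toward the action taken at step t, negative values away from it."""
    return [R - V for R, V in zip(returns, values)]

# Illustrative 3-step rollout with made-up rewards and critic values.
rets = n_step_returns([1.0, 0.0, 1.0], bootstrap_value=0.5, gamma=0.9)
advs = advantages(rets, values=[0.8, 0.6, 0.9])
```

In A3C each worker computes such advantages over its own rollout and asynchronously applies the resulting policy and value gradients to shared parameters, which is what permits the decentralized, multi-agent learning the abstract describes.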