Computer science
Scalability
Parallel computing
Deep learning
Memory management
Interleaved memory
Memory mapping
Compiler
Computer architecture
Memory model
Flat memory model
Computer hardware
Semiconductor memory
Shared memory
Operating system
Artificial intelligence
Authors
Haoyang Zhang, Yuanyuan Zhou, Xue Yang, Yiqi Liu, Jian Huang
Source
Journal: Cornell University - arXiv
Date: 2023-10-13
Identifier
DOI: 10.1145/3613424.3614309
Abstract
To break the GPU memory wall for scaling deep learning workloads, a variety of architecture and system techniques have been proposed recently. Their typical approaches include memory extension with flash memory and direct storage access. However, these techniques still suffer from suboptimal performance and introduce complexity to GPU memory management, making it difficult for them to meet the scalability requirements of today's deep learning workloads. In this paper, we present a unified GPU memory and storage architecture named G10, driven by the fact that the tensor behaviors of deep learning workloads are highly predictable. G10 integrates the host memory, GPU memory, and flash memory into a unified memory space to scale the GPU memory capacity while enabling transparent data migrations. Based on this unified GPU memory and storage architecture, G10 utilizes compiler techniques to characterize the tensor behaviors in deep learning workloads. Therefore, it can schedule data migrations in advance by considering the available bandwidth of flash memory and host memory. The cooperative mechanism between deep learning compilers and the unified memory architecture enables G10 to hide data transfer overheads in a transparent manner. We implement G10 based on an open-source GPU simulator. Our experiments demonstrate that G10 outperforms state-of-the-art GPU memory solutions by up to 1.75×, without code modifications to deep learning workloads. With the smart data migration mechanism, G10 can reach 90.3% of the performance of the ideal case assuming unlimited GPU memory.
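The abstract's core idea, using compiler-predicted tensor inactivity windows to schedule evictions and prefetches so that flash transfers overlap with compute, can be illustrated with a small sketch. This is not G10's actual algorithm; the greedy policy, the `TensorInterval` record, and all names and parameters here are hypothetical, assuming tensor inactive intervals and a single flash bandwidth figure are known ahead of time.

```python
from dataclasses import dataclass

@dataclass
class TensorInterval:
    """A tensor's predicted inactivity window, in seconds of kernel time."""
    name: str
    size_mb: float
    evict_time: float      # when the tensor becomes inactive
    next_use_time: float   # when it is next needed on the GPU

def plan_migrations(tensors, flash_bw_mbps, needed_mb):
    """Greedy planner (illustrative only): evict tensors with the longest
    inactive periods first, and schedule each prefetch early enough that
    the transfer (size / bandwidth) completes before the next use."""
    plan = []
    freed = 0.0
    # Longest-idle tensors first: their transfers are easiest to hide.
    for t in sorted(tensors,
                    key=lambda t: t.next_use_time - t.evict_time,
                    reverse=True):
        if freed >= needed_mb:
            break
        transfer = t.size_mb / flash_bw_mbps
        prefetch_start = t.next_use_time - transfer
        # Only profitable if the window is long enough to hide both
        # the eviction and the prefetch.
        if prefetch_start > t.evict_time + transfer:
            plan.append((t.name, t.evict_time, prefetch_start))
            freed += t.size_mb
    return plan, freed
```

For example, with a 1024 MB activation idle from t=1.0 s to t=10.0 s and 1024 MB/s of flash bandwidth, the planner evicts it at t=1.0 and starts the prefetch at t=9.0 so it lands just in time; a 512 MB tensor idle for only one second is left in GPU memory because its transfers cannot be hidden.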