RGB-D Simultaneous Localization and Mapping (SLAM) in indoor environments is an active topic in the computer vision and robotics communities, and dynamic environments remain an open problem. Dynamic scenes, often caused by moving humans indoors, typically lead to camera pose tracking failures, feature association errors, or loop closure failures. In this paper, we propose a robust dense RGB-D SLAM method that efficiently detects humans and rapidly reconstructs the static background in dynamic human environments. Using a deep learning-based human body detection method, we first quickly recognize the human body joints in the current RGB frame, even when the body is partially occluded. We then apply graph-based segmentation to the 3D point clouds, separating the detected moving humans from the static environment. Finally, the remaining static environment is aligned with a state-of-the-art frame-to-model scheme. Experimental results on a common RGB-D SLAM benchmark show that the proposed method achieves outstanding performance in dynamic environments and is even comparable to related state-of-the-art methods in static environments.
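To give a rough intuition for the segmentation step described above, the following is a minimal sketch of graph-based clustering of 3D points, with clusters near detected body joints removed as dynamic. All function names, thresholds, and data here are illustrative assumptions, not the paper's actual implementation:

```python
import math
from itertools import combinations

def segment_points(points, thresh):
    """Graph-based segmentation: connect points closer than `thresh`
    and return a cluster label per point (union-find over the graph)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in combinations(range(len(points)), 2):
        if math.dist(points[i], points[j]) <= thresh:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj  # merge the two clusters

    return [find(i) for i in range(len(points))]

def split_static(points, labels, joints, radius):
    """Drop every cluster that contains a point within `radius`
    of any detected human joint; keep the rest as static background."""
    dynamic = {labels[i] for i, p in enumerate(points)
               for j in joints if math.dist(p, j) <= radius}
    return [p for i, p in enumerate(points) if labels[i] not in dynamic]

# Toy example: two well-separated clusters, one near a "joint".
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 5.0), (5.1, 5.0, 5.0)]
labels = segment_points(pts, thresh=0.5)
static = split_static(pts, labels, joints=[(0.0, 0.0, 1.0)], radius=1.5)
```

In the actual pipeline, `points` would come from back-projecting the depth image and `joints` from the deep learning-based body detector; only the static points would then be passed to the frame-to-model alignment.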