Location is an important cue for distinguishing instances in instance segmentation. In this paper, we propose a novel model, called the Location Sensitive Network (LSNet), for human instance segmentation. LSNet integrates instance-specific location information into a one-stage segmentation framework. Specifically, in the segmentation branch, a Pose Attention Module (PAM) encodes location information into attention regions through coordinate encoding. Guided by the location information provided by PAM, the segmentation branch is able to distinguish instances effectively at the feature level. Moreover, we propose a combination operation named Keypoints Sensitive Combination (KSCom) to exploit the location information from multiple sampling points. These sampling points, drawn from human keypoints and random points, constitute a point representation for each instance: the human keypoints provide the spatial locations and semantic information of the instances, while the random points enlarge the receptive field. Based on this point representation, KSCom effectively reduces the number of misclassified pixels. Our method is validated by experiments on public datasets: LSNet-5 achieves 56.2 mAP at 18.5 FPS on COCOPersons. Moreover, the proposed method significantly outperforms its peers under severe occlusion.
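To make the coordinate-encoding idea behind PAM concrete, the following is a minimal sketch, not the authors' implementation: normalized coordinate maps, expressed relative to an instance's location, are concatenated with CNN features to produce a location-aware attention map. All module and tensor names here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class CoordAttentionSketch(nn.Module):
    """Hypothetical sketch of location-sensitive attention via coordinate encoding."""

    def __init__(self, in_channels: int):
        super().__init__()
        # +2 channels for the (x, y) coordinate maps appended to the features
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels + 2, in_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 1, kernel_size=1),
            nn.Sigmoid(),  # attention map in [0, 1]
        )

    def forward(self, feats: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, H, W) features; centers: (N, 2) instance locations in [0, 1]
        n, _, h, w = feats.shape
        ys = torch.linspace(0, 1, h, device=feats.device).view(1, 1, h, 1).expand(n, 1, h, w)
        xs = torch.linspace(0, 1, w, device=feats.device).view(1, 1, 1, w).expand(n, 1, h, w)
        # coordinates expressed relative to each instance's location
        rel_x = xs - centers[:, 0].view(n, 1, 1, 1)
        rel_y = ys - centers[:, 1].view(n, 1, 1, 1)
        attn = self.conv(torch.cat([feats, rel_x, rel_y], dim=1))  # (N, 1, H, W)
        return feats * attn  # location-sensitive features
```

In this sketch the same feature map yields different attended features for different instance locations, which is the property the abstract attributes to PAM; the actual LSNet design may differ in how coordinates and keypoints are injected.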