For scenes with significant changes in illumination or viewpoint, the traditional ORB feature extraction method is prone to locally clustered feature points and errors caused by accumulated drift. To address this, a monocular visual Simultaneous Localization and Mapping (SLAM) algorithm based on the SuperPoint network, SP-VSLAM, is proposed. The SuperPoint network extracts feature points and computes binary descriptors simultaneously, and non-maximum suppression is applied so that feature points are detected and described uniformly across the image, from which robust and accurate feature associations are constructed. This front end is fused with the back-end local mapping and loop & map merging modules of the ORB-SLAM3 system to form a complete monocular visual SLAM system. Localization accuracy is evaluated on five sequences of the public TUM benchmark dataset in monocular mode. The experimental results show that the proposed method yields smaller trajectory error and higher pose estimation accuracy in static, partially low-dynamic, and high-dynamic environments, effectively enhancing tracking robustness.
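A minimal sketch of the non-maximum-suppression step described above, assuming a SuperPoint-style dense score map as input; the heatmap, suppression radius, and confidence threshold are illustrative placeholders rather than values from the paper, and the network itself is stubbed out.

```python
import numpy as np

def nms_fast(scores: np.ndarray, nms_radius: int = 4, conf_thresh: float = 0.015):
    """Keep only local maxima of a keypoint score map within a (2r+1)x(2r+1) window,
    which spreads the surviving keypoints uniformly over the image."""
    H, W = scores.shape
    kept = []
    ys, xs = np.where(scores > conf_thresh)          # candidate keypoints above threshold
    order = np.argsort(-scores[ys, xs])              # process strongest responses first
    suppressed = np.zeros_like(scores, dtype=bool)
    for idx in order:
        y, x = ys[idx], xs[idx]
        if suppressed[y, x]:
            continue
        kept.append((int(x), int(y), float(scores[y, x])))
        y0, y1 = max(0, y - nms_radius), min(H, y + nms_radius + 1)
        x0, x1 = max(0, x - nms_radius), min(W, x + nms_radius + 1)
        suppressed[y0:y1, x0:x1] = True              # suppress neighbours of the kept point
    return kept

if __name__ == "__main__":
    # Synthetic heatmap standing in for the SuperPoint detector output (hypothetical values).
    rng = np.random.default_rng(0)
    heatmap = rng.random((480, 640)).astype(np.float32) ** 8   # sparse strong peaks
    keypoints = nms_fast(heatmap, nms_radius=4, conf_thresh=0.5)
    print(f"{len(keypoints)} uniformly distributed keypoints kept")
```

In a full pipeline the kept keypoints would be paired with the descriptors sampled from the network's descriptor head at the same locations before being passed to the ORB-SLAM3 back end.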