Abstract
Current visual simultaneous localization and mapping (SLAM) systems perform well in static environments. However, dynamic objects in real-world settings frequently introduce inconsistencies that significantly impair the accuracy and robustness of SLAM systems. Conventional visual SLAM approaches typically rely on epipolar constraints to reject outliers; nevertheless, they encounter limitations when confronted with many dynamic objects or with objects whose motion is planar. To address these issues, this paper introduces DEG-SLAM, a novel dynamic visual SLAM system. First, the system employs the YOLOv5 object detection network to identify dynamic objects and passes the resulting semantic information to the tracking module. During tracking, both semantic information and epipolar constraints are leveraged to filter out dynamic feature points. Because epipolar constraints break down in degenerate scenes, DEG-SLAM incorporates a degeneracy constraint mechanism to further eliminate dynamic feature points. In addition, a reprojection constraint is introduced to filter missed dynamic feature points that fall outside the detection boxes. Experimental results show that DEG-SLAM significantly improves accuracy and robustness over ORB-SLAM3 in dynamic environments, with particularly clear gains in degenerate scenarios, affirming its practicality and reliability in complex settings.
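For context on the epipolar-constraint filtering mentioned above, the standard geometric test checks how far a matched feature point lies from the epipolar line induced by the fundamental matrix: static points should lie (nearly) on the line, while points far from it likely belong to moving objects. The following NumPy sketch illustrates this idea; it is not the authors' implementation, and the function names and the threshold `thresh` are illustrative assumptions.

```python
import numpy as np

def epipolar_distances(F, pts1, pts2):
    """Distance of each point in pts2 to the epipolar line l = F x1.

    F: 3x3 fundamental matrix; pts1, pts2: (N, 2) matched pixel coordinates.
    """
    ones = np.ones((pts1.shape[0], 1))
    p1 = np.hstack([pts1, ones])            # homogeneous coordinates x1
    p2 = np.hstack([pts2, ones])            # homogeneous coordinates x2
    lines = (F @ p1.T).T                    # epipolar lines (a, b, c) per match
    num = np.abs(np.sum(lines * p2, axis=1))        # |a*u2 + b*v2 + c|
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
    return num / den

def flag_dynamic(F, pts1, pts2, thresh=1.0):
    # Points far from their epipolar line violate the static-scene
    # assumption and are flagged as likely dynamic. Note this test is
    # blind to motion along the epipolar line itself (the degenerate
    # case the abstract's degeneracy constraint targets).
    return epipolar_distances(F, pts1, pts2) > thresh
```

For a pure horizontal camera translation, the epipolar line of a point is its own image row, so a point that only shifts horizontally yields distance zero, while a point that also moves vertically is flagged as dynamic.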