Achieving accurate and robust Visual SLAM (Simultaneous Localization and Mapping) in dynamic environments remains a significant challenge, particularly for robotic navigation and autonomous driving. This study introduces YG-SLAM, an approach that integrates YOLOv8 and geometric constraints within the ORB-SLAM2 framework to adapt effectively to dynamic scenes. YOLOv8 performs instance segmentation and dynamic object detection, enriching the semantic information available while image feature points are extracted. Geometric constraints, namely an epipolar geometry check and the Lucas-Kanade optical flow method, are then applied to filter out feature points on dynamic objects. The tracking thread relies exclusively on the remaining static feature points for camera pose estimation, substantially improving localization accuracy. Experimental results on the TUM dataset demonstrate that YG-SLAM significantly outperforms the original ORB-SLAM2 in dynamic environments: the root mean square error (RMSE) of the absolute trajectory error is reduced by 96.51%, and the RMSE of the relative pose error by 93.60%, marking a substantial advance in Visual SLAM for dynamic scenes.
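The geometric filtering step summarized above can be illustrated concretely. Below is a minimal sketch, assuming OpenCV's Python bindings (cv2) and NumPy: it tracks keypoints between consecutive frames with pyramidal Lucas-Kanade optical flow, fits a fundamental matrix with RANSAC, and flags points whose distance to their epipolar line exceeds a threshold as dynamic. The function name filter_dynamic_points and the threshold value are illustrative, not the paper's implementation; in YG-SLAM this geometric check is combined with YOLOv8 instance masks, which are omitted here.

```python
import numpy as np
import cv2


def filter_dynamic_points(prev_gray, curr_gray, prev_pts, epi_thresh=1.0):
    """Split tracked keypoints into static/dynamic via the epipolar constraint.

    prev_pts   : (N, 2) float32 keypoint coordinates in the previous frame.
    epi_thresh : max point-to-epipolar-line distance (pixels) for a point to
                 count as static; the value is illustrative, not from the paper.
    Returns (static_pts, dynamic_mask) for the current frame.
    """
    # 1. Track points into the current frame with pyramidal Lucas-Kanade flow.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts.reshape(-1, 1, 2), None)
    ok = status.ravel() == 1
    p0 = prev_pts[ok].reshape(-1, 2)
    p1 = curr_pts[ok].reshape(-1, 2)

    # 2. Fit a fundamental matrix with RANSAC: the dominant (static) scene
    #    agrees with it, while points on moving objects tend to violate it.
    F, _ = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        # Too few correspondences to decide; conservatively keep all points.
        return p1, np.zeros(len(p1), dtype=bool)

    # 3. Distance of each current point to the epipolar line of its match:
    #    d = |x1^T F x0| / sqrt(a^2 + b^2), where (a, b, c) = F x0.
    x0 = np.hstack([p0, np.ones((len(p0), 1))])
    x1 = np.hstack([p1, np.ones((len(p1), 1))])
    lines = (F @ x0.T).T                      # epipolar lines in the new frame
    num = np.abs(np.sum(lines * x1, axis=1))
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
    dist = num / np.maximum(den, 1e-9)

    dynamic = dist > epi_thresh
    return p1[~dynamic], dynamic
```

Feature points surviving this check (and falling outside YOLOv8 dynamic-object masks) would then be the static points passed to the tracking thread for pose estimation.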