To address the insensitivity of current multi-scale networks to fine image details and their limited capacity to model contextual relationships, this paper proposes LEFNet, a novel semantic segmentation network built on Parallel Feature Enhancement (PFE) and Adaptive Weighted Feature Fusion (AWFF). In the encoding stage, the PFE module further enhances multi-scale features through Detail Sharpening Attention (DSA) and High-level Dilation Fusion (HDF). DSA guides the learning of detailed information in low-level features, while HDF widens the receptive field to capture richer high-level features. In the decoding stage, the AWFF module replaces conventional feature fusion: it constructs a perceptual factor for each multi-scale feature map and learns the features in a weighted manner, emphasizing features with stronger semantic information so that they carry more weight in pixel classification. By integrating features according to the relevance of global contextual information, AWFF fully exploits the expressive potential of the encoded features. Our method achieves mIoU scores of 82.8%, 49.3%, and 45.4% on the Cityscapes, ADE20K, and COCO-Stuff 164K datasets, respectively, reaching a competitive level on these popular benchmarks. The experimental results show that LEFNet alleviates the insensitivity of multi-scale networks to image details, improves their ability to model contextual relationships, and significantly improves segmentation performance.
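The weighted-fusion idea behind AWFF can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function name `awff_fuse`, the use of global average pooling plus a dot-product scoring as the "perceptual factor", and the softmax normalization are all assumptions made for clarity, since the abstract does not specify the exact formulation.

```python
import numpy as np

def awff_fuse(features, proj_weights):
    """Fuse multi-scale feature maps (assumed already resized to a common
    H x W) by weighting each map with a softmax-normalized perceptual factor.

    features: list of arrays, each of shape (C, H, W)
    proj_weights: list of arrays, each of shape (C,) -- hypothetical
                  stand-ins for the learned projections that score a scale
    """
    # Perceptual factor per scale: global average pooling followed by a
    # learned linear scoring (here, a dot product with proj_weights).
    scores = np.array([
        float(f.mean(axis=(1, 2)) @ w) for f, w in zip(features, proj_weights)
    ])
    # Softmax so the weights are positive and sum to 1: scales with
    # stronger semantic responses dominate the fused result.
    exp = np.exp(scores - scores.max())
    alphas = exp / exp.sum()
    # Weighted sum replaces plain addition/concatenation fusion.
    fused = sum(a * f for a, f in zip(alphas, features))
    return fused, alphas

# Toy usage: three scales, 16 channels, already resized to 8x8.
rng = np.random.default_rng(0)
feats = [rng.standard_normal((16, 8, 8)) for _ in range(3)]
projs = [rng.standard_normal(16) for _ in range(3)]
fused, alphas = awff_fuse(feats, projs)
print(fused.shape)  # (16, 8, 8); alphas sum to 1
```

The key design point the abstract describes is that the fusion weights are data-dependent rather than fixed, so scales whose features are semantically stronger contribute more to the final pixel classification.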