Monocular 3D lane detection is a key component of an autonomous driving perception system. Current mainstream methods mostly rely on inverse perspective mapping (IPM) for spatial transformation, but IPM assumes a flat ground and static camera parameters, which makes it difficult to cope with the complexity of real driving environments. We propose a 3D lane detection method, the Modified BEV-LaneDet (M-BEV-LaneDet) network. Firstly, inspired by the slender structure of lanes, a Bird's-Eye-View Feature Aggregation Module (BEV-FAM) is proposed to enhance lane feature extraction in the BEV representation by expanding the convolutional receptive field. Secondly, a lightweight Deep Layer Aggregation Module (DLAM) is proposed as the feature-extraction backbone to effectively reduce the number of model parameters and improve multi-scale feature aggregation. Experimental results on the OpenLane dataset demonstrate that our method outperforms previous methods in terms of F-score, which is 1.1% higher than that of the BEV-LaneDet [1] network while the number of parameters remains largely unchanged.
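As a rough illustration of the receptive-field expansion idea behind BEV-FAM, the following PyTorch sketch aggregates BEV features with parallel dilated convolutions. The class name, dilation rates, and fusion scheme are illustrative assumptions, not the exact BEV-FAM design described in the paper.

```python
import torch
import torch.nn as nn


class BEVFeatureAggregation(nn.Module):
    """Hypothetical sketch: enlarge the receptive field over BEV features
    with parallel dilated 3x3 convolutions, which suits slender lane
    structures that span many BEV rows."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # Fuse the multi-dilation branches back to the input channel width.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, bev_feat: torch.Tensor) -> torch.Tensor:
        out = torch.cat([branch(bev_feat) for branch in self.branches], dim=1)
        return bev_feat + self.fuse(out)  # residual connection


if __name__ == "__main__":
    x = torch.randn(1, 64, 200, 48)  # (batch, channels, BEV rows, BEV cols)
    print(BEVFeatureAggregation(64)(x).shape)  # torch.Size([1, 64, 200, 48])
```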