Since rain exhibits a variety of shapes and directions, learning the degradation representation is extremely challenging for single image deraining. Existing methods mainly design complicated modules to implicitly learn a latent degradation representation from rainy images. However, without an explicit constraint it is hard to decouple the content-independent degradation representation, resulting in over- or under-enhancement problems. To tackle this issue, we propose a novel Latent Degradation Representation Constraint Network (LDRCNet) that consists of a Direction-Aware Encoder (DAEncoder), a Deraining Network, and a Multi-Scale Interaction Block (MSIBlock). Specifically, the DAEncoder adaptively extracts the latent degradation representation, using deformable convolutions to exploit the directional property of rain streaks. Next, a constraint loss is introduced to explicitly constrain the degradation representation learning during training. Last, the MSIBlock fuses the learned degradation representation with the decoder features of the deraining network for adaptive information interaction, enabling the network to remove various complicated rainy patterns and reconstruct image details. Experimental results on five synthetic and four real datasets demonstrate that our method achieves state-of-the-art performance. The source code is available at https://github.com/Madeline-hyh/LDRCNet.
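To illustrate what "explicitly constraining the degradation representation" could mean in practice, the following is a minimal sketch of a contrastive-style constraint loss. It assumes the common InfoNCE formulation: representations of inputs sharing the same rain degradation are pulled together, while representations of differently degraded inputs are pushed apart. The function name, the contrastive formulation, and the temperature `tau` are illustrative assumptions, not the paper's actual constraint loss, which the abstract does not specify.

```python
import numpy as np

def cosine_sim(a, b, eps=1e-8):
    # Cosine similarity between two 1-D representation vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def degradation_constraint_loss(z_anchor, z_pos, z_neg, tau=0.1):
    # Hypothetical InfoNCE-style constraint (assumption, not the paper's loss):
    # z_anchor / z_pos come from inputs with the same rain degradation,
    # z_neg from a differently degraded input. Lower loss means the anchor
    # is closer to the positive than to the negative in representation space.
    pos = np.exp(cosine_sim(z_anchor, z_pos) / tau)
    neg = np.exp(cosine_sim(z_anchor, z_neg) / tau)
    return -np.log(pos / (pos + neg))
```

Such a loss gives the encoder a training signal that depends only on the degradation, not on image content, which is one common way to encourage a content-independent representation.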