Underwater object detection plays an important role in a variety of marine applications. However, the complexity of the underwater environment (e.g., cluttered backgrounds) and quality degradation (e.g., color deviation) significantly hurt the performance of deep learning-based detectors. Many previous works either improve underwater image quality by counteracting underwater degradation or design stronger network structures to enhance the detector's feature extraction ability, aiming at higher underwater object detection performance. However, the former usually inhibits detection performance, while the latter does not consider the gap between the open-air and underwater domains. This paper presents a novel framework that combines underwater object detection with image reconstruction through a shared backbone and Feature Pyramid Network (FPN). The loss between the reconstructed image and the original image in the reconstruction task gives the shared structure better generalization capability and adaptability to the underwater domain, which improves underwater object detection performance. Moreover, to combine features from different levels more effectively, a UNet-based FPN (UFPN) is proposed to better integrate the semantic and texture information obtained from deep and shallow layers, respectively. Extensive experiments and comprehensive evaluation on the URPC2020 dataset show that our approach yields absolute improvements of 1.4% mAP on the RetinaNet baseline and 1.0% mAP on the Faster R-CNN baseline with negligible extra overhead. The code is available at https://github.com/BIGWangYuDong/uwtoolbox.
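The training objective described above, a detection loss combined with a reconstruction loss computed from a shared backbone and FPN, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the abstract does not specify the reconstruction loss or how the two terms are balanced, so the L1 reconstruction term and the weighting factor `lam` here are hypothetical.

```python
import numpy as np


def reconstruction_loss(reconstructed, original):
    """Mean absolute (L1) difference between the reconstructed image and the
    original underwater image. The actual loss used by the paper is an
    assumption here."""
    return np.abs(reconstructed - original).mean()


def joint_loss(det_loss, reconstructed, original, lam=1.0):
    """Multi-task objective: detection loss plus a weighted reconstruction
    loss. Gradients from both terms would flow into the shared backbone/FPN;
    `lam` is a hypothetical balancing weight, not taken from the paper."""
    return det_loss + lam * reconstruction_loss(reconstructed, original)


# Toy example: a 4x4 grayscale "image" and a slightly imperfect reconstruction.
orig = np.zeros((4, 4))
recon = np.full((4, 4), 0.1)
total = joint_loss(det_loss=2.0, reconstructed=recon, original=orig, lam=0.5)
```

In a real pipeline, `det_loss` would come from the detection head (e.g., RetinaNet's focal and box-regression losses) and `reconstructed` from a decoder attached to the shared FPN features; only the combination step is shown here.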