High dynamic range (HDR) images show richer scene brightness and detail, and therefore better visual quality, than conventional low dynamic range (LDR) images because they use more bits to represent pixel values. Given limited input information, the challenge of HDR reconstruction is to recover the details lost in the under-/over-exposed regions of an image. Most current methods for single-frame HDR reconstruction pay little attention to image denoising and color balance. In this work, we address these difficulties by extracting image luminance features and texture features separately. Our method is built on a dual-input-branch encoder-decoder structure and uses a spatial feature transform module to exchange information between the two input branches at multiple feature scales. In addition, the proposed network includes a weighting network that selectively preserves useful image information. Through quantitative and qualitative experiments, we demonstrate the effectiveness of the proposed network components. Compared with existing mainstream methods on publicly available datasets, the proposed method reduces noise while recovering lost image details. The experimental results show that our method achieves state-of-the-art performance on single-frame HDR reconstruction. The code is available at https://github.com/AMSTL-PING/DEUNet-HDRI.git
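
For illustration only, the sketch below shows one possible PyTorch-style realization of the dual-branch idea described above: a luminance branch and a texture branch, spatial-feature-transform (SFT) interaction at two feature scales, and a weighting network that blends the prediction with the input. The class names, layer widths, and two-scale layout are assumptions made for this sketch and do not reproduce the released DEUNet-HDRI code.

```python
# Minimal, hypothetical sketch of a dual-branch encoder-decoder with SFT-based
# interaction and a weighting network. Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SFT(nn.Module):
    """Spatial feature transform: modulate `feat` with (gamma, beta) derived from `cond`."""
    def __init__(self, channels):
        super().__init__()
        self.gamma = nn.Conv2d(channels, channels, 3, padding=1)
        self.beta = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat, cond):
        return feat * (1 + self.gamma(cond)) + self.beta(cond)

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class DualBranchHDR(nn.Module):
    def __init__(self, base=32):
        super().__init__()
        # Two input branches: one for luminance cues, one for texture cues.
        self.lum_enc1, self.lum_enc2 = conv_block(1, base), conv_block(base, base * 2)
        self.tex_enc1, self.tex_enc2 = conv_block(3, base), conv_block(base, base * 2)
        self.sft1, self.sft2 = SFT(base), SFT(base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, 3, 3, padding=1)
        # Weighting network: per-pixel soft mask for selectively keeping input content.
        self.weight = nn.Sequential(nn.Conv2d(6, base, 3, padding=1), nn.ReLU(inplace=True),
                                    nn.Conv2d(base, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, ldr):
        lum = ldr.mean(dim=1, keepdim=True)             # crude luminance proxy
        t1 = self.tex_enc1(ldr)
        l1 = self.lum_enc1(lum)
        f1 = self.sft1(t1, l1)                          # branch interaction, scale 1
        t2 = self.tex_enc2(self.pool(f1))
        l2 = self.lum_enc2(self.pool(l1))
        f2 = self.sft2(t2, l2)                          # branch interaction, scale 2
        d1 = self.dec1(self.up(f2)) + f1                # decoder with skip connection
        pred = self.out(d1)
        w = self.weight(torch.cat([ldr, pred], dim=1))  # soft blending weights
        return w * pred + (1 - w) * ldr                 # selectively keep useful input

# Usage: hdr = DualBranchHDR()(torch.rand(1, 3, 64, 64))
```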