Integrating RGB and depth information has advanced salient object detection, but low-quality depth maps lead to inaccurate results. Current methods address this issue either by weighting the depth input or by estimating depth images directly from RGB images. However, these methods perform poorly on low-contrast RGB scenes and under fluctuating illumination. To overcome these limitations, we propose a new model that discards low-quality depth images and formulates salient object detection as an incomplete multi-modality learning problem. To the best of our knowledge, this is the first incomplete multi-modality salient object detection model capable of describing the common latent correlation representation between the RGB and depth modalities. The model acquires a resilient representation of both modalities even when some depth samples are missing due to noise or data scarcity. The proposed approach follows a three-step process: concealing modality-specific representations, correlating common latent representations, and fusing multilevel representations. Shallow and deep features are processed separately in the Shallow Common Latent Representation (SCLR) and Deep Common Latent Representation (DCLR) blocks, respectively. The model outperforms 14 state-of-the-art saliency detectors on 6 benchmark datasets.
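To make the core idea concrete, here is a minimal sketch, assuming a PyTorch-style implementation: two modality-specific projections into a shared latent space, a correlation term that aligns them during training, and a graceful fallback when a depth sample has been discarded as low quality. All names (`CommonLatent`, `correlation_loss`, `latent_dim`) and the cosine-alignment loss are illustrative assumptions, not the authors' code.

```python
# Sketch only: a common latent representation between RGB and depth
# that tolerates missing depth samples (incomplete multi-modality learning).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommonLatent(nn.Module):
    """Project RGB and depth features into a shared latent space."""
    def __init__(self, in_dim=256, latent_dim=128):
        super().__init__()
        self.rgb_proj = nn.Linear(in_dim, latent_dim)
        self.depth_proj = nn.Linear(in_dim, latent_dim)

    def forward(self, rgb_feat, depth_feat=None):
        z_rgb = self.rgb_proj(rgb_feat)
        if depth_feat is None:
            # Depth discarded as low quality: fall back to the RGB latent
            # alone, which the correlation loss aligned with the depth
            # latent during training.
            return z_rgb, None
        z_depth = self.depth_proj(depth_feat)
        return z_rgb, z_depth

def correlation_loss(z_rgb, z_depth):
    """Encourage a common latent representation across modalities
    (a simple cosine-alignment stand-in for the paper's correlation term)."""
    if z_depth is None:
        return z_rgb.new_zeros(())  # no correlation loss for depth-missing samples
    return 1.0 - F.cosine_similarity(z_rgb, z_depth, dim=-1).mean()

# Usage: shallow and deep features would each pass through their own such
# block (SCLR / DCLR in the paper) before multilevel fusion.
block = CommonLatent()
rgb = torch.randn(4, 256)
z_rgb, z_depth = block(rgb, depth_feat=None)  # batch with missing depth
loss = correlation_loss(z_rgb, z_depth)
```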