This paper addresses the problem of joint haze detection and color correction from a single underwater image. We present a framework based on stacked conditional generative adversarial networks (GANs) that learns the mapping between underwater images and air images in an end-to-end fashion. The proposed architecture consists of two components, a haze detection sub-network and a color correction sub-network, each with its own generator and discriminator. Specifically, an underwater image is fed into the first generator to produce a haze detection mask. The underwater image, together with the predicted mask, is then passed through the second generator to correct the color of the underwater image. Experimental results show the advantages of the proposed method over several state-of-the-art methods on publicly available synthetic and real underwater datasets.
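To make the two-stage data flow concrete, the sketch below shows one way the stacked generators could be wired up in PyTorch: a first generator maps the underwater image to a single-channel haze detection mask, and a second generator consumes the image concatenated with that mask and outputs the color-corrected result. The class names, layer widths, and simple convolutional blocks are illustrative assumptions and do not reproduce the paper's exact architecture or its discriminators.

```python
# Minimal sketch (assumed PyTorch, not the authors' code) of the stacked
# generator pipeline: image -> haze mask -> color-corrected image.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Basic conv -> norm -> ReLU block, kept small for illustration only.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class HazeMaskGenerator(nn.Module):
    """First generator: underwater RGB image -> single-channel haze mask."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32),
            conv_block(32, 32),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # mask values in [0, 1]
        )

    def forward(self, x):
        return self.net(x)


class ColorCorrectionGenerator(nn.Module):
    """Second generator: (image, predicted mask) -> color-corrected image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3 + 1, 32),  # RGB channels plus the mask channel
            conv_block(32, 32),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # output in [-1, 1], a common GAN image range
        )

    def forward(self, image, mask):
        # Condition the second stage on both the raw image and the mask.
        return self.net(torch.cat([image, mask], dim=1))


if __name__ == "__main__":
    underwater = torch.randn(1, 3, 256, 256)   # dummy input image
    g1, g2 = HazeMaskGenerator(), ColorCorrectionGenerator()
    mask = g1(underwater)                      # stage 1: haze detection
    corrected = g2(underwater, mask)           # stage 2: color correction
    print(mask.shape, corrected.shape)         # (1,1,256,256) (1,3,256,256)
```

In a full conditional GAN setup, each generator would be paired with its own discriminator and trained with adversarial plus reconstruction losses; the sketch only traces the forward pass described in the abstract.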