Conventional computed tomography (CT) produces volumetric images by computing the inverse Radon transform of X-ray projections acquired from different angles, which entails a high radiation dose, long reconstruction times, and artifacts. Biologically, prior knowledge or experience can be used to infer volumetric information from 2D images to a certain extent. A deep learning network, XctNet, is proposed to learn this prior knowledge from 2D pixels and produce volumetric data. In the proposed framework, a self-attention mechanism is used for adaptive feature optimization; multiscale feature fusion further improves reconstruction accuracy; and a 3D branch generation module is proposed to generate details at different generation fields. Comparisons with state-of-the-art methods on a public dataset show that XctNet achieves significantly higher image quality and better accuracy (SSIM and PSNR values of 0.8681 and 29.2823, respectively).
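The abstract does not specify implementation details of the self-attention module. Purely as an illustration, the following is a minimal NumPy sketch of scaled dot-product self-attention applied over spatial positions of a feature map, of the general kind used for feature refinement; the projection weights here are random placeholders, not XctNet's learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(features, seed=0):
    """Scaled dot-product self-attention over spatial positions.

    features: (N, C) array, one C-dimensional vector per position.
    The query/key/value projections are random placeholders
    (hypothetical; XctNet's learned weights are not given here).
    """
    n, c = features.shape
    rng = np.random.default_rng(seed)
    wq, wk, wv = (rng.standard_normal((c, c)) / np.sqrt(c) for _ in range(3))
    q, k, v = features @ wq, features @ wk, features @ wv
    attn = softmax(q @ k.T / np.sqrt(c), axis=-1)  # (N, N) attention weights
    return features + attn @ v  # residual connection preserves the input signal

feat = np.random.default_rng(1).standard_normal((16, 8))  # 16 positions, 8 channels
out = self_attention(feat)
print(out.shape)  # (16, 8)
```

In a full 2D-to-3D pipeline, a block like this would sit between encoder stages so that each spatial position can weight features from all others before volumetric decoding.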