Intravascular ultrasound (IVUS) imaging allows direct visualization of the coronary vessel wall and is well suited for assessing atherosclerosis and the degree of stenosis. Accurate segmentation of the lumen and media-adventitia (MA) borders, and the measurements derived from them, is essential for a successful clinical evaluation. However, current automated segmentation in commercial software relies on manual corrections, which are time-consuming and user-dependent. We aim to develop a deep learning-based method using an encoder-decoder architecture to automatically and accurately extract both the lumen and MA borders. Inspired by the dual-path design of the state-of-the-art model IVUS-Net, our method, named IVUS-U-Net++, extends the U-Net++ model. More specifically, a feature pyramid network was added to the U-Net++ model, enabling the use of feature maps at different scales. Following segmentation, Pearson correlation and Bland-Altman analyses were performed to evaluate the agreement between 12 clinical parameters measured from our segmentation results and those measured from the ground truth. A dataset of 1746 IVUS images from 18 patients was used for training and testing. At the patient level, our segmentation model achieved a Jaccard measure (JM) of 0.9080 ± 0.0321 and a Hausdorff distance (HD) of 0.1484 ± 0.1584 mm for the lumen border, and a JM of 0.9199 ± 0.0370 and an HD of 0.1781 ± 0.1906 mm for the MA border. The 12 clinical parameters measured from our segmentation results agreed well with those from the ground truth (all p-values smaller than 0.01). The proposed method shows great promise for clinical use in IVUS segmentation.
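
As a minimal illustration of the two reported evaluation metrics, and not the authors' implementation, the sketch below computes the Jaccard measure from binary segmentation masks and the symmetric Hausdorff distance from border point sets; the `mm_per_pixel` scale factor is a hypothetical parameter that would depend on the IVUS acquisition settings.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def jaccard_measure(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Jaccard measure (intersection over union) between two binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / union if union > 0 else 1.0


def hausdorff_distance_mm(pred_border: np.ndarray, gt_border: np.ndarray,
                          mm_per_pixel: float = 0.02) -> float:
    """Symmetric Hausdorff distance between two (N, 2) border point sets,
    converted from pixels to millimeters (mm_per_pixel is an assumed scale)."""
    d_forward = directed_hausdorff(pred_border, gt_border)[0]
    d_backward = directed_hausdorff(gt_border, pred_border)[0]
    return max(d_forward, d_backward) * mm_per_pixel
```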