Quality degradation (QD) is common in fundus images collected in clinical environments. Although diagnosis models based on convolutional neural networks (CNNs) have been widely used to interpret retinal fundus images, their performance under QD has not been assessed. To understand the effects of QD on CNN-based diagnosis models, this paper presents a systematic study. In our study, the QD of fundus images is controlled by introducing quantified interferences (e.g., image blurring, retinal artifacts, and light transmission disturbance) independently or simultaneously, and the effects of QD on diabetic retinopathy (DR) grading systems are then analyzed from the diagnosis performance on the degraded images. Several CNN-based DR grading models (e.g., AlexNet, SqueezeNet, VGG, DenseNet, and ResNet) are evaluated on images degraded by these quantified interferences. The experiments demonstrate that image blurring causes a significant decrease in performance, while the impacts of light transmission disturbance and retinal artifacts are relatively slight. VGG, DenseNet, and ResNet achieve superior performance in the absence of image degradation and also remain relatively robust under the controlled degradation.
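As a minimal sketch of how one quantified interference could be applied at controlled levels, the snippet below simulates image blurring with a Gaussian filter whose standard deviation serves as the degradation level. This is an illustrative assumption, not the paper's actual degradation pipeline; the function name `degrade_blur` and the chosen sigma levels are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_blur(image, sigma):
    """Apply quantified image blurring to an H x W x C image.

    sigma controls the degradation level: larger sigma, stronger blur.
    The channel axis is left unblurred (sigma 0 on axis 2).
    """
    return gaussian_filter(image, sigma=(sigma, sigma, 0))

# Stand-in for a normalized fundus image (real inputs would be loaded
# fundus photographs resized to the model's input resolution).
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))

# Hypothetical quantified degradation levels, from mild to severe.
levels = [0.5, 1.0, 2.0, 4.0]
degraded = [degrade_blur(img, s) for s in levels]
```

A grading model would then be evaluated on each `degraded` set to measure how diagnosis performance falls off with the degradation level.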