Domain shift is a common problem in medical image analysis, where models trained on one dataset may perform poorly on another dataset acquired with a different imaging protocol. To address this, various single-source domain generalization (SSDG) methods have been proposed to learn a model that generalizes to unseen domains from a single source domain. In this work, we approach SSDG from the perspective of input-space augmentation to generate diverse yet realistic images. Specifically, we moderate the strength of image augmentations by randomly composing augmentations whose magnitudes are governed by a linear decay function. We also introduce a self-interpolation technique that improves training stability and sample diversity by enabling a smoother transition between augmented and non-augmented images. Experiments on multi-site fundus image segmentation datasets demonstrate that our method outperforms prior SSDG methods without additional computational cost. In addition, the proposed self-interpolation technique can be seamlessly integrated into existing methods to further improve SSDG performance.
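To make the two ingredients of the augmentation pipeline concrete, the sketch below illustrates one plausible reading of the abstract: augmentations are composed at random with magnitudes drawn from a linearly decaying density (so mild perturbations are more likely than strong ones), and the augmented image is then blended back toward the original with a random coefficient ("self-interpolation"). The specific augmentation operations, the decay parameterization, and the uniform interpolation weight are all assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

# Toy photometric augmentations; the paper's actual augmentation set is not
# specified here, so these are illustrative placeholders.
def adjust_brightness(img, m):
    return np.clip(img + m, 0.0, 1.0)

def adjust_contrast(img, m):
    return np.clip((img - img.mean()) * (1.0 + m) + img.mean(), 0.0, 1.0)

def adjust_gamma(img, m):
    return np.clip(img, 1e-6, 1.0) ** (1.0 + m)

AUGMENTATIONS = [adjust_brightness, adjust_contrast, adjust_gamma]

def sample_magnitude(rng, max_m=0.5):
    """Sample a magnitude from a linearly decaying density on [0, max_m],
    i.e. p(m) proportional to (max_m - m), via inverse-CDF sampling
    (one possible interpretation of the linear decay function)."""
    u = rng.uniform()
    return max_m * (1.0 - np.sqrt(1.0 - u))

def augment(img, rng, max_ops=3):
    """Randomly compose a few augmentations with decayed magnitudes."""
    n_ops = rng.integers(1, max_ops + 1)
    ops = rng.choice(len(AUGMENTATIONS), size=n_ops, replace=False)
    out = img
    for i in ops:
        sign = rng.choice([-1.0, 1.0])
        out = AUGMENTATIONS[i](out, sign * sample_magnitude(rng))
    return out

def self_interpolate(original, augmented, rng):
    """Blend the augmented image back toward the original with a random
    coefficient, smoothing the transition between augmented and
    non-augmented inputs (interpolation weight assumed uniform)."""
    lam = rng.uniform()
    return lam * augmented + (1.0 - lam) * original

rng = np.random.default_rng(0)
fundus = rng.random((256, 256, 3))   # stand-in for a fundus image in [0, 1]
train_input = self_interpolate(fundus, augment(fundus, rng), rng)
```

Since both steps are simple pixel-wise operations applied on the fly, this style of augmentation adds no extra network parameters or forward passes, which is consistent with the claim that the method incurs no additional computational cost.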