Transfer learning aims to improve the performance of learning models in the target domain by transferring knowledge from a related source domain. However, not all instances in the source domain are reliable for the learning task in the target domain, and unreliable source-domain data may lead to negative transfer. To address this problem, we propose a novel strategy for selecting reliable data instances from the source domain based on evidence theory. Specifically, a mass function is formulated to measure the degree of ignorance and the reliability of the source-domain data with respect to the learning task in the target domain. By selecting reliable instances with a low degree of ignorance from the source domain, the domain adaptation of transfer learning models is enhanced. Moreover, the proposed data-selection strategy is independent of any specific learning algorithm and can be regarded as a general preprocessing technique for transfer learning. Experiments on both simulated and real-world datasets validate that the proposed data-selection strategy can improve the performance of various types of transfer learning methods.
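To make the selection step concrete, the following minimal sketch illustrates the general idea, not the paper's actual mass-function formulation: the discounting scheme, the function names `ignorance_mass` and `select_reliable`, and all parameter values are hypothetical. A reference model is fitted on a small labeled target set; each source instance is then assigned an ignorance mass m(Theta) from the model's discounted class probabilities, and only instances whose ignorance stays below a threshold are kept.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def ignorance_mass(probs, discount=0.2):
    """Hypothetical mass function: discount the classifier's class
    probabilities and assign the remaining mass to the whole frame
    of discernment, i.e. the ignorance mass m(Theta).

    probs: array of shape (n, k) with predicted class probabilities.
    Returns an array of shape (n,) with values in [0, 1].
    """
    top = probs.max(axis=1)            # support for the best class
    belief = (1.0 - discount) * top    # discounted belief mass
    return 1.0 - belief                # leftover mass = ignorance


def select_reliable(Xs, Xt_labeled, yt_labeled, threshold=0.5):
    """Keep only source instances whose ignorance mass, measured
    against a reference model trained on the labeled target data,
    falls below the given threshold."""
    clf = LogisticRegression().fit(Xt_labeled, yt_labeled)
    m_theta = ignorance_mass(clf.predict_proba(Xs))
    return Xs[m_theta < threshold], m_theta
```

The selected subset can then be fed, together with the target data, to any downstream transfer learning method, which is what makes such a strategy algorithm-independent.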