As a distributed machine learning paradigm, federated learning enables multiple clients to collaboratively train a deep learning model for a common artificial intelligence task while sharing only their gradients. However, recent gradient inversion attacks demonstrate that clients' training data can be reconstructed from the shared gradients, posing a severe threat to the privacy of federated learning. In this paper, we focus on the neuron-exclusivity-based gradient inversion attack, the first analytic attack that exploits the neuron exclusivity state of training batches. Since this attack requires the key condition of sufficient exclusivity to hold, we propose a batch-perturbation-based targeted defense that eliminates the exclusivity state of training batches. We model batch perturbation as an optimization problem that seeks the optimal perturbation of an input batch satisfying the secure boundary condition, then transform it into a linear program and solve it with PuLP. We evaluate the proposed defense on two datasets, MNIST and OrganAMNIST. The experimental results demonstrate that our defense effectively prevents the neuron-exclusivity-based attack while having almost no negative impact on model training and performance.
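To make the linear-programming step concrete, the sketch below shows one plausible PuLP formulation; it is not the paper's exact program. It assumes a fully connected ReLU first layer with weights W and biases b, treats a neuron as exclusively activated when it fires for exactly one sample in the batch, and searches for an L1-minimal per-sample perturbation that deactivates every such neuron. All names (W, b, x, eps) and the deactivate-only strategy are illustrative assumptions.

```python
# Illustrative sketch only: one way to pose the batch-perturbation LP in PuLP.
import numpy as np
import pulp

rng = np.random.default_rng(0)
B, D, H = 4, 8, 16            # batch size, input dim, hidden neurons (toy sizes)
W = rng.normal(size=(H, D))   # first-layer weights (known to the defending client)
b = rng.normal(size=H)        # first-layer biases
x = rng.normal(size=(B, D))   # toy input batch
eps = 1e-3                    # margin pushing pre-activations strictly below zero

z = x @ W.T + b               # pre-activations, shape (B, H)
active = z > 0                # ReLU activation pattern

prob = pulp.LpProblem("batch_perturbation", pulp.LpMinimize)

# Perturbation variables delta[i][k] and auxiliary variables encoding |delta|.
delta = [[pulp.LpVariable(f"d_{i}_{k}") for k in range(D)] for i in range(B)]
absd = [[pulp.LpVariable(f"a_{i}_{k}", lowBound=0) for k in range(D)]
        for i in range(B)]
for i in range(B):
    for k in range(D):
        prob += absd[i][k] >= delta[i][k]
        prob += absd[i][k] >= -delta[i][k]

# Objective: minimize the total L1 perturbation over the batch.
prob += pulp.lpSum(absd[i][k] for i in range(B) for k in range(D))

# For each exclusively activated neuron j (active for exactly one sample i),
# force that sample below the activation threshold:
#   W[j] . (x[i] + delta[i]) + b[j] <= -eps
for j in range(H):
    (rows,) = np.nonzero(active[:, j])
    if len(rows) == 1:
        i = int(rows[0])
        prob += (pulp.lpSum(float(W[j, k]) * delta[i][k] for k in range(D))
                 <= float(-eps - z[i, j]))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
x_perturbed = x + np.array(
    [[delta[i][k].value() for k in range(D)] for i in range(B)]
)
```

A full defense would presumably also consider the cheaper alternative of activating such a neuron for a second sample, and verify after perturbation that no new exclusively activated neurons were introduced; the sketch omits both for brevity.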