Motivated by emerging applications such as live-streaming e-commerce, promotions, and recommendations, we introduce and solve a general class of nonstationary multi-armed bandit problems that have the following two features: (i) the decision maker can pull and collect rewards from up to $K$ out of $N$ different arms in each time period, and (ii) the expected reward of an arm immediately drops after it is pulled and then nonparametrically recovers as the arm's idle time increases. With the objective of maximizing the expected cumulative reward over $T$ time periods, we design a class of purely periodic policies that jointly assign a pulling period to each arm. For the proposed policies, we prove performance guarantees for both the offline and the online problems. For the offline problem, in which all model parameters are known, the proposed periodic policy achieves a long-run approximation ratio on the order of $1-O(1/\sqrt{K})$, which is asymptotically optimal as $K$ grows to infinity. For the online problem, in which the model parameters are unknown and must be learned dynamically, we integrate the offline periodic policy with the upper confidence bound procedure to construct an online policy. The proposed online policy is proved to approximately have $\tilde{O}(N\sqrt{T})$ regret against the offline benchmark. Our framework and policy design may shed light on broader offline planning and online learning applications with nonstationary and recovering rewards. This paper was accepted by J. George Shanthikumar, data science. Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2021.04202.
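To make the policy class concrete, the following is a minimal sketch, not the authors' exact algorithm: each arm is pulled on its own fixed period, with hand-picked offsets staggering the pulls so that at most $K$ arms fire in any time period. The recovery curve recovered_mean, the periods, and the offsets are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# A minimal sketch of a purely periodic policy under recovering rewards.
# Arm i is pulled whenever (t - offsets[i]) % periods[i] == 0; the
# periods are chosen so that sum(1/periods) <= K, and the offsets
# stagger the pulls so at most K arms are scheduled in any period.

N, K, T = 6, 2, 60                      # arms, pulls per period, horizon
periods = np.array([2, 2, 3, 3, 6, 6])  # illustrative per-arm periods
offsets = np.array([0, 1, 0, 1, 2, 5])  # stagger pulls so <= K fire at once

def recovered_mean(arm, idle):
    """Hypothetical concave recovery curve: the expected reward drops to 0
    right after a pull and rebuilds as the idle time grows."""
    return (0.5 + 0.1 * arm) * (1 - np.exp(-0.5 * idle))

last_pull = np.full(N, -np.inf)   # time of each arm's most recent pull
total_reward = 0.0

for t in range(T):
    # Arms whose purely periodic schedule fires at time t.
    scheduled = [i for i in range(N) if (t - offsets[i]) % periods[i] == 0]
    for i in scheduled[:K]:       # respect the at-most-K-pulls constraint
        idle = t - last_pull[i]   # idle time since the arm's last pull
        total_reward += recovered_mean(i, idle)
        last_pull[i] = t

print(f"expected cumulative reward over {T} periods: {total_reward:.2f}")
```

With these particular periods, $\sum_i 1/d_i = K$ and the offsets partition the schedule so exactly $K$ arms are pulled every period; the paper's offline analysis concerns how to choose such periods to trade off pulling an arm often against letting its reward recover. In the online version one would replace recovered_mean with upper-confidence-bound estimates learned from observed rewards.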