Lifeline systems such as transportation and water distribution networks may deteriorate with age, raising the risk of system failure or performance degradation. System-level sequential decision-making is therefore essential to manage this risk cost-effectively while minimizing potential losses. Researchers have proposed assessing the risk of lifeline systems using Markov Decision Processes (MDPs) to identify a risk-informed operation and maintenance (O&M) policy. For complex systems with many components, however, finding MDP solutions can become intractable because the state and action spaces grow exponentially with the number of components. This paper proposes a multi-agent deep reinforcement learning framework, termed parallelized multi-agent Deep Q-Network (PM-DQN), to overcome this curse of dimensionality. The proposed method takes a divide-and-conquer strategy: subsystems are identified by community detection, and each agent learns the O&M policy of the corresponding subsystem. The agents establish policies that minimize the decentralized cost of their cluster units, including the factorized cost. These learning processes run simultaneously in multiple parallel units, and the trained policies are periodically synchronized with the best one, thereby improving the master policy. Numerical examples demonstrate that the proposed method outperforms baseline policies, including conventional maintenance schemes and the subsystem-level optimal policy.
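To make the abstract's divide-and-conquer and periodic-synchronization ideas concrete, the following is a minimal, hedged sketch rather than the paper's implementation: the network (a placeholder karate-club graph), the component dynamics, the cost values, the class and function names (SubsystemAgent, train_unit, step), the number of parallel units, and the synchronization schedule are all illustrative assumptions, and tabular Q-learning stands in for the Deep Q-Network learner.

```python
# Illustrative sketch only: community detection partitions components into
# subsystems, one agent per subsystem minimizes its decentralized cost, and
# several parallel training units are periodically synchronized to the best one.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)

# Toy lifeline network: nodes are components, edges are dependencies (assumed).
G = nx.karate_club_graph()
communities = [sorted(c) for c in greedy_modularity_communities(G)]  # subsystems

N_STATES, N_ACTIONS = 4, 3                   # per-component conditions / O&M actions
ACTION_COST = np.array([0.0, 1.0, 5.0])      # do-nothing, repair, replace (assumed)
FAILURE_COST = 20.0                          # penalty for a failed component (assumed)

class SubsystemAgent:
    """Q-learning agent for one community (stand-in for a DQN agent)."""
    def __init__(self, n_components):
        self.n = n_components
        self.q = np.zeros((n_components, N_STATES, N_ACTIONS))  # factorized Q-values

    def act(self, states, eps=0.1):
        greedy = self.q[np.arange(self.n), states].argmax(axis=1)
        explore = rng.integers(N_ACTIONS, size=self.n)
        return np.where(rng.random(self.n) < eps, explore, greedy)

    def update(self, s, a, cost, s2, alpha=0.1, gamma=0.95):
        idx = np.arange(self.n)
        target = -cost + gamma * self.q[idx, s2].max(axis=1)
        self.q[idx, s, a] += alpha * (target - self.q[idx, s, a])

def step(states, actions):
    """Toy dynamics: maintenance restores a component, otherwise it may degrade."""
    restored = np.where(actions == 2, 0, np.maximum(states - (actions == 1), 0))
    degraded = np.minimum(restored + (rng.random(restored.shape) < 0.3), N_STATES - 1)
    cost = ACTION_COST[actions].sum() + FAILURE_COST * (degraded == N_STATES - 1).sum()
    return degraded.astype(int), cost

def train_unit(episodes=200):
    """One parallel unit: a team of subsystem agents and its accumulated cost."""
    agents = [SubsystemAgent(len(c)) for c in communities]
    total = 0.0
    for _ in range(episodes):
        states = [rng.integers(N_STATES, size=len(c)) for c in communities]
        for _ in range(20):                      # decision epochs per episode
            for agent, s in zip(agents, states):
                a = agent.act(s)
                s2, cost = step(s, a)            # decentralized subsystem cost
                agent.update(s, a, cost, s2)
                total += cost
                s[:] = s2
    return agents, total

# Periodic synchronization across parallel units: keep the best-performing policy.
best_agents, best_cost = None, np.inf
for sync_round in range(3):                      # synchronization periods (assumed)
    results = [train_unit() for _ in range(4)]   # 4 parallel units (assumed)
    agents, cost = min(results, key=lambda r: r[1])
    if cost < best_cost:
        best_agents, best_cost = agents, cost    # master policy updated
print(f"best accumulated cost: {best_cost:.1f}")
```

In this sketch the synchronization step simply copies the lowest-cost unit's agents into the master policy; the paper's actual update rule, network architecture, and cost factorization should be taken from the main text rather than from this example.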