Human moral reactions to the behavior of artificial intelligence (AI) agents constitute an important aspect of modern human-AI relationships. Whereas previous studies have focused mainly on autonomy ethics, this study investigates how individuals judge AI agents' violations of community ethics (including betrayal and subversion) compared with the same violations committed by humans. Participants' behavioral responses, event-related potentials (ERPs), and individual differences were assessed. Behaviorally, participants rated AI agents' community-violating actions as less morally negative than human transgressions, possibly because AI agents are commonly perceived as having less agency than human adults. The ERP N1 component showed the same pattern as the moral ratings, indicating that human-AI differences modulate initial moral intuitions. Moreover, higher levels of social withdrawal were associated with a smaller N1 in the human condition but not in the AI condition. The N2 and P2 components were sensitive to the difference between the loyalty/betrayal and authority/subversion domains but not to human/AI differences. Individual levels of moral sense and autistic traits also influenced behavioral responses, especially in the loyalty/betrayal domain. We suggest that these findings offer insights for predicting moral responses to AI agents and for guiding the development of ethical AI aligned with human moral values.