Studies have shown that uncivil comments under an online news article can bias readers' perceptions of the news content, and that explicit comment moderation has the potential to mitigate this adverse effect. Using an online experiment, the present study extends this line of research by examining how interface cues signalling different moderating agents (human vs. machine) for uncivil comments affect readers' judgments of the news, and how prior belief in the machine heuristic moderates these effects. The results indicated that perceptions of news bias were attenuated when uncivil comments were moderated by a machine (as opposed to a human) agent, which in turn engendered greater perceived credibility of the news story. Moreover, these indirect effects were more pronounced among readers who strongly believed that machine operations are generally accurate and reliable than among those with a weaker prior belief in this rule of thumb.