Algorithms have been the subject of heated debate regarding their potential to yield biased decisions. Prior research has focused on documenting algorithmic bias and discussing its origins from a technical standpoint. We examine algorithmic bias from a psychological perspective, raising a fundamental question that has received little attention: are people more or less likely to perceive decisions that yield disparities as biased when such decisions stem from algorithms rather than humans? We find that algorithmic decisions that yield gender or racial disparities are less likely to be perceived as biased than human decisions. This occurs because people believe that algorithms, unlike humans, decontextualize decision-making by neglecting individual characteristics and blindly applying rules and procedures irrespective of whom they are judging. In situations that entail the potential for discrimination, this belief leads people to think that algorithms are more likely than humans to treat everyone equally, and thus less likely to yield biased decisions. This asymmetrical perception of bias, which occurs both in the general population and among members of stigmatized groups, leads people to endorse stereotypical beliefs that fuel discrimination and reduces their willingness to act against potentially discriminatory outcomes. (PsycInfo Database Record (c) 2021 APA, all rights reserved.)