The aim of this study was to determine whether peer review conducted under real-world conditions is systematically biased.

A repeated-measures design was effectively created when two board-certified obstetrician-gynecologists reviewed the same 26 medical records of patients treated by the same physician and provided written evaluations of each case together with a summary of their criticisms. The reviews were conducted independently for two different, unaffiliated hospitals. Neither reviewer was aware of the other's review, and neither was affiliated with either hospital or knew the physician under review. This study reports the degree of agreement between the two reviewers on the care rendered to these 26 patients.

Three of the 26 cases reviewed had complications. Both reviewers criticized these cases, but criticized two of them for different reasons. At least one reviewer criticized 14 (61%) of the 23 uncomplicated cases, about which no quality concerns had been raised before the review. With one exception, the two reviewers criticized completely different cases, and they criticized that one shared case for different reasons. Thus, only 4 of the 17 cases criticized by at least one reviewer were criticized by both, and only 1 of those 4 was criticized for the same reason. The kappa statistic was -0.024, indicating no agreement between the reviewers (P = 0.98).

As presently conducted, peer review can be systematically biased even when conducted independently by external reviewers. The dual-process theory of reasoning can account for this bias and predicts how it may be reduced or eliminated.
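The kappa calculation can be illustrated from the agreement counts reported above: 4 cases criticized by both reviewers, 9 criticized by neither, and 13 criticized by exactly one. The abstract does not report how those 13 split between the two reviewers, so the 5/8 split below is an assumption, chosen because it is consistent with the reported kappa of -0.024. This is a minimal sketch, not the authors' actual computation:

```python
# Cohen's kappa for two reviewers' criticize / not-criticize ratings of 26 cases.
# a = criticized by both, d = criticized by neither (from the abstract);
# b, c = criticized by only one reviewer -- the 5/8 split is an ASSUMPTION,
# since the abstract reports only their sum (13).
a, b, c, d = 4, 5, 8, 9
n = a + b + c + d                # 26 cases in total

po = (a + d) / n                                      # observed agreement
pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # agreement expected by chance
kappa = (po - pe) / (1 - pe)

print(round(kappa, 3))           # -0.024, matching the reported value
```

A kappa near zero (here slightly below it) means the reviewers agreed no more often than two raters assigning criticisms at random would be expected to.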