Language bias, both positive and negative, is a well-documented phenomenon among human interlocutors. We examine whether this bias extends to virtual assistants, specifically Apple's Siri and Google Assistant, when they speak with various accents. We conducted three studies with different stimuli and designs to investigate U.S. English speakers' attitudes toward Google's British, Indian, and American voices and Apple's Irish, Indian, South African, British, Australian, and American voices. Analysis reveals consistently lower fluency ratings for the Irish, Indian, and South African voices (compared with American) but no consistent evidence of bias related to competence, warmth, or willingness to interact. Moreover, participants often misidentified the voices' countries of origin but correctly identified the voices as artificial. We conclude that this overall lack of bias may stem from two factors: the limited humanlikeness of the voices, and the unavailability of nonstandardized voices and of voices from countries toward which people in the United States typically show bias.