Do people prefer that artificial intelligence (AI) align with gender stereotypes when they seek help answering a question? We found that people preferred gender stereotypicality (over counterstereotypicality and androgyny) in voice-based AI when seeking help (e.g., preferring feminine voices to answer questions in feminine domains; Studies 1a–1b). Preferences for stereotypicality were stronger when measured with binary zero-sum (vs. continuous non-zero-sum) assessments (Study 2). Contrary to expectations, biases were larger when judging human (vs. AI) targets (Study 3). Finally, people were more likely to request (vs. decline) assistance from gender stereotypical (vs. counterstereotypical) human targets, but this choice bias did not extend to AI targets (Study 4). Across studies, we observed stronger preferences for gender stereotypicality in feminine (vs. masculine) domains, potentially because biases were examined in a stereotypically feminine context (helping). These studies offer nuanced insights into the conditions under which people use gender stereotypes to evaluate human and non-human entities.