This study explored the potential of eXplainable Artificial Intelligence (XAI) to raise user awareness of algorithmic bias, focusing on the popular "explanation by example" approach, in which users receive explanatory examples resembling their own input. Because this approach allows users to gauge the congruence between these examples and their own circumstances, perceived incongruence evokes perceptions of unfairness and exclusion, prompting users to withhold blind trust in the system and raising their awareness of algorithmic bias stemming from non-inclusive datasets. The results further highlight the moderating role of users' prior experience with discrimination.
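
To make the "explanation by example" mechanism concrete, the following is a minimal sketch (not the study's actual system) in which a model's output is accompanied by the k training instances most similar to the user's input, so the user can judge how well those examples match their own circumstances. The function name, toy dataset, and distance metric are all illustrative assumptions.

```python
# Minimal sketch of "explanation by example": alongside a prediction,
# the system surfaces the k training instances most similar to the
# user's input. A persistent mismatch between these examples and the
# user's own profile may signal a non-inclusive training set.
# All names and data below are illustrative assumptions.
import numpy as np

def explain_by_example(x, X_train, y_train, k=3):
    """Return the k training examples (and labels) nearest to input x."""
    # Euclidean distance from the user's input to every training instance
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return X_train[nearest], y_train[nearest]

# Toy usage: features are (age, income); labels are loan decisions.
X_train = np.array([[25, 40_000], [52, 85_000], [31, 52_000], [47, 90_000]])
y_train = np.array(["denied", "approved", "denied", "approved"])
user = np.array([29, 45_000])
examples, labels = explain_by_example(user, X_train, y_train, k=2)
print(examples, labels)
```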