People are often reluctant to incorporate information produced by algorithms into their decisions, a phenomenon known as ``algorithm aversion''. This paper shows how algorithm aversion can arise when the choice to follow an algorithm conveys information about a worker's ability. I develop a model in which workers forecast a random outcome based on their own private information and an algorithm's signal. Low-skill workers receive less accurate information than the algorithm and hence should always follow its signal, while high-skill workers receive more accurate information than the algorithm and should sometimes override it. Because of reputational concerns, however, low-skill workers inefficiently override the algorithm to increase the likelihood that they are perceived as high-skill. The model provides a fully rational microfoundation for algorithm aversion, one consistent with the broader concern that AI systems will displace many types of workers.