Generative chatbots based on artificial intelligence have become an essential channel through which people obtain health information, offering not only comprehensive health information but also real-time virtual companionship. However, the health information provided by AI is not always accurate. Employing a 3 × 2 × 2 experimental design, this study examines how the type of interaction with AI-generated content (AIGC), specifically in virtual companionship versus knowledge acquisition scenarios, affects willingness to share health-related rumors. It also explores the influence of rumor type (fear vs. hope) and the role of altruistic tendencies. The results show that people are more willing to share rumors in the knowledge acquisition scenario, and that fear-type rumors elicit a greater willingness to share than hope-type rumors. Altruism plays a moderating role: it increases willingness to share health rumors in the virtual companionship scenario but decreases it in the knowledge acquisition scenario. These findings support Kelley's three-dimensional attribution theory and negativity bias theory and extend them to the field of human–computer interaction. The results help explain the mechanism of rumor spreading in human–computer interaction and provide theoretical support for improving health chatbots.