This study investigates effective feedback mechanisms for maintaining human engagement in interactive machine learning (IML) systems, focusing on social media platforms. We developed "Loop," an IML system based on human-in-the-loop (HITL) principles that recommends content while encouraging users to report inaccuracies for model refinement. Loop implements three types of artificial intelligence (AI) feedback on user reports: (a) machine learning (ML)-centric, (b) personal-centric, and (c) community-centric feedback. We also evaluated the relative effectiveness of these feedback types under two task-criticality scenarios: high and low. We conducted a user study with 30 participants, evaluating Loop through questionnaires and interviews. The results showed that participants preferred algorithmic improvements for their own benefit over altruistic contributions to the community, especially in low-criticality tasks. Furthermore, personal-centric feedback had a significant impact on user engagement and satisfaction. Our findings offer insights into the effectiveness of machine feedback in HITL-ML systems and contribute to the design of more engaging and effective IML interfaces. We discuss implications and strategies for encouraging proactive user engagement in HITL-ML-based systems, emphasizing the importance of tailored feedback mechanisms.