This study explores the feasibility of large language models (LLMs) such as ChatGPT and Bard serving as virtual participants in health-related research interviews. The goal is to assess whether these models can function as a "collective knowledge platform" by processing extensive datasets. Framed as a "proof of concept", the research involved 20 interviews with both ChatGPT and Bard, with the models portraying personas based on parents of adolescents. The interviews focused on physician-patient-parent confidentiality issues across fictional cases covering alcohol intoxication, sexually transmitted diseases, an ultrasound performed without parental knowledge, and mental health. Conducted in Dutch, the interviews were coded independently and compared with responses from human participants. The analysis identified four primary themes (privacy, trust, responsibility, and etiology) in both the AI-based and the human-based interviews. While the main concepts aligned, nuanced differences in emphasis and interpretation were observed. Bard exhibited less interpersonal variation than ChatGPT and the human respondents, and notably, the AI personas prioritized privacy and age more than the human parents did. Given these disparities between AI and human interviews, researchers must adapt their methodologies and refine AI models to improve accuracy and consistency. This research initiates a discussion on the evolving role of generative AI in research and opens avenues for further exploration.