There is increasing enthusiasm about the use of Artificial Intelligence (AI) technologies in psychotherapy. Notably, AI psychotherapy chatbots are growing in popularity, especially since the US Food and Drug Administration (FDA) granted one of these apps Breakthrough Device designation. This article raises concerns about the lack of consideration of the potential harms of this technology for clinical trial participants and for current and future users. We outline what these harms might be by turning to the Belmont Report and the existing literature on the harms of conventional psychotherapy, and we conclude with two recommendations. Our goal is not to articulate doomsday fears about the use of AI in psychotherapy contexts; rather, we offer a constructive proposal for thinking about the potential harms of these tools and invite clinicians, patients, developers, researchers, policymakers, and funding agencies to work together to maximize the benefits of these tools and minimize their potential harms.