In recent years, undergraduate statistics and data science programs have grown rapidly across the US. At the same time, clear curriculum guidance has been published for both data science (De Veaux et al. 2017) and statistics (Carver et al. 2016) programs. Concurrently, ABET (now simply a name, though originally an acronym for the Accreditation Board for Engineering and Technology), in coordination with organizations such as the American Statistical Association, developed accreditation criteria for data science programs. In this manuscript, we recount our journey through ABET accreditation and discuss how adopting ABET's processes for continuous improvement strengthens a program's assessment. We share best practices for working across multiple departments to collect data not only on individual courses but also on the program as a whole. Although the framework presented here was initially established to support ABET accreditation, we argue that a properly executed program assessment should occur regardless of whether an institution is seeking ABET accreditation for its data science program. Throughout the manuscript, we also examine how well ABET requirements fit within our program's existing goals, including an assessment of how those requirements align with major ideas in data science education.