Objectives
Contemporary clinical assessment of vocal fold adduction and abduction is qualitative and subjective. Herein is described a novel computer vision tool for automated quantitative tracking of vocal fold motion from videolaryngoscopy. The potential of this software as a diagnostic aid in unilateral vocal fold paralysis is demonstrated.

Study Design
Case-control.

Methods
A deep-learning algorithm was trained for vocal fold localization from videoendoscopy, enabling automated frame-wise estimation of glottic opening angles. Algorithm accuracy was compared against manual expert markings. Maximum glottic opening angles were compared between adults with normal vocal fold movements (N = 20) and those with unilateral vocal fold paralysis (N = 20).

Results
Algorithm angle estimations demonstrated a correlation coefficient of 0.97 (P < .001) and a mean absolute difference of 3.72° (standard deviation [SD], 3.49°) relative to manual expert markings. Compared with subjects with normal movements, patients with unilateral vocal fold paralysis demonstrated significantly lower maximal glottic opening angles (mean 68.75° ± 11.82° vs. 49.44° ± 10.42°; difference, 19.31°; 95% confidence interval [CI], 12.17°–26.44°; P < .001). A maximum opening angle of less than 58.65° predicted unilateral vocal fold paralysis with a sensitivity of 0.85 and a specificity of 0.85, with an area under the receiver operating characteristic curve of 0.888 (95% CI, 0.784–0.991; P < .001).

Conclusion
A user-friendly software tool for automated quantification of vocal fold movements from previously recorded videolaryngoscopy examinations is presented, termed automated glottic action tracking by artificial intelligence (AGATI). This tool may prove useful for diagnosis and outcomes tracking of vocal fold movement disorders.

Level of Evidence
IV

Laryngoscope, 131:E219–E225, 2021
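The core quantities in the abstract are a frame-wise glottic opening angle and a decision threshold on the maximum angle. The sketch below illustrates, under stated assumptions, how such an angle could be computed from landmark points and how the reported 58.65° cutoff would be applied; the point names and function signatures are hypothetical and are not AGATI's actual API or method.

```python
import math

def glottic_opening_angle(commissure, left_edge, right_edge):
    """Angle in degrees at the anterior commissure between rays toward
    each vocal fold edge. Landmark inputs are (x, y) tuples; this is an
    illustrative geometry sketch, not the published algorithm."""
    ax, ay = left_edge[0] - commissure[0], left_edge[1] - commissure[1]
    bx, by = right_edge[0] - commissure[0], right_edge[1] - commissure[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def predicts_paralysis(max_angle_deg, threshold=58.65):
    """Reported cutoff: a maximum glottic opening angle below 58.65°
    predicted unilateral vocal fold paralysis (Se 0.85, Sp 0.85)."""
    return max_angle_deg < threshold
```

For example, landmarks at (0, 0), (-1, 2), and (1, 2) yield an angle of about 53.1°, which falls below the reported cutoff and would therefore be flagged.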