Pixel
Standard deviation
Computer science
Mean squared error
Artificial intelligence
Statistics
Mathematics
Computer vision
Authors
Deirdre Larsen,Takeshi Ikuma,Luisa Neubig,Andreas M. Kist,Rebecca Leonard,Andrew J. McWhorter,Melda Kunduk
Source
Journal: Journal of Speech, Language, and Hearing Research
[American Speech-Language-Hearing Association]
Date: 2023-02-13
Volume/Issue: 66 (2): 565-572
Citations: 1
Identifier
DOI:10.1044/2022_jslhr-22-00306
Abstract
This research note illustrates the effects of video data with nonsquare pixels on pixel-based measures obtained from videofluoroscopic swallow studies (VFSS). Six pixel-based distance and area measures were obtained from two different videofluoroscopic study units, both of which yielded videos with nonsquare pixels of different pixel aspect ratios (PARs). The swallowing measures were obtained from the original VFSS videos and from the same videos after their pixels were squared. The results demonstrated significant multivariate effects both for video type (original vs. squared) and for the interaction between video type and sample (two video recordings of different patients, with different PARs and opposing tilt angles of the external reference). A wide range of variability was observed in the pixel-based measures between the original and squared videos, with percent deviations ranging from 0.1% to 9.1% and a maximum effect size of 7.43. This research note demonstrates the effect of disregarding PAR on distance and area pixel-based parameters. In addition, we present a multilevel roadmap to prevent possible measurement errors. At the planning stage, the PAR of the video source should be identified, and, at the analysis stage, video data should be prescaled prior to analysis with PAR-unaware software. No methodology in prior absolute or relative pixel-based studies reports adjusting for the PAR prior to measurement, nor identifies the PAR as a possible source of variation within the literature. Addressing PAR will improve the precision and stability of pixel-based VFSS findings and improve comparability within and across clinical and research settings. https://doi.org/10.23641/asha.21957134
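The abstract's recommendation (identify the PAR, then prescale coordinates to square pixels before computing distances and areas) can be illustrated with a minimal sketch. This is not the authors' software; it is a hypothetical example assuming measurements are taken in raw pixel coordinates and that PAR is defined as pixel width divided by pixel height:

```python
import math

def square_coords(x, y, par):
    """Rescale a raw pixel coordinate so pixels become square.

    par: pixel aspect ratio (pixel width / pixel height). A PAR != 1
    means horizontal and vertical pixel units differ, so the
    horizontal axis is stretched by PAR before measuring.
    """
    return x * par, y

def distance(p1, p2, par=1.0):
    """Euclidean distance between two points after squaring pixels."""
    x1, y1 = square_coords(*p1, par)
    x2, y2 = square_coords(*p2, par)
    return math.hypot(x2 - x1, y2 - y1)

def polygon_area(points, par=1.0):
    """Shoelace area of a polygon after squaring pixels."""
    pts = [square_coords(x, y, par) for x, y in points]
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Ignoring a hypothetical PAR of 1.1 biases a purely horizontal
# distance by 10%, while a purely vertical distance is unaffected:
d_ignored = distance((0, 0), (100, 0))           # treats pixels as square
d_scaled = distance((0, 0), (100, 0), par=1.1)   # pixels squared first
```

Because horizontal and vertical components are distorted unequally, the error from ignoring PAR depends on the orientation of the measured structure, which is consistent with the wide range of deviations (0.1% to 9.1%) reported above.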