Inference
Advertising
Test (biology)
Incentive
Selection (genetic algorithm)
Business
Display advertising
Causal inference
Randomized experiment
Online advertising
User-generated content
Targeted advertising
Content delivery
World Wide Web
Computer science
Data science
Internet
Machine learning
Econometrics
Artificial intelligence
Social media
Medicine
Paleontology
Computer network
Pathology
Economics
Biology
Microeconomics
Authors
Michael Braun,Eric M. Schwartz
Identifier
DOI:10.1177/00222429241275886
Abstract
Marketers use online advertising platforms to compare user responses to different ad content. But platforms’ experimentation tools deliver different ads to distinct and undetectably optimized mixes of users that vary across ads, even during the test. Because exposure to ads in the test is non-random, the estimated comparisons confound the effect of the ad content with the effect of algorithmic targeting. This means experimenters may not be learning what they think they are learning from ad A-B tests. The authors document these “divergent delivery” patterns during an online experiment for the first time. They explain how algorithmic targeting, user heterogeneity, and data aggregation conspire to confound the magnitude, and even the sign, of ad A-B test results. Analytically, the paper extends the potential outcomes model of causal inference to treat random assignment of ads and user exposure to ads as separate experimental design elements. Managerially, the authors explain why platforms lack incentives to allow experimenters to untangle the effects of ad content from proprietary algorithmic selection of users when running A-B tests. Given that experimenters have diverse reasons for comparing user responses to ads, the authors offer tailored prescriptive guidance to experimenters based on their specific goals.
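The abstract's core claim, that divergent delivery can confound the magnitude and even the sign of an ad A-B comparison, is a Simpson's-paradox pattern. A minimal sketch of that mechanism, using invented segment-level response rates and delivery mixes (not data from the paper): ad B beats ad A within every user segment, yet because the platform delivers each ad to a different mix of users, the naive aggregate comparison favors ad A.

```python
# Hypothetical illustration of "divergent delivery" flipping the sign of a
# naive ad A-B comparison. All numbers below are invented for the sketch.

# True conversion rates by (ad, user segment): ad B beats ad A by +0.01
# within BOTH segments.
rates = {
    ("A", "high"): 0.05, ("A", "low"): 0.01,
    ("B", "high"): 0.06, ("B", "low"): 0.02,
}

# Algorithmic targeting delivers each ad to a different (non-random) mix
# of users, even during the test.
delivery = {
    ("A", "high"): 0.8, ("A", "low"): 0.2,  # ad A mostly reaches high-responders
    ("B", "high"): 0.2, ("B", "low"): 0.8,  # ad B mostly reaches low-responders
}

def naive_rate(ad):
    """Aggregate response rate the experimenter observes for one ad:
    a delivery-weighted average over user segments."""
    return sum(delivery[(ad, s)] * rates[(ad, s)] for s in ("high", "low"))

naive_diff = naive_rate("B") - naive_rate("A")
true_diff = {s: rates[("B", s)] - rates[("A", s)] for s in ("high", "low")}

print(f"naive aggregate B-A difference: {naive_diff:+.3f}")  # negative: B "loses"
print(f"true within-segment B-A differences: {true_diff}")   # B wins in both
```

The aggregate comparison confounds ad content with algorithmic user selection: the observed gap reflects who saw each ad, not which ad works better.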