Computer science
Type I and Type II errors
Scalability
Test (biology)
Machine learning
Empirical research
Statistical hypothesis testing
Randomization
Sample (material)
Sample size determination
Data mining
Statistics
Randomized controlled trial
Mathematics
Medicine
Surgery
Paleontology
Biology
Chemistry
Database
Chromatography
Authors
Chengchun Shi, Shikai Luo, Hongtu Zhu, Rui Song
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 1
Identifier
DOI: 10.48550/arxiv.2111.03908
Abstract
Tech companies (e.g., Google or Facebook) often use randomized online experiments and/or A/B testing, primarily based on average treatment effects, to compare a new product with an old one. However, it is also critically important to detect qualitative treatment effects, where the new product significantly outperforms the existing one only under some specific circumstances. The aim of this paper is to develop a powerful testing procedure to efficiently detect such qualitative treatment effects. We propose a scalable online updating algorithm to implement our testing procedure. It has three novelties: adaptive randomization, sequential monitoring, and online updating with guaranteed type-I error control. We also thoroughly examine the theoretical properties of our testing procedure, including the limiting distribution of the test statistics and the justification of an efficient bootstrap method. Extensive empirical studies are conducted to examine the finite-sample performance of our testing procedure.
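To make the abstract's ingredients concrete, below is a minimal Python sketch of sequential monitoring with online updating for an A/B test. The `SequentialABTest` class and all of its parameters are hypothetical names introduced here for illustration; the sketch uses plain 50/50 randomization and a simple Bonferroni correction across interim looks to control the type-I error, rather than the adaptive randomization and bootstrap calibration the paper proposes, and it tests the average (not qualitative) treatment effect. It is a generic sketch under those assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.stats import norm


class SequentialABTest:
    """Generic sequential A/B test with online (streaming) updates.

    Maintains O(1) sufficient statistics per arm and checks a z-statistic
    at K planned interim looks, spending alpha/K at each look (Bonferroni),
    which conservatively bounds the overall type-I error by alpha.
    """

    def __init__(self, num_looks: int, alpha: float = 0.05):
        self.num_looks = num_looks
        # Two-sided critical value for each of the K interim analyses.
        self.z_crit = norm.ppf(1 - (alpha / num_looks) / 2)
        # Running statistics per arm: count, sum, sum of squares.
        self.n = np.zeros(2)
        self.s = np.zeros(2)
        self.ss = np.zeros(2)

    def update(self, arm: int, outcome: float) -> None:
        """O(1) online update; no raw data is stored."""
        self.n[arm] += 1
        self.s[arm] += outcome
        self.ss[arm] += outcome ** 2

    def z_statistic(self) -> float:
        """Z-statistic for the difference in arm means."""
        mean = self.s / self.n
        var = self.ss / self.n - mean ** 2
        se = np.sqrt(var[0] / self.n[0] + var[1] / self.n[1])
        return (mean[1] - mean[0]) / se

    def interim_look(self) -> bool:
        """Return True if the null (no treatment effect) is rejected."""
        return abs(self.z_statistic()) > self.z_crit


# Usage: stream simulated batches and stop early once the test rejects.
rng = np.random.default_rng(0)
test = SequentialABTest(num_looks=5)
for look in range(5):
    for _ in range(500):
        arm = int(rng.integers(2))             # simple 50/50 randomization
        outcome = rng.normal(0.1 * arm, 1.0)   # arm 1 is slightly better
        test.update(arm, outcome)
    if test.interim_look():
        print(f"Rejected at look {look + 1}, z = {test.z_statistic():.2f}")
        break
```

The online-updating design choice is the key point: each observation touches only the running counts, sums, and sums of squares, so interim tests can be computed over an arbitrarily long stream without revisiting raw data.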