The Critical Challenge of Using Large-Scale Digital Experiment Platforms for Scientific Discovery

Bibliographic Details
Title: The Critical Challenge of Using Large-Scale Digital Experiment Platforms for Scientific Discovery
Authors: Abbasi, Ahmed (aabbasi@nd.edu); Somanchi, Sriram (somanchi.1@nd.edu); Kelley, Ken (kkelley@nd.edu)
Source: MIS Quarterly, Mar 2025, Vol. 49, Issue 1, p. 1-28 (28 pages; 9 diagrams, 3 charts, 7 graphs).
Subject Terms: Electronic commerce; Digital technology; Research methodology; Experimental design; Machine learning; Causal inference; Research bias
Abstract: Robust digital experimentation platforms have become increasingly pervasive at major technology and e-commerce firms worldwide. They allow product managers to use data-driven decision-making through online controlled experiments that estimate the average treatment effect (ATE) relative to a status quo control setting and make associated inferences. As demand for experiments continues to grow, orthogonal test planes (OTPs) have become the industry standard for managing the assignment of users to multiple concurrent experimental treatments in companies using large-scale digital experimentation platforms. In recent years, firms have begun to recognize that test planes might be confounding experimental results, but have nevertheless judged the practical benefits to outweigh the costs. However, the uptick in practitioner-led digital experiments has coincided with an increase in academic-industry research partnerships, where large-scale digital experiments are being used to scientifically answer research questions, validate design choices, and/or derive computational social science-based empirical insights. In such contexts, confounding and biased estimation may have much more pronounced implications for the validity of scientific findings, contributions to theory, building a cumulative literature, and ultimately practice. The purpose of this Issues and Opinions article is to shed light on OTPs—in our experience, most researchers are unaware of how such test planes can lead to incorrect inferences. We used a case study conducted at a major e-commerce company to illustrate the extent to which interactions in concurrent experiments can bias ATEs, often making them appear more positive than they actually are. We discuss implications for research, including the distinction between practical industry experiments and academic research, methodological best practices for mitigating such concerns, and transparency and reproducibility considerations stemming from the complexity and opacity of large-scale experimentation platforms. More broadly, we worry that confounding in scientific research due to reliance on large-scale digital experiments meant to serve a different purpose is a microcosm of a larger epistemological confounding regarding what constitutes a contribution to scientific knowledge. [ABSTRACT FROM AUTHOR]
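Editor's illustrative note: The abstract argues that, under an orthogonal test plane, interactions between concurrent experiments can make a reported ATE look more positive than the treatment's effect relative to the status quo control. The Python sketch below simulates that mechanism under purely hypothetical assumptions; the outcome model, effect sizes, and 50/50 assignment split are illustrative and are not taken from the article or its case study.

# Minimal simulation sketch: two concurrent experiments, A and B, are
# independently randomized (an orthogonal test plane). When A and B interact,
# the platform-reported ATE for A (treated vs. control, averaging over B)
# exceeds A's effect relative to the status quo control (B held at control).
# All names, effect sizes, and the 50/50 split are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Orthogonal assignment: each user is independently randomized into A and B.
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)

# Hypothetical outcome model: A alone lifts the metric by 1.0, B alone by 0.5,
# and receiving both adds an extra interaction lift of 0.5.
y = 1.0 * a + 0.5 * b + 0.5 * a * b + rng.normal(0, 1, n)

# ATE for A as a platform dashboard would report it: treated minus control,
# averaging over whatever B happens to be doing concurrently.
ate_platform = y[a == 1].mean() - y[a == 0].mean()

# Effect of A relative to the status quo control, i.e., with B at control.
ate_status_quo = y[(a == 1) & (b == 0)].mean() - y[(a == 0) & (b == 0)].mean()

print(f"Platform-reported ATE for A: {ate_platform:.3f}")   # ~1.25
print(f"A vs. status quo (B = 0):    {ate_status_quo:.3f}") # ~1.00

In this toy setup, the positive interaction inflates the platform-reported ATE by roughly the interaction size times the share of users concurrently exposed to B, which is the kind of optimistic bias the abstract describes.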
Copyright of MIS Quarterly is the property of MIS Quarterly and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Business Source Index
Description
ISSN: 0276-7783
DOI: 10.25300/misq/2024/18201