TY - JOUR
T1 - An assessment of publication bias using a sample of published clinical trials
AU - Berlin, Jesse A.
AU - Begg, Colin B.
AU - Louis, Thomas A.
N1 - Funding Information:
Jesse A. Berlin is Research Scientist, New England Research Institute, Watertown, MA 02172. Colin B. Begg is Chairman, Department of Epidemiology and Biostatistics, Memorial Sloan-Kettering Cancer Center, New York, NY 10021. Thomas A. Louis is Professor and Head, Division of Biometry, University of Minnesota School of Public Health, Minneapolis, MN 55455. This work was supported by National Cancer Institute Grant CA-35291. The authors thank Karen Abbett and Jennifer Thomas for assistance with the manuscript, and Thomas Chalmers and Frederick Mosteller for helpful suggestions.
PY - 1989/6
Y1 - 1989/6
N2 - The potential magnitude of publication bias has been examined with a consecutive sample of published cancer clinical trials. The analysis is based on the premise that the magnitude of the true treatment effect is unrelated to design features of the study, in particular sample size. This assumption permits an analysis based only on published studies. Three primary endpoints are examined: overall patient survival, disease-free survival, and tumor response rate. There are striking trends for each endpoint, with small studies appearing to possess large treatment effects and large studies possessing relatively small effects. It is believed that these differences are primarily due to publication bias. The bias is very large: absolute differences observed were 41% for overall survival, 79% for disease-free survival, and 17% for response rates. Other study features have been examined that might be associated with bias, or that might be responsible for the striking trends regarding sample size. The results indicate that absence of randomization leads to significant bias, and studies conducted in a single institution are somewhat more prone to bias than multi-institutional studies, though the trends are less consistent in the latter case. No strong trend was observed for journal type. Nevertheless, none of these variables could account for the strong effect of sample size. Sensitivity analyses of the results were conducted and alternative models were considered. These analyses generally support the contention that the magnitude of the bias due to sample size cannot be explained by alternative factors. An implication of this study is that the results of small published studies are typically unreliable, even taking into account the fact that such trials are imprecise due to sampling variation.
KW - Folded-normal distribution
KW - Meta-analysis
UR - http://www.scopus.com/inward/record.url?scp=0000544148&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0000544148&partnerID=8YFLogxK
U2 - 10.1080/01621459.1989.10478782
DO - 10.1080/01621459.1989.10478782
M3 - Article
AN - SCOPUS:0000544148
SN - 0162-1459
VL - 84
SP - 381
EP - 392
JO - Journal of the American Statistical Association
JF - Journal of the American Statistical Association
IS - 406
ER -