The potential magnitude of publication bias has been examined with a consecutive sample of published cancer clinical trials. The analysis is based on the premise that the magnitude of the true treatment effect is unrelated to design features of the study, in particular sample size. This assumption permits an analysis based only on published studies. Three primary endpoints are examined: overall patient survival, disease-free survival, and tumor response rate. There are striking trends for each endpoint, with small studies appearing to possess large treatment effects and large studies possessing relatively small effects. These differences are believed to be primarily due to publication bias. The bias is very large: the absolute differences observed were 41% for overall survival, 79% for disease-free survival, and 17% for response rates. Other study features that might be associated with bias, or that might be responsible for the striking trends regarding sample size, were also examined. The results indicate that absence of randomization leads to significant bias, and that studies conducted in a single institution are somewhat more prone to bias than multi-institutional studies, though the trends are less consistent in the latter case. No strong trend was observed for journal type. Nevertheless, none of these variables could account for the strong effect of sample size. Sensitivity analyses of the results were conducted and alternative models were considered. These analyses generally support the contention that the magnitude of the bias due to sample size cannot be explained by alternative factors. An implication of this study is that the results of small published studies are typically unreliable, even after accounting for the imprecision such trials suffer due to sampling variation.
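The sample-size trend described above can be reproduced with a toy simulation (a hypothetical sketch for illustration, not the paper's folded-normal analysis; the effect size, trial sizes, and selection rule below are assumptions): if every trial estimates the same true effect but only "significant" results reach print, small trials show inflated published effects while large trials do not.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.2              # identical true effect for every trial (the abstract's premise)
pub_means = {}

for n in (25, 400):            # hypothetical small and large per-arm trial sizes
    se = np.sqrt(2.0 / n)      # standard error of a two-arm mean difference (unit variance)
    effects = rng.normal(true_effect, se, 100_000)  # observed effects across many trials
    z = effects / se
    published = effects[z > 1.96]   # selection: only statistically significant results are "published"
    pub_means[n] = published.mean()

print(pub_means)   # the small-trial average published effect is far above the true effect
```

Because the significance threshold sits far above the true effect when the standard error is large, only extreme draws from small trials survive selection, which is exactly the small-study inflation pattern the abstract reports.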
Keywords: Folded-normal distribution