Schubladenproblem (file drawer problem)
Synonyms
Schubladenproblem, file drawer problem
Definitions
The file drawer problem (Schubladenproblem) refers to the risk that studies with non-significant results end up in a drawer rather than being published, whereas studies with significant results are more likely to be published. This adversely affects the availability of studies on a given research question, so that meta-analyses tend to include the studies whose results were deemed significant rather than those deemed non-significant. This can bias the resulting meta-analysis.
Recorded in Biblionetz on 31.03.2013

A reporting bias and the file drawer problem are opposite sides of the same coin. Consider a researcher who conducts a study and collects data examining four separate effects. Two of the results turn out to be statistically significant while the other two do not achieve statistical significance. A reporting bias arises when the researcher reports only the statistically significant results (Hedges 1988). The researcher’s decision to file away the nonsignificant results, while understandable, creates a file drawer problem (Rosenthal 1979). The problem is that evidence which is relevant to the meta-analytic estimation of effect sizes has been kept out of the public domain. Reviews that exclude these unreported and filed away results are likely to be biased.
By Paul D. Ellis in the book The Essential Guide to Effect Sizes (2010), in the text Minimizing bias in meta-analysis, on page 117
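A minimal simulation sketch of the bias described above (not from Ellis; the true effect size, sample size, and study count are assumed purely for illustration): averaging only the studies that clear the p < 0.05 bar overstates the true effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.2    # assumed small true effect (in SD units)
N_PER_GROUP = 30     # assumed sample size per group
N_STUDIES = 1000     # assumed number of independent studies

all_effects, published_effects = [], []
for _ in range(N_STUDIES):
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_PER_GROUP)
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    observed = treatment.mean() - control.mean()   # observed effect (SD is 1 by construction)
    _, p = stats.ttest_ind(treatment, control)
    all_effects.append(observed)
    if p < 0.05:                                   # only "significant" studies leave the file drawer
        published_effects.append(observed)

print(f"mean effect across all studies:       {np.mean(all_effects):.2f}")
print(f"mean effect across published studies: {np.mean(published_effects):.2f}")
print(f"share of studies published:           {len(published_effects) / N_STUDIES:.0%}")
```

Because only the studies that happen to overestimate the effect cross the significance threshold in this setup, the average over the published subset comes out substantially larger than the true effect, which is the kind of bias the quoted passage describes.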
Remarks
In their survey of members of the American Psychological Association, Coursol and Wagner (1986) found that the decision to submit a paper for publication was significantly related to the outcome achieved in the study.
By Paul D. Ellis in the book The Essential Guide to Effect Sizes (2010), in the text Minimizing bias in meta-analysis, on page 117

At best the file drawer problem will lead to some inflation in mean estimates. At worst, it will lead to Type I errors. This could happen when the null hypothesis of no effect happens to be true and the majority of studies which have reached this conclusion have gone unreported or have been filed away rather than published. Statistically there will be a small minority of studies that confuse sampling variability with natural variation in the population (that is, their authors report an effect where none exists), and these are much more likely to be submitted for publication.
By Paul D. Ellis in the book The Essential Guide to Effect Sizes (2010), in the text Minimizing bias in meta-analysis, on page 118
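A hedged sketch of this worst case (assumed two-group design and parameters, not taken from Ellis): every simulated study tests a truly null effect, yet the published record consists entirely of "significant" findings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

N_STUDIES = 1000    # assumed number of studies, each testing a truly null effect
N_PER_GROUP = 50    # assumed sample size per group

published = []
for _ in range(N_STUDIES):
    # Both groups are drawn from the same population: the null hypothesis is true.
    a = rng.normal(0.0, 1.0, N_PER_GROUP)
    b = rng.normal(0.0, 1.0, N_PER_GROUP)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:                    # about 5% of studies cross the threshold by chance alone
        published.append(a.mean() - b.mean())

print(f"studies reaching p < 0.05:             {len(published)} of {N_STUDIES}")
print(f"mean |effect| among published studies: {np.mean(np.abs(published)):.2f}")
# Every published study reports a nonzero "effect" although none exists; a review
# limited to the published subset would therefore commit exactly this Type I error.
```

If the roughly 95% of null results stay in the file drawer, a reader of the published literature alone sees nothing but apparent effects.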
Remember Goodhart’s law? “When a measure becomes a target, it ceases to be a good measure.” In a sense this is what has happened with p-values. Because a p-value lower than 0.05 has become essential for publication, p-values no longer serve as a good measure of statistical support. If scientific papers were published irrespective of p-values, these values would remain useful measures of the degree of statistical support for rejecting a null hypothesis. But since journals have a strong preference for papers with p-values below 0.05, p-values no longer serve their original purpose.
By Carl T. Bergstrom and Jevin D. West in the book Calling Bullshit (2020), in the text The Susceptibility of Science
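A small sketch of this Goodhart dynamic (an assumed two-sample t-test setting with made-up effect sizes and sample sizes): once only results with p < 0.05 are published, even a literature on a nonexistent effect consists entirely of small p-values, so a published p-value on its own says little about the strength of the underlying evidence.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def published_pvalues(true_effect, n_studies=2000, n_per_group=30):
    """Simulate two-group studies and keep only p-values passing the p < 0.05 filter."""
    kept = []
    for _ in range(n_studies):
        a = rng.normal(true_effect, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:            # the publication filter: only "significant" results appear
            kept.append(p)
    return np.array(kept)

for effect in (0.0, 0.5):       # no effect at all vs. an assumed medium-sized effect
    p_pub = published_pvalues(effect)
    print(f"true effect {effect}: {len(p_pub):4d} studies published, "
          f"median published p = {np.median(p_pub):.3f}")
```

Even with no effect at all, the filter guarantees a steady stream of small published p-values, which is why, once p < 0.05 became the target, it stopped being a reliable measure of support.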
Related objects
Related terms (co-word occurrence) | Publikationsbias / publication bias (0.1), GIGO-Argument / garbage in - garbage out argument (0.08), apple-and-oranges-Problem / apples-and-oranges problem (0.07) |
Statistical concept network
Citation graph
3 mentions
- Forschungsmethoden und Evaluation - für Human- und Sozialwissenschaftler (Jürgen Bortz, Nicola Döring) (2001)
- The Essential Guide to Effect Sizes - Statistical Power, Meta-Analysis, and the Interpretation of Research Results (Paul D. Ellis) (2010)
- Calling Bullshit - The Art of Skepticism in a Data-Driven World (Carl T. Bergstrom, Jevin D. West) (2020)