
1 Small-study effects and reporting bias

2 Steps of a systematic Cochrane review
1. Formulate the research question
2. Define eligibility criteria
3. Define methods
4. Search for studies
5. Apply the eligibility criteria
6. Extract data
7. Assess the risk of bias in the included studies
8. Analyse and present the results
9. Interpret the results and draw conclusions
10. Improve and update the review
You will use RevMan throughout the review process, from writing your protocol all the way through to your statistical analysis and the editorial process.

3 Overview
- Detecting 'small-study effects'
- Understanding reporting bias
See Handbook chapter 10

4 A reminder: random error
- Whenever several studies estimate an effect, every study is affected by random error
- The estimates are scattered around the true effect – some lower, some higher
[Figure: effect estimates scattered around the true effect; labels: random error, true effect, effect estimate]
The differences between small and large studies usually reflect their vulnerability to random sampling error. Any time we conduct a study and estimate an effect, the study is affected by random error – there is a gap between our estimate and the true effect of the intervention. The estimates of multiple studies will be scattered, sometimes overestimating and sometimes underestimating the effect.
Source: Julian Higgins
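To make this scatter concrete, here is a minimal simulation sketch in Python/NumPy. Everything in it is invented for illustration – the true effect, the trial sizes and the SE scaling constant. Five trials per size estimate the same true log odds ratio, and the small trials scatter far more widely:

```python
import numpy as np

rng = np.random.default_rng(42)

true_log_or = -0.3                   # hypothetical true effect (log odds ratio)
sample_sizes = [50, 200, 800, 3200]  # illustrative trial sizes

for n in sample_sizes:
    # The SE of a log OR shrinks roughly with 1/sqrt(n); the constant 4.0
    # is purely illustrative, not derived from any real trial.
    se = 4.0 / np.sqrt(n)
    estimates = rng.normal(true_log_or, se, size=5)
    print(f"n={n:5d}  SE={se:.2f}  five estimates: {np.round(estimates, 2)}")
```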

5 Random error and small studies
When considering random error, we assume that…
- small studies are less precise than large studies
- their estimates scatter more widely around the mean
Small-study effects:
- when small studies have consistently more positive or more negative results than large studies
- a possible cause of heterogeneity
- several explanations are possible
We can usually assume that small studies will be less precise than large studies in estimating an intervention effect – we expect the results of larger studies to be closer to the true effect, and those of smaller studies to be more widely scattered. This assumption holds, for fixed-effect and random-effects meta-analyses, even in the presence of other kinds of heterogeneity such as differences in the intervention or population, except in one case: when the results of the small studies are consistently different from those of the larger studies, either more positive or more negative. As with other kinds of heterogeneity, it may be that differences in the results of the studies are somehow related to study size – what this means, and what the different explanations might be, we'll explore further.

6 Detecting small-study effects
- Must be assessed separately for each outcome
- Available methods: funnel plots, statistical tests, sensitivity analyses
- If in doubt, ask a statistician for advice
The first thing we have to do is identify whether or not small-study effects are at work in our review. There are several methods available to test whether the results of your studies are associated with study size: funnel plots, statistical tests and sensitivity analyses. We'll explain the basics of each of these, remembering that you may find small-study effects for some outcomes and not others. These methods are tricky, though, and the best way to proceed is to get advice from a statistician who can help you plan your steps and interpret your findings.

7 Funnel plots
- Plot effect size against study size
- Study size is usually represented by a measure such as the standard error
- Studies scatter around the pooled effect estimate
- Larger studies at the top, smaller studies further down
- Small studies are expected to scatter more widely
- A symmetrical plot looks like an inverted funnel
- Funnel plots can be generated in RevMan
- Usually only meaningfully interpretable if ≥ 10 studies of different sizes are available
Funnel plots take the results of a meta-analysis and plot the result of each individual study against a measure of the study's size (usually a measure of variance such as the standard error). If study size is not associated with the results, the plot should resemble an inverted funnel – larger studies will be at the top, in the centre of the plot, close to the meta-analysed effect estimate, and smaller studies will scatter progressively more widely either side towards the bottom. RevMan can generate these plots for you, but it should be emphasised that funnel plots will not give meaningful results if you have fewer than 10 studies in your meta-analysis, or if they are all the same size.
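RevMan draws these plots for you; purely for intuition, here is a hedged matplotlib sketch of the same construction, using invented study data (log odds ratios and standard errors) and a simple inverse-variance pooled estimate as the reference line:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented per-study results: log odds ratios and their standard errors
log_or = np.array([-0.25, -0.10, -0.40, 0.05, -0.70, -0.55, 0.20, -0.35, -0.90, -0.05])
se     = np.array([ 0.10,  0.12,  0.20, 0.25,  0.45,  0.40, 0.35,  0.15,  0.55,  0.30])

# Fixed-effect (inverse-variance) pooled estimate as the reference line
weights = 1 / se**2
pooled = np.sum(weights * log_or) / np.sum(weights)

plt.scatter(log_or, se)
plt.axvline(pooled, linestyle="--", label="pooled effect")
plt.gca().invert_yaxis()   # small SE (large studies) at the top
plt.xlabel("log odds ratio")
plt.ylabel("standard error")
plt.legend()
plt.show()
```

Note that the vertical reference line is the pooled effect estimate, not the line of no effect – exactly the point made on the next slide.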

8 Symmetrical funnel plot
[Funnel plot: effect on a log scale from 0.1 to 10 on the x-axis, standard error on the y-axis]
This is what a funnel plot looks like. The standard error has been used as the measure of size, with the scale reversed so that studies with low SE (i.e. large studies) are at the top of the plot and studies with high SE (i.e. small studies) are at the bottom. The studies at the point of the funnel are therefore the large studies, and the smaller studies gradually scatter wider and wider towards the bottom. Note that the important vertical line on this plot is not the line of no effect (here 1), as it would be on a forest plot, but the overall effect estimate from the meta-analysis, which we want to see in the centre of the triangle. For ratio measures, just as on a forest plot, a logarithmic scale is used for the measure of treatment effect so that the scale is symmetrical.
Source: Matthias Egger & Jonathan Sterne

9 Asymmetrical funnel plot
[Funnel plot as before, with a gap at the bottom of the funnel labelled 'unpublished studies']
On our hypothetical plot, this is what it might look like if we have small-study effects. We have large studies at the top, close to the overall effect estimate, but not a nice even scatter of smaller studies either side – the smaller studies appear to be consistently estimating lower odds ratios than the larger studies. This is called funnel plot asymmetry, and it indicates that some kind of small-study effect is at work. One possible reason is that the missing small studies were never published and could not be found for the review.
Source: Matthias Egger & Jonathan Sterne

10 Asymmetrical funnel plot
[Funnel plot as before; here all the small studies have positive effect estimates]
Alternatively, it might be that the small studies are consistently finding different results from the large studies, so that there are no studies with results at the other end of the scale.
Source: Matthias Egger & Jonathan Sterne

11 Colloids vs. crystalloids for fluid resuscitation
[Funnel plot for the outcome: death]
This is a more realistic picture of what a funnel plot might look like in your review – if you are lucky enough to have this many included studies. ASK: Is this plot symmetrical? Yes. Real funnel plots are rarely perfect triangles, but this one appears fairly symmetrical.
Adapted from: Perel P, Roberts I. Colloids versus crystalloids for fluid resuscitation in critically ill patients. Cochrane Database of Systematic Reviews 2011, Issue 3.

12 Magnesium for myocardial infarction
ASK: How about this example – is this plot symmetrical? No – there appear to be more studies on the left side of the effect estimate. It can be difficult to judge from a visual inspection like this how concerned we should be about the missing studies – is it likely to be reporting bias, or one of the other reasons?
Adapted from: Li J, Zhang Q, Zhang M, Egger M. Intravenous magnesium for acute myocardial infarction. Cochrane Database of Systematic Reviews 2007, Issue 2.

13 Reasons for funnel plot asymmetry
- Chance
- Artefacts: some statistics are correlated with their standard errors, e.g. the OR
- Clinical differences: different study populations in small studies; implementation differs in small studies
- Methodological differences: greater risk of bias in small studies
- Reporting bias
It's important not to jump to conclusions about the causes of funnel plot asymmetry and small-study effects. There are many possible reasons, and you will need to put some thought into distinguishing between them. Knowing your intervention, and the circumstances in which it was implemented in different studies, can help identify causes of funnel plot asymmetry. Remember too that your review may suffer from some of these problems even if the funnel plots are symmetrical and the tests negative, so you will always need to explore and understand your results and consider each of these issues for yourself. The first possible reason for asymmetry is chance – it may simply be random chance that the small studies found different effects, particularly in reviews with few studies, which describes most Cochrane reviews. Secondly, it may be artefactual – some statistics, for example odds ratios, are naturally correlated with their standard errors, and in these cases some funnel plot asymmetry is to be expected. It may be due to clinical diversity, i.e. heterogeneity in your study populations and interventions. For example, the smaller studies may have different underlying populations that obtain different benefits from the intervention: early, small, exploratory studies may have been conducted in high-risk populations, who might receive more benefit from the intervention, producing a correlation between the effect and the size of the study. The same can apply to the delivery of the intervention – e.g. if larger studies deliver the intervention with less fidelity, monitoring or intensity than smaller studies, we might see different effects. You may already be planning subgroup analyses that can clarify these differences in effects. In some cases, methodological diversity may be at work: small studies may consistently overestimate the effect because of bias, e.g. poor allocation concealment or lack of blinding. Concern about this is the reason we assess the risk of bias in our included studies in the first place, and the risk of bias assessment in your review may indicate whether these factors are affecting your results. If so, and if you have not already done so, you may wish to consider excluding studies at high risk of bias from your analysis. Finally, asymmetry may be caused by reporting biases, otherwise known as publication bias – we'll come to that later.
Source: Egger M et al. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997; 315: 629

14 Contour-enhanced funnel plots
Some enhancements to funnel plots can be helpful here – such as these contour-enhanced funnel plots. Unfortunately these enhanced plots are not currently available in RevMan. The shaded areas on these plots indicate P values: studies falling outside the white area in the middle have statistically significant results, at P < 0.1, P < 0.05 and P < 0.01 respectively as we move away from the middle. If studies appear to be missing in the middle of the plot – indicating that the results of the missing studies would not be statistically significant – this is consistent with our understanding of reporting bias: non-significant trials are less likely to be found, although we should still consider the other possible explanations. The plot on the left is an example of this kind of case. However, if the asymmetry suggests that the missing studies would be statistically significant, especially in the direction considered desirable by the authors, then it is less likely that studies have gone unreported or unpublished because of reporting bias. Looking at the plot on the right, the asymmetry suggests missing studies over on the left side, crossing into the area of statistical significance. This is not consistent with our understanding of reporting bias – it would mean that the non-significant studies had been published, and not the significant ones. It is much more likely that there is some other reason for the asymmetry. If there are no statistically significant studies at all, then it is very unlikely that reporting bias is the cause of the asymmetry.
Source: Sterne JAC, Sutton AJ, Ioannidis JPA et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011;342:d4002. doi:10.1136/bmj.d4002
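Although RevMan does not draw these contours, they are easy to reconstruct: at a given standard error, a study's two-sided P value falls below a level α exactly when |effect| > z × SE, with z = 1.645, 1.96 and 2.576 for P < 0.10, 0.05 and 0.01. A hedged matplotlib sketch of just the significance bands (no real data), onto which you could scatter your own study estimates:

```python
import numpy as np
import matplotlib.pyplot as plt

se = np.linspace(0.01, 0.6, 200)
# Two-sided z cut-offs: at a given SE, a study is significant
# at level alpha exactly when |effect| > z * SE.
bands = [(1.645, 1.960, "0.85"),   # 0.05 < P < 0.10, light grey
         (1.960, 2.576, "0.70"),   # 0.01 < P < 0.05, mid grey
         (2.576, 10.00, "0.55")]   # P < 0.01, dark grey

for z_inner, z_outer, grey in bands:
    for sign in (+1, -1):
        plt.fill_betweenx(se, sign * z_inner * se, sign * z_outer * se, color=grey)

# plt.scatter(log_or, se_per_study) would overlay your study estimates here
plt.xlim(-2, 2)
plt.gca().invert_yaxis()
plt.xlabel("effect (log odds ratio)")
plt.ylabel("standard error")
plt.title("White centre: P > 0.10 (non-significant)")
plt.show()
```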

15 Asymmetry caused by heterogeneity
Here at the top left is an example of a funnel plot with overall asymmetry. On these plots, the dotted triangle indicates the area within which we would expect to find 95% of the studies in the absence of bias and heterogeneity. As it turns out, the asymmetry in this plot is due entirely to differences in effects between subgroups. Separate funnel plots for each of the three subgroups show that none of them is asymmetrical, but the effect of the intervention differs in each subgroup. When all three subgroups are overlaid, the plot looks asymmetrical, but in fact what we have is heterogeneity arising from other factors.
Source: Sterne JAC, Sutton AJ, Ioannidis JPA et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011;342:d4002. doi:10.1136/bmj.d4002

16 Tests for funnel plot asymmetry
- Is the association between study size and effect size greater than would be expected by chance?
- Three tests are recommended
- They generally have low statistical power to rule out reporting bias
- The shape of the funnel plot should also be inspected
- Usually only meaningfully interpretable if ≥ 10 studies of different sizes are available
Visual inspection of funnel plots is not always easy or reliable. Besides funnel plots, it is also possible to conduct statistical tests of whether there is a greater association between study size and effect than would be expected by chance. A range of tests is available, each with advantages and disadvantages. Three of them are recommended – check the Cochrane Handbook (chapter 10), and definitely get statistical advice before deciding to use any of them. If you are assessing small-study effects, you should always include a visual assessment of the funnel plot as well, and like funnel plots, these tests are usually only useful if you have at least 10 studies.
See Handbook chapter 10
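One widely used option is the regression test of Egger and colleagues (cited on slide 13): regress each study's standardized effect on its precision and test whether the intercept differs from zero. A hedged SciPy sketch on invented data – it illustrates the mechanics only, not a substitute for statistical advice:

```python
import numpy as np
from scipy import stats

# Invented per-study log odds ratios and standard errors (>= 10 studies)
log_or = np.array([-0.25, -0.10, -0.40, 0.05, -0.70, -0.55, 0.20, -0.35, -0.90, -0.05])
se     = np.array([ 0.10,  0.12,  0.20, 0.25,  0.45,  0.40, 0.35,  0.15,  0.55,  0.30])

# Egger's regression: standardized effect (effect/SE) against precision (1/SE).
# An intercept clearly different from zero suggests funnel plot asymmetry.
snd = log_or / se            # standard normal deviate of each study
precision = 1 / se
res = stats.linregress(precision, snd)

# linregress reports the slope's p-value; Egger's test needs the intercept's,
# so we compute it from the intercept's standard error (scipy >= 1.6).
t_int = res.intercept / res.intercept_stderr
p_int = 2 * stats.t.sf(abs(t_int), df=len(log_or) - 2)
print(f"Egger intercept = {res.intercept:.2f}, P = {p_int:.3f}")
```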

17 Sensitivity analysis
- How strongly do small-study effects influence the results?
- If in doubt, ask a statistician before proceeding
- If there is heterogeneity (I² > 0), compare the estimates from the fixed-effect and the random-effects model
- Is there a difference? If so, is there a reason why the intervention might be more or less effective in smaller studies?
- Selection models (e.g. 'trim & fill') and other methods
If you suspect that you have identified a small-study effect, you may wish to know how large its impact on your results might be. We have already come across a useful technique for testing this in the separate presentation on heterogeneity: where small studies are systematically different, comparing the fixed-effect and random-effects meta-analyses gives you a sensitivity analysis of the potential impact of the small studies. If the random-effects model shows a different effect, consider whether it is reasonable to conclude that the intervention was more effective in smaller studies, with reference to possible clinical and methodological diversity between the studies. This is not a perfect test – small-study effects can bias the results even where there is no heterogeneity and the fixed-effect and random-effects models give the same result. Selection models (e.g. 'trim & fill' and other, more sophisticated models) can be used, but require expertise and careful application. Do not attempt to use these without statistical advice.
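As a sketch of what this comparison involves, here is a minimal inverse-variance fixed-effect pooling next to a DerSimonian-Laird random-effects pooling in Python/NumPy. The study data are invented so that the small studies (large SEs) sit further from the null; if the two pooled estimates diverge, the small studies are pulling the random-effects result:

```python
import numpy as np

def fixed_effect(y, se):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    w = 1 / se**2
    return np.sum(w * y) / np.sum(w), np.sqrt(1 / np.sum(w))

def random_effects_dl(y, se):
    """DerSimonian-Laird random-effects pooled estimate, SE and tau^2."""
    w = 1 / se**2
    fe, _ = fixed_effect(y, se)
    q = np.sum(w * (y - fe)**2)                 # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)     # between-study variance
    w_re = 1 / (se**2 + tau2)
    return np.sum(w_re * y) / np.sum(w_re), np.sqrt(1 / np.sum(w_re)), tau2

# Invented log risk ratios: the small studies (large SE) lean further left
y  = np.array([-0.05, -0.02, -0.60, -0.75, -0.80, -0.55, -0.70])
se = np.array([ 0.05,  0.06,  0.30,  0.35,  0.40,  0.30,  0.35])

fe, fe_se = fixed_effect(y, se)
re, re_se, tau2 = random_effects_dl(y, se)
print(f"fixed-effect:   {fe:+.2f} (SE {fe_se:.2f})")
print(f"random-effects: {re:+.2f} (SE {re_se:.2f}), tau^2 = {tau2:.3f}")
```

Because the random-effects model weights studies more evenly, its pooled estimate here is pulled towards the small studies – the same pattern the next slide shows in a real review.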

18 Sensitivity analysis
Here is an example in which the size of the studies in the review is correlated with their results – the same example we looked at in the separate presentation on heterogeneity. The review examines intravenous magnesium for acute myocardial infarction, measuring mortality. There are a few large studies, shown by the large squares, lined up closely to the line of no effect. Then there are many small studies, all over to the left of the plot, showing a stronger reduction in mortality. If the small studies were not systematically different from the larger studies, we would expect the fixed-effect diamond to sit neatly inside the diamond for the random-effects model. Here that doesn't happen. The fixed-effect result is right on the line of no effect, with a confidence interval extending upwards from 0.94. The random-effects result, in comparison, shifts to the left, with a CI of 0.53 to 0.82 – the two don't overlap at all. The random-effects model, by giving more weight to the smaller studies, has highlighted a systematic difference. Remember that this kind of sensitivity analysis can highlight the presence of small-study effects, but it doesn't tell you why they have occurred. It is still your job to consider the possible explanations.
Adapted from: Li J, Zhang Q, Zhang M, Egger M. Intravenous magnesium for acute myocardial infarction. Cochrane Database of Systematic Reviews 2007, Issue 2.

19 Overview
- Detecting small-study effects
- Understanding reporting bias

20 Dissemination of evidence
- Not available (unpublished)
- Available in principle (doctoral theses, conference abstracts, small journals)
- Easily available (Medline-indexed)
- Actively disseminated (news media, pharmaceutical companies)
Dissemination of research results falls along a continuum. Unavailable: e.g. not published, only available through informal circulation by the author. Available in principle: e.g. published as a thesis, a conference abstract, or in a journal with smaller circulation and impact, perhaps not published in English or not indexed by the major databases; only about half of the abstracts presented at conferences are ever published in full (Scherer 2007). Easily available: e.g. published in a journal indexed in Medline. Actively disseminated: e.g. trials distributed by a drug rep or some other interested organisation. Only a proportion of studies will ever be published in a way that makes them easy to find and include in your review.
Source: Matthias Egger

21 Reporting bias
- The dissemination of research findings is influenced by the nature and direction of the results
- Statistically significant ('positive') results are more likely to be published…
- …and are therefore more likely to be included in a review
- This leads to overestimation of effects
- Because large studies are very likely to be published regardless, small studies are most affected
- For your review, the non-significant results are just as important as the significant ones
We know that these differences in dissemination are not random: they are influenced by the results of the studies. Studies with more positive results and significant findings are more likely to be published and widely disseminated, which in turn makes it much more likely that they will be incorporated into your systematic review. Small studies are the most vulnerable to this problem, as large studies are likely to be published whatever their findings. So if we find an excess of small, positive studies in a review, one possible explanation is that we have failed to find published records of the balancing neutral or negative studies. If we can only find and include the positive, significant findings, the risk is that our review misrepresents the true effect of the intervention. For our review to be accurate and reliable, we need to include all the evidence, including the negative and statistically non-significant results.

22 Evidence for reporting bias
[Figure: proportion of studies remaining unpublished against years since completion, by type of result: significant, non-significant trend, null]
There is evidence to demonstrate this effect at work. In this study, Stern & Simes followed a cohort of clinical studies to see how long it took for the results to be published. The answer varied strongly with the results of the study. The red line shows studies with significant results – they were published fastest, and after 10 years fewer than 20% remained unpublished. Studies with non-significant results but a discernible trend were published more slowly, with nearly half remaining unpublished after 20 years. Slowest of all were studies with a null result – that is, where the intervention had no discernible effect. Almost none of these were published within 5 years, and after 10 years more than 70% remained unpublished.
Source: Stern JM, Simes RJ. Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ 1997;315:

23 'Positive' studies are…
- more likely to be submitted for publication and accepted (publication bias)
- published faster (time-lag bias)
- published in more than one article (multiple publication bias)
- published in English (language bias)
- published in indexed, 'high-impact' journals (location bias)
- cited by others (citation bias)
'Positive' outcomes are also preferentially reported within studies (outcome reporting bias).
[Figure: dissemination pipeline – planned, conducted, submitted, published, cited]
Reporting biases can occur at many stages of the dissemination process. File drawer problem (Rosenthal 1979): journals are filled with the 5% of studies that show Type I errors, while file drawers are filled with the 95% of studies that show non-significant results. Authors may not spend the time to write up and submit manuscripts on disappointing results; this may be especially true of industry-funded research. Publication bias: once submitted, papers may be less likely to interest journal editors, or may receive less favourable peer review, reducing their chance of publication. Time-lag bias: a Cochrane methodology review (Hopewell 2007a) has shown that non-significant results may take 2 to 3 years longer to be published than significant results (Royal Prince Alfred ethics committees, Stern and Simes 1997; HIV multicentre studies, Ioannidis 1998). This means that at the point a systematic review is written, the literature may be dominated by positive studies, with years going by before the balancing studies are published. Duplicate/multiple publication bias: it is relatively common for trials to be published multiple times, and difficult to detect when this has occurred (different authors, different population sizes, etc.). Positive studies are more likely to be published more than once, which makes them more likely to be located and included in the review; if multiple publications are included without being recognised, participants will be double-counted and the treatment effect exaggerated even further. Language bias: there is some evidence (although not conclusive) that positive results are more likely to be submitted to and published in English-language journals, which highlights the importance of not limiting your search to papers published in English or to databases that largely index the English-language literature. Location bias: studies with positive findings are more likely to be accepted by high-impact, high-distribution journals and, importantly, by the limited proportion of journals indexed in the major databases; studies published in non-indexed journals are harder to find for your review. Citation bias: studies with positive findings are more likely to be cited by other papers, which again makes them easier to find if citations are used as part of the search strategy; and since authors tend to cite papers that agree with their own findings, these additional citations reinforce the existing bias. Finally, don't forget selective outcome reporting – the selective publication of positive or significant findings within papers, while leaving out or altering the reporting of outcomes with negative or non-significant findings. This issue is addressed as part of the 'Risk of bias' assessment of included studies.
Source: Julian Higgins

24 Example: alpha blockers for hypertension
- Only 10 trials were found, and they used different doses
- The drugs had been approved by regulators, so trials must have been conducted and the results submitted
- Yet only a few trials were found
- For many doses accepted by regulators there was no published evidence
- For some doses there were no published data at all
In determining whether reporting bias is affecting your review – whether or not it is causing funnel plot asymmetry – you will need to consider the context of your intervention and its susceptibility to different biases, for example through conflicts of interest. Sometimes there is real-world information that can help us work out whether publication bias is likely; it doesn't always depend on statistical tests and small-study effects. This example comes from a Cochrane review of alpha blockers for hypertension. A total of 10 trials were found, but they measured several different doses of the drugs, so they could not be pooled, and no meta-analysis contained enough trials to generate a funnel plot. Nonetheless, these drugs were known to be approved for use by regulators (e.g. the FDA in the US), so trials must have been completed and submitted for that approval to be given. As so few trials were available – not enough to support all the approved doses, and none at all for some doses – we can conclude that there are missing trials that the drug companies have not made public. This suggests that publication bias is likely, although it does not tell us how great its impact might be in this particular case.
Source: Nancy Santesso and Holger Schünemann. Based on Heran BS, Galm BP, Wright JM. Blood pressure lowering efficacy of alpha blockers for primary hypertension. Cochrane Database of Systematic Reviews 2009, Issue 4.

25 Example: antidepressants
Here is another example, identified by Moreno and colleagues in a BMJ paper that looked at a set of trials on antidepressants, comparing the published literature to the set of trials submitted for FDA approval. On the left is a funnel plot based on all the FDA data. [CLICK] On the right are the results of all the publications that could be found reporting the same studies. There are many fewer studies (50 instead of 73), and it is clear from the plot that the missing studies are those that reported non-significant results. Not all cases will be so clear-cut, of course. The role of companies with a strong financial interest in the outcome of the research is always a powerful conflict of interest that should be considered. Trial registration and standardised outcome reporting in a field can be reassuring about reporting bias, as can the inclusion of data from pharmaceutical regulators in the review.
Source: Moreno SG, Sutton AJ, et al. Novel methods to deal with publication biases: secondary analysis of antidepressant trials in the FDA trial registry database and related journal publications. BMJ 2009, 339.

26 Consequences of publication bias
The effect of reporting biases, while important, may be smaller than that of design-related risks of bias such as lack of blinding or inadequate allocation concealment. Another Cochrane methodology review assessed the impact of including grey literature on the results of meta-analyses, finding between a 4% and 28% increase in odds ratios. Identifying grey literature will not always make a dramatic difference to your results, and it may bring its own issues: the studies may be at higher risk of bias (which we assess as we do for all studies), and even the grey literature we find may be more likely to be positive than grey literature overall, e.g. because authors with positive findings may be more likely to respond to requests for unpublished data. It's important to keep this in perspective.
Source: Hopewell S, McDonald S, Clarke MJ, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database of Systematic Reviews 2007, Issue 2.

27 What does this mean for my review?
Prevention:
- a comprehensive search of several sources
- searching grey literature and non-English-language literature; handsearching
- trials registries
Detection:
- look for 'small-study effects'
- sensitivity analyses to explore their potential impact
- publication bias is not the only explanation
There is no cure-all:
- identified 'small-study effects' should be investigated further
- the review should comment on the likelihood of reporting bias
So, in practice, what should you do in your review? For reporting bias in particular, the best prevention is to do our best to find all the studies that have been conducted: run a comprehensive search, attempt to find unpublished and grey literature, contact authors in the field, and so on. Trials registries are an important initiative – as they grow internationally, and more journals require registration before publication, registries have the potential to make an important difference to publication bias (although there are still limitations in the completeness of the data on registered trials and in journals' enforcement of registration requirements). Still, we may not be completely successful in preventing reporting bias. Thinking more broadly about small-study effects, we can use the available diagnostic tools: funnel plots and statistical tests can help us identify small-study effects, and sensitivity analyses, such as comparing fixed-effect and random-effects meta-analyses, can help us gauge how great the impact of the small studies might be. Even when we identify small-study effects, we must remember the range of possible causes. If we explore those effects and conclude that reporting bias is the most likely cause, there is no cure. Nonetheless, authors are expected to comment on both issues – small-study effects and the possibility of reporting bias – in their review.

28 What should be written in the protocol
- How reporting bias will be assessed ('Assessment of reporting biases')
- Optional use of funnel plots or statistical asymmetry tests
Bringing all this back to your protocol: in the Methods section of the review, under the collective heading 'Data and analysis', there is a specific subheading 'Assessment of reporting biases'. In this section you should describe how you plan to consider reporting biases in your review, including the option of specific methods such as funnel plots and statistical tests, while remembering that small-study effects have many possible causes.

29 Conclusion
- In your review you should look for so-called 'small-study effects'
- Consider the numerous possible causes
- Bear in mind the potential impact of reporting bias
- If unsure, seek advice from a statistician

30 Sources
Sterne JAC, Egger M, Moher D (editors). Chapter 10: Addressing reporting biases. In: Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011.
Egger M et al. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997;315:629.
Sterne JAC, Sutton AJ, Ioannidis JPA et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011;342:d4002. doi:10.1136/bmj.d4002
Acknowledgements
Compiled by Miranda Cumpston. Based on materials by Jonathan Sterne, Matthias Egger, Julian Higgins, David Moher, Nancy Santesso, Holger Schünemann, the Cochrane Bias Methods Group, the Australasian Cochrane Centre and the Cochrane Applicability and Recommendations Methods Group. English version approved by the Cochrane Methods Board. Translated in cooperation between the German Cochrane Centre (Jörg Meerpohl, Laura Cabrera, Patrick Oeller), the Austrian Cochrane Branch (Barbara Nußbaumer, Peter Mahlknecht, Isolde Sommer, Jörg Wipplinger) and Cochrane Switzerland (Erik von Elm, Theresa Bengough).

