
How to avoid three common mistakes in meta-analysis



Meta-analysis has become a popular tool for summarizing data from a body of work that investigates a common research question. Because it provides an objective means of summarizing a body of data, it has become the gold standard of evidence when health guidelines are developed.

However, there are several pitfalls in meta-analysis that can lead to inaccurate conclusions. Because of these limitations, meta-analysis – or "mega-silliness", to borrow a term from an early detractor – has been criticized for almost as long as the method has existed.

There are several tools for ensuring the quality of meta-analyses, which will be explained later in this post. But first, consider this (fictitious) extract from a meta-analysis results section, which I will use to briefly illustrate three common meta-analysis problems:

"As hypothesized, a meta-analysis summarizing eight effect sizes from seven studies suggested that social media use is positively associated with self-reported anxiety levels [r = .073, 95% CI (.004, .142); p = .038; Figure 1A]. Egger's regression test suggested no evidence of publication bias (p = .19; Figure 1B)."

Figure 1. The relationship between social media use and self-reported anxiety. The forest plot (a) displays point estimates for a set of hypothetical studies as filled squares. The diamond reflects the estimated summary effect size. The funnel plot (b) plots each effect size against its standard error.
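To make the numbers above concrete, here is a minimal sketch of how such a summary correlation could be computed, using Fisher's z transformation and a DerSimonian-Laird random-effects model. The correlations and sample sizes below are invented for illustration and are not the data behind Figure 1.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study correlations and sample sizes (made up for illustration)
r = np.array([0.12, 0.05, 0.18, 0.09, 0.11, 0.02, 0.15, 0.07])
n = np.array([150, 210, 95, 120, 300, 180, 85, 250])

# Fisher's z transform; the sampling variance of z is 1 / (n - 3)
z = np.arctanh(r)
v = 1.0 / (n - 3)

# Fixed-effect weights and DerSimonian-Laird estimate of between-study variance
w = 1.0 / v
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(z) - 1)) / C)

# Random-effects summary estimate and 95% CI, back-transformed to r
w_re = 1.0 / (v + tau2)
z_re = np.sum(w_re * z) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = np.tanh([z_re - 1.96 * se_re, z_re + 1.96 * se_re])
p = 2 * stats.norm.sf(abs(z_re / se_re))
print(f"summary r = {np.tanh(z_re):.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f}), p = {p:.3f}")
```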

When this imaginary author – who also happens to be releasing a book on the dangers of social media – initially searched for studies, they actually found nine studies that met their inclusion criteria. However, the summary effect size of this nine-study meta-analysis [r = .062, 95% CI (-.004, .127)] was associated with a p-value of .061 (Figure 2).

Figure 2. The original set of studies evaluating the relationship between social media use and self-reported anxiety. After realizing that removing one study would yield a statistically significant result, our fictitious researcher excluded that study using a post hoc justification. Pre-registering the meta-analysis would have prevented this post hoc adjustment of the exclusion criteria from going unnoticed.

Because this result would hamper the chances of publication (and book sales), they took a closer look at Study 6, which reported a negative association between social media use and anxiety levels. This study was also the earliest of the included studies.

Reasoning that Study 6 could not have captured Instagram use (Instagram only became popular in 2012), the researcher adjusted the eligibility criteria to include only studies published from 2013 onwards, justified by a post hoc hypothesis that Instagram use in particular was associated with anxiety.

Is Instagram use associated with anxiety? Probably not, but our fictitious author used this idea as a post hoc justification for removing a study from their meta-analysis

With this study removed, the meta-analysis became statistically significant. While this invented scenario might seem far-fetched at first glance, almost 40% of over 2,000 psychologists surveyed admitted to having excluded data after looking at the impact of doing so.

Pre-registration of analysis plans

Pre-registering a meta-analysis provides a remedy for this analytical flexibility, because it creates an a priori record of the researcher's analysis intentions. In the example above, the study exclusion criteria would have been specified before any analyses were run.

A common criticism of pre-registered analyses is that they offer no analytical flexibility. This is not the case: pre-registration does not lock the researcher into a specific analysis plan, but instead helps to clearly distinguish pre-planned analyses from exploratory ones.

In the case above, the researcher would still be free to report the analysis with the study removed, but they would need to declare their reasoning, which is rather thin. As a guide, the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) statement provides a list of items that should be addressed in a meta-analysis protocol.

There are several options for pre-registering a meta-analysis. First, a time-stamped meta-analysis protocol following the PRISMA-P guidelines can be uploaded to the Open Science Framework.

Second, if the meta-analysis investigates a health-related outcome, it can be pre-registered on PROSPERO. The PROSPERO website guides the researcher through a step-by-step registration form, which is used to generate a pre-registration record. These records can be edited, but any changes are time-stamped.

Third, a meta-analysis protocol can be submitted for peer review at a number of journals, such as Systematic Reviews. While peer review obviously takes more time than simply uploading a protocol, it can provide important feedback that improves the meta-analysis.

Fourth, a meta-analysis can be submitted in the Registered Report format, in which the meta-analysis is reviewed in two stages. In the first stage, the introduction and methods sections of the meta-analysis are peer-reviewed before data collection. If the study's motivation and methodological approach are deemed appropriate, the journal grants in-principle acceptance of the final paper.

An overview of the Registered Report format (Source: Center for Open Science)

Once the data have been collected and analyzed, the paper is resubmitted for stage-two review, which includes the same introduction and methods as stage one, now with the results and discussion added. The stage-two review focuses on whether the results and their interpretation follow from the pre-approved methods.

It is important to stress that acceptance of the paper does not depend on the statistical significance of the results. As of August 2018, 42 journals offer Registered Report meta-analyses, with most of these journals publishing research in the bio-behavioral sciences.

Publication bias

Publication bias is a well-recognized problem in meta-analysis. Since statistically significant studies are more likely to be published than non-significant ones, every research area is likely to have several non-significant studies left languishing in researchers' file drawers, which cannot contribute to meta-analyses.

How much data lies forgotten in file drawers and laboratory archives?

Consequently, a meta-analytic summary effect size may not be a good representation of the "true" effect size. A common approach to assessing the risk of publication bias is to construct a funnel plot, which plots each effect size against a measure of its variance (e.g., its standard error). A symmetrical funnel plot is taken as indicating a low risk of publication bias. Given the inherent subjectivity of visually inspecting a funnel plot for asymmetry, statistical asymmetry tests are recommended, yet many researchers still rely on visual inspection alone.
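As a rough illustration of what goes into such a plot, the sketch below draws a basic funnel plot with matplotlib. The effect sizes and standard errors are made up purely for demonstration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up effect sizes (Fisher's z) and standard errors for illustration
effects = np.array([0.07, 0.12, 0.05, 0.18, 0.09, 0.11, 0.02, 0.15])
se = np.array([0.05, 0.08, 0.06, 0.11, 0.07, 0.04, 0.09, 0.10])
summary = np.average(effects, weights=1 / se**2)  # simple fixed-effect summary

plt.scatter(effects, se, marker="s", color="black")
plt.axvline(summary, linestyle="--", color="grey")  # summary effect line
plt.gca().invert_yaxis()                            # most precise studies at the top
plt.xlabel("Effect size (z)")
plt.ylabel("Standard error")
plt.show()
```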

Now back to our imaginary study. Because our fictitious researcher was aware that subjective evaluation of funnel plots is problematic, and that researchers are generally poor at identifying asymmetry from funnel plots alone, they followed recommendations and supplemented the funnel plot with Egger's regression test (Figure 1B).

It can be difficult to objectively decide whether a funnel plot looks asymmetric, especially when there is a conflict of interest

A statistically significant Egger's test is indicative of funnel plot asymmetry. In this case, the test was not statistically significant (p = .19). Therefore, our researcher concluded that there was no risk of publication bias. A closer look at the data, however, reveals that this conclusion was probably wrong.
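For reference, here is a minimal sketch of how Egger's regression test can be run: the standardized effect (effect divided by its standard error) is regressed on precision (one over the standard error), and an intercept that deviates from zero suggests asymmetry. The data are again invented, and this bare-bones version omits refinements that dedicated meta-analysis packages apply.

```python
import numpy as np
from scipy import stats

# Made-up effect sizes and standard errors for illustration
effects = np.array([0.07, 0.12, 0.05, 0.18, 0.09, 0.11, 0.02, 0.15])
se = np.array([0.05, 0.08, 0.06, 0.11, 0.07, 0.04, 0.09, 0.10])

# Egger's test: regress the standardized effect on precision;
# an intercept far from zero suggests funnel plot asymmetry
# (intercept_stderr requires SciPy >= 1.7)
res = stats.linregress(1 / se, effects / se)
t_val = res.intercept / res.intercept_stderr
p_val = 2 * stats.t.sf(abs(t_val), df=len(effects) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p_val:.3f}")
```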

A common misconception about funnel plots and their associated asymmetry tests is that they assess publication bias, when in fact they assess small-study bias. Small-study bias can include publication bias, but it also covers other sources of bias commonly associated with small studies, such as lower study quality.

A relatively simple modification of the funnel plot, known as a contour-enhanced funnel plot, can help distinguish the risk of publication bias from other small-study biases. For any given combination of effect size and variance, a level of statistical significance can be calculated. As a result, regions of statistical significance (e.g., p between .05 and .01) can be overlaid on a funnel plot.
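The contours themselves are easy to construct: an effect is significant at a given two-sided level whenever its absolute value exceeds the corresponding critical value times its standard error, so the significance regions fan out from zero as straight lines. A sketch, again with invented data:

```python
import numpy as np
import matplotlib.pyplot as plt

effects = np.array([0.07, 0.12, 0.05, 0.18, 0.09, 0.11, 0.02, 0.15])  # made up
se = np.array([0.05, 0.08, 0.06, 0.11, 0.07, 0.04, 0.09, 0.10])       # made up

se_grid = np.linspace(0, se.max() * 1.2, 100)
# Light gray band: .10 > p > .05 (|z| between 1.645 and 1.96)
# Dark gray band:  .05 > p > .01 (|z| between 1.96 and 2.576)
for z_lo, z_hi, shade in [(1.645, 1.96, "0.85"), (1.96, 2.576, "0.6")]:
    for sign in (-1, 1):
        plt.fill_betweenx(se_grid, sign * z_lo * se_grid, sign * z_hi * se_grid,
                          color=shade)
plt.scatter(effects, se, marker="s", color="black", zorder=3)
plt.gca().invert_yaxis()
plt.xlabel("Effect size (z)")
plt.ylabel("Standard error")
plt.show()
```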

Comparing our original funnel plot (Figure 3A) with a contour-enhanced funnel plot (Figure 3B) shows that, despite the lack of apparent funnel plot asymmetry, half of the included effect sizes had p-values only just below .05. It is highly unlikely that half of the available p-values would fall so close to .05. Either these just-significant studies were the product of selective analyses, or several non-significant studies exist that were never published.

Figure 3. Comparison of a conventional meta-analysis funnel plot with a contour-enhanced funnel plot. Compared to the original funnel plot (a), which is also shown in Figure 1B, the contour-enhanced funnel plot (b) displays regions of statistical significance. The light gray shaded region corresponds to p-values between .10 and .05, while the dark gray shaded region corresponds to p-values between .05 and .01.

Several methods have been developed to correct for such bias in meta-analysis, such as p-curve, p-uniform, and the three-parameter selection model, with the latter showing superior performance. But without access to unpublished research, these methods will only ever produce estimates, at best. A promising solution is to conduct meta-analyses that include only pre-registered studies, such as the recent meta-analysis of "power posing" by Gronau and colleagues.

The idea that adopting an "expansive" body posture makes people feel more powerful makes for a good story (and TED talk), but the evidence supporting this effect is shaky

Effect size dependency

Now to our last common problem in meta-analysis: effect size dependency. Notice that the meta-analysis example above included eight effect sizes but only seven studies? This is because two effect sizes were extracted from Study 4 (Figure 1A). This is a common occurrence, because papers often report several outcome measures.

However, because these effect sizes were derived from the same study population, they are statistically dependent. And because typical meta-analysis procedures assume that effect sizes are statistically independent, combining dependent effect sizes can lead to biased results.

There are several approaches to handling statistical dependency in meta-analysis. First, you can simply choose one effect size per study. While simple, this requires a pre-specified decision rule for selecting which effect size to keep; otherwise it may be tempting to simply pick the largest effect. This approach also excludes potentially important effect sizes from the analysis. To include more than one effect size per study, the effects can instead be statistically aggregated (see the sketch below). Doing so, however, requires the within-study correlation between the measures, which is almost never reported in papers.
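As an illustration of this aggregation, the sketch below combines two dependent effect sizes from a single study into one effect, following the formula commonly attributed to Borenstein and colleagues. The effect sizes, variances, and the assumed within-study correlation of .60 are invented.

```python
import numpy as np

def aggregate_two_effects(y1, v1, y2, v2, r_within):
    """Combine two dependent effect sizes from one study into a single
    effect whose variance accounts for their within-study correlation."""
    y = (y1 + y2) / 2
    v = 0.25 * (v1 + v2 + 2 * r_within * np.sqrt(v1 * v2))
    return y, v

# Hypothetical pair of effects from "Study 4", with a guessed within-study
# correlation of .60 between the two anxiety measures
y_comb, v_comb = aggregate_two_effects(0.10, 0.004, 0.16, 0.005, r_within=0.60)
print(f"combined effect = {y_comb:.3f}, variance = {v_comb:.4f}")
```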

If a paper includes the raw data, an almost equally rare occurrence, the within-study correlations can be calculated from it. In practice, then, researchers almost always need to estimate within-study correlations if they want to take this approach. Alternatively, robust variance estimation can be used to account for effect size dependency without needing to know the within-study correlations.
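The sketch below shows the basic logic of robust variance estimation for an intercept-only model, loosely following the approach of Hedges, Tipton and Johnson: effects from the same study share weights, and the summary effect's standard error is computed from study-level (cluster-robust) residual sums. The data are invented, the between-study variance is ignored in the weights for simplicity, and dedicated meta-analysis software adds small-sample corrections omitted here.

```python
import numpy as np

# Made-up effect sizes, variances, and study IDs ("Study 4" contributes two effects)
y     = np.array([0.07, 0.12, 0.05, 0.10, 0.16, 0.09, 0.11, 0.02])
v     = np.array([0.003, 0.006, 0.004, 0.004, 0.005, 0.005, 0.002, 0.008])
study = np.array([1, 2, 3, 4, 4, 5, 6, 7])

# Simplified correlated-effects weights: 1 / (number of effects * mean variance) per study
w = np.empty_like(y)
for j in np.unique(study):
    idx = study == j
    w[idx] = 1.0 / (idx.sum() * v[idx].mean())

beta = np.sum(w * y) / np.sum(w)   # weighted mean (summary) effect
resid = y - beta

# Cluster-robust (sandwich) variance: squared weighted residual sums per study
num = sum(np.sum(w[study == j] * resid[study == j]) ** 2 for j in np.unique(study))
se_robust = np.sqrt(num) / np.sum(w)
print(f"RVE summary effect = {beta:.3f}, robust SE = {se_robust:.3f}")
```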

Meta-analysis is an important tool for better understanding often disparate areas of research. It can provide a more objective synthesis of a research area than simply "vote counting" the number of significant and non-significant results.

But like almost any statistical tool, meta-analysis can facilitate inaccurate inferences when used inappropriately. I hope this post has provided a brief overview of some common problems in meta-analysis, along with practical solutions to them.

