Common pitfalls and mistakes in the set-up, analysis and interpretation of results in network meta-analysis: what clinicians should look for in a published article.

Chaimani, Anna; Salanti, Georgia; Leucht, Stefan; Geddes, John R; Cipriani, Andrea (2017). Common pitfalls and mistakes in the set-up, analysis and interpretation of results in network meta-analysis: what clinicians should look for in a published article. Evidence-Based Mental Health, 20(3), pp. 88-94. BMJ Publishing Group. doi:10.1136/eb-2017-102753


OBJECTIVE

Several tools have been developed to evaluate the extent to which the findings from a network meta-analysis are valid; however, applying these tools is time-consuming and often requires specific expertise. Clinicians have little time for critical appraisal, and they need to understand the key elements that help them select network meta-analyses that deserve further attention, optimising time and resources. This paper aims to provide a practical framework for assessing the methodological robustness and reliability of results from network meta-analysis.

METHODS

As a working example, we selected a network meta-analysis about drug treatments for generalised anxiety disorder, which was published in 2011 in the British Medical Journal. The same network meta-analysis was previously used to illustrate the potential of this methodology in a methodological paper published in JAMA.

RESULTS

We reanalysed the 27 studies included in this network following the methods reported in the original article and compared our findings with the published results. We showed how different methodological approaches and the presentation of results can affect conclusions from network meta-analysis. We divided our results into three sections, according to the specific issues that should always be addressed in network meta-analysis: (1) understanding the evidence base, (2) checking the statistical analysis and (3) checking the reporting of findings.

CONCLUSIONS

The validity of the results from network meta-analysis depends on the plausibility of the transitivity assumption. The risk of bias introduced by limitations of individual studies must be considered first, and judgement should be used to assess the plausibility of transitivity. Inconsistency exists when treatment effects from direct and indirect evidence disagree. Unlike transitivity, inconsistency can always be evaluated statistically, and it should be specifically investigated and reported in the published paper. Network meta-analysis allows researchers to list treatments in preferential order; however, in this paper we demonstrated that rankings can be misleading if based on the probability of being the best. Clinicians should always be interested in the effect sizes rather than naive rankings.
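To make the notion of inconsistency concrete, the following minimal sketch (hypothetical numbers, not data from the paper) shows a Bucher-style comparison in a triangle network of treatments A, B and C: the indirect estimate of B versus C is derived through the common comparator A, and its disagreement with the direct estimate is expressed as a z-score.

```python
import math

# Hypothetical direct estimates (log-odds ratios) with standard errors;
# these numbers are illustrative only.
d_AB, se_AB = 0.50, 0.15   # A vs B, direct
d_AC, se_AC = 0.80, 0.20   # A vs C, direct
d_BC, se_BC = 0.10, 0.18   # B vs C, direct

# Indirect estimate of B vs C via the common comparator A
# (transitivity is what licenses this subtraction):
d_BC_ind = d_AC - d_AB
se_BC_ind = math.sqrt(se_AC**2 + se_AB**2)

# Inconsistency = disagreement between direct and indirect evidence:
incons = d_BC - d_BC_ind
se_incons = math.sqrt(se_BC**2 + se_BC_ind**2)
z = incons / se_incons

print(f"indirect B vs C: {d_BC_ind:.2f} (SE {se_BC_ind:.2f})")
print(f"inconsistency z-score: {z:.2f}")  # |z| > 1.96 suggests disagreement
```

With these illustrative inputs the indirect estimate is 0.30 (SE 0.25) and the z-score is about -0.65, i.e. no statistical evidence of inconsistency; in a real network the same check should be reported for every closed loop.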

Item Type:

Journal Article (Original Article)

Division/Institute:

04 Faculty of Medicine > Pre-clinic Human Medicine > Institute of Social and Preventive Medicine (ISPM)
04 Faculty of Medicine > Pre-clinic Human Medicine > Department of Clinical Research (DCR)

UniBE Contributor:

Salanti, Georgia

Subjects:

600 Technology > 610 Medicine & health
300 Social sciences, sociology & anthropology > 360 Social problems & social services

ISSN:

1362-0347

Publisher:

BMJ Publishing Group

Language:

English

Submitter:

Tanya Karrer

Date Deposited:

15 Feb 2018 14:52

Last Modified:

20 Feb 2024 14:17

Publisher DOI:

10.1136/eb-2017-102753

PubMed ID:

28739577

Uncontrolled Keywords:

Anxiety Disorders; Clinical Trials; Mental Health

BORIS DOI:

10.7892/boris.111391

URI:

https://boris.unibe.ch/id/eprint/111391
