We thank Wei et al1 for their comments on our paper on the comparative efficacy of non-steroidal anti-inflammatory drugs (NSAIDs) in the treatment of ankylosing spondylitis.2 Here we address their questions regarding study heterogeneity, effect modification and the fit of our models. Wei et al questioned whether heterogeneity among studies could have affected our results. Assessment of study heterogeneity is important in any meta-analysis. We addressed heterogeneity first through our inclusion criteria, which limited eligible primary studies to those of patients with the same diagnosis (ankylosing spondylitis), similar trial durations and no co-interventions. Second, we examined the baseline characteristics of study participants and the outcome measures, which were comparable among studies, as shown in table 1.2 Third, we performed sensitivity analyses that examined two aspects of trial design and analysis: excluding crossover studies that did not include a washout period, and excluding trials without an intention-to-treat analysis. Fourth, we analysed the subset of studies that used high doses of NSAIDs, at doses judged to have similar clinical effects according to the NSAID index.3 Results of these analyses were quite similar, suggesting that study heterogeneity was unlikely to have affected our results or conclusions.
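As an aside for readers, the conventional statistical complement to the design-based checks described above is Cochran's Q test and the I² statistic. The sketch below (not part of the original analyses; all study effects and variances are hypothetical) illustrates the standard calculation.

```python
def heterogeneity(effects, variances):
    """Cochran's Q and I^2 for a set of study-level effect estimates.

    effects:   per-study treatment effects (e.g. mean pain reduction)
    variances: the variance of each study's effect estimate
    """
    weights = [1.0 / v for v in variances]           # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: percentage of total variability due to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical effects/variances for four trials of one NSAID:
q, i2 = heterogeneity([-1.2, -1.0, -1.1, -0.9], [0.04, 0.05, 0.03, 0.06])
```

With these illustrative numbers Q falls below its degrees of freedom, so I² is truncated to 0%, consistent with the low heterogeneity the letter describes.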
Wei et al also questioned whether effect modification could have affected the results. Effect modification is an important consideration; it occurs when the association between an exposure and an outcome (in our case, a particular NSAID and pain or morning stiffness) differs in the presence or absence of an extraneous (or third) variable, for example between men and women, or between patients with elevated versus normal serum C reactive protein levels. Effect modification is best assessed by comparing associations between subgroups of patients, but this requires individual patient data, which we did not have. It can also be tested, although less rigorously, when multiple studies are available for each medication and these studies differ in the mean levels of the third variable. In our meta-analysis, only four NSAIDs were studied in more than four trials, which did not permit testing for possible effect modification.
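For readers unfamiliar with how subgroup-based effect modification would be tested had individual patient data been available, the usual approach is a z-test for interaction on the difference between subgroup effects. The sketch below is purely illustrative; the effect sizes and standard errors are hypothetical, not from the meta-analysis.

```python
import math

def interaction_z(effect_a, se_a, effect_b, se_b):
    """z-test for effect modification: does the treatment effect differ
    between two subgroups (e.g. men vs women)?

    Returns the z statistic for the difference and a two-sided p-value.
    """
    diff = effect_a - effect_b
    se_diff = math.sqrt(se_a ** 2 + se_b ** 2)   # subgroups assumed independent
    z = diff / se_diff
    # two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical subgroup effects (pain reduction in men vs women):
z, p = interaction_z(-1.2, 0.20, -0.8, 0.25)
```

A non-significant p-value here would indicate no detectable effect modification; with aggregate data only, as in the letter, this test cannot be performed reliably.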
Wei et al questioned whether we had tested the fit of the model. We evaluated model fit using the Deviance Information Criterion (DIC), which provides a measure of fit that accounts for model complexity.4 We also examined the posterior mean of the residual deviance. The DIC results were reasonably low, given the sample sizes. We checked the consistency assumptions by performing a full node-splitting analysis, comparing each direct pairwise estimate with the corresponding network estimate.5 However, this approach had limited power in our case, because several medications had no direct comparisons among the component studies and relatively few studies contributed to each comparison. We thank Wei et al for highlighting these points.
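To make the two model checks concrete for readers, the sketch below shows the standard DIC calculation (DIC = mean posterior deviance plus the effective number of parameters pD) and the quantity examined in node-splitting (the difference between direct and indirect estimates for one comparison). All numbers are hypothetical and not taken from the analyses in the paper.

```python
import math

def dic(deviance_samples, deviance_at_posterior_mean):
    """Deviance Information Criterion: DIC = Dbar + pD, where Dbar is the
    posterior mean deviance and pD = Dbar - D(theta_bar) is the effective
    number of parameters."""
    dbar = sum(deviance_samples) / len(deviance_samples)
    p_d = dbar - deviance_at_posterior_mean
    return dbar + p_d

def node_split(direct, se_direct, indirect, se_indirect):
    """Node-splitting inconsistency estimate for one treatment comparison:
    the difference between the direct and indirect evidence, with its
    standard error (sources of evidence assumed independent)."""
    diff = direct - indirect
    se = math.sqrt(se_direct ** 2 + se_indirect ** 2)
    return diff, se

# Illustrative MCMC deviance samples: Dbar = 95, pD = 2, so DIC = 97
print(dic([92.0, 96.0, 94.0, 98.0], 93.0))
```

As the letter notes, node-splitting is only informative for comparisons that have both direct and indirect evidence; where no trial compared two drugs head-to-head, `node_split` has no direct estimate to work with.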
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.