
Can we improve the performance and reporting of investigator-initiated clinical trials? Rheumatoid arthritis as an example
  1. Robert B M Landewé1,2,
  2. Josef S Smolen3,
  3. Michael E Weinblatt4,
  4. Paul Emery5,6,
  5. Maxime Dougados7,
  6. Roy Fleischmann8,
  7. Daniel Aletaha9,
  8. Arthur Kavanaugh10,
  9. Désirée van der Heijde11
  1. Amsterdam Rheumatology & Immunology Center, Academic Medical Center/University of Amsterdam, The Netherlands
  2. Department of Rheumatology, Atrium Medical Center Heerlen, The Netherlands
  3. Division of Rheumatology, Department of Medicine Three, Medical University Vienna & Second Department of Medicine, Hietzing Hospital, Vienna, Austria
  4. Division of Rheumatology, Immunology and Allergy, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
  5. Leeds Institute of Rheumatic and Musculoskeletal Medicine, University of Leeds, Chapel Allerton Hospital, Leeds, UK
  6. NIHR Leeds Musculoskeletal Biomedical Research Unit, Leeds Teaching Hospitals NHS Trust, Leeds, UK
  7. Rheumatology Department, Paris Descartes University, Cochin Hospital, Assistance Publique-Hôpitaux de Paris, INSERM (U1153): Clinical Epidemiology and Biostatistics, PRES Sorbonne Paris-Cité, Paris, France
  8. Department of Internal Medicine, University of Texas Southwestern Medical Center, Dallas, USA
  9. Division of Rheumatology, Department of Medicine Three, Medical University Vienna, Austria
  10. Rheumatology, Allergy, Immunology Division, University of California, San Diego, USA
  11. Department of Rheumatology, Leiden University Medical Center, Leiden, The Netherlands
  Correspondence to: Professor Robert B M Landewé, Department of Rheumatology & Clinical Immunology, Academic Medical Center, Meibergdreef 9, Amsterdam 1100 DD, The Netherlands; Landewe@rlandewe.nl

Abstract

Investigator-initiated trials, some of which have been referred to as comparative effectiveness trials, pragmatic trials or strategy trials, are sometimes considered to be of greater clinical importance than industry-driven trials, because they address important but unresolved clinical questions that differ from the questions asked in industry-driven trials. Regulatory authorities have provided methodological guidance for industry-driven trials for the approval of new treatments, but such guidance is less clear for investigator-initiated trials. The European League Against Rheumatism (EULAR) task force for the update of the recommendations for the management of rheumatoid arthritis has critically looked at the methodological quality and conduct of many investigator-initiated trials, and has identified a number of concerns. In this Viewpoint paper, we highlight commonly encountered issues, illustrated with examples of well-known investigator-initiated trials. These issues cover three themes: (1) design choice (superiority vs non-inferiority designs); (2) statistical power; and (3) convenience reporting. Since we acknowledge the importance of investigator-initiated research, we also propose a shortlist of points to consider when designing, performing and reporting investigator-initiated trials.

  • Treatment
  • Rheumatoid Arthritis
  • Anti-TNF

The conduct of clinical trials in rheumatoid arthritis has evolved considerably over the last two decades. The development of sets of pivotal outcome measures (‘core sets’),1 ,2 the definition of improvement criteria,3 remission criteria, functional and quality-of-life measures, the development of composite scores to assess disease activity and response to therapy4–7 and validated scoring methods for joint damage8 have greatly assisted therapeutic studies and drug development in rheumatoid arthritis worldwide. Considerations on clinical trial analyses and reporting have been provided in guidance documents of the regulatory authorities (the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) Committee for Proprietary Medicinal Products (CPMP)).9–11 These developments have allowed a more realistic and valid assessment of the efficacy of new and established therapies in comparison with placebo or active control medications. Such clinical trials are generally studies that lead to drug approval, or that pertain to postmarketing requirements, and they appear to have high internal validity. By and large, however, industry-driven trials leave some relevant clinical questions unresolved, such as the question of the effectiveness of combination therapy with conventional synthetic disease-modifying antirheumatic drugs (csDMARDs).12 In combination with the csDMARD methotrexate (MTX), all registered tumour necrosis factor (TNF)-inhibiting biological DMARDs (bDMARDs) have shown superiority over MTX monotherapy in many industry-sponsored trials. But the important remaining clinical question of whether csDMARD combinations (with or without glucocorticoids) can achieve the same results as the far more expensive bDMARDs has not been answered in these trials. This question will likely never be addressed in industry-driven randomised controlled trials (RCTs), as regulatory authorities do not require such trials for filing a new treatment, and such RCTs do not necessarily serve the commercial interest of an industry sponsor.

Over the last two decades, there have also been RCTs taking unresolved clinical questions as a starting point for more pragmatic studies, thereby trying to bridge the gap between formal proof of efficacy and efficiently ‘fitting’ new drugs into the treatment armamentarium for rheumatoid arthritis (RA).13–21

Some of these ‘effectiveness trials’ have been referred to as cornerstone studies and are regarded by many rheumatologists as being of greater interest and importance than industry-driven efficacy trials.13 ,14 ,22 It is clear, however, as will be shown, that some of these otherwise highly valuable effectiveness and strategy trials have methodological constraints.

The purpose of this Viewpoint article is to comment on the place of effectiveness and strategy trials for RA treatment, and to provide comments on the design and interpretation of the results of these studies.

Industry trials

For efficacy trials, the regulatory authorities have defined trial design, study populations, study endpoints, statistical power, dropout handling and data imputation. The FDA, for example, states: ‘Failure to recruit an adequate number of patients is a major reason why an effective product may fail to meet established statistical criteria for efficacy, independent of whether the purpose was to show superiority or comparability of treatment effect.’9 The EMA (CPMP) provides a similar text.10 With respect to radiographic assessments, the FDA document also makes a clear statement: ‘All randomized patients should have films at both time points […] Pre-specification of the handling of dropouts is especially important in these trials.’ And with respect to blinding: ‘Because most RA outcome measures have a high degree of subjectivity, the highest confidentiality in patient and assessor blinding should be sought to achieve a credible inference.’9 Finally, the regulators also address dropouts: ‘The effects of dropouts should be addressed in all trial analyses to demonstrate that the conclusion is robust…. A phenomenon frequently observed in RA, as well as in other conditions, is that patients who stay in trials do better than those who drop out: responders do better than non-responders…. This phenomenon is attributable to preferential dropout of worsening patients.’9

These guidelines pertain to trial designs with an underlying hypothesis that the new treatment is superior to placebo or to a comparator treatment. Such a design, in which the a priori expectation is that the new treatment is better than the comparator, is referred to as a ‘superiority design’; it is the default design in a regulatory context and provides sufficient and reliable information about whether or not a (new) treatment is effective in the short term.
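As a minimal illustration of this superiority logic (with entirely hypothetical responder counts, not data from any cited trial), the following Python sketch tests whether a new treatment’s responder proportion is statistically superior to that of a comparator with a standard two-proportion z-test:

```python
from math import erf, sqrt

def norm_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def superiority_z_test(resp_new, n_new, resp_ctrl, n_ctrl):
    """Two-sided two-proportion z-test of H0: p_new == p_ctrl."""
    p_new, p_ctrl = resp_new / n_new, resp_ctrl / n_ctrl
    p_pool = (resp_new + resp_ctrl) / (n_new + n_ctrl)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_new + 1 / n_ctrl))
    z = (p_new - p_ctrl) / se
    p_value = 2 * (1 - norm_cdf(abs(z)))
    return p_new - p_ctrl, z, p_value

# Hypothetical responder counts: 120/200 on the new treatment vs 90/200 on
# the comparator (illustrative numbers only).
diff, z, p = superiority_z_test(120, 200, 90, 200)
print(f"difference = {diff:.2f}, z = {z:.2f}, p = {p:.4f}")
```

The null hypothesis of no difference is rejected only when the p value falls below the prespecified significance level; failing to reject it says nothing, by itself, about the two treatments being equivalent.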

Industry-driven clinical trials have generally adhered to these recommendations in the studies of synthetic and biological agents, which have been approved after demonstrating efficacy and acceptable safety.23–33 They all had a double-blind trial design, were analysed by intention-to-treat (ITT), and applied appropriate imputation methods for missing clinical and radiographic data and for dropouts.

Examples of prototypic industry trials published over the last 15 years are listed in online supplementary table S1A: (1) the studies were double-blind (with one exception, in which the assessor remained blinded); (2) the primary endpoint remained unchanged; (3) all randomised patients were assessed for the original primary endpoint irrespective of early discontinuation; and (4) in all studies the number of patients analysed matched or even exceeded (on average by 6%) the number projected in the original power calculation. Many of these studies were used for regulatory filing and met the requirements established by the regulatory authorities, and market access was granted on the basis of their results.

More recently, several industry-sponsored strategy trials were published that asked questions related to common clinical practice. They either compared different bDMARDs in a ‘head-to-head’ manner, or comparatively investigated the withdrawal of biological therapy (‘comparative effectiveness trials’). Importantly, these trials adhered to similarly stringent approaches regarding study design and conduct as registration trials.34–38

Investigator-initiated trials

Several investigator-initiated trials conducted over the same period (see online supplementary table S1B) can also be included under the concept of ‘comparative effectiveness research’; they often compared different types of therapy and also explored different treatment strategies.13 ,15–19 ,21 ,39 ,40 In many of these trials, the underlying rationale was to explore the effectiveness and safety of one treatment strategy versus another (eg, use of cheaper vs more expensive medications) in the short and the long term.

It is noteworthy that, prior to the design and conduct of these RCTs, the opinion of at least some of the trial developers was that csDMARD strategies could be as effective as new biological treatments. Evidence consistent with this viewpoint can be found in the introduction sections of some of the respective reports. Such an opinion might indicate a potential bias that may have subconsciously and inadvertently affected the interpretation of these trials. Thus, as will be discussed below, the results may have become, at least in part, a self-fulfilling prophecy.

An RCT that asks whether a particular treatment or strategy is as effective as a comparator treatment or strategy addresses whether that regimen is ‘non-inferior’ to the comparator regimen. Therefore, a non-inferiority trial design should be used. A non-inferiority design allows a statistically correct conclusion about whether a particular treatment or strategy is not worse than the other.
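The distinction can be made concrete with a small sketch (hypothetical numbers and an assumed 10-percentage-point margin): non-inferiority is declared only when the lower bound of the 95% confidence interval for the difference (new minus reference) lies above the prespecified margin −δ:

```python
from math import sqrt
from statistics import NormalDist

def noninferiority_ci(resp_new, n_new, resp_ref, n_ref, margin, alpha=0.05):
    """95% CI for (p_new - p_ref); non-inferior if the lower bound > -margin."""
    p_new, p_ref = resp_new / n_new, resp_ref / n_ref
    se = sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    diff = p_new - p_ref
    lower, upper = diff - z * se, diff + z * se
    return diff, (lower, upper), lower > -margin

# Hypothetical: 82/200 responders on the cheaper strategy vs 90/200 on the
# reference strategy, with a prespecified margin of 10 percentage points.
diff, (lo, hi), non_inferior = noninferiority_ci(82, 200, 90, 200, margin=0.10)
print(f"diff = {diff:.3f}, 95% CI = ({lo:.3f}, {hi:.3f}), non-inferior: {non_inferior}")
```

In this example the confidence interval includes zero, so a superiority trial would simply report ‘no significant difference’; yet non-inferiority is not established either, because the lower confidence bound falls below −δ. This is precisely why the absence of a significant difference in a superiority trial cannot be read as proof of non-inferiority.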

If there is agreement in the scientific community to apply a high level of scientific rigour to efficacy/safety questions in the context of superiority (efficacy) trials, then there should also be agreement to apply a similar level of scientific rigour to effectiveness questions in the context of non-inferiority trials. Alternatively, one would have to declare the latter only somewhat meaningful and of lower internal validity (ie, providing weaker evidence).

A critical appraisal of investigator-initiated trials

A number of methodological issues related to investigator-initiated trials in RA will now be discussed.

Issues of design choice: is the trial a superiority or a non-inferiority trial?

This is an important question when reading reports of investigator-initiated RCTs (and/or their synopses in trial databases, such as http://www.clinicaltrials.gov), and the answer often remains obscure. It is not commonly appreciated that the main outcomes of superiority trials are often interpreted as if these trials had been conducted as non-inferiority trials, resulting in misconceptions about the efficacy of certain regimens.

An example of this is the NEO-RACo trial.19 The COBRA and FIN-RACo trials had shown, in carefully conducted superiority designs, that glucocorticoid-containing combinations of csDMARDs are more efficacious than monotherapy with csDMARDs.14 ,41

The investigators of the NEO-RACo trial subsequently tested whether the addition of infliximab to csDMARD combination therapy could further improve treatment outcomes. NEO-RACo employed a superiority design, but was highly ‘underpowered’, and its ‘benchmark design’ (which prescribed treatment intensification during the trial whenever the treatment target of remission had not yet been reached) favoured similar clinical outcomes in both arms. Although the results nevertheless showed a 13% higher remission rate in the infliximab group than in the control group, and a statistically significant difference regarding inhibition of radiographic progression (the second primary outcome measure), the authors’ conclusions only reflect their opinion that the differences between the arms were small and not relevant: an inappropriate and misleading non-inferiority conclusion derived from an underpowered superiority trial.

A second example is the BeSt trial.13 The trial report does not clearly state the real nature of the trial, but it was contextually designed as a superiority trial. Nevertheless, of the multiple conclusions drawn from the many analyses of this trial, the most important is likely that treatment in arm 3 (the ‘COBRA arm’ with MTX plus sulfasalazine plus glucocorticoids) is ‘as effective as’ treatment in arm 4 (the ‘biologic arm’ using an anti-TNF plus MTX). While all results presented from the BeSt trial are consistent with such an opinion, the trial was not originally designed to investigate this as a non-inferiority comparison. Similarly, BeSt arms 1 and 2 (sequential csDMARD monotherapy compared with step-up combination therapy with csDMARDs) have been reported as being similarly effective. Again, this is a non-inferiority conclusion drawn from a superiority design.

The SWEFOT trial is a third example.16 This open-label trial was designed as a superiority trial to compare the efficacy of a TNF-inhibiting biological plus MTX with that of ‘triple csDMARD-therapy’ (MTX plus sulfasalazine plus hydroxychloroquine) in patients who had not reached low disease activity after a 3-month course of MTX monotherapy. The conclusion at 2 years was that triple csDMARD-therapy is as effective as MTX plus TNF-inhibitor. The results, however, revealed significant superiority of the biological plus MTX over triple csDMARD-therapy at 12 months in its primary outcomes, but no longer at 24 months, although the difference was still numerically in favour of the biological plus MTX, and there was a statistically significant difference in radiographic outcomes at 2 years favouring the biological plus MTX.

SWEFOT did not enrol its originally calculated number of patients (see online supplementary table S1B). The failure to demonstrate a statistically significant difference could therefore reflect a lack of power (type II error) to show a treatment effect in favour of TNF-inhibitor plus MTX at 24 months. Suboptimal ACR50 (15%) and ACR70 (7%) response rates to triple csDMARD-therapy after 1 year42 may explain why five times as many patients withdrew from triple csDMARD-therapy as from biological therapy, leading to the unusual situation that the 2-year data differ from the 1-year data.

In the discussion of the SWEFOT paper, many methodological limitations are acknowledged and explained (eg, ‘[…] forced by circumstances’). Nevertheless, it was concluded that the effects of triple csDMARD-therapy ‘were so good that, given the differences in price, triple csDMARD therapy should be considered before TNF-inhibitor plus MTX’.

In our opinion, such a suggestion is not fully justified and, hence, could be misleading.

Issues concerning the power of a trial

RCTs with a superiority design and those with a non-inferiority design both require rigorous trial conduct in order to allow a meaningful conclusion; many superiority trials would never meet non-inferiority requirements. It is generally understood, for example, that non-inferiority trials often require larger sample sizes than superiority trials. The required sample size, however, is critically dependent on the level of the preset non-inferiority margin. Failure to randomise the required number of patients may seriously compromise the validity of a trial result. Of note, enrolling the preplanned number of patients, and including in the analysis a sufficient number of subjects who have completed the trial (and provided outcome data), are both required for valid trial results.
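How strongly the required sample size depends on the chosen margin can be illustrated with standard normal-approximation formulas for two proportions (a rough sketch under illustrative assumptions: 80% power, conventional significance levels, and a common true response rate of 50%):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm_noninferiority(p, margin, alpha=0.025, power=0.80):
    """Patients per arm to show non-inferiority when both arms truly have
    response rate p (one-sided alpha, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    return ceil((z_a + z_b) ** 2 * 2 * p * (1 - p) / margin ** 2)

def n_per_arm_superiority(p1, p2, alpha=0.05, power=0.80):
    """Patients per arm to detect a true difference p1 - p2 (two-sided alpha)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

print(n_per_arm_superiority(0.60, 0.45))                   # ~171 per arm
for margin in (0.15, 0.10, 0.05):
    print(margin, n_per_arm_noninferiority(0.50, margin))  # ~175, ~393, ~1570
```

Note that halving the margin roughly quadruples the required number of patients, which is why a trial powered against one margin cannot simply be reinterpreted against a stricter one.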

The predetermined sample size of the TEAR trial, a double-blind superiority trial designed to compare immediate versus delayed step-up of DMARD treatment in RA, was 750 patients, and 755 patients were indeed randomised. In the end, however, the primary outcome measure at week 102, the DAS28 (erythrocyte sedimentation rate), was obtained in only 476 of the 755 enrolled patients. In spite of this serious methodological problem, the authors firmly concluded that ‘triple therapy… and MTX plus etanercept resulted in comparable clinical outcomes’, and that these data suggest ‘the cost-effectiveness of less expensive triple therapy may be positive relative to that of anti-TNF therapy’. This unequivocal conclusion mistakenly suggests that TEAR had a rigorous and well-pursued non-inferiority design, which was not the case. In fact, data the authors presented suggest that their conclusions may be inappropriate: the ACR70 response rates (regarded by many as a valid targeted outcome in patients with early RA)43 were statistically significantly higher in the MTX plus biological group than in the triple csDMARD-therapy group, and there was less radiographic progression in the MTX plus biological group. The insufficient number of patients completing this trial, however, may have prevented other true treatment effects from becoming statistically significant. A more careful conclusion, giving better consideration to the possibility of superiority of MTX plus biological over triple csDMARD-therapy, would have been more appropriate.

Another consideration is that even though some of these investigator-initiated effectiveness trials do use a non-inferiority design, they do not adhere to the rigorous principles of non-inferiority methodology,44 which makes their conclusions less robust. Important principles of non-inferiority methodology include (among others): (1) a clear description of a truly meaningful non-inferiority margin before the start of the trial, which should not be amended during the trial; (2) adherence to the appropriate patient number at the trial start as well as at the primary endpoint of the trial; (3) close-to-perfect trial conduct, because imperfect conduct in a trial comparing two not equally effective treatments may ‘help’ mask a real treatment effect and thus spuriously lead to a declaration of non-inferiority;45 and (4) a conclusion that ‘fits the results’.

The RACAT trial was a non-inferiority trial comparing ‘triple csDMARD combination therapy’ with MTX plus a TNF-inhibiting biological in patients with established RA who had active disease despite MTX.18 Because of ‘unexpectedly low enrollment’, the primary outcome was changed during the trial for statistical reasons, with subsequent implications for the non-inferiority margin; the consequences of this change for the interpretation of the trial results were not made clear. Even the sharply reduced target of 450 patients (the original sample size was 600) was not reached: ‘Funding constraints mandated ending enrollment before the revised sample-size target of 450 was reached’, and only 309 patients were ultimately assessed in accordance with the primary trial protocol. In addition, patients were mandated to switch to the alternative treatment strategy in the other arm at 24 weeks in case of inefficacy (and 27% of the patients in fact did switch). Such a ‘cross-over escape’ works in favour of demonstrating non-inferiority.44 In spite of these two clear ‘methodological incentives’ to arrive at non-inferiority, the primary outcome measure (change in DAS28 at 48 weeks) missed statistical significance favouring superiority of MTX plus TNF-inhibitor only marginally (p=0.06). Importantly, at 24 weeks many secondary outcome measures of the RACAT trial strongly suggested that MTX plus TNF-inhibitor is (at least statistically) more effective than triple csDMARD-therapy: an ACR70 response was obtained in 17% of the MTX plus TNF-inhibitor group versus only 5% of the triple csDMARD-therapy group (a 17% ACR70 response rate is close to the responses obtained in other TNF-inhibitor trials, while 5% is close to placebo responses in such trials).25 ,46 Additionally, the difference in HAQ-score improvement between baseline and 48 weeks (–0.64 vs –0.46) was also very close to statistical significance. When accounting for switching, the online supplementary appendix (not the main article) showed that at 48 weeks most clinical outcomes were consistently numerically (and some also statistically) better in the MTX plus TNF-inhibitor-treated patients. The authors nevertheless did not mention these ‘superiority signals’ and concluded only: ‘Triple therapy with sulfasalazine and hydroxychloroquine added to methotrexate was non-inferior to etanercept plus methotrexate.’
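How a mandated cross-over pushes two arms towards apparent similarity can be illustrated with a toy simulation (entirely hypothetical response rates and a simplified equal switching rate in both arms; this is not an attempt to model RACAT itself):

```python
import random

def simulate_trial(n_per_arm, p_strong, p_weak, p_early_failure, crossover, rng):
    """Observed 48-week response rates per arm, analysed as randomised."""

    def run_arm(p_own, p_other):
        hits = 0
        for _ in range(n_per_arm):
            if crossover and rng.random() < p_early_failure:
                # inadequate response at 24 weeks: the patient switches, and the
                # final outcome is driven by the other regimen (simplified model)
                hits += rng.random() < p_other
            else:
                hits += rng.random() < p_own
        return hits / n_per_arm

    return run_arm(p_strong, p_weak), run_arm(p_weak, p_strong)

rng = random.Random(0)
# Hypothetical true response rates of 45% vs 30%, with 27% of patients in
# each arm switching at 24 weeks (as mandated for inefficacy).
a0, b0 = simulate_trial(100_000, 0.45, 0.30, 0.27, crossover=False, rng=rng)
a1, b1 = simulate_trial(100_000, 0.45, 0.30, 0.27, crossover=True, rng=rng)
print(f"difference without cross-over: {a0 - b0:.3f}")
print(f"difference with cross-over:    {a1 - b1:.3f}")
```

In this sketch a true 15-percentage-point difference shrinks to roughly 7 points once 27% of patients cross over, which makes a declaration of non-inferiority correspondingly easier.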

The recently reported extension of the CONSORT guidelines for non-inferiority trials44 points to this rather peculiar situation of superiority and non-inferiority occurring simultaneously in an RCT that has been set up as a true non-inferiority trial (like RACAT). A possible explanation for this puzzling situation is ‘overpowering’ (a too-large sample size), which likely does not apply to the RACAT trial. Another, more likely, explanation here is a too-wide non-inferiority margin.44
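The arithmetic behind a too-wide margin is easy to demonstrate with made-up numbers: if the entire 95% CI for the difference (test minus reference) lies below zero but above −δ, the reference arm is statistically superior while the test arm is simultaneously declared ‘non-inferior’:

```python
# Hypothetical difference (test minus reference) in response rate, with a
# 95% CI that lies entirely below zero...
diff, ci_lower, ci_upper = -0.07, -0.115, -0.025
margin = 0.12  # ...and a generously wide prespecified non-inferiority margin

reference_superior = ci_upper < 0   # True: the reference arm is significantly better
non_inferior = ci_lower > -margin   # True: yet non-inferiority is also declared
print(f"diff = {diff}, reference superior: {reference_superior}, "
      f"non-inferior: {non_inferior}")
```

With a stricter margin of, say, 5 percentage points, the same confidence interval would no longer support non-inferiority: the declaration hinges entirely on how δ was chosen.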

Issues of interim reporting

A frequent limitation of investigator-initiated comparative effectiveness research is ‘convenience reporting’: the authors see interesting interim results and decide to share these usually positive data. Convenience reporting is not illegitimate, but it may distort the perceptions of readers.
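The statistical hazard underlying convenience reporting is essentially that of unplanned interim analyses: each unadjusted look at accumulating data is an extra chance of a false-positive finding. A small simulation sketch (a two-look design under the null hypothesis, no multiplicity adjustment; illustrative assumptions only) makes the inflation visible:

```python
import random
from statistics import NormalDist

Z_CRIT = NormalDist().inv_cdf(0.975)  # two-sided 5% significance

def significant(sample_a, sample_b):
    """Crude z-test for equal means, assuming known unit variance."""
    n = len(sample_a)
    diff = sum(sample_a) / n - sum(sample_b) / n
    return abs(diff / (2 / n) ** 0.5) > Z_CRIT

rng = random.Random(42)
n_final, positives, reps = 200, 0, 5_000
for _ in range(reps):
    # both arms drawn from the same distribution: the null hypothesis is true
    a = [rng.gauss(0, 1) for _ in range(n_final)]
    b = [rng.gauss(0, 1) for _ in range(n_final)]
    interim = significant(a[: n_final // 2], b[: n_final // 2])
    final = significant(a, b)
    positives += interim or final  # 'positive' if either look is significant
print(f"false-positive rate with an unadjusted interim look: {positives / reps:.3f}")
```

Instead of the nominal 5%, roughly 8% of such trials would report a ‘significant’ result at one of the two looks, which is why prespecified stopping rules and alpha-spending adjustments exist.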

The observer-blinded tREACH trial, in which two arms with ‘triple csDMARD-therapy’ plus glucocorticoids (oral vs intramuscular) were compared with one arm with MTX monotherapy plus oral glucocorticoids, reported a 3-month interim analysis. In this interim analysis, the authors noted that the arms with triple csDMARD-therapy showed clearly better efficacy than the arm with MTX monotherapy, reporting resolutely: ‘In this study, unbiased for GCs, induction therapy consisting of a combination of DMARDs is better than MTX monotherapy in early RA’. They subsequently presented 1-year follow-up data of this trial,20 showing that the efficacy assessments in all three groups were essentially similar from 3 months onward, and that the clear statistically significant differences at 3 months had either disappeared or retained only borderline significance after statistical adjustment. In the planned 1-year report, different outcomes (AUC-DAS (p=0.0497) and AUC-HAQ (p=0.052)) were presented than in the 3-month interim report (DAS-change). Additionally, data were presented showing that radiographic progression was not different.47 With hindsight, the 3-month interim report of tREACH could have been handled more carefully. Confronted with meagre 1-year treatment effects, however, the authors firmly testified to their continued belief in the superiority of triple csDMARD-therapy in the 1-year paper: ‘In our treat-to-target design, treatment goals were attained more quickly and maintained with fewer treatment intensifications, with triple DMARD therapy than with MTX mono-therapy’. In the absence of clear clinical benefits, they now anticipate potential pharmaco-economic benefits that may emerge from their cost-utility analysis.

Discussion and orientation

A critical evaluation of many of the ‘comparative effectiveness trials’ in RA reveals a number of issues regarding study methodology. The important question is whether we are ‘splitting hairs’ in raising these issues and thus questioning some of the conclusions of these peer-reviewed trials. Could a clinician correctly state: ‘So what? The picture is still crystal clear. You cannot disqualify these important landmark trials that describe exactly how we should treat our RA patients’?

As experienced trialists, we are among the first to acknowledge how difficult it is to bring an investigator-initiated drug trial to a successful conclusion, given the limitations in funding, the technical difficulties of many modern treatment designs (think of benchmark strategies), the dependence on volunteer clinical investigators, and the competition with available reimbursed treatments on the market, all of which require a significant commitment to science from patients and investigators. This is also a reason why we have advocated, and will continue to advocate, funding of investigator-initiated clinical trials, with some success at national and European Union levels,48 ,49 and meanwhile also in the USA.50 We also readily acknowledge that industry-funded studies can have major limitations and may not address questions that are directly related to the use of therapy in clinical practice.

We appreciate the value of ‘comparative effectiveness research’ in the effort to make progress in the treatment of RA: these trials address questions that would otherwise not be answered, and they are advocated as part of the research agenda of the European League Against Rheumatism (EULAR) management recommendations for RA.51 ,52 We also accept that some variation in study methodology will be inherent to ‘comparative effectiveness research’ and that it is not reasonable to expect that all of it can be overcome.

However, we strongly feel that the methodological variations described in this Viewpoint article reduce the evidence provided by ‘comparative effectiveness research’ to a level that does not allow firm conclusions to be drawn. By itself, this is not a disqualification. Needless to say, these articles have passed a thorough peer-review process; but investigators, authors, journal editors, members of guideline/recommendation committees and clinical rheumatologists must realise the limitations of ‘comparative effectiveness research’ in order to use its results appropriately.

In our opinion, researchers and authors have major obligations. First, we should optimise methodology wherever possible. Further, we should scrutinise and report all variations and shortcomings of our trials, and describe how these deviate from the original plans. Finally, we should carefully temper the tone of the messages in the title, the abstract and the discussion of our manuscripts, and in presentations, so that only careful and valid conclusions are reported: not all ‘randomised trials’ automatically constitute ‘level 1 evidence’.

All of us wish to see the funds of our healthcare systems used prudently and cost-effectively for treating patients. We would prefer to be able to endorse less expensive treatments as being as good as, or more effective than, costly therapies, if the data were clear and methodologically sound. As an example, the task force formulating the 2013 update of the EULAR recommendations on the management of RA has virtually abandoned the recommendation to use biological treatment before a first csDMARD strategy has been tried, a recommendation that may indeed save costs.

Our views in this Viewpoint are driven solely by looking in detail at the data presented by the authors of the investigator-initiated trials. Thus, the evidence we see when evaluating these trials may be at odds with the interpretations of the authors, as we have shown in the numerous examples above.

Finally, we have summarised in table 1 some points to consider when performing and presenting comparative effectiveness research. These points to consider are intended to help provide a clearer picture of the full context and meaning of a clinical trial, rather than a focus on one or two key messages. Many of these items are by no means new,44 the list may not be complete, and it calls for further expansion by others.

Table 1

Points to consider for designing, conducting and interpreting comparative effectiveness trials in rheumatology

Being convinced of the virtues of ‘comparative effectiveness research’, we strongly believe that readers of this important avenue of research should be able to see the entire picture in a clear fashion, because ‘the tone of the music is in the entire music’.


Supplementary materials

  • Supplementary Data


Footnotes

  • Handling editor Tore K Kvien

  • Contributors All authors have contributed to the planning, conception and writing of this Viewpoint article. All authors have given consent to publish this article on their behalf.

  • Competing interests RBML discloses to have received research grants and/or honoraria for giving expert advice and/or speaker fees from Abbott/AbbVie, Amgen, Astra-Zeneca, Bristol-Myers-Squibb, Glaxo-Smith-Kline, Janssen, Novartis, Novo-Nordisk, Merck/MSD, Pfizer, Roche, Schering-Plough and UCB. He is director of Rheumatology Consultancy BV, a company under Dutch law. JSS discloses he has obtained research grants from Abbvie, Janssen, MSD, Pfizer, Roche and UCB and that he has given expert advice to and/or received speaker fees from Abbvie, Amgen, Astra-Zeneca, Celgene, Glaxo, Janssen, Lilly, Medimmune, MSD, Novartis-Sandoz, Novo-Nordisk, Pfizer, Roche, Samsung, Sanofi and UCB. MEW discloses that he is a consultant for Abbvie, Ablynx, Adheron Therapeutics, Amgen, Antares, Astra-Zeneca, Augurex, Bristol-Myers-Squibb, Canfite, Compugen, Corrona, Crescendo Bioscience, Ensemble, Exagen, Five Prime, Genentech/Roche, Hutchison, Idera, Infinity, Janssen, Lycera, Lilly, Medimmune, Merck, Novo-Nordisk, Pfizer, Regeneron, Samsung, UCB, Vertex, and that he has received research grants from Crescendo Bioscience, Bristol Myers Squibb and UCB. PE discloses that he has undertaken clinical trials and provided expert advice to Abbott/Abbvie, Bristol Myers Squibb, Pfizer, UCB, MSD, Roche, Novartis and Lilly. MD discloses to have received research grants and/or honoraria for attending advisory boards and/or speaker fees from Abbott/AbbVie, Amgen, Astra-Zeneca, Bristol Myers Squibb, Celgene, Janssen (formerly Centocor), Glaxo-Smith-Kline, Lilly, Novartis, Novo-Nordisk, Merck, Pfizer, Roche, Sanofi, Schering-Plough, UCB and Wyeth. RF discloses to be a consultant for Abbvie, Akros, Amgen, Antares, Ardea, AZ, Augurex, BMS, Celgene, Covagen, Five Prime, Genentech, GSK, Iroko, Janssen, Lilly, Merck, Pfizer, Regeneron, Resolve, Roche, Sanofi, UCB, Vertex, and has received research grants from AbbVie, Amgen, Ardea, AZ, BMS, Celgene, Genentech, GSK, Janssen, Lilly, Merck, Pfizer, Regeneron, Novartis, Resolve, Roche, Sanofi, UCB. DA discloses to have received research grants and/or honoraria for attending advisory boards and/or speaker fees from Abbvie, Grunenthal, MSD, Medac, Pfizer, Roche and UCB. AK discloses to be a consultant for AbbVie, Amgen, Bristol Myers Squibb, Janssen, Novartis, Pfizer, Roche and has received research grants from Amgen, AbbVie, Genentech, Sanofi and UCB. DvdH discloses to have received research grants and/or honoraria for attending advisory boards and/or speaker fees from AbbVie, Amgen, AstraZeneca, Augurex, BMS, Celgene, Centocor, Chugai, Covagen, Daiichi, Eli-Lilly, Galapagos, GSK, Janssen Biologics, Merck, Novartis, Novo-Nordisk, Otsuka, Pfizer, Roche, Sanofi-Aventis, Schering-Plough, UCB, Vertex. She is director of Imaging Rheumatology BV, a company under Dutch law.

  • Provenance and peer review Not commissioned; externally peer reviewed.