
Response to: ‘Diagnostic accuracy of novel ultrasonographic halo score for giant cell arteritis: methodological issues’ by Ghajari and Sabour
  1. Kornelis SM van der Geest1,2,
  2. Frances Borg2,
  3. Abdul Kayani2,
  4. Davy Paap1,3,
  5. Prisca Gondo2,
  6. Wolfgang Schmidt4,
  7. Raashid Ahmed Luqmani5,
  8. Bhaskar Dasgupta2

  1. Rheumatology and Clinical Immunology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
  2. Rheumatology, Southend University Hospital NHS Foundation Trust, Westcliff-on-Sea, Essex, UK
  3. Rehabilitation Medicine, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
  4. Medical Centre for Rheumatology Berlin-Buch, Immanuel-Krankenhaus GmbH, Berlin, Germany
  5. Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, Oxfordshire, UK

  Correspondence to Dr Bhaskar Dasgupta, Rheumatology, Southend University Hospital NHS Foundation Trust, Westcliff-on-Sea SS0 0RY, UK; bhaskar.dasgupta@southend.nhs.uk


We thank Ghajari and Sabour for their interest in our work and appreciation of our study.1 We have reported that the extent of vascular inflammation on ultrasound, as quantified by the halo score, is associated with ocular ischaemia in patients with giant cell arteritis (GCA).2 Furthermore, we investigated the diagnostic accuracy of the halo score for a clinical diagnosis of GCA, as well as a positive temporal artery biopsy.2 Here, we discuss the points raised by the authors.

First, the authors propose that our study was focused on ‘test accuracy’. We fully agree, as we included the term ‘diagnostic accuracy’ in our title and used it throughout our manuscript. Our definition of ‘diagnostic accuracy’ matched that given in the references provided by the authors, that is, the ability of a test to discriminate between patients with the target condition and those without.3 4 The authors appear to use a slightly different definition of ‘diagnostic accuracy’, that is, ‘a test’s added contribution to estimate the diagnostic probability of disease presence or absence’. This is actually the definition of ‘diagnostic yield’, as indicated by the reference provided by the authors.3 Sackett and Haynes have previously described four stages of diagnostic research.5 In essence, our study falls within the third stage of diagnostic research: determining whether the test distinguishes between patients with and without the target condition among those suspected of having it. We believe that Ghajari and Sabour point to the fourth and final stage: determining whether patients undergoing the test fare better than similar untested patients. As emphasised in the conclusions and key messages of our study, we believe our findings warrant further investigation and validation. We agree with the authors that investigation of the ‘diagnostic yield’ should be part of future research.3

Second, the authors indicate that we might have ‘misinterpreted’ the likelihood ratios (LRs) reported in our study. They state that the LRs obtained in our study (eg, 6.41 and 2.00) are ‘clear evidence for inaccuracy of the tests’. The authors refer to a review article reporting that good diagnostic tests have an LR of >10 or <0.1.4 These LR cut-off points appear to derive from a seminal report by Jaeschke et al.6 We certainly agree that diagnostic tests with such LRs are good, as they have a strong effect on the post-test probability of the target condition. However, tests with an LR closer to 1.0 might still have an important impact on the post-test probability, as also emphasised by Jaeschke et al.6 Diagnostic tests with LRs >2.0 or <0.5 may alter the post-test probability at least slightly to moderately.6–8 For example, a positive test with a positive LR of 6.41 can increase a putative pretest probability of 50% to a post-test probability of 87%.6–8 As recognised by clinical guidelines for GCA,9 10 it is well known that imaging tests for GCA do not provide absolute evidence for the presence or absence of this condition. The same is true of symptoms, physical signs and laboratory tests, none of which have LRs >10.0 or <0.1 for a diagnosis of GCA.11 Overall, we do not agree with the authors’ claim that an LR between 2.0 and 10.0 should be considered ‘clear evidence for inaccuracy’ of a test. We therefore believe that the term ‘misinterpretation’ is not correct in this context.
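The worked example above (pretest probability of 50%, positive LR of 6.41, post-test probability of 87%) follows the standard odds-based conversion. A minimal sketch of that arithmetic is given below; the function name is our own illustration, not part of the original study:

```python
def post_test_probability(pretest_prob, likelihood_ratio):
    """Convert a pretest probability to a post-test probability via odds.

    Steps: probability -> odds, multiply by the LR, odds -> probability.
    """
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_test_odds = pretest_odds * likelihood_ratio
    return post_test_odds / (1 + post_test_odds)

# Positive LR of 6.41 (the halo score example in the text):
print(round(post_test_probability(0.50, 6.41), 2))  # 0.87

# An LR of 2.00 still shifts a 50% pretest probability appreciably:
print(round(post_test_probability(0.50, 2.00), 2))  # 0.67
```

The second call illustrates the letter's point that even an LR of 2.00, well below the >10 threshold cited by Ghajari and Sabour, moves a 50% pretest probability to roughly 67%.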

The third point raised by the authors suggests that we should have investigated the calibration of the halo score. As described in the reference provided by the authors, calibration is the ability of a test to correctly estimate the risk or probability of a future event.12 Thus, calibration is important for prognostic studies rather than diagnostic studies.12 We presume that the definition of our reference standard, that is, the final clinical diagnosis after 6 months of follow-up, might have caused the impression that we performed a prognostic study. The follow-up in the context of our study, however, was performed to verify that the diagnosis at baseline was correct. Clinicians sometimes have doubt about the clinical diagnosis early in the disease, and alternative diseases explaining the symptoms occasionally become overt during the first months after the initial diagnosis. The reference standard used in our study is therefore common practice in diagnostic research on GCA.

Although we commend Ghajari and Sabour for critically evaluating our work, we believe that the points raised by the authors are not indicative of ‘methodological issues’ or ‘misinterpretation’ in our study. As emphasised in our report, the ultrasonographic halo score awaits further validation by prospective, multicentre studies.

Footnotes

  • Handling editor Josef S Smolen

  • Twitter @profbdasgupta

  • Contributors KSMvdG, DP and BD wrote the manuscript. All authors were involved in revising it critically for important intellectual content. All authors provided final approval of the version published.

  • Funding The original study was funded by the Health Technology Assessment Programme of the National Institute for Health Research (NIHR, #HTA-08/64/01).

  • Competing interests KSMvdG reports grants from the Mandema Stipend and the Dutch Society for Rheumatology, and personal fees from Roche, outside the submitted work. WS reports grants and personal fees from GSK, Novartis, Roche and Sanofi; and personal fees from Chugai, outside the submitted work. RL reports grants from Arthritis UK, the Medical Research Council, the University of California San Francisco/Oxford Invention Fund, the Canadian Institutes of Health Research and The Vasculitis Foundation; grants and personal fees from GSK; and personal fees from Medpace, MedImmune and Roche, outside the submitted work. BD reports grants and personal fees from Roche; personal fees from GSK, BMS, Sanofi and AbbVie, outside the submitted work. The other authors have nothing to disclose.

  • Patient and public involvement Advice on design of the original study was obtained from patients through PMRGCAuk. Patient representatives on the Trial Steering Committee and the Data Monitoring Committee provided important advice.

  • Patient consent for publication Not required.

  • Ethics approval The original study was performed in accordance with the Declaration of Helsinki. All patients provided written informed consent. The study was approved by the Berkshire Research Ethics Committee (REC#09/H0505/132).

  • Provenance and peer review Commissioned; internally peer reviewed.
