
SAT0480 Evaluation of standardised teaching of modified Rodnan skin score assessment in systemic sclerosis
A.H. Low (1,2), S.-A. Ng (1,2), V. Berrocal (3), B. Brennan (3), G. Chan (4), S.-C. Ng (1,2), D. Khanna (5)

  1. Rheumatology and Immunology, Singapore General Hospital
  2. Duke-National University of Singapore, Singapore, Singapore
  3. Biostatistics, University of Michigan, Michigan, USA
  4. Rheumatology, Allergy and Immunology, Tan Tock Seng Hospital, Singapore, Singapore
  5. Scleroderma Program, University of Michigan, Michigan, USA

Abstract

Background The modified Rodnan skin score (mRSS) is a standard outcome measure for skin involvement in systemic sclerosis (SSc) clinical trials. Training assessors reduces variability in mRSS measurement.

Objectives To report the inter- and intra-observer variability of mRSS scoring after training with the newly developed standardised training guidelines of the Scleroderma Clinical Trials Consortium (SCTC).

Methods Two SSc experts (DK/AL), 2 facilitators, 52 rheumatology trainees and 8 SSc patients fulfilling the 2013 American College of Rheumatology criteria participated in an SSc skin-scoring workshop. The 8 SSc patients were examined jointly by the 2 SSc experts and the facilitators, and consensus scores were reached. All trainees attended a talk on mRSS skin scoring by an SSc expert (DK), followed by a video and a live demonstration in which an expert examined a patient exhibiting different aspects of skin scoring. Each trainee then independently performed mRSS scoring on 4 SSc patients, which concluded the teaching session. Each trainee's mRSS was compared with the consensus expert mRSS; a difference of ≤5 in 3 out of 4 patients was considered acceptable inter-observer variability, as defined by the SCTC guidelines.

Two days after training, 12 trainees, 2 facilitators and 2 experts independently re-assessed the mRSS of 2 SSc patients whom they had examined previously. Each trainee's repeat day-2 mRSS was compared with their baseline mRSS; a difference of ≤3 was considered acceptable intra-observer variability.
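Because both acceptability thresholds are simple absolute-difference rules on the total mRSS, they are straightforward to automate when scoring a workshop. The sketch below is a hypothetical helper, not part of the abstract or the SCTC materials; it assumes a trainee's total mRSS per patient is available alongside the expert consensus scores, and the numbers in the usage example are illustrative only.

```python
# Hypothetical helper applying the two SCTC acceptability thresholds
# described above (not code from the study).

def acceptable_inter_observer(trainee_scores, consensus_scores,
                              max_diff=5, min_patients=3):
    """Inter-observer: trainee total mRSS within `max_diff` points of the
    expert consensus in at least `min_patients` of the patients examined."""
    within = sum(
        abs(t - c) <= max_diff
        for t, c in zip(trainee_scores, consensus_scores)
    )
    return within >= min_patients

def acceptable_intra_observer(day1_score, day2_score, max_diff=3):
    """Intra-observer: repeat score within `max_diff` points of baseline."""
    return abs(day2_score - day1_score) <= max_diff

# Illustrative example only (made-up numbers, not workshop data):
print(acceptable_inter_observer([12, 30, 7, 18], [10, 24, 6, 20]))  # True: 3 of 4 within 5
print(acceptable_intra_observer(12, 16))                            # False: differs by 4 > 3
```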

We computed the inter- and intra-observer variability using a linear mixed model with an intercept term and random effects for patient, rater and patient-by-rater, with the following values representing the degree of agreement: <0, poor; 0–0.20, slight; 0.21–0.40, fair; 0.41–0.60, moderate; 0.61–0.80, substantial; and 0.81–1.00, almost perfect agreement.
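As a rough illustration of how an agreement coefficient of this kind can be estimated, the sketch below computes a two-way random-effects, absolute-agreement ICC (Shrout and Fleiss ICC(2,1)) from a patients-by-raters matrix of total mRSS values. This is an assumption about the analysis rather than the authors' code: with one score per patient-rater pair, the patient-by-rater interaction is absorbed into the residual of the ANOVA decomposition.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random-effects, absolute-agreement, single-rater ICC.
    `scores` is an (n_patients x n_raters) matrix of total mRSS values,
    one score per patient-rater pair."""
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-patient means
    col_means = scores.mean(axis=0)   # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    ms_rows = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)   # patients
    ms_cols = n * np.sum((col_means - grand_mean) ** 2) / (k - 1)   # raters
    ss_err = np.sum(
        (scores - row_means[:, None] - col_means[None, :] + grand_mean) ** 2
    )
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Illustrative example only (made-up numbers, not study data):
# 4 patients scored by 3 raters.
example = np.array([
    [10, 12,  9],
    [22, 25, 21],
    [ 5,  4,  7],
    [15, 13, 16],
], dtype=float)
print(f"ICC(2,1) = {icc_2_1(example):.2f}")
```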

Results In the first group of assessors (52 trainees), 65.4% achieved acceptable inter-observer variability; the inter-observer agreement was 0.71, with an inter-observer mean mRSS of 8.64 and a within-patient standard deviation (SD) of 4.25. In the second group of assessors, who returned 2 days after training (n=14), the inter-observer agreement with the experts' scores was 0.73 and the intra-observer agreement was 0.85. The inter-observer mean mRSS was 7.39 (within-patient SD 3.65) and the intra-observer mean was 6.92 (within-patient SD 2.73).

Conclusions There was substantial inter-observer reliability and almost perfect intra-observer reliability. This is the first study to examine the training of assessors using the SCTC training guidelines, and our results support the importance of standardised teaching of the mRSS.

Disclosure of Interest None declared
