Article Text
Abstract
Background Efforts to standardize and catalog outcome measures for registries are important for improving our understanding of conclusions drawn both within and across registries. A generic framework, the Outcome Measures Framework Model (OMFM),1 has been developed for use across all registries with the aim of identifying the major categories of outcome measures. A condition-specific framework for rheumatology clinical trials, defined by the Outcome Measures in Rheumatology group (OMERACT),2 also exists and may be applied to observational studies.
Objectives We undertook a qualitative evaluation of various rheumatoid arthritis (RA) and juvenile idiopathic arthritis (JIA) registries to evaluate the types and details of the outcome measures reported as a mechanism to refine the existing frameworks to guide registry data collection.
Methods Active RA and JIA registries within ClinicalTrials.gov (CT.gov) were identified using a cutoff date of 23 June 2015. The 10 largest registries based on anticipated enrollment were selected for evaluation and supplemented by additional registries, not necessarily in CT.gov, considered important for inclusion by the project team and external stakeholders. Two reviewers, an epidemiologist and a rheumatologist, qualitatively evaluated the registry-reported outcome measure details, terminology, and variations.
Results Twenty-one registries were evaluated, 11 of them by both reviewers. Registries included populations from the US, the EU, Asia, and Latin America and were run by various government, industry, or non-profit organizations. Key findings were that the outcome measures utilized and the time points of their assessment were often not clearly defined in CT.gov. Both the OMERACT framework and the OMFM lacked various outcome measures and definitions needed to properly categorize all the identified outcomes. Registries varied in how they detailed the way results would be analyzed, e.g., prospective vs post hoc analyses. Diagnostic criteria used for inclusion were frequently vague, e.g., investigator judgement vs available classification criteria vs not specified. Published data from earlier investigations were typically not cited, results to date were not reported, and links to registry websites were not provided. Details on analytical methods that were unavailable in CT.gov were often available in publications or registry websites, but these external resources were not always easy to locate.
Conclusions Current CT.gov registry postings are often missing key details needed to ensure a clear understanding of the populations enrolled, the outcomes data to be collected and analyzed, and opportunities to combine data among registries. This preliminary work also identified opportunities to improve the OMERACT framework and the OMFM, as well as to expand the OMERACT outcome measures to address the needs of observational studies beyond clinical trials.
References
1. Gliklich et al. J Comp Eff Res. 2014;3(5):473–480.
2. OMERACT handbook. http://omeract.org/pdf/OMERACT_Handbook.pdf. Accessed 14 Oct 2015.
Acknowledgement The research for this presentation was supported by the Agency for Healthcare Research & Quality (AHRQ) under Contract Number HHSA290201400004C to provide a searchable, central listing of registries and develop a working framework for registry outcomes measures.
Disclosure of Interest N. Goel Employee of: Quintiles Inc., F. Eisenberg Consultant for: Quintiles Inc., M. Barboza Employee of: Quintiles Inc., D. Campion Employee of: Quintiles Inc., K. Bibeau Employee of: Quintiles Inc.