Correspondence on ‘Mining social media data to investigate patient perceptions regarding DMARD pharmacotherapy for rheumatoid arthritis’
Katja Reuter1, Elena Rocca2

1Department of Public Health & Preventive Medicine, SUNY Upstate Medical University, Syracuse, New York, USA
2Centre for Applied Philosophy of Science, Norwegian University of Life Sciences, Ås, Akershus, Norway

Correspondence to Dr Katja Reuter, Department of Public Health & Preventive Medicine, SUNY Upstate Medical University, Syracuse, NY 13210-2306, USA; reuterk@upstate.edu

In their recent article, Sharma et al1 posed an interesting research question, using social media data to examine patients’ perceptions of disease-modifying anti-rheumatic drug (DMARD) pharmacotherapy for rheumatoid arthritis (RA). The authors used the web analytics platform Treato, which aggregates data from multiple social media sites. They conducted a sentiment analysis of posts about DMARDs and identified patient beliefs within the positive and negative sentiments.

While we agree with the authors’ view that social media provides a valuable data source for exploring patients’ perspectives, the study design leaves uncertainties about the results and conclusions. Our primary concern is the claim that the study establishes causality, a claim that rests on research data of uncertain, potentially low quality and on correlations, which denote only an association between two quantitative variables.

First, the type of data included in the analysis is unclear. The authors did not describe a clear data search strategy (eg, keywords, phrases, hashtags), nor did they provide evidence of the relevance of the included data to the research question. The importance of a transparent and adequately focused search, filtering and quality evaluation of social media data has been pointed out by Kim et al and others.2–4 Without these steps, the conclusions researchers draw may be biased or misleading. The complementary use of qualitative research methods designed to describe and interpret complex phenomena, such as individuals’ views and beliefs, could have helped to generate evidence in this regard.
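To illustrate what a transparent, reportable inclusion step could look like, the minimal Python sketch below documents a keyword-based search and relevance filter. The search terms and helper functions are purely hypothetical and are not taken from the original study.

```python
# A minimal, hypothetical sketch of a documented search strategy; the
# terms below are illustrative assumptions, not those used in the study.
SEARCH_TERMS = ["rheumatoid arthritis", "#rheum", "dmard", "methotrexate"]

def matches_search_strategy(post_text: str) -> bool:
    """Return True if a post contains at least one pre-specified search term."""
    text = post_text.lower()
    return any(term in text for term in SEARCH_TERMS)

def filter_relevant(posts: list[str]) -> list[str]:
    """Keep only posts matching the documented strategy, so that the
    inclusion criteria can be reported and independently reproduced."""
    return [post for post in posts if matches_search_strategy(post)]

# Example usage with invented posts:
sample_posts = [
    "Started methotrexate for my RA last month",
    "Great weather today!",
]
print(filter_relevant(sample_posts))
```

Publishing such a term list alongside the inclusion and exclusion counts would let readers judge the relevance of the analysed posts.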

Second, the authors relied on Treato’s algorithm to identify patients’ self-reported experiences, yet they did not offer evidence of the algorithm’s accuracy in reliably identifying patient-generated posts. It is well documented that human-like automated accounts run by commercial and other interest groups pollute public social media data.5–9 In this study, it remains unclear how many of the analysed posts originated from people with RA. Using qualitative research methods and tools such as Botometer10 to identify patients’ accounts and related posts would have strengthened the study’s internal validity.
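As one possible way of screening accounts before analysis, the sketch below uses the open-source botometer Python client. The credentials, account handle and the 0.8 cut-off are placeholders we invented for illustration, and the structure of the returned scores may differ across Botometer API versions.

```python
import botometer  # pip install botometer

# Placeholder credentials; real Twitter/RapidAPI keys are required.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

# Hypothetical account handles extracted from the collected posts.
accounts = ["@hypothetical_patient_account"]

for screen_name, result in bom.check_accounts_in(accounts):
    # 'cap' = complete automation probability (0-1); the exact response keys
    # may vary by Botometer API version, so treat this as an assumption.
    cap = result.get("cap", {}).get("universal")
    if cap is not None and cap > 0.8:  # illustrative cut-off, not a standard
        print(f"{screen_name}: likely automated (CAP={cap:.2f}), exclude")
    else:
        print(f"{screen_name}: retained for analysis (CAP={cap})")
```

Reporting how many accounts were screened out in this way would make the provenance of the analysed posts more transparent.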

Third, the authors attempted to validate the results of Treato’s sentiment analysis by manually assigning sentiments to 200 posts and comparing their codings with the algorithm’s output, using Cohen’s kappa coefficient to measure interobserver agreement. We wondered why the authors considered moderate values of κ=0.49 for csDMARDs and κ=0.52 for b/tsDMARDs sufficient, given the clinical relevance of the study topic. Most texts recommend a minimum of 80% inter-rater agreement as acceptable.11 The authors rightly point out that no standard for sentiment analysis exists. However, we see this as an argument for a more cautious interpretation of the results, especially when a study relies heavily on computer-assisted data collection and analysis methods.
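For context, chance-corrected agreement (Cohen’s kappa) is typically lower than raw percent agreement, which is one reason the reported values deserve scrutiny. The short sketch below, using scikit-learn and invented labels, illustrates the difference; none of the numbers correspond to the study’s actual data.

```python
from sklearn.metrics import cohen_kappa_score

# Invented sentiment labels for the same five posts, one list per rater;
# the real study compared 200 posts coded by the algorithm and by humans.
algorithm_labels = ["positive", "negative", "positive", "neutral", "negative"]
human_labels     = ["positive", "negative", "neutral",  "neutral", "positive"]

kappa = cohen_kappa_score(algorithm_labels, human_labels)

# Raw agreement ignores agreement expected by chance, which is why
# kappa is usually lower than the raw percentage.
raw_agreement = sum(a == h for a, h in zip(algorithm_labels, human_labels)) / len(human_labels)

print(f"Raw agreement: {raw_agreement:.0%}")   # 60%
print(f"Cohen's kappa: {kappa:.2f}")
```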

Lastly, the authors intended to describe patients’ beliefs associated with the positive and negative sentiments towards DMARDs, but it is unclear how these ‘beliefs’ were determined. There is no description of a priori codes that may have been used to identify complex concepts such as ‘efficacy’.12 13 More importantly, the paper claims to have established causality. According to the authors, ‘The two most common reasons for a positive post were DMARD efficacy and lack of side effects. Conversely, the most common reasons for a negative post were lack of efficacy and side effects’. However, as far as we can tell from the description of the methods, the algorithm established correlations between keywords. Deriving causality directly from correlations is invalid across scientific disciplines.14 For instance, a hypothetical statement such as ‘I had frequent headaches with the previous therapy, and this made me passive and depressed, so I am trying now to switch to (drug of interest)’ contains all the relevant keywords: a side effect, the drug of interest and a negative sentiment. Yet it does not express causality between the drug of interest and the side effect. Without further discussion or additional evidence, the authors’ causality claims are not substantiated.
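To make the point concrete, the sketch below applies naive keyword matching, with invented keyword lists, to the hypothetical post quoted above. All three flags come out positive even though the post attributes the side effect to the previous therapy rather than to the drug of interest.

```python
# Invented keyword lists for illustration; not the study's actual lexicon.
post = ("I had frequent headaches with the previous therapy, and this made "
        "me passive and depressed, so I am trying now to switch to drug X")

drug_keywords = ["drug x"]
side_effect_keywords = ["headache", "nausea", "fatigue"]
negative_keywords = ["depressed", "passive"]

text = post.lower()
flags = {
    "mentions_drug_of_interest": any(k in text for k in drug_keywords),
    "mentions_side_effect": any(k in text for k in side_effect_keywords),
    "negative_sentiment": any(k in text for k in negative_keywords),
}

print(flags)  # all True: the keywords co-occur, yet the post does not say
              # that drug X caused the side effect, so no causal claim follows
```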

We acknowledge that the authors’ work is in line with the current trend towards big data studies, which pose unique challenges. The focus on big data raises fundamental questions about the characteristics of a reliable dataset and the extraction of valid and meaningful insights.15 We find that the paper would greatly benefit from reflections on these questions.

In closing, we hope that this correspondence will be received as constructive criticism that helps move this research field forward. Social media-based, patient-generated data have the potential to offer a new type of medical evidence. Taking full advantage of them will require technical advances, critical thinking and a re-evaluation of conventional evidence standards. In the meantime, we need to be aware of the limitations and challenges of using social media to capture patients’ perspectives and interpret the data with appropriate caution.

References

Footnotes

  • Twitter @dmsci

  • Contributors KR and ER conceived and wrote the presented commentary.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; internally peer reviewed.
