Predictive analytics in health care: how can we know it works?

J Am Med Inform Assoc. 2019 Dec 1;26(12):1651-1654. doi: 10.1093/jamia/ocz130.

Abstract

There is increasing awareness that the methodology and findings of research should be transparent. This includes studies using artificial intelligence to develop predictive algorithms that make individualized diagnostic or prognostic risk predictions. We argue that it is paramount to make the algorithm behind any prediction publicly available. This allows independent external validation, assessment of performance heterogeneity across settings and over time, and algorithm refinement or updating. Online calculators and apps may aid uptake if accompanied by sufficient information. For algorithms based on "black box" machine learning methods, software for algorithm implementation is a must. Hiding algorithms for commercial exploitation is unethical, because it leaves no way to assess whether algorithms work as advertised or to monitor when and how they are updated. Journals and funders should demand maximal transparency for publications on predictive algorithms, and clinical guidelines should recommend only publicly available algorithms.
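To make concrete what "independent external validation" of a publicly available algorithm can look like, the sketch below applies a published risk equation, without refitting, to a new cohort and reports discrimination (c-statistic) and calibration (calibration-in-the-large and calibration slope). This is a minimal illustration, not the authors' method: the coefficient values, variable names, and the data file external_cohort.csv are hypothetical placeholders.

    # Minimal external-validation sketch for a published logistic regression
    # risk model whose intercept and coefficients are publicly reported.
    # All model values, column names, and the input file are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.metrics import roc_auc_score

    # Hypothetical published model: logit(risk) = intercept + b1*age + b2*biomarker
    INTERCEPT = -5.2
    COEFS = {"age": 0.04, "biomarker": 0.8}

    def published_linear_predictor(df: pd.DataFrame) -> np.ndarray:
        """Apply the published equation to new patients (no refitting)."""
        lp = INTERCEPT + sum(coef * df[name] for name, coef in COEFS.items())
        return np.asarray(lp, dtype=float)

    # External cohort collected at a different site and time (hypothetical file
    # with columns: age, biomarker, outcome).
    external = pd.read_csv("external_cohort.csv")
    lp = published_linear_predictor(external)
    risk = 1.0 / (1.0 + np.exp(-lp))          # predicted probabilities
    y = external["outcome"].to_numpy()

    # Discrimination in the external data: c-statistic (AUC).
    auc = roc_auc_score(y, risk)

    # Calibration slope: logistic regression of the outcome on the linear predictor
    # (slope near 1 suggests predictions transport well to this setting).
    slope_fit = sm.Logit(y, sm.add_constant(lp)).fit(disp=0)
    calib_slope = slope_fit.params[1]

    # Calibration-in-the-large: intercept estimated with the linear predictor
    # entered as an offset (near 0 indicates overall risk is neither over-
    # nor under-estimated in this population).
    citl_fit = sm.GLM(y, np.ones_like(lp),
                      family=sm.families.Binomial(), offset=lp).fit()
    calib_in_large = citl_fit.params[0]

    print(f"External c-statistic: {auc:.3f}")
    print(f"Calibration-in-the-large: {calib_in_large:.2f}, slope: {calib_slope:.2f}")

An analysis along these lines is only possible when the full equation (or, for "black box" models, executable implementation code) is publicly available, which is the central argument of the article; repeating it across settings and over time is what reveals performance heterogeneity and the need for updating.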

Keywords: artificial intelligence; external validation; machine learning; model performance; predictive analytics.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Forecasting
  • Humans
  • Machine Learning* / ethics
  • Mobile Applications
  • Models, Theoretical*
  • Ownership / ethics*
  • Prognosis
  • Software