Health is becoming digital health, encompassing everything from electronic patient records to telemedicine and mobile health — spurred on by the pandemic. But the next evolution will involve artificial intelligence. While the potential of AI is enormous, there are still a number of challenges to delivering impactful solutions for clinical adoption.
While digital health isn’t new, there’s still a big gap in adoption, which has been slow and disjointed. We can already do a lot today, from digital diagnostics and remote patient monitoring to software-based therapeutic interventions. So where does AI fit in?
Artificial intelligence refers to the ability of a computer system to make a prediction or decision on a specific task that historically would require human cognition. Most of the capabilities available today can be categorized as Artificial Narrow Intelligence (ANI), meaning such systems can assist with or take over specific, focused tasks but cannot expand their own functionality.
Machine learning (ML), in turn, is a subfield of AI that allows a computer system to act without being explicitly programmed, acquiring knowledge through data, observations and interactions that allows it to generalize to new settings.
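The distinction between explicit programming and learning from data can be shown with a minimal sketch: instead of hard-coding a blood-pressure alert threshold, the threshold is derived from labelled historical readings. All numbers here are fabricated for illustration.

```python
# Instead of hand-writing a rule ("alert if systolic >= 140"), derive the
# cutoff from labelled examples -- the essence of learning from data.

def learn_threshold(readings, labels):
    """Pick the systolic cutoff that best separates 'alert' from 'normal'."""
    best_cutoff, best_correct = None, -1
    for cutoff in sorted(set(readings)):
        # Count how many examples this cutoff classifies correctly.
        correct = sum((r >= cutoff) == flagged
                      for r, flagged in zip(readings, labels))
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

# Hypothetical systolic readings and whether a clinician flagged each one.
readings = [118, 122, 135, 142, 150, 161]
labels   = [False, False, False, True, True, True]

print(learn_threshold(readings, labels))  # prints 142
```

The rule is acquired from the data rather than written by the programmer, which is why such a model can, in principle, be retrained as new settings produce new data.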
How AI fits into digital health
As part of the AI-driven collaborative discovery process, health-care organizations need to first access data from silos across departments. This data then requires AI-assisted contextualization, exploration and annotation so it can be used for data-driven study design and AI model development. It’s critical to standardize this discovery process, making it repeatable and reproducible. Throughout the process, health-care organizations should consider privacy and potential bias.
Each of these steps, however, has its challenges. In preparing medical imaging data for machine learning, for example, there are challenges with availability of annotated data and potential biases that could affect generalizability of AI algorithms, according to an article in Radiology. New approaches such as federated learning and interactive reporting may ultimately improve data availability for AI applications in imaging.
In the U.S., there’s been a big push for the clinical adoption of electronic health records (EHRs), which starts with digitizing health records to provide insights at a patient level and, eventually, at a population level. Recommendations can then be pushed back to EHRs for clinical decision support.
In Canada, we’re further behind; EHRs aren’t widely used in all aspects of care. One of the biggest barriers to wider EHR use is the administrative burden: physicians spend more than half their time on data entry and administrative tasks rather than on face-to-face visits with patients, which erodes quality of care. But these digital tools are becoming more user friendly, particularly as the pandemic accelerates the transition to digital health.
With mobile health, we’re also getting self-serve tools into the hands of patients — so we’re moving from encounter-based medicine to patient-centric care. For example, a tool on a patient’s wearable device could monitor blood pressure 24/7 between appointments.
But these types of digital tools produce a lot of data. And there are related challenges. What’s the context of that data? What’s the quality of that data? Are there inherent biases? Digitalization of big data creates new challenges when it comes to interpreting data and making predictions or decisions.
The challenges ahead
Humans can only consider five to 15 metrics when making a decision. So with three months’ worth of data and millions of data points, it’s beyond the capacity of a single individual to make an informed recommendation. AI is trained on specific data and ‘learns’ from new data, providing a level of automation that’s narrow in scope but extremely high speed.
That’s the promise of AI: to offload the manual data crunching and provide high-speed recommendations across multiple variables, ultimately enabling more patient-centric care. But we’re not there yet. Health-care institutes have an abundance of new data but are unsure of its value. And uneven access to these tools risks reducing health equity.
While the quality and quantity of AI research in health care is rapidly growing, the number and scope of usable products are still limited. When we consider how much of that research is being translated into physician use or patient care, we’ve seen a very limited number of FDA-approved algorithms. Of those, the majority have a very narrow spectrum of utility. And they’ve already been flagged for risks because there’s a known lack of complete data, meaning they’re not diverse enough for the real world.
While we’re seeing interesting applications of AI across industries, health care is not only lagging — there are fundamental issues that still need to be resolved. According to an article in Digital Medicine about the “inconvenient truth” of AI in health care, realizing the potential of AI across health systems requires addressing two fundamental issues: data ownership and trust, and investment in data infrastructure that works at scale.
Data ownership, trust and scale
We need to strengthen data quality, governance, security and interoperability for AI systems to work. But the infrastructure to realize this at scale doesn’t currently exist. Data health components are sitting in silos; they’re not interoperable and they’re of varying quality. Because of the variability that exists, it’s difficult for physicians to ‘mine’ that data and make equitable, patient-centred decisions.
A deep learning health-care system first requires a digital knowledge base (including patient demographics and clinical data, as well as past clinical outcomes), followed by AI analyses for diagnosis and treatment selection, as well as clinical decision support where the recommendations are discussed between patient and clinician. This data is then added to the knowledge base to continue the process.
But there are several issues with this process. On the data side, scientific data isn’t FAIR — findable, accessible, interoperable and reusable — which means AI models have an inherent bias toward the data set from the institute where the parameters were applied, without a mechanism to ‘train’ the AI somewhere else or let different systems learn from each other. As a result, the model can’t overcome its inherent biases.
From a business perspective, it’s also hard to sell institutional transformation. Most health-care institutes are relegated to using a software-as-a-service solution with a pre-trained model, which applies to a very limited data set. These algorithms have a utility in a particular setting — but that’s where the buck stops. And that means there’s no resulting structural or long-term change within health-care institutes.
Adopting a deep learning approach
For organizations to truly adopt a deep learning approach, it needs to be deeply embedded in their infrastructure to answer multiple narrow questions in a scalable way. Data needs to be accessible, searchable and usable, whether on-premise or in the cloud. It requires quality control, structuring and labelling.
But each of these steps is slow and labour-intensive. To train a single AI model, it’s necessary to first ingest relevant data, process it to make it accessible and searchable, and then allow users to annotate and contextualize it. While there are AI-assisted mechanisms that can speed up this process, those mechanisms need to be part of the infrastructure.
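The ingest, process and annotate steps described above can be reduced to a toy sketch. The in-memory structures and field names here are hypothetical; a real system would back each step with storage, access control and AI-assisted annotation tools.

```python
# Sketch of the three steps: ingest data out of a silo, index it so it's
# searchable, then let a domain expert annotate and contextualize a record.

def ingest(raw_records):
    """Step 1: pull records out of their silo into a common structure."""
    return [dict(r, source="radiology") for r in raw_records]

def index(records):
    """Step 2: make the records searchable, here grouped by modality."""
    idx = {}
    for r in records:
        idx.setdefault(r["modality"], []).append(r)
    return idx

def annotate(record, label):
    """Step 3: attach expert context to a record without mutating it."""
    return dict(record, annotation=label)

raw = [{"id": 1, "modality": "CT"}, {"id": 2, "modality": "MRI"}]
searchable = index(ingest(raw))
labelled = annotate(searchable["CT"][0], "suspected nodule")
print(labelled["annotation"])  # prints: suspected nodule
```

Each function is trivial on its own; the point is that every step must produce output the next step can consume, which is why standardization across the pipeline matters.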
On top of that, data in health-care institutes is typically low quality; it’s not anonymized and lacks context, so it’s not always usable. And when an AI model is designed at a single site, de-risking institutional or geographical bias requires a way to repeat the process at other institutes so the model can train on and learn from that diversity.
It’s a big challenge, to say the least. While each of these tasks can be addressed by technology, if they’re not standardized and interoperable — across technologies and institutes — then they’re not scalable. And that’s where we are today, which means many of these FDA-approved algorithms are failing in the wild.
Overcoming bias in AI models
So how do you overcome these limitations and ensure you’re not introducing bias into your models? Our approach is to provide a collaborative framework, bringing the tools to the experts and allowing the AI models to overcome current limitations or friction points at each step in the process.
At each health-care institute, we provide a data hub that ingests and indexes data, making it searchable and accessible. We use language processing to sift through the data, contextualize it and make it easier and faster for clinicians to search for an appropriate group of patients they want to use in their studies.
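The cohort search a data hub enables can be sketched in a few lines. Here “language processing” is reduced to keyword matching over fabricated report text, purely for illustration; a production hub would use far richer NLP and de-identified records.

```python
# Toy cohort search over contextualized records: find the patients whose
# reports match a study criterion. All records below are fabricated.

records = [
    {"patient": "A", "report": "type 2 diabetes, hypertension"},
    {"patient": "B", "report": "asthma, seasonal allergies"},
    {"patient": "C", "report": "hypertension, prior stroke"},
]

def find_cohort(records, term):
    """Return patients whose reports mention the study criterion."""
    return [r["patient"] for r in records if term in r["report"]]

print(find_cohort(records, "hypertension"))  # prints: ['A', 'C']
```

Even this trivial version shows why indexing matters: the clinician queries a structured view of the data rather than reading every chart.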
When clinicians are looking at this data, they’re also being asked for their expertise on a particular use case — and that knowledge becomes available to everyone else. This allows institutes to leverage their expertise and translate the knowledge of domain experts, while at the same time speeding up data maturation.
And because this entire process is standardized and reproducible across different institutions, two different hospitals — even in two different languages — are able to benefit. If we allow the learnings to be exchanged, rather than the data itself, we’re able to maintain patient privacy and data ownership, solving two critical issues with AI in health-care settings.
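The “exchange the learnings, not the data” idea is, in its simplest form, federated averaging: each hospital updates a shared model on its own data and sends back only parameters, which a coordinator averages. The weights and gradients below are plain numbers standing in for a real model, invented for illustration; no patient records ever leave a site.

```python
# Minimal federated-averaging sketch: sites share model parameters,
# never raw data, and a coordinator combines them.

def local_update(weights, site_gradient, lr=0.1):
    """Each site nudges the shared weights using only its own data."""
    return [w - lr * g for w, g in zip(weights, site_gradient)]

def federated_average(site_weights):
    """The coordinator averages parameters without seeing any records."""
    n = len(site_weights)
    return [sum(ws) / n for ws in zip(*site_weights)]

shared = [0.5, -0.2]
# Gradients computed privately at two hypothetical hospitals.
site_a = local_update(shared, [0.3, -0.1])
site_b = local_update(shared, [0.1, 0.5])
shared = federated_average([site_a, site_b])
print(shared)
```

Because only the averaged parameters circulate, patient privacy and institutional data ownership are preserved while every site still benefits from the others’ learnings.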
Through these learnings, health-care institutes can develop a meta model that performs a task in a way that allows them to see the variability of a patient population. This meta model not only understands bias, but it can be redeployed with parameters that can be adjusted for a particular practice.
This, in turn, can help to address the issue of digital health equity. Clinical trials are typically run out of a few Centres of Excellence, which means data is only collected on people within a certain radius of those centres. In a distributed learning framework, infrastructure is provided to all institutes, reducing the bias of those Centres of Excellence. That means if health data is captured in Nunavut, for example, it can be included in the learnings, even without AI experts based in Nunavut.
The future of AI in health care
When it comes to AI, there’s still a big delta between the most advanced institutes and the average institute. But the pandemic has brought to light many of the inefficiencies in our health-care system. Many departments still have to manually calculate the best way to deal with supply and demand, optimize schedules and deal with backlogs.
We’re already seeing the use of statistical or machine learning models by insurance companies to predict things such as hospital readmission risks or understand high-risk patients based on socioeconomic factors. This can help to ensure patients get the specialty care they need and don’t get bounced around until they land on the right set of care providers.
We’re just starting to tap the potential of AI in health-care settings. In an emergency room, for example, it can be used to triage high-risk patients faster. During appointments, it can be used by clinicians to get a more complete picture of exam results to ensure nothing is missed. This reduces risk, and also helps us move toward more personalized health care.
Artificial intelligence is not meant to replace clinicians, but rather to help them focus on what matters: patients, rather than manual data entry and administrative tasks. When properly implemented, it can help clinicians better serve their patients while reducing burnout.
But AI in health care isn’t a magic bullet. It’s more of a digital elixir: a medical solution that brings together data science, machine learning and deep learning that can help clinicians transform data into better patient care.