Exclusive: AI Integration in Healthcare Takes Off

Rapid adoption carries potential pitfalls alongside the promise

Artificial Intelligence and Machine Learning (AI/ML) have emerged as revolutionary technologies in various sectors – and healthcare is no exception.

In the health industry, AI/ML is being increasingly used to streamline back-end processes, enhance diagnostics, improve patient care, and drive medical research.

And while the integration of AI/ML systems holds tremendous potential and opportunities for transforming healthcare – it doesn’t come without risks.

AI’s Growth and Role in Hospitals

Global healthcare is adopting AI/ML at a rapid pace – especially after the COVID-19 pandemic.

Structural issues such as a shortage of healthcare providers, higher medical costs, and greater demand for value-based services have all driven AI’s growth.

For instance,

  • According to Grand View Research – the global artificial intelligence in healthcare market size was valued at $15.4 billion in 2022 and is expected to expand at a compound annual growth rate (CAGR) of 37.5% from 2023 to 2030 (see the compounding sketch below).
  • MarketsandMarkets also predicts a dramatic increase – projecting that AI in healthcare will grow at a 48% CAGR from 2023 to 2028.
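
To put those growth rates in concrete terms, here’s a minimal sketch of how a CAGR compounds a base-year figure forward. The function is just the standard compound-growth formula applied to the Grand View Research numbers cited above; the printed projections follow from that arithmetic, not from either report’s own forecasts.

```python
def project_market_size(base_value_bn: float, cagr: float, years: int) -> float:
    """Project a market size forward using compound annual growth:
    future = base * (1 + CAGR) ** years."""
    return base_value_bn * (1.0 + cagr) ** years

# Grand View Research figures cited above: $15.4B in 2022, 37.5% CAGR for 2023-2030.
base_2022 = 15.4
for year in (2025, 2030):
    value = project_market_size(base_2022, cagr=0.375, years=year - 2022)
    print(f"{year}: ~${value:.1f}B")
# 2030 works out to roughly $197B -- if the projected 37.5% CAGR actually holds.
```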

Hospitals continue to leverage AI/ML’s potential to further enhance patient care and outcomes.

Major areas where AI/ML is being used include:

  • Administrative
  • Virtual Assistants and Patient Monitoring
  • Medical Imaging and Diagnostics
  • Predictive Analytics
  • Surgical Assistance
  • Drug Discovery and Development

It’s clear that AI/ML has benefits in healthcare and will continue to play a growing support role.

But there are also risks involved – especially in the short term – as these technologies are implemented at a dramatic rate, often faster than the potential downsides can be assessed.

Risks Associated with AI/ML Implementation

As the wave of AI/ML proliferates throughout the global healthcare system, it’s important to weigh the downside costs of these technologies and the potential liabilities for providers.

One major concern is data privacy and security amid the adoption of cloud-based technology and AI/ML in healthcare.

These systems rely on extensive data collection and analysis – making it crucial to establish robust safeguards that protect sensitive information, maintain HIPAA compliance, and reduce the risk of lawsuits.
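
What those safeguards look like varies by system, but one common building block is encrypting identifiable fields at rest, so that a stolen database dump exposes nothing readable. The sketch below is a hypothetical illustration using Python’s widely used cryptography package; the record fields and key handling are invented for the example, and real HIPAA compliance involves far more (access controls, audit trails, key management, business associate agreements).

```python
from cryptography.fernet import Fernet

# Hypothetical example: encrypt identifiable fields before storage so a
# leaked database dump does not expose readable PHI. In production the key
# would live in a key-management service, never alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

record = {"patient_id": "12345", "name": "Jane Doe", "diagnosis": "hypertension"}
PHI_FIELDS = {"name", "diagnosis"}  # fields treated as protected health information

encrypted = {
    field: cipher.encrypt(value.encode()) if field in PHI_FIELDS else value
    for field, value in record.items()
}

# Decryption is restricted to services holding the key.
assert cipher.decrypt(encrypted["name"]).decode() == "Jane Doe"
```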

As healthcare providers increase their reliance on information technology, the threat of cyberattacks aimed at stealing patient information remains a clear and present danger.

  • According to a recent JAMA Health Forum study – the annual number of ransomware attacks in the healthcare system more than doubled from 43 to 91 between 2016 and 2021 – exposing the personal health information of nearly 42 million patients.

In fact, some estimates – such as from the credit rating agency Experian – indicate that medical data is worth about 160 times more than credit card numbers and offers a longer timespan for abuse.

Cyberattacks are also debilitating for patients even when there’s no actual data theft.

For instance, such attacks have been “substantially disruptive” for healthcare providers: electronic health records disabled, appointments and surgeries delayed or cancelled, and even complete closures of practices whose systems were too damaged to restore for a time.

According to the World Health Organization (WHO), “Precipitous adoption of untested systems could lead to errors by health-care workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world.

“AI-based solutions must be developed carefully – ensuring that training data sets are representative of the population where they are used, that mechanisms exist to monitor that algorithms are performing as expected over time, and that new technologies do not exacerbate social, gender or economic inequities.”

The WHO further says, “Anticipating the potential risks – ethical, technical, and clinical – and being proactive by building in safeguards help protect users and beneficiaries of these systems.”

Because of this ever-growing threat, the global healthcare industry is ramping up its cybersecurity efforts.

  • According to Grand View Research – the global healthcare cybersecurity market size was valued at $14.7 billion in 2022 and is expected to expand at a CAGR of 18.4% between 2023 and 2030.

And while that spending is increasing, it’s important to note two things.

First, the increased cybersecurity required represents a significant added cost for healthcare providers.

And second, that growth in healthcare cybersecurity is lagging behind AI/ML and cloud-based computing implementation.

Steven Cramer – an IT professional working in healthcare – says that “while AI isn’t exactly new in healthcare, there’s a growing bifurcation between large scale health providers and middle-to-smaller providers with their EMR (electronic medical records) investment and standards, which adds complexity that didn’t exist before.”

He adds, “It’s become very costly for health providers to invest in automation and AI, which require greater vigilance to ensure that systems have reliable outcomes and limit potential liabilities.”

Another risk from expanded AI/ML algorithms is bias and discrimination – known as “algorithmic bias” – and this has profound implications for healthcare.

AI/ML analysis tools are only as unbiased as the data they are trained on. If the training data includes biases or inequalities, the AI system may perpetuate them, potentially leading to discriminatory outcomes in healthcare.
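
One concrete way teams surface this problem is to break a model’s error rates out by patient group rather than reporting a single aggregate score. The sketch below uses made-up predictions, not output from any real clinical model; a large gap in false-negative rates between groups is the kind of signal that points to algorithmic bias.

```python
from collections import defaultdict

# Made-up (true_condition, model_flagged, patient_group) triples standing
# in for a clinical model's predictions on a labeled test set.
results = [
    (1, 1, "A"), (1, 0, "A"), (1, 1, "A"), (0, 0, "A"),
    (1, 0, "B"), (1, 0, "B"), (1, 1, "B"), (0, 1, "B"),
]

# False-negative rate per group: sick patients the model failed to flag.
missed = defaultdict(int)
sick = defaultdict(int)
for truth, flagged, group in results:
    if truth == 1:
        sick[group] += 1
        if flagged == 0:
            missed[group] += 1

for group in sorted(sick):
    fnr = missed[group] / sick[group]
    print(f"Group {group}: false-negative rate = {fnr:.0%}")
# Group A: 33%, Group B: 67% -- a gap this size is a red flag for bias.
```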

According to a 2019 paper indexed in the National Library of Medicine – there are three challenges health systems face in addressing algorithmic bias:

  • A lack of clear standards of fairness
  • A lack of contextual specificity
  • The ‘black box’ nature of deep learning methods

The takeaway here is that more effort must be made to ensure AI models are trained on diverse and representative datasets, promoting fairness and equitable healthcare delivery.

There’s also the risk of losing human oversight and accountability in healthcare amid greater dependence on AI/ML.

Overreliance on AI without proper human oversight can lead to errors and misinterpretations. The challenge is to strike a balance between AI-driven automation and the involvement of qualified healthcare professionals to ensure patient safety and mitigate potential risks.
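
One common pattern for striking that balance is a confidence gate: high-confidence model outputs flow through automatically, while anything uncertain is queued for clinician review. The threshold and data structures below are hypothetical – a sketch of the pattern, not any vendor’s implementation.

```python
from dataclasses import dataclass

@dataclass
class ModelFinding:
    patient_id: str
    label: str         # e.g. a suggested diagnosis
    confidence: float  # model's probability estimate, 0.0-1.0

# Hypothetical threshold: below this, a qualified clinician must review.
REVIEW_THRESHOLD = 0.90

def triage(finding: ModelFinding) -> str:
    """Route an AI finding: auto-accept only when confidence is high,
    otherwise queue it for human review so a clinician stays in the loop."""
    if finding.confidence >= REVIEW_THRESHOLD:
        return "auto-accept"
    return "human-review"

queue = [ModelFinding("p1", "pneumonia", 0.97), ModelFinding("p2", "pneumonia", 0.62)]
for f in queue:
    print(f.patient_id, triage(f))  # p1 auto-accept, p2 human-review
```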

For instance, a study published in 2021 highlighted the shortcomings of using AI models to detect sepsis.

Findings showed that the AI sepsis model – known as the Epic Early Detection of Sepsis model (EEDS) – identified just 7% of patients with sepsis who had not received timely antibiotic treatment. The tool failed to detect the condition in 67% of those who developed it, yet generated alerts on thousands of patients who did not.
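
To see why numbers like these matter operationally, it helps to translate them into standard screening metrics. The counts below are illustrative, chosen only to mirror the reported proportions (roughly two-thirds of cases missed, far more false alarms than true ones); they are not the study’s actual data.

```python
def screening_metrics(tp: int, fn: int, fp: int) -> dict:
    """Sensitivity: share of true cases the alert catches.
    Precision (PPV): share of alerts that are real cases."""
    return {
        "sensitivity": tp / (tp + fn),
        "precision": tp / (tp + fp),
    }

# Illustrative counts: of 100 sepsis patients the model flags 33 (missing
# 67), while also alerting on 600 patients who never develop sepsis.
m = screening_metrics(tp=33, fn=67, fp=600)
print(f"sensitivity = {m['sensitivity']:.0%}")  # 33% -- two-thirds of cases missed
print(f"precision   = {m['precision']:.0%}")    # ~5% -- most alerts are false alarms
```

Low precision is what drives alert fatigue: when nearly every alarm is false, clinicians start tuning them out, and the true cases get lost in the noise.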

This study reflects the challenges that healthcare faces with greater adoption of AI/ML tools – such as large-scale diagnostic errors – and the need for human staff to monitor them on an ongoing basis.

While these tools can be adjusted to improve efficiency, there remains a human cost in the interim.

Then there are the regulatory and ethical challenges AI/ML faces within the healthcare system.

While AI/ML is the fastest-growing frontier in medicine, it also remains the most lawless.

The rapid advancement of AI in healthcare has outpaced regulatory frameworks and ethical guidelines. Big questions around liability, accountability, and the transparency of AI decision-making processes remain to be addressed.

For instance, many AI tools do not have to undergo Food and Drug Administration (FDA) review before being put into use. And there’s no formal system in place for monitoring their safety or performance.

A 2021 study reported by Health IT Analytics found that FDA evaluations of medical AI devices were often “retrospective” (looking only at past data) and were not typically conducted across multiple clinical sites.

  • The review showed that 126 of the 130 AI devices underwent only retrospective studies at the time of their submission. None of the 54 high-risk devices were evaluated in prospective studies (which follow patients over time as data is collected).

This suggests a collaborative effort among policymakers, healthcare professionals, and AI developers will be necessary to establish comprehensive regulatory, safety, and ethical frameworks to govern AI/ML use in healthcare.

Artificial Intelligence is revolutionizing the healthcare landscape, offering immense potential to improve diagnostics, treatment, and patient care in hospitals.

However, healthcare providers and investors alike would be wise to approach AI implementation with caution, addressing the risks and liabilities healthcare professionals may face from threats to data privacy, algorithmic bias, and gaps in human oversight.

AI-enabled products have sometimes resulted in inaccurate diagnoses and even potentially harmful treatment recommendations.

There are also many ‘unknown unknowns’ – risks arising from situations so unexpected that they were never considered – as AI/ML adoption in healthcare continues to grow.

As Peter Lee, vice president of research and incubation at Microsoft, put it at the HIMSS Global Health & Technology Conference: “There are tremendous opportunities here... But there are also significant risks, and risks we probably don’t even know about yet.”
