The Paradox of Healthcare AI


The concept of AI making healthcare decisions is difficult to accept, driven by a paradox made up of two statements:

STATEMENT 1:

AI is never 100% accurate. This is a rule of artificial intelligence: if a system were truly 100% accurate, then 100% of possible scenarios would already be solved, and a modeling or prediction system (machine learning / AI) would not be required in the first place. In the event that 100% of the scenarios of a problem have been solved, plain logic or even a simple hash map (a map of all possible scenarios and their outcomes) would suffice.
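To make that concrete, here is a minimal sketch of the "solved problem" argument in Python. The scenario names are hypothetical; the point is that an exhaustive lookup table is exact by construction, while a model exists precisely to handle inputs the table does not cover.

```python
# Hypothetical illustration: if every scenario-to-outcome mapping were
# already known, a plain hash map would be 100% accurate by construction.
solved_outcomes = {
    ("fever", "cough"): "order flu test",
    ("fever", "rash"): "order measles panel",
    # ...an entry for every possible scenario would be required
}

def decide(symptoms: tuple) -> str:
    # Exact lookup: correct for enumerated cases, useless beyond them.
    # A machine learning model exists precisely to generalize to inputs
    # NOT in this table -- which is why it can never be 100% accurate.
    return solved_outcomes.get(symptoms, "not enumerated: a model (or human) must predict")
```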

STATEMENT 2:

Healthcare has an extremely high penalty for error. If a mistake is made, a patient's life may be at stake. Because of this, healthcare is frequently cited as the most regulated industry. In no field today do humans trust a system with their lives without human oversight. In transportation, this is why we still have airline pilots despite the reliability of airplane systems.

Solving The Healthcare AI Paradox

There are a few ways to reconcile these two statements that permit the proliferation of artificial intelligence in the medical domain. We will run through four cases, explain each, and provide at least one example of each. Only one needs to hold, but the more that hold for a given use case, the better.


1) Use cases that are highly explainable and/or reviewable

Most significant are AI use cases where the AI can explain to the user, in a clear and consistent way, the reasons for its conclusions, or where its output can be reviewed before being approved. Passing results through a second layer of human review removes a large amount of error risk and AI liability while improving the overall accuracy of the system.

These systems are probably the most sought after at present, as they can be used in high-penalty-for-error situations such as patient care. However, they come with the heavy risk of a practitioner over-relying on them for decision making, or, alternatively, of feeling redundant to a practitioner who already performs at the same level as the given AI solution.

Example: The use of AI to assist in diagnosis, or to summarize patient notes. Both are in common use by practitioners, but their output is always reviewed by the practitioner, and both are opt-in tools that the practitioner may decide to rely on. In the end, it is the practitioner who holds the liability. Note, however, that both of these, especially summarization, carry the risk of a practitioner simply approving the output by clicking a box without reading it, should they become over-reliant on the technology.
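As a rough illustration of this review layer, the sketch below assumes a hypothetical model output carrying a suggestion, a rationale, and a confidence score. The names and structure are illustrative, not a reference implementation; nothing reaches the patient record until the practitioner explicitly approves it.

```python
from dataclasses import dataclass

@dataclass
class AiSuggestion:
    text: str          # e.g. a draft diagnosis or a note summary
    rationale: str     # the explanation shown alongside the output
    confidence: float  # model-reported confidence in [0, 1]

def route_suggestion(suggestion: AiSuggestion, practitioner_approves) -> str | None:
    # Always surface the rationale: explainability is what makes review meaningful.
    print(f"Suggestion: {suggestion.text}")
    print(f"Why: {suggestion.rationale} (confidence {suggestion.confidence:.0%})")
    # The practitioner, not the model, makes the final call and holds the liability.
    return suggestion.text if practitioner_approves(suggestion) else None
```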


2) Use cases with a low penalty for error

There are many situations in healthcare where errors occur but carry a low penalty. These situations permit fully automated AI, so long as the frequency of these low-penalty errors stays within acceptable limits.

Example: Prior authorizations are increasingly being automated to release prescriptions to patients quickly and have them covered by insurance. A model that automatically approves these releases based on its findings has a low penalty for error; a denial would still be reviewed by a human. This benefits the insurance company by expediting care and minimizing touch, and it benefits the patient by expediting care as well. The penalty for an error in such a model is simply overpayment by the payer; there is no risk to the patient.
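A minimal sketch of that asymmetric routing, assuming a hypothetical scoring function `approval_score` and an illustrative threshold: approvals are automated because the worst case is overpayment, while anything short of approval is escalated to a human rather than auto-denied.

```python
APPROVE_THRESHOLD = 0.90  # illustrative value, not a regulatory standard

def route_prior_auth(request: dict, approval_score) -> str:
    score = approval_score(request)  # model's estimate that the request qualifies
    if score >= APPROVE_THRESHOLD:
        return "auto-approved"        # low penalty for error: payer overpays at worst
    return "queued for human review"  # denials are never automated
```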

Example: Another high-focus area is patient scheduling AI. Scheduling systems today will often recommend a visit time based on known data about the facility, the patient, weather, traffic, and time of day, intelligently selecting the time slots least likely to result in patient no-shows, which can be very costly to clinics. While the impact of even a slight improvement is high, the penalty for choosing a less optimal time is near zero. The same technology can be used to fill cancelled appointments from a waiting list.
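In code, this kind of slot selection reduces to ranking open slots by predicted no-show probability. `predict_no_show` below is a hypothetical model scoring facility, patient, weather, traffic, and time-of-day features; since picking a suboptimal slot carries a near-zero penalty, the selection can run fully automated.

```python
def best_slot(patient: dict, open_slots: list, predict_no_show) -> dict:
    # Rank candidate appointment times by predicted no-show probability
    # and offer the one the patient is most likely to keep.
    return min(open_slots, key=lambda slot: predict_no_show(patient, slot))
```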


3) Use cases with a lot of variables and/or short time requirements

These are tasks that require an extremely high number of variables to determine an outcome, or that are highly time-sensitive. Considering the volume of data in healthcare, this case is becoming increasingly relevant for big data and research tasks.

Example: Tenasol, as a regular part of its operations, processes millions of records across all healthcare data formats to find medical evidence used for payment or treatment. This volume of data is not reasonable for a human to process, but it offers significant value when a human uses the results to reach a conclusion quickly.

Example: Another example is rare disease research. Most famously, Zak Kohane and his team at Harvard University have combined medical records with AI approaches to learn more about the co-morbidities, traits, and occurrences of exceptionally rare diseases, a task that would otherwise be daunting through manual review.


4) Use cases with high financial impact

AI use cases with a high ROI, direct or indirect, are often the most intriguing no matter how difficult they are, as they offer differentiated products, better patient services, or a more efficient healthcare ecosystem.

High-financial-impact models usually carry traits of at least one of the first three cases described. However, when the financial opportunity is large enough, what would normally be considered a higher-penalty error scenario may be dwarfed by the financial reward of the model's successful uses.
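The trade-off can be framed as a simple expected-value calculation. The numbers below are purely illustrative, but they show how a large success payoff can dwarf a higher-penalty error profile:

```python
p_success, gain_per_success = 0.95, 1_000_000  # illustrative payoff of a correct result
p_error, cost_per_error     = 0.05, 2_000_000  # illustrative cost of an error

expected_value = p_success * gain_per_success - p_error * cost_per_error
print(f"Expected value per use: ${expected_value:,.0f}")  # $850,000
```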

Example: AI use cases involving pharmaceuticals are increasingly common, as a single AI model can completely change the landscape of what is possible. Detecting adverse events with natural language processing, or constructing new drugs in simulated environments, presents opportunities for large gains. In these scenarios, there is of course intense review of model output.

AI Product Categories

Not all AI is created equal. A high percentage of companies use AI purely for branding aesthetics. All AI-based products fall somewhere on the following triangle:

[Figure: the AI product triangle - Marketing Asset, Differentiated Feature, New Product]

Marketing Asset: These are products that incorporate AI purely for marketing purposes. Callaway's AI golf clubs are a clear example: the presence of AI does not change the functionality of the product, but is instead depicted as part of the design process. These use cases are very common in healthcare.

Differentiated Feature: The Tesla brand of car, most acclaimed for its promise of self-driving capability, is an example of a feature that cannot exist without AI; however, the overall product could easily persist and maintain market share without it.

IBM Watson (the Jeopardy! version, not the cloud advertising campaign) falls evenly between new product and marketing asset. It existed purely for marketing purposes, but could not have existed without the underlying AI model.

New Product: This is a product that cannot exist without AI modeling techniques. ChatGPT and the vast majority of Tenasol's product suite both fall into this category. These are exceptionally rare in healthcare.

Guardrails

Should an AI model attempt to reach beyond the bounds of safety, the US government and international agencies are quickly building out legislation to protect patients, ensuring that these systems do not overstep their bounds or take advantage of patients in what is considered the most regulated industry.

Furthermore, collaboration between AI developers, healthcare providers, and regulators is essential to ensure that AI systems are both compliant and practical in real-world clinical settings. By involving clinicians in the design and validation process, AI tools are more likely to align with medical workflows and gain acceptance from practitioners.

Conclusion

Artificial intelligence is poised to revolutionize healthcare, offering the potential to improve patient outcomes, streamline workflows, and drive innovation across the industry. However, the adoption of AI in this domain comes with significant challenges, rooted in the paradox that AI is inherently imperfect while healthcare demands near-zero tolerance for error. Navigating this paradox requires thoughtful application of AI to specific use cases that balance risk and reward effectively.

The most promising applications of AI in healthcare fall into four key areas: explainable and reviewable systems, low-risk tasks, high-variability scenarios, and use cases with substantial financial impact. By prioritizing these areas, AI can complement human expertise, improve efficiency, and expand access to care without compromising patient safety. For example, AI tools that assist clinicians in diagnosis or summarize patient notes empower practitioners while leaving critical decisions in their hands. Similarly, automation in low-risk tasks like prior authorizations reduces administrative burdens while maintaining safeguards for more complex cases.

Regulatory frameworks also play a crucial role in facilitating responsible AI adoption. Collaboration between AI developers, healthcare providers, and regulatory bodies ensures that AI tools align with clinical needs and meet rigorous safety standards. Ultimately, the integration of AI in healthcare is not about replacing humans but empowering them. By addressing challenges with transparency, accountability, and a focus on patient-centered care, AI can deliver on its promise to transform healthcare for the better.

If you are interested in learning more about capabilities in healthcare AI, reach out to our product team.
