AI Ethics in Healthcare

Artificial Intelligence (AI) toolsets in healthcare represent a significant leap forward in our ability to care for patients and populations. However, the integration of AI into the healthcare sector raises a myriad of ethical concerns, and regulators struggle with where to begin when enacting guidance, policy, and laws to curb the negative effects of automated decision making in the wrong places.

As it stands today, there are few substantive laws that prevent the production use of inaccurate or unethical AI systems - only laws that punish the harmful effects of their deployment. This only underlines the importance of transparency, safety, security, and traceability in how AI is used.

This blog aims to cover key guidance documents issued as of March 2024.

[February 2019] Executive Order 13859 - Maintaining American Leadership in Artificial Intelligence (United States) [link]

This executive order, released by the White House, broadly outlined how the United States should grow and protect its AI assets. It designated roles and responsibilities for carrying out these tasks across different branches of the federal government.

The order also required NIST to establish technical standards for AI technologies. NIST responded later that year with a document describing the subcategories it would focus on in this effort.

This was the first executive order on Artificial Intelligence.

[December 2020] Executive Order 13960 - Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (United States) [link]

The second executive order on artificial intelligence was released a year later. It provided narrower guidance for AI models and required federal agencies to report on where AI was being used.

It specified that AI must respect US law and values, as well as be accurate, stable, secure, safe, resilient, sufficiently understandable by subject matter experts/users/others where necessary, responsible, traceable, routinely monitored, transparent (in how it is utilized), and accountable.

This was the first case of the US government stating that it wanted full traceability of AI, which was previously not specified. It gave agencies a maximum of 240 days to report on where AI was being used or planned to be used, and then 120 days thereafter to meet the listed criteria or have the system dismantled. More broadly, it also urged collaboration on these efforts between agencies.

[October 2022] Blueprint for an AI Bill of Rights [link]

The proposed AI Bill of Rights was released by the White House through the Office of Science and Technology Policy (OSTP), and despite its unusual name it is probably one of the best documents out there. It specifies detailed guidance on the operational behavior of production AI systems, including:

  • Consumers should not be subjected to unsafe / ineffective systems

  • Equitable use and protection from discriminatory biases

  • Built-in data protections

  • Notice to user on how the system is being used

  • Opt-out capability and human support

This document is great in that it not only addresses ethical concerns directly for the first time, it does so in a very detailed manner, providing direct examples of real situations where these models have caused harm.

[October 2023] Executive Order 14110 - Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence [link]

The third executive order released by the White House focused narrowly on safety and security, as its name implies.

These safety / security aspects are broken into:

  • generalized safety / security

  • domestic and international collaboration

  • effects on labor force and job quality

  • human equity and civil rights

  • consumer protections

  • privacy and civil liberties

This order also widened NIST's responsibilities in federal uses of AI. It ordered federal agencies to adopt these policies within a short window of time to protect citizens from harm.

[March 2024] The Artificial Intelligence Act (European Union) [link]

The EU often leads in tech law relative to the United States. The AI Act is the most detailed document of this scale released so far: it seeks to place AI use cases into binned risk categories, each of which must comply with regulations accordingly. These categories include:

Artificial Intelligence Act Risk Ranking

Unacceptable risk: These systems cross the line on safety and privacy and are banned outright.

High risk: These pose high threats because errors carry severe penalties, operating in fields such as healthcare, education, critical infrastructure, law, justice, and other areas where a person may be harmfully affected without their knowledge. Models in this category require close evaluation both before and after market implementation.

General purpose AI: These are foundation models that are high impact, defined by a computation threshold (training compute above 10^25 FLOPS). They also require evaluation.

Limited risk: Systems where the user knows they are interacting with AI and is opting in. These rarely need to be regulated beyond transparency obligations.

Minimal risk: Systems that are non-invasive in nature and cannot reasonably cause harm, such as video game AI and spam filters.
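The tiered structure above can be sketched as a simple lookup. This is a purely illustrative sketch: the tier names follow the Act, but the example use-case labels and the helper function are hypothetical and are not drawn from the regulation's actual annexes.

```python
# Illustrative mapping of hypothetical use cases to the AI Act's risk tiers.
# The tier names mirror the Act; the use-case labels are invented examples.
RISK_TIERS = {
    "unacceptable": {"social_scoring", "real_time_biometric_surveillance"},
    "high": {"medical_diagnosis", "credit_scoring", "hiring_screening"},
    "limited": {"customer_service_chatbot"},
    "minimal": {"spam_filter", "video_game_npc"},
}

def classify_use_case(use_case: str) -> str:
    """Return the risk tier for a known use-case label, else 'unclassified'."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "unclassified"

print(classify_use_case("medical_diagnosis"))  # high
print(classify_use_case("spam_filter"))        # minimal
```

In the real regulation, classification depends on detailed legal criteria (and, for general purpose models, on training compute), not a flat lookup table; the sketch only conveys the binned, tiered shape of the rules.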

[March 2024] United Nations Resolution - Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development (United Nations) [link]

While it is the most recent document here, the UN resolution comes with no foreseeable effects, as it is nonbinding in nature. Its adoption by consensus also does not mean it was voted on - consensus in the UN means the draft is broadly agreed on without a formal vote, regardless of individual reservations about its contents.

So while it does not say much, it does speak to the growing global recognition of the topic.

Medical Ethics at Tenasol

Tenasol takes its AI-based systems very seriously. Beyond our code of conduct, we benchmark our solutions, and validate how our technology is being used to make sure it complies with the strictest of federal and local regulations. Reach out to us if you have any questions or comments on our systems and how they are used.
