A.I. raises ethical issues in health care

Some nurses are concerned that the scarcity of laws governing the use of artificial intelligence in hospitals means a lack of protections for patients. Credit: Getty Images/Tempura
For nurse Judy Schmidt, the beeping monitors hooked up to patients at Community Medical Center in Toms River, New Jersey, were just a normal part of the busy intensive care unit.
But looking back on her work about a decade ago, Schmidt said she realizes those machines were using early versions of artificial intelligence to help analyze and track the patients’ health.
Artificial intelligence has been used in health care for years, even before the public became familiar with the technology, said Schmidt, chief executive of the New Jersey State Nurses Association, a professional organization.
Now, some electronic health records are programmed to alert providers when patients could be having symptoms of a major illness. And in medical education, professors are depending more on mannequins, such as those programmed to mimic a birth, she said.
But the fast development of these systems — to the point where robotics are being used in surgery — raises practical and ethical questions for providers, Schmidt said.
Some experts say A.I. technology can improve the health care industry by automating administrative work, offering virtual nursing assistance and more. A.I. systems can predict whether a patient is likely to get sicker while in the hospital. And more health care providers could start using robotics in the examination room.
A lack of protections
But some nurses are concerned that the scarcity of laws regarding A.I.’s use in hospitals and beyond means a lack of protection for individuals who could suffer from the technology’s mistakes.
“In the long run, whatever artificial intelligence we use, it’s still the human — the person — that has to take that data, and the interpretation of that data in some respects, and apply it to the real person that’s in the bed, the nursing home or the home of that person,” Schmidt said.
State legislators are lagging on creating regulations for the use of A.I., said Richard Ridge, an assistant professor of nursing at the University of Virginia. As the technology becomes more advanced, most health care workers are relying on policies set by their own hospital or practice, which can vary.
Legislators not only need to educate themselves about A.I., but also to consider protections for patients within systems that use the technology, said Ridge, who added that nurses should be a part of those conversations.
“The value nurses bring to the table in any health care discussion is helping policymakers and decision-makers see things from the patient’s point of view,” Ridge said.
Lawmakers in several states have introduced bills on A.I. in health care, but a Stateline survey found only one that has been enacted: a Georgia law that allows the use of artificial intelligence devices in eye exams.
One Pennsylvania bill would require insurers to disclose whether they are using A.I.-based algorithms when reviewing claims to determine whether medical procedures are necessary.
Pennsylvania state Rep. Arvind Venkat, a Democrat sponsoring the bill and a physician, said the growth of A.I. means insurers can use it to decide whether treatments or medications are covered by a patient's insurance.
“One of the problems we’ve seen with A.I. is that the data goes into the A.I. platform, it makes a decision, and it gets spit out, but that decision is only as good as the data being used to train the platform,” Venkat said. “Existing biases are being reinforced by the use of artificial intelligence, and especially in the area of health insurance.”
The American Nurses Association’s code of ethics, followed by all nurses in the country, states that advanced technologies, including A.I., do not replace nursing skills or judgment.
In a position statement, the organization said nurses “are responsible for being informed about and ensuring the appropriate use of A.I.” for their patients. It also said it’s essential for nurses to be part of efforts to advocate for A.I. governance that holds technology developers accountable.
Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates for privacy rights in new technologies, said that in the absence of federal rules, he hopes state and local policymakers create policies modeled after those of the European Union.
European Union legislation
The EU Artificial Intelligence Act, poised to become the world's first comprehensive law governing artificial intelligence, could become the global standard. It establishes rules for regulating the technology across the EU.
While acknowledging that the technology has major benefits, the legislation establishes rules for public and private entities — including health care — to use risk assessments, testing and more to ensure A.I. systems work properly and protect user rights.
The EU’s artificial intelligence liability directive, proposed in 2022, would ease the burden of proof for victims to show damage caused by an A.I. system.
It's an alarming moment for policy, Cahn said: new A.I. systems are being deployed across industries without laws to protect individuals if something goes wrong.
That doesn’t mean A.I. systems should be scrapped, Cahn said, but ignoring the dangers of these systems would be a mistake. Policymakers should look at the impact of A.I. from every standpoint, he said, including data used to train A.I. that could hold implicit biases and lead to discrimination.