Hospital bosses love AI. Doctors and nurses are worried.

Mount Sinai has become a laboratory for AI, trying to shape the future of medicine. But some healthcare workers fear the technology comes at a cost.

By Pranshu Verma

Updated August 10, 2023 at 10:54 a.m. EDT|Published August 10, 2023 at 7:00 a.m. EDT

NEW YORK — Every day Bojana Milekic, a critical care doctor at Mount Sinai Hospital, scrolls through a computer screen of patient names, looking at the red numbers beside them — a score generated by artificial intelligence — to assess who might die.

On a morning in May, the tool flagged a 74-year-old lung patient with a score of .81 — well past the .65 threshold at which doctors start to worry. He didn’t seem to be in pain, but he gripped his daughter’s hand as Milekic began to work. She circled his bed, soon spotting the issue: A kinked chest tube was trapping fluid in his lungs, causing his blood oxygen levels to plummet.
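At bottom, the tool described here is a threshold alert: a model assigns each patient a risk score, and anyone whose score crosses a cutoff is surfaced for clinician review. As a rough illustration of that pattern (not a description of Mount Sinai’s actual system), here is a minimal Python sketch that uses the article’s .65 cutoff with invented patient names and scores:

    # Hypothetical sketch of a threshold-based risk alert.
    # The 0.65 cutoff is the figure cited in the article; the patient
    # names, scores and data layout are invented for illustration only.

    ALERT_THRESHOLD = 0.65  # scores above this get flagged for review

    # Toy stand-in for model output: (patient name, AI-generated risk score)
    scored_patients = [
        ("Patient A", 0.81),  # would be flagged, like the lung patient above
        ("Patient B", 0.42),
        ("Patient C", 0.67),
    ]

    def flagged(patients, threshold=ALERT_THRESHOLD):
        """Return patients above the alert threshold, highest risk first,
        so clinicians can triage in priority order."""
        over = [p for p in patients if p[1] > threshold]
        return sorted(over, key=lambda p: p[1], reverse=True)

    for name, score in flagged(scored_patients):
        print(f"ALERT: {name} risk score {score:.2f}")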

After Milekic repositioned the tube, the patient’s breathing stabilized — a “simple intervention,” she says, that might not have happened without the aid of the computer program.

Milekic’s morning could be an advertisement for the potential of AI to transform health care. Mount Sinai is among a group of elite hospitals pouring hundreds of millions of dollars into AI software and education, turning their institutions into laboratories for the technology. They’re buoyed by a growing body of scientific literature — such as a recent study finding that AI readings of mammograms detected 20 percent more cases of breast cancer than radiologists did — along with the conviction that AI is the future of medicine.

Researchers are also working to translate generative AI — the technology behind tools that can create text and sound — into the hospital setting. Mount Sinai has deployed a group of AI specialists to develop medical tools in-house, which doctors and nurses are testing in clinical care. Transcription software completes billing paperwork; chatbots help craft patient summaries.

But the advances are triggering tension among front-line workers, many of whom fear the technology comes at a steep cost to humans. They worry about the technology making wrong diagnoses, exposing sensitive patient data and becoming an excuse for insurers and hospital administrators to cut staff in the name of innovation and efficiency.

Most of all, they say software can’t do the work of a human doctor or nurse.

“If we believe that in our most vulnerable moments … we want somebody who pays attention to us,” Michelle Mahon, the assistant director of nursing practice at the National Nurses United union, said, “then we need to be very careful in this moment.”

Hospitals have dabbled with AI for decades. In the 1970s, Stanford University researchers created a rudimentary AI system that asked doctors questions about a patient’s symptoms and provided a diagnosis based on a database of known infections.

In the 1990s and early 2000s, AI algorithms began deciphering complex patterns in X-rays, CT scans and MRI images to spot abnormalities that the human eye might miss.

Several years later, robots equipped with AI vision began operating alongside surgeons. With the advent of electronic medical records, companies incorporated algorithms that scanned troves of patient data to spot trends and commonalities among patients with certain ailments and to recommend tailored treatments.

As higher computing power has turbocharged AI, algorithms have moved from spotting trends to predicting whether a specific patient will suffer from an ailment. The rise of generative AI has created tools that more closely mimic patient care.

Vijay Pande, a general partner at venture capital firm Andreessen Horowitz, said health care is at a turning point. “There’s a lot of excitement about AI right now,” he said. “The technology has … gone from being cute and interesting to where actually [people] can see it being deployed.”

In March, the University of Kansas health system started using medical chatbots to automate clinical notes and medical conversations. The Mayo Clinic in Minnesota is using Med-PaLM 2, a Google chatbot trained on medical licensing exam questions, to generate responses to health care questions, summarize clinical documents and organize data, according to a July report in the Wall Street Journal.

Some of these products have already raised eyebrows among elected officials. Sen. Mark R. Warner (D-Va.) on Tuesday sent a letter to Google urging caution in the rollout of Med-PaLM 2, citing repeated inaccuracies.

“While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions,” he said in a statement.

Thomas J. Fuchs, the dean for AI at Mount Sinai’s Icahn School of Medicine, said it is imperative that research hospitals, which are staffed with pioneering physicians and researchers, act as laboratories to test this technology.

Mount Sinai has taken the premise literally, raising more than $100 million through private philanthropy and building research centers and on-site computing facilities. This allows programmers to build AI tools in-house that can be refined based on physician input, used in Mount Sinai’s hospitals and sent to places that don’t have the money to do similar research.

“You cannot transplant people,” Fuchs said. “But you can transplant knowledge and experience to some degree with these models that then can help physicians in the community.”

But Fuchs added that “there’s [an] enormous amount of hype” about AI in medicine right now, and “more start-up companies than you can count who … like to evangelize to sometimes absurd degrees” about the technology’s revolutionary potential in medicine. He worries they may create products that make biased diagnoses or put patient data at risk. Strong federal regulation, along with physician oversight, is paramount, he said.

David L. Reich, the president of The Mount Sinai Hospital and Mount Sinai Queens, said his hospital has wanted to use AI more broadly for a few years, but the pandemic delayed its rollout.

Though generative chatbots are becoming popular, Reich’s team is focusing mostly on using algorithms. Critical care physicians are piloting predictive software to identify patients who are at risk of issues such as sepsis or falling — the kind of software used by Milekic. Radiologists use AI to more accurately spot breast cancer. Nutritionists use AI to flag patients who are likely to be malnourished.

Reich said the ultimate goal is not to replace health workers, but something simpler: getting the right doctor to the right patient at the right time.

But some medical professionals aren’t as comfortable with the new technology.

Mahon, of National Nurses United, said there is very little empirical evidence to demonstrate AI is actually improving patient care.

“We do experiments in this country, we use the clinical trial, but for some reason, these technologies, they’re being given a pass,” she said. “They’re being marketed as superior, as ever present, and other types of things that just simply don’t bear out in their utilization.”

Though AI can analyze troves of data and predict how sick a patient might be, Mahon has often found that these algorithms can get it wrong. Nurses see beyond a patient’s vital signs, she argues. They see how a patient looks, smell unnatural odors from their body and can use these biological data points as predictors that something might be wrong. “AI can’t do that,” she said.

Some physicians interviewed by Duke University in a May survey expressed reservations that AI models might exacerbate existing issues with care, including bias. “I don’t think we even really have a great understanding of how to measure an algorithm’s performance, let alone its performance across different race and ethnic groups,” one respondent told researchers in the study of caregivers at hospitals including the Mayo Clinic, Kaiser Permanente and the University of California San Francisco.

At a time of a severe nursing shortage, Mahon said, hospital administrators’ excitement about incorporating the technology is less about patient outcomes and more about plugging holes and cutting costs.

“The [health care] industry really is helping people buy into all the hype,” she said, “so that they can cut back on their labor without any questions.”

Robbie Freeman, Mount Sinai’s vice president of digital experience, said the hardest part of getting AI into hospitals is the doctors and nurses themselves. “You may have come to work for 20 years and done it one way,” he said, “and now we’re coming in and asking you to do it another way.”

“People may feel like it’s flavor of the month,” he added. “They may not fully be … bought into the idea of adopting some sort of new practice or tool.”

And AI is not always a surefire method for saving time. When Rebecca Brown, a 45-year-old heart patient from Corning, N.Y., was flagged as one of the sickest patients in Mount Sinai’s critical care ward on a May morning, Milekic went to her room to run an examination.

Milekic quickly saw that nothing was out of the ordinary and let Brown continue eating her peanut butter and jelly sandwich.

Asked whether she would want AI to care for her over a doctor, Brown’s answer was simple: “There is something that technology can never do, and that is be human,” she said. “[I] hope that the human touch doesn’t go away.”

Correction

A previous version of this story misstated David L. Reich’s position. He is the president of The Mount Sinai Hospital and Mount Sinai Queens.

