Nothing artificial about the future of AI, but who decides its intelligent use in health care?


A majority of Americans would feel “uncomfortable” with their doctor relying on AI in their medical care, according to recent polling. But despite those misgivings, it is likely you have already encountered the results of artificial intelligence in your doctor’s office or local pharmacy.

The true extent of its use “is a bit dependent on how one defines AI,” said Lloyd B. Minor, dean of the Stanford University School of Medicine, but some uses, he noted, have been around for years.

Most large health care providers already use automated systems that verify dosage amounts for medications and flag possible drug interactions for doctors, nurses and pharmacists.

“There’s no question that has reduced medication errors, because of the checking that goes on in the background through applications of AI and machine learning,” Minor said.

Hundreds of devices enabled with AI technologies have been approved by the FDA in recent years, mostly in the fields of radiology and cardiology, where algorithms have shown promise at detecting abnormalities and early signs of disease in X-rays and diagnostic scans. But despite new applications for AI being touted every day, a science-fiction future of robot practitioners taking your vitals and diagnosing you isn’t coming soon to your doctor’s office.

With the recent public launch of large language model chatbots like ChatGPT, the buzz around how the health care industry can ethically and safely use artificial intelligence is reaching a fever pitch, just as the public is starting to get familiar with how the technology works.

“It’s very obvious health and medicine is one of the key areas that AI can make a huge contribution to,” said Fei-Fei Li, co-director of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI). Her group has joined forces with the Stanford School of Medicine to launch RAISE-Health (Responsible AI for Safe and Equitable Health), a new initiative to guide the responsible use of AI across biomedical research, education and patient care.

Fei-Fei Li, Stanford computer science professor and co-director of the new Stanford Institute for Human-Centered Artificial Intelligence (Drew Kelly/Stanford)

But the concerns are also front of mind. AI and algorithms “could replicate and amplify disparities … unless they’re recognized and responsibly addressed,” Minor said. For example, he cited data collected from the clinical trials that the FDA uses to approve drugs, the participants of which have “typically been Whites of European descent.”

“If you train AI on a narrow demographic group, you are going to get results that really only apply to that narrow group,” Minor said.

Stanford is one of many institutions tackling the challenges and promises of artificial intelligence in the health care industry, and many of its researchers and experts have been at the forefront of these discussions for years.

Sonoo Thadaney Israni was co-chair of the Working Group of Artificial Intelligence in Healthcare for the National Academy of Medicine (NAM) when the group published a report in 2019, titled “Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril.”

“The wisest guidance for AI is to start with real problems in health care,” Israni and her colleagues wrote, like the lack of access to providers for the poor and uninsured, or the ballooning costs of care.

Will using AI provide better health, lower costs, improve patients’ experience and clinicians’ well-being, and promote equity in health care? Those are the key questions, Israni said.

But there is no central regulatory agency overseeing the boom in AI, and as a World Health Organization report points out, the laws and policies around the use of AI for health “are fragmented and limited.”

Israni thinks “the real question becomes not what should be the regulation, but what should be the values underlying those regulations,” and politicians already have their sights set on the topic. The Biden administration has said addressing the effects and future of AI is a “top priority.”

Lloyd B. Minor, dean of Stanford University School of Medicine, speaks during a ceremony to officially open Stanford Health Care’s new outpatient clinic in Emeryville, Calif., on Thursday, March 2, 2017. The 90,000-square-foot facility will provide both primary and specialty care in the East Bay. (Anda Chu/Bay Area News Group)

Systems that detect and prevent errors — checking for a misplaced decimal point in a dosage of medication, or automatically flagging possible adverse drug interactions — are “clearly making health and health care safer,” Minor said.
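To make the idea concrete, here is a minimal sketch of what such a background safety check might look like. Every drug name, dose range, and interaction pair below is a hypothetical placeholder, not clinical data; real systems draw on curated pharmacology databases.

```python
# Illustrative sketch of an automated medication-safety check of the kind
# described above. Drug names, dose ranges, and interaction pairs are
# invented examples, not clinical guidance.

DOSE_RANGES_MG = {
    "warfarin": (1.0, 10.0),    # plausible adult daily range (invented)
    "aspirin": (75.0, 650.0),
}

INTERACTING_PAIRS = {
    frozenset({"warfarin", "aspirin"}),  # example interacting pair
}

def check_order(drug, dose_mg, current_meds):
    """Return a list of warnings for a new medication order."""
    warnings = []
    if drug in DOSE_RANGES_MG:
        low, high = DOSE_RANGES_MG[drug]
        if not low <= dose_mg <= high:
            # Catches e.g. a misplaced decimal: 5.0 mg entered as 50.0 mg.
            warnings.append(
                f"{drug}: dose {dose_mg} mg outside usual range {low}-{high} mg"
            )
    for other in current_meds:
        if frozenset({drug, other}) in INTERACTING_PAIRS:
            warnings.append(f"possible interaction: {drug} + {other}")
    return warnings

print(check_order("warfarin", 50.0, ["aspirin"]))
# ['warfarin: dose 50.0 mg outside usual range 1.0-10.0 mg',
#  'possible interaction: warfarin + aspirin']
```

Simple as it looks, this is the basic pattern behind the background checks Minor describes: every order is screened against reference ranges and interaction tables before it reaches the patient.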

Other AI tools are already being used in local hospitals, with promising results.

“Our team developed an algorithm in order to identify patients who were at high risk for clinical deterioration while they were hospitalized,” said Dr. Vincent Liu, a senior research scientist at the Kaiser Permanente Northern California Division of Research. The system could prevent over 500 deaths at Kaiser’s 21 hospitals in Northern California each year, Liu and his colleagues estimated in a paper published in 2020 in the New England Journal of Medicine.
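Kaiser’s published model is far more sophisticated, but the general shape of such an early-warning score is easy to sketch: a statistical model maps vital signs and lab values to a deterioration probability, and scores above a threshold trigger a clinician alert. The sketch below is a hypothetical illustration of that concept, not Kaiser Permanente’s actual system; every feature, weight, and threshold is invented.

```python
import math

# Hypothetical sketch of an early-warning deterioration score: a logistic
# model over a few vital-sign and lab features. Weights and threshold are
# invented; a real system is trained and validated on historical records.

WEIGHTS = {
    "heart_rate": 0.03,    # per beat/min
    "resp_rate": 0.10,     # per breath/min
    "systolic_bp": -0.02,  # lower blood pressure -> higher risk
    "lactate": 0.50,       # per mmol/L
}
BIAS = -6.0
ALERT_THRESHOLD = 0.15  # alert if predicted deterioration risk exceeds 15%

def deterioration_risk(vitals):
    """Return the model's predicted probability of clinical deterioration."""
    z = BIAS + sum(WEIGHTS[k] * vitals[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def should_alert(vitals):
    return deterioration_risk(vitals) > ALERT_THRESHOLD

patient = {"heart_rate": 118, "resp_rate": 28, "systolic_bp": 92, "lactate": 3.1}
print(f"risk={deterioration_risk(patient):.2f}, alert={should_alert(patient)}")
# risk=0.51, alert=True
```

In practice, the hard part is not the arithmetic but what happens next: Kaiser’s system pairs the score with a workflow in which nurses review alerts and escalate care, which is what the published evaluation actually measured.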

Liu said the risk score they developed is “the gold standard of early warning systems, and one of the few cases in which an AI or machine learning tool has rigorously shown that patients benefit.”

For all his embrace of new technologies in the service of better health care, Liu also has reservations.

“I’m not an AI-phobe,” he said, “but … I have some pretty broad concerns, as well.” Those mainly focus on privacy and security, data quality and who is represented in the data, he said.

Many experts and reports have warned that bias and disparity can be reflected and amplified when algorithms are deployed without careful consideration.

For example, a 2019 paper published in Science found that an algorithm commonly used to identify the sickest patients in hospitals underestimated the care needs of Black patients even when they were sicker than White patients. The algorithm used past health care costs as a proxy for medical need, and because less money has historically been spent on Black patients with the same conditions, equally sick Black patients received lower risk scores.
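A toy illustration of that proxy-label failure, with invented numbers:

```python
# Toy illustration of the proxy-label problem the Science study identified:
# if an algorithm scores patients by predicted health care *costs* as a
# stand-in for health *needs*, a group that receives less care for the same
# illness gets systematically lower scores. All numbers are invented.

patients = [
    # (group, true_illness_burden, annual_spending_usd)
    ("A", 8, 12000),  # group A: spending tracks illness closely
    ("A", 4, 6000),
    ("B", 8, 5000),   # group B: equally sick, but less is spent on their care
    ("B", 4, 3500),
]

# A cost-based "risk score" (here, just spending itself) ranks group B's
# sickest patient below even group A's moderately ill patient.
for group, illness, spend in sorted(patients, key=lambda p: p[2], reverse=True):
    print(f"group {group}: illness={illness}, cost-based score={spend}")
```

The fix studied in the paper was equally simple in principle: change the label the algorithm predicts from cost to a more direct measure of health, and the gap between equally sick patients shrinks.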
