Will ChatGPT Wreck Or Rekindle The Doctor-Patient Relationship?


Medicine is like no other profession—at least, no other legal profession.

In quiet and brightly lit rooms, doctors ask patients to disrobe, confess their secrets, and grant permission to be stuck with needles and knives. These are crimes in any other context. In medicine, they lay the foundation for deep interpersonal trust.

Medicine, a profession forged by intimate human relationships, now faces an unprecedented challenge: generative AI. Patients are growing worried about the emergence of artificial intelligence tools like ChatGPT in the medical field.

According to a recent Pew Research poll, the majority of patients fear:

  • That their healthcare provider will rely too much on AI to diagnose disease and recommend treatments (60%).
  • That AI will cause them to lose their personal connection with their healthcare provider (57%).

Earlier this month, I examined some of the technological fears patients expressed about AI, including issues of security, privacy and bias. This follow-up article explores ethical questions that will arise as doctors become increasingly reliant on new AI technologies.

Man vs. machine vs. ethics

Researchers can objectively measure many areas of clinical performance. They can, for example, test how accurately radiologists interpret mammograms and chest X-rays for pneumonia. They can then compare those results against computer applications performing the same tasks.

These scientific measures are important, but the practice of medicine isn’t always quantifiable. Patients frequently go to healthcare professionals with problems that have no objective or “right” answer.

The moral principles that govern individual behavior and the ethics that shape the decisions of doctors are (and always will be) hotly debated. In fact, the American Medical Association maintains a vast code of ethics, which spans 11 chapters and contains 161 opinions on the proper conduct of modern practitioners.

In those pages—and in the profession itself—reasonable people disagree on a wide range of medical-ethical topics: Should physician-assisted death be legal? How best to allocate organs to those in need of transplants? What are the indications for the use of unproven treatments for life-threatening diseases?

These questions have no clear-cut right or wrong answer. Patients rely on their doctors to help them navigate these ethical uncertainties. Now, they fear that generative AI will erode and undermine that personal connection.

This fear is understandable. However, I’m optimistic that the application of generative AI will improve medical care, facilitate ethical decision making and even strengthen the bond between doctors and patients. Getting to that point, however, will require a shift in thinking.

Machine: bad. Human: good.

As people, we tolerate mistakes made by other humans but are far less forgiving of equivalent errors made by machines.

To give you an example, imagine that we could flip a switch and, suddenly, all cars in the United States become autonomous self-driving vehicles. No car on the road would have a steering wheel, gas pedal or brake pedal—no way for humans to seize control.

Now, let’s assume that in year one of this grand experiment, technological failures result in 10,000 deaths. Under these circumstances, do you think the media and most Americans would view the shift from human-driven to automated cars as a success? We can’t be sure, of course, but it’s not hard to picture the fallout. Based on the coverage of self-driving car accidents to date, we can predict the internet and our TVs would be littered with gory images of the crashes. Fear would spread. Millions of Americans would demand that humans reclaim control of every vehicle.

Lost in the uproar would be an important fact: Each year, roughly 40,000 to 50,000 Americans die in their vehicles as a result of human error.

These deaths are well known to organizations like Mothers Against Drunk Driving (MADD) and they’re mourned by grieving families. But unless the victim is someone we know, the human-generated carnage goes largely ignored. We blame crashes on reckless individuals—drunk drivers and distracted teens—and assume we’d never harm a person ourselves. But, when we read about a driverless car striking a pedestrian, we conclude the technology is fatally flawed.

This same pro-human, anti-tech bias is likely to surface as generative AI plays a bigger role in medicine.

‘But I love my doctor’

When pollsters ask patients who they trust, nurses (71%), healthcare workers they know (70%) and doctors (67%) top the list. By contrast, only 34% to 44% of the U.S. public expresses confidence in the medical system overall.

As humans, we place our confidence in people, not systems. Unfortunately, the data contradict our trust in medical professionals. Nearly 1 in 4 hospitalized patients experience a medical error and tens of thousands die unnecessarily each year as a result. Similarly, omissions in prevention and the suboptimal treatment of chronic disease result in hundreds of thousands of avoidable heart attacks, strokes and cancer deaths in the U.S. each year.

Despite objective evidence that clinicians are far from perfect, we tolerate human-generated medical errors in the same way we accept human-generated traffic deaths. We assume our own doctors wouldn’t commit the same errors as others (and we do so without any supporting data or evidence). But we fear that the introduction of generative AI will harm us.

Getting over these fears will take both a shift in thinking and time. But the future of medicine will be much brighter. Here’s what we have to look forward to:

1. Patching cracks in the doctor-patient relationship

Whereas doctors of the past were well-known and widely respected members of their close-knit communities, the relationship between physicians and patients today is intermittent and far less personal. That’s problematic, given the medical problems patients face today.

More than 60% of adults live with chronic diseases, which impact their health daily. And, more than ever, patients would benefit from continuous monitoring and faster medical intervention. But because of the overwhelming demands placed on doctors, patient care is discontinuous—with return visits scheduled every four to six months.

Generative AI offers a solution. ChatGPT will be able to serve as a medical assistant—not in the physician’s office but in the patient’s home, helping to make care more continuous. AI technologies, combined with home-monitoring devices, will provide daily oversight, alerting patients and doctors when readings fall outside the clinical plan. AI will help patients get the care they need quickly and conveniently.

2. More consistent diagnoses and treatments

Studies indicate there are huge gaps between the quality of care patients require and the care they most often get in the United States.

As an example, half of U.S. adults live with high blood pressure, also known as hypertension, which puts them at heightened risk for heart attack and stroke. For the overwhelming majority of these patients, the problem can be successfully controlled through affordable, available medications.

And yet, nationwide, only 60% of patients with hypertension have their blood pressure appropriately controlled. Compare that with some healthcare groups where more than 80% of patients have their elevated blood pressure controlled. The difference? Successful health groups incorporate advanced IT systems and use evidence-based approaches to improve screening, monitoring and medical treatment. So, rather than fearing the future of generative AI, Americans can look forward to fewer strokes, heart attacks and failed kidneys.

Already, tools like Glass AI 2.0 are helping doctors create differential diagnoses and clinical plans in seconds, increasing the time they can spend with patients. Moreover, as AI technology becomes more adept at voice-enabled documentation, it will further free up clinicians—so that they can spend the majority of their time focusing on the patient (rather than the computer) in front of them.

Finally, when your physician is uncertain about a diagnosis or treatment, ChatGPT will be a reliable and rapid source of medical expertise. Medical knowledge doubles every 73 days, such that no doctor can keep up with the pace of change. But generative AI can. When patients have a rare and unusual problem, physicians will be able to immediately access the most recent journal article or case report on the disease.

3. Individualized care with more ethical considerations

When it comes to helping people make end-of-life choices or decide whether to undergo a risky procedure, physicians frequently do a poor job guiding them. Many clinicians aren’t comfortable discussing palliative care or acknowledging the futility of additional treatment. For them, telling a patient there’s nothing they can do feels like failure.

Research shows that patients want the truth—and nothing but the truth—about their illnesses and prognoses. And studies show people actually live longer when they elect palliative care, rather than futile end-of-life treatments.

Generative AI won’t replace physicians in counseling patients through painful decisions, but it will help them to consider a broader range of possibilities, expand their analytic frameworks and reach better conclusions than if they’d worked alone.

Defusing the existential threat

In a frequently cited study, AI researchers and other experts were asked “What probability do you put on human inability to control future advanced A.I. systems causing human extinction?” The median answer was 10%.

Disruptive technologies are always met with grave concern, and ChatGPT is no exception. Generative AI will impact the doctor-patient relationship and replace humans for many tasks. But it will also give doctors more time with patients and, as a result, reduce physician burnout and increase patient satisfaction. And future generations of ChatGPT will improve quality outcomes, make healthcare access more convenient for patients and lower medical costs. Yes, this technology has created risk, but it has also made the promise of better health exponentially greater for everyone.
