By Dr. Daniel Smith, WCI Columnist
Unless you've been living under a rock for the past year or so, you've seen the news rife with mentions of Artificial Intelligence (AI) and how iterative improvements in machine learning will disrupt whole industries. Publicly available tools like OpenAI's ChatGPT can generate cogent, comprehensible, and articulate answers to questions posed in anything from plain English to technical jargon. For those of us less familiar with programs like ChatGPT, Bard, and DALL-E, they are AI software programs designed to interact with users and produce content much as a human would.
Trained on large amounts of data, history, news, and even scientific publications fed to it by its programmers, an AI can synthesize information and predict what a "correct" response would be. These systems can be broad and sweeping like ChatGPT, conversing like a (somewhat stilted) human, or very narrow in scope, spitting out amalgamations of technical data, a la Lost in Space.
Much like people, AI can only filter and synthesize information based on what it has been taught. Naturally, data-driven professions have felt a creeping worry that their jobs might be supplanted by a tireless machine that needs no pay and isn't subject to the frailties of human health. Given that the medical profession prides itself on data-driven care and best practices distilled from large amounts of empirical and experimental data, the natural next question is, "Can a machine replace me?" To wit, I have a radiologist acquaintance who is taking as much volume as he can right now because "when they replace the first one of us, it's over."
Testing the Waters
When I was introduced to ChatGPT in person, my first inclination was to ask it Google-like queries: "What's the square root of 693,427? Who was the fourth Roman Emperor? Why aren't Kraft American Singles considered real cheese?" Realizing that the point was not so much to quiz it on facts, I started asking philosophical questions like, "What would you recommend to a man who just lost his job? Which Baldwin is the better actor? How many licks does it take to get to the Tootsie Roll center of a Tootsie Pop?"
ChatGPT sometimes provides answers that are unintentionally humorous and idiosyncratic, kind of like that one person in every friend group (P.S. If you don't know who that is, it's you). After getting a few laughs (apparently engineering students from Purdue studied and published the answer to the Tootsie Pop question, which is 364), I asked, "What is the diagnosis for a high school football player who sustained a hit to the outside of the knee? He felt a pop and had a large amount of swelling above the kneecap, and now he feels a buckling and shifting sensation when he walks."
I tried to phrase the question the way a patient might ask the internet, and ChatGPT's answer is copied verbatim below.
“I am not a medical professional, but the symptoms you described could potentially be indicative of a serious knee injury, such as an anterior cruciate ligament (ACL) tear. However, a definitive diagnosis can only be made by a qualified healthcare professional after a thorough examination, including imaging tests like an MRI.
The pop sensation, significant swelling, and the feeling of buckling and shifting in the knee are commonly associated with ACL injuries. Other possible injuries that could cause similar symptoms include meniscus tears, collateral ligament injuries, or damage to other structures within the knee joint.”
It’s pretty spot on; I described a reasonable mechanism and presentation for an ACL tear. I tried to make ChatGPT give me a percentage of certainty, but the algorithm is designed specifically not to do that and reiterated the need to be seen and examined. While I appreciate the deferral to qualified medical professionals, the question that needs to be asked is whether this AI could passably do my job as a non-surgical sports doctor.
More information here:
I Asked the AI Bot That’s Taking the Internet by Storm for Financial Advice — Here’s How It Went
Is Anybody Else Getting Nervous About an AI Bubble in the Stock Market?
Current Usage
Currently, AI is being developed at all levels—from providers to individual patients, insurance companies, telehealth practices, and hospital groups. There are articles on AI in the NEJM, the Future Healthcare Journal from the Royal College of Physicians, the Journal of Family Medicine and Primary Care, and others. While the current opinion is that AI is good for refining information and generating a list of possibilities, diagnosis and treatment are murkier waters.
For example, Docus.AI is a platform that combines AI with physician oversight. For individuals, it will let you upload labs, imaging, symptoms, health history, etc., and it will generate possibilities based on current symptoms or tell you what screening may be indicated. For docs, it basically functions as a targeted differential generator and suggests treatments, further questions to ask, and additional workup.
Our current EMR, ironically monikered "Modernizing Medicine," is developing an AI scribe tool to help with things like the History of Present Illness, and it may eventually find its way into transcribing the clinical exam. Currently, it's being fed data (with patient consent) to learn how providers word their histories and exams so it can suggest the next things you're likely to chart. This is much like the predictive text in Google's G Suite tools, which take your words and phrases as input and predict the next several.
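To make that idea concrete, here is a minimal sketch of how a scribe-style suggestion could be generated with a general-purpose model through the OpenAI Python library. This is purely illustrative and is not Modernizing Medicine's actual tool; the model name, prompt wording, and sample note text are all my own assumptions.

```python
# Hypothetical illustration only -- not Modernizing Medicine's actual scribe tool.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# A partially charted HPI; the model is asked to suggest the next sentence or two,
# much like predictive text suggests your next few words.
partial_hpi = (
    "17-year-old male high school football player presents with right knee pain "
    "after a hit to the outside of the knee during practice. He reports an audible "
    "pop and immediate swelling above the kneecap. "
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat-capable model would do
    messages=[
        {
            "role": "system",
            "content": (
                "You are a documentation assistant. Suggest the next one or two "
                "sentences of this History of Present Illness. Do not invent exam "
                "findings; only continue the history in the provider's style."
            ),
        },
        {"role": "user", "content": partial_hpi},
    ],
    max_tokens=80,
)

# The suggested continuation, for the provider to accept, edit, or discard.
print(response.choices[0].message.content)
```

In a real product, the suggestions would presumably be tuned on how each provider actually words histories and exams, as described above, rather than driven by a generic prompt like this one.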
There are also AI "solutions" for employers and insurance providers. You can bet your bottom dollar that insurers are going to feed their data into custom AIs and use them to generate questions for doctors about the care they provide to patients.
“Dr. Smith, we see that on patient X you ordered an MRI of the knee after dictating these five clinical factors. In 72% of your previous patients, you began with physical therapy prior to ordering this imaging. Before we authorize this imaging, please send your clinical decision-making to our Combined, Reconciled Analytics of Providers (CRAP) tool.”
Limitations of AI
As it stands, AI can't introspect. It will simply generate the outcome or answer it deems most correct given the information provided. I presented ChatGPT with several clinical scenarios containing fairly sparse clinical history (and no exam), and it generated the most common causes of pain or problems in those areas. Tellingly, it did not ask for more information.
It can also be intentionally misled. At heart, every AI is a software program cycling through the information a user gives it and reconciling that against the information it has internalized. Someone who knows the textbook presentation of a condition, say a pulmonary embolism with its classic history and exam findings, could deliberately input the "right" symptoms (or simply misreport their own) and lead the AI to an incorrect conclusion.
I asked ChatGPT about its own limitations, and it appeared to corroborate my suspicions.
AI is like any other technology in that it creates job opportunities in some regions or sectors and eliminates them in others. Will some specialties lose out? Assuredly. I can see specialties like pathology becoming more niche, with a future of more time-sensitive intraoperative biopsies and fewer of the cases in which time is less of the essence. I can see something similar happening in radiology, where diagnostic radiologists lose out but interventional sees more volume.
I doubt anyone with a hands-on career will lose their job in the near future—good news for patient care techs, phlebotomists, surgeons, and physical therapists. Medical billing and coding might see a decrement in job numbers given that an intelligent AI can be taught to search out and denote phrasing and information required by insurers to bill a certain code. AI may also suggest changing verbiage to better suit a certain level of billing or CPT code as well as warn providers that an order, like advanced imaging, may be denied based on what is documented in the note.
I’m in a fairly safe specialty. Patients still struggle to put feelings and sensations into words, and given that a physical exam is an integral part of my interaction with my patients, I don’t see that going away. Will I hate it when some AI-supercharged insurer denies my request for an MRI based on some excerpt it’s read in the Journal of Applied Forces in Human Systems out of Bangladesh? Absolutely. Will it be any more malevolent than the current cadre of claim-denying bean counters employed in droves by payors? Hopefully less, much less.
Crystal Ball Prediction
Knowing how much Dr. Jim Dahle likes crystal ball statements, I'll submit these predictions for reflection in a few years' time. First, I anticipate that EMRs will incorporate AI as a basic integration into their software. This is not out of any desire to make physicians more efficient or patient care more accurate but because EMRs are glorified billing software, and the companies behind them are remunerated as a function of revenue generated or codes billed, either directly (like Athenahealth) or indirectly via salespeople touting "better" billing.
Second, I anticipate that payors will ask AI about the most expensive and “low yield” interventions and imaging studies that physicians order, and they will apply pressure at even the practice level to lower costs. How else will Andrew Witty make another $23.5 million in 2024?
Last, I do anticipate that physicians' diagnoses will become at least somewhat more accurate. The unfortunate siloing of physicians into ever more niche practices means that our depth outside our subspecialty grows shallow without regular, intentional education. To wit, my subspecialty is sports medicine by way of family medicine, yet I have come to see shoulder pain almost exclusively as musculoskeletal, radicular, or vascular. I can't say I've actively entertained referred gallbladder pain in the differential for some time unless someone mentioned stomach pain as well.
AI could be that bit of perspective needed to keep our differentials broad and our egos in check. I’m hoping that at least there will be fewer errors for malpractice attorneys to bite on!
Are you worried about AI and what will become of your job? Or do you think the potential greatness of the technology overrides those concerns? How else can AI be helpful (or harmful) to your career? Comment below!