AI for Rheumatology: Enhancing Imaging Efficiency and Insights
Announcer:
You’re listening to Living Rheum on ReachMD. On this episode, Dr. Amanda Nelson will discuss the role of artificial intelligence in rheumatology imaging. Dr. Nelson is a Professor of Medicine in the Division of Rheumatology, Allergy, and Immunology and an Adjunct Professor of Epidemiology at the University of North Carolina at Chapel Hill. She presented this topic at the 2024 American College of Rheumatology Convergence. Here’s Dr. Nelson now.
Dr. Nelson:
So artificial intelligence is a very exciting new frontier for rheumatology. We get a lot of different imaging studies in our patients, whether that’s ultrasound, MRI, x-ray, or CT, and one of the real benefits of AI is its ability to reduce time and increase efficiency. The scan might take 30 minutes, but it might take 3 hours for a human to sit and read through every single image and make all of those interpretations, and those readings are usually directed toward clinical outcomes: this is most likely consistent with early rheumatoid arthritis, for example. But they’re not going to give you a research-based score, like a Sharp score for RA or a Kellgren-Lawrence score for osteoarthritis, which might be more useful in research.
So if we think about reducing the time, maybe generating some of these kinds of automated scores, maybe doing specific measurements (how deep is the effusion? how long is the tear? anything like that), AI can be a really good tool for those kinds of things. It’s also interesting that it can access other types of imaging. In rheumatology we may have a photograph of a rash that might be lupus or might be dermatomyositis, and those photographs are often within our electronic health system. So could we pull those out and use AI to categorize those rashes or to look at a rash over time? The same goes for pathology slides: not traditional imaging, but certainly if there’s a picture in the EHR, could we pull up a skin biopsy and the clinical image of the rash and compare those, or put those into a clinical registry? Could we grade lupus nephritis, which takes a lot of time and expertise from a very expert nephrologist?
So AI may help us over time, but it’s certainly not going to replace rheumatologists. There’s so much that goes into a diagnosis of a rheumatic disease, and there are so many different aspects, with a lot of it being the patient and interacting with the patient, not just interpreting the labs and the imaging. But if you had a sort of prescreened AI table or a little output that said, here’s what the x-ray showed, here’s what the labs showed, I’m going to summarize all of this and give you some of these kinds of research-based measurements, that might improve a rheumatologist’s efficiency in getting through all of that without having to spend hours and hours reviewing the chart and digging up that information. Again, these are tasks like measurement and categorization that don’t necessarily require expert input at all times and could be streamlined a bit. And obviously, with all of the advancements around notes and dealing with inbox messages and those kinds of things, anything that’s going to reduce the burden on rheumatologists while allowing them to do what they’re really trained to do, which is actually treat the patients and use their time for that rather than for some of these other tasks, I think is really the goal of it. It is not there to do the diagnosis and treat the patient for us. It’s not going to achieve that.
I think that the ability to really integrate AI is a challenge, right? There are some algorithms already that are FDA approved to do some of these tasks for imaging, but they have to be approved not just by the FDA but also by each institution and each department, and everyone has to agree that they’re going to be beneficial and low risk, and we’re just not quite there yet. A lot of the issues really revolve around how the algorithms are developed, whether they’re developed in the population of interest, and whether there are biases in who is included and who is excluded from those datasets. So if we think about even just healthcare access, the people who are in the health system, getting all of their scheduled appointments, and getting all of the imaging tests that might be used to train an AI algorithm may be very different from those who are perhaps experiencing fractured care and going to different doctors or hospital systems for different things, and those data are going to be thrown out when we’re training the AI algorithm because it really needs complete data to go through and do all of the things that it does.
And that brings us straight to the black box, right? What is it doing? How is it doing that? We need to be able to explain that not only to clinicians who might use the tool but also to patients who might hear, “I used an algorithm to help me make a decision about your care.” Well, what does that mean? What did the algorithm do? What did it tell you? So on the developer end, the clinician end, and also the patient education end, how do we bring these into practice in a way that reduces those biases and makes sure everyone feels comfortable with them, while still being able to take advantage of some of the benefits they can offer, without either dismissing it altogether or buying it wholesale and not really recognizing a lot of these limitations?
Announcer:
That was Dr. Amanda Nelson discussing artificial intelligence and rheumatology imaging, which she spoke about at the 2024 American College of Rheumatology Convergence. To access this and other episodes in our series, visit Living Rheum on ReachMD.com, where you can Be Part of the Knowledge. Thanks for listening!