The radiology department of an average healthcare facility is likely searching for improvements. Even before COVID-19, 45% of radiologists experienced burnout at some point in their careers. They felt overwhelmed by the administrative burden and the large number of images they had to check manually, which could reach up to a hundred scans per day. Additionally, radiology practice lacks non-invasive methods for tissue classification. Invasive procedures take time and cause stress to patients.
Luckily, AI healthcare solutions are coming to the rescue. The global AI radiology market was valued at $21.5 million in 2018, and it is forecast to reach $181.1 million in 2025, growing at a staggering CAGR of 35.9%.
However, despite the numerous advantages of AI in radiology, there are still challenges preventing its wide deployment. How do you properly train machine learning models to aid radiology? Where does AI stand when it comes to ethics and regulations? And how do you make a strong business case for investing in artificial intelligence in radiology?
Computer-aided detection (CAD) was the first application of radiology AI. CAD has a rigid scheme of recognition and can only spot defects present in the training dataset. It can’t learn autonomously, and every new skill needs to be hardcoded.
Since that time, AI has evolved tremendously and can do more to help radiologists. Some medical digital imaging platforms enable users to manage different types of images, manipulate them, connect to third-party health systems, and more.
So, what are the advantages AI brings to radiology?
1. Classifying brain tumors
Brain cancer, along with other types of nervous system cancers, is the 10th leading cause of death in the US.
Conventionally, prior to the operation, patients suffering from a brain tumor are left in the dark along with their surgeons. Neither knows what kind of tumor is present or what treatment the patient will have to undergo.
The first step is to remove as much infected brain mass as possible. A tumor sample is obtained from this mass and analyzed to classify the tumor. This intraoperative pathology analysis lasts around 40 minutes as the pathologist processes and stains the sample. Meanwhile, the surgeon waits. After receiving the results, they must quickly decide on the course of action.
Introducing AI in radiology to this mix reduces the tumor classification time to about three minutes and can comfortably be done in the operating room. According to Todd Hollon, Chief Neurological Resident at Michigan Medicine, “It’s so quick that we can image many specimens from right by the patient’s bedside and better judge how successful we’ve been at removing the tumor.”
As another example, a recent study conducted in the UK discovered a non-invasive way of classifying brain tumors in children using machine learning in radiology and diffusion-weighted imaging techniques. This approach uses the diffusion of water molecules to obtain contrast in MRI scans. Afterward, the apparent diffusion coefficient (ADC) map is extracted and fed to machine learning algorithms.
This technique can distinguish three main brain tumor types in the posterior fossa part of the brain. Such tumors are the most common cancer-related cause of death among children. If surgeons know which variant the patient has in advance, they can prepare a more efficient treatment plan.
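The ADC computation at the heart of this approach can be sketched in a few lines of NumPy. This is a simplified illustration assuming a single b-value acquisition; the study's actual pipeline and classifier are not reproduced here:

```python
import numpy as np

def adc_map(s0, sb, b=1000.0):
    """Apparent diffusion coefficient from a baseline image s0 (b = 0)
    and a diffusion-weighted image sb acquired at b-value `b` (s/mm^2):
    ADC = -ln(S_b / S_0) / b."""
    s0 = np.clip(np.asarray(s0, dtype=float), 1e-9, None)  # avoid divide-by-zero
    sb = np.clip(np.asarray(sb, dtype=float), 1e-9, None)
    return -np.log(sb / s0) / b

def mean_region_adc(adc, mask):
    """Average ADC over a segmented region (boolean mask) -- a simple
    scalar feature that could be fed to a downstream classifier."""
    return float(adc[mask].mean())
```

In the study, the full ADC map (not just a region average) is fed to the machine learning algorithms; the mean above only stands in for that feature-extraction step.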
2. Detecting hidden fractures
The FDA started clearing AI algorithms for clinical decision support in 2018. Imagen’s OsteoDetect software was among the first the agency approved. This program uses AI to detect distal radius fractures in wrist scans. The FDA granted its clearance after Imagen submitted a study of the software’s performance on 1,000 wrist images. Confidence in OsteoDetect increased after 24 healthcare providers using the tool confirmed that it helped them detect fractures.
Another use of AI in radiology is spotting hip fractures. This type of injury is common in elderly patients. Traditionally, radiologists use X-ray to detect this type of injury. However, such fractures are hard to spot as they can hide under soft tissues.
A study published in the European Journal of Radiology demonstrates the potential of employing Deep Convolutional Neural Network (DCNN) to help radiologists spot fractures. DCNN can identify defects in MRI and CT scans that escape the human eye. Researchers conducted an experiment where human radiologists attempted to identify hip fractures from X-rays while AI was reading CT and MRI scans of the same hips. As a result, the radiologists could spot 83% of the fractures. DCNN’s accuracy reached 91%.
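The operation a DCNN stacks many times over is a learned 2-D convolution followed by a nonlinearity. The study's actual network is not described in this article, so the snippet below is only a toy NumPy sketch of that primitive, not the researchers' model (deep learning's "convolution" is technically cross-correlation, which is what this computes):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: the basic building block of a DCNN.
    Slides the kernel over the image and sums the elementwise products."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit, the nonlinearity between convolutional layers."""
    return np.maximum(x, 0.0)
```

A trained network learns the kernel values from labeled scans; stacked layers of this operation are what let the model pick out fracture patterns that escape the human eye.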
3. Recognizing breast cancer
Breast cancer is the second leading cause of death among women in the US.
Despite the severity of this disease, doctors miss up to 40% of breast lesions during routine screenings. At the same time, only around 10% of women with suspicious mammograms turn out to have cancer. This results in frustration, depression, and even invasive procedures that healthy women are forced to undergo when wrongly diagnosed with cancer.
Radiology AI simulation tools can improve this situation. A study conducted by Korean academic hospitals used an AI-based tool developed by Lunit to aid radiologists in mammography screenings. The study found that radiologists’ accuracy increased from 75.3% to 84.8% when they used AI. The algorithm was particularly good at detecting early-stage invasive cancers.
Some women with developing breast cancer don’t experience any symptoms. That is why women are generally advised to undergo regular mammogram screenings. However, due to the pandemic, many couldn’t do their routine checkups. According to Dr. Lehman, a radiologist at the Massachusetts General Hospital, about 20,000 women skipped their screenings during the pandemic. On average, five out of 1,000 screened women exhibit early signs of breast cancer. This equates to 100 undetected cancer cases.
To remedy the situation, Dr. Lehman and her colleagues used radiology AI to predict which patients are likely to develop cancer. The algorithm analyzed previous mammogram scans available at the hospital. It combined the scans with relevant patient information, such as previous surgeries and hormone-related factors. The women whom the algorithm flagged as high risk were persuaded to come for routine screening. The results showed many of them had early signs of cancer.
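A minimal sketch of that kind of risk model is a logistic score over mixed imaging-derived and clinical features. The weights, features, and threshold below are entirely hypothetical; the hospital's actual model is not described in enough detail to reproduce:

```python
import math

def risk_score(features, weights, bias=0.0):
    """Logistic risk score in (0, 1) from a feature vector combining
    imaging-derived values with clinical factors (e.g. prior surgeries,
    hormone-related history). Higher scores mean higher estimated risk."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def flag_high_risk(patients, weights, bias=0.0, threshold=0.5):
    """Return IDs of patients whose score meets a call-back threshold.
    `patients` is a list of (patient_id, feature_vector) pairs."""
    return [pid for pid, feats in patients
            if risk_score(feats, weights, bias) >= threshold]
```

In practice, the weights would be learned from historical mammograms and outcomes, and the flagged patients would be the ones invited back for screening.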
4. Detecting neurological abnormalities
Artificial intelligence in radiology has the potential to diagnose neurodegenerative disorders such as Alzheimer’s, Parkinson’s, and amyotrophic lateral sclerosis (ALS) by tracking retinal movements. This analysis takes around 10 seconds.
Another approach to spotting neurological abnormalities is through speech analysis, since Alzheimer’s changes patients’ language patterns. For instance, people with this disorder tend to replace nouns with pronouns. Researchers at Stevens Institute of Technology developed an AI tool based on convolutional neural networks and trained it on text composed by both healthy and affected individuals. The tool recognized early signs of Alzheimer’s in elderly patients solely based on their speech patterns with 95% accuracy.
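The linguistic cue mentioned above, replacing nouns with pronouns, can be turned into a simple numeric feature. The sketch below is a crude bag-of-words proxy for illustration only, not the convolutional model the Stevens researchers built:

```python
# Small illustrative pronoun list; a real system would use a full POS tagger.
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they", "me", "him",
            "her", "us", "them", "this", "that", "these", "those"}

def pronoun_rate(text):
    """Fraction of tokens that are pronouns -- a crude proxy for the
    noun-to-pronoun drift associated with Alzheimer's speech."""
    tokens = [t.strip(".,!?;:").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    return sum(t in PRONOUNS for t in tokens) / len(tokens)
```

A feature like this, computed over transcripts, is the kind of signal a classifier could combine with many others to separate healthy from affected speech.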
Such software helps doctors identify which patients with mild cognitive impairment will go on to develop degenerative diseases and how severely their cognitive and motor skills will decline over time. This gives the endangered patients an opportunity to arrange for care facilities while they still can.
5. Offering a second opinion
AI algorithms can run in the background, offering a second opinion when radiologists disagree on a problematic medical image.
This practice reduces the stress of decision-making and helps radiologists learn to work side by side with AI and appreciate its benefits.
Mount Sinai Health System, New York City, used AI for reading radiology results alongside the human specialist as a “second opinion” option for detecting COVID-19 in CT scans. They claim to be the first institution to combine AI and medical imaging to detect the novel coronavirus. Researchers trained the AI algorithm on 900 scans. And even though CT scans are not the primary way of detecting COVID-19, the tool can pick up on mild signs of the disease that human eyes can’t notice. This AI model provides a second opinion when the CT scan shows negative results or nonspecific findings that radiologists can’t classify.
After viewing the exciting AI applications in radiology, one could assume that the sky's the limit for this technology. It is indeed very promising, but there are difficulties in applying machine learning in radiology.
Availability of training datasets
To function properly, machine learning algorithms in radiology need to be trained on large amounts of medical images. The more, the better. But in the medical field, it is difficult to gain access to such datasets. For the sake of comparison, a typical non-medical imaging dataset can contain up to 100,000,000 images, while medical imaging sets rarely exceed about 1,000 images.
Another problem is producing labeled datasets for supervised training. Medical image annotation is a very time-consuming and labor-intensive process. Radiologists and other medical experts must do this task manually, assigning appropriate labels for the given AI application. There is potential for automatically extracting structured labels from radiology reports using natural language processing. But even then, radiologists will most likely need to review the results.
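A toy version of that NLP extraction step is keyword matching with naive negation handling. The findings list and patterns below are illustrative only; production systems use far more robust clinical NLP:

```python
import re

# Hypothetical findings vocabulary; a real labeler would cover hundreds.
FINDINGS = {"fracture": r"\bfractures?\b", "pneumonia": r"\bpneumonia\b"}
# Negation cue followed by anything up to the end of the sentence.
NEGATIONS = r"\b(no|without|negative for|denies)\b[^.]*"

def extract_labels(report):
    """Weak labels from a free-text report: a finding counts as present
    if its pattern matches and is not preceded by a negation cue in the
    same sentence."""
    text = report.lower()
    labels = {}
    for name, pattern in FINDINGS.items():
        negated = re.search(NEGATIONS + pattern, text)
        present = re.search(pattern, text)
        labels[name] = bool(present) and not bool(negated)
    return labels
```

Labels produced this way are "weak" by design, which is exactly why radiologists would still need to review them before training.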
Opting for existing algorithms instead of developing custom ones can also be problematic. Many successful deep learning models available on the market are trained on 2D images, while CT scans and MRIs are 3D. This extra dimension poses a problem, and the algorithms need to be adjusted.
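One common workaround, shown here only as a hedged sketch, is to run the 2-D model slice by slice over the 3-D volume and aggregate the per-slice scores; replacing 2-D convolutions with true 3-D ones is the heavier alternative:

```python
import numpy as np

def slicewise_predict(volume, model2d, aggregate=np.max):
    """Apply a 2-D model to each axial slice of a 3-D volume
    (shape: slices x height x width) and aggregate the per-slice
    scores into one volume-level score. Max-aggregation flags the
    volume if any single slice looks abnormal."""
    scores = np.array([model2d(volume[z]) for z in range(volume.shape[0])])
    return float(aggregate(scores))
```

Slice-wise inference lets an off-the-shelf 2-D model process CT or MRI data, at the cost of ignoring context between adjacent slices.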
Finally, AI technology itself leaves room for doubt. Computing power has been doubling roughly every two years. However, according to Wim Naude, a business professor from the Netherlands, this established pattern is slowing down. Consequently, we may not have the necessary power and multitasking abilities to take over the broad range of tasks that an average radiologist performs. To achieve such capabilities, AI’s silicon-based transistors may have to be replaced with technology such as organic biochips, which is still in its infancy.
One of the most common remarks radiologists make is about their rough experience with medical imaging software. It takes many clicks, long waiting times, and thorough study of the manual to accomplish even a simple task. Medical programs focus on the technical aspect of performing the job, but their interfaces are counter-intuitive and not user-friendly.
One of the most significant barriers to deploying artificial intelligence in radiology is convincing decision-makers that AI is a worthy cause. This technology requires hefty investments upfront, but it will not pay back quickly. Radiologists will take time to learn how to use it. Furthermore, the examples of successfully adopting AI in clinical practices are still limited.
If you concentrate solely on the radiology department, AI tools will merely help radiologists read scans and produce reports faster, which is a matter of efficiency and does not justify such a big investment.
The solution here is to broaden your perspective and look at the overall picture. As Hugh Harvey, the Managing Director at Hardian Health, said, “Radiology is not stand alone. Almost all departments use radiology, so the return on investment may not be only in radiology. Health economics studies must be across departments and stakeholders.”
Note that reimbursement opportunities are opening up. In September 2020, the Centers for Medicare and Medicaid Services approved reimbursement for AI-augmented medical care for the first time. Reimbursement will shape AI’s impact on radiology: it is easier to invest in AI knowing that part of the cost can be recovered.
Often, researchers and practitioners don’t fully understand how AI algorithms learn and make decisions. This is known as a black-box problem.
When ML continues learning independently, it can take into account some irrelevant criteria. When the tool doesn’t explain its decision logic, radiologists can’t spot these self-added factors. For example, an algorithm might learn that implanted medical devices and scars are signs of health issues. This is a correct assumption, but the algorithm might then assume that patients lacking these marks are healthy, which is not always true. Another example comes from the Icahn School of Medicine. A team of researchers developed a deep learning algorithm to identify pneumonia in X-ray scans. They were baffled to see the software’s performance decline considerably when tested with scans from other institutions. After a lengthy investigation, they realized the program was considering how common pneumonia is at each institution as a factor in its decision. This is obviously not something the researchers wanted.
Biased training datasets also present a problem. If a particular tool is mainly trained on medical images of a specific racial profile, it will not perform as well on others. For example, software trained on white people will be less precise on people of color. Also, algorithms trained and used at one institution need to be treated with caution when transferred to another organization as the labeling style will be different.
A study by Harvard discovered that algorithms trained on CT scans can even become biased to particular CT machine manufacturers.
When radiologists don’t receive an explanation of a particular AI decision, their trust in the system will decline.
Loose ethics and regulations
There are several ethical and regulatory issues associated with the use of AI in radiology.
Machine learning algorithms are challenging to regulate because their outcome is hard to predict. A drug, for example, works roughly the same way in every patient, so its effects can be anticipated. In contrast, ML tools tend to learn on the fly and adapt their behavior.
Who is responsible?
Another issue up for debate is who carries the final responsibility if AI leads to a wrong diagnosis and the prescribed treatment causes harm. Due to AI’s black-box nature, the radiologist often can’t explain the recommendations delivered by artificial intelligence tools. So, should they follow these recommendations, no questions asked?
Permissions and credit sharing
The third hurdle is the use of patient data for AI training. There is a need to obtain and reobtain patient consent and to offer reliable, compliant data storage. Also, if you train AI algorithms on patient data and then sell the resulting product at a profit, are the patients entitled to a share?
For now, we rely on the goodwill of the AI software developers and researchers who train these tools to deliver an unbiased, reliable product that meets the appropriate standards. Going forward, healthcare facilities adopting AI will need to arrange regular audits of the product to make sure it remains useful and compliant.
Many are wondering how AI will affect radiology and whether it will take over this field and replace human physicians. The answer to that is NO. In its current capacity, AI is not powerful enough to solve all the complex clinical problems radiologists are dealing with daily. As Elad Walach, the CEO of the Tel Aviv-based startup Aidoc, puts it, “AI solutions are becoming very good at doing one thing very well. But because human biology is complex, you typically have to have humans who do more than one thing really well.”
The radiologist specialization will not go extinct, but the scope of its work will change. AI will take over routine administrative tasks, such as reporting, and will advise radiologists on decision-making. According to Curtis Langlotz, a radiologist at Stanford, “AI won’t replace radiologists, but radiologists who use AI will replace radiologists who don’t.”
To make radiologists comfortable using AI, education policymakers will need to implement some changes. It would be helpful to teach radiology students how to integrate AI into their clinical practice. This topic must be a part of their curriculum.
Another prediction for artificial intelligence in radiology is augmenting the abilities of doctors in developing countries. For example, researchers at Stanford University are building a tool that will enable physicians to take pictures of an X-ray film using their smartphones. Algorithms underlying this tool will scan the film for tuberculosis and other problems. The app’s benefit is that it works with X-ray films and doesn’t require advanced digital scanners, which are lacking in developing countries. Not to mention that hospitals in these countries might not have radiologists at all.
Artificial intelligence’s future in radiology is promising, but the collaboration is still in its infancy. John Banja, professor in the Center for Ethics at Emory University, said: “It remains anyone’s guess as to how AI applications will be affected by their integration with PACS, how liability trends or regulatory efforts will affect AI, whether reimbursement for AI will justify its use, how mergers and acquisitions will affect AI implementation, and how well AI models will accommodate ethical requirements related to informed consent, privacy, and patient access.”
If you are a decision-maker at a medical tech company and you want to develop AI-based radiology solutions, here are some steps that you can take during the fundraising, development, and support stages that will improve your chance of success.
Coming up with a strong business case is a challenge. Focus on the long-term benefits of AI to the whole clinic, not only to the radiology department.
Consult experienced radiologists on the rules you want to hardwire into your algorithms, especially if your developers don’t have a medical background.
Diversify your training data. Use medical images from different population cohorts to avoid bias.
Customize your training datasets to the location where you want to sell your software. If you are targeting a particular medical institution, gather as many details as possible. Information, such as the type of CT scanners they are using, will help you deliver more effective algorithms.
Overcome the black-box problem by offering some degree of decision explanation. For example, you can use rule extraction, a data mining technique that can interpret models generated by shallow neural networks.
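As a loose illustration of the idea behind rule extraction, the sketch below searches for the single-feature threshold rule that best mimics a black-box classifier on a set of probe inputs. Real rule-extraction methods for shallow neural networks are considerably more sophisticated; this is only a minimal global-surrogate toy:

```python
def extract_rule(black_box, probes):
    """Find the rule `feature f >= threshold t` that best reproduces the
    black-box classifier's decisions on the probe inputs. Returns the
    rule and its fidelity (agreement rate with the black box)."""
    y = [black_box(x) for x in probes]
    best = None
    n_features = len(probes[0])
    for f in range(n_features):
        for t in sorted({x[f] for x in probes}):  # candidate thresholds
            preds = [x[f] >= t for x in probes]
            acc = sum(p == label for p, label in zip(preds, y)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    acc, f, t = best
    return {"feature": f, "threshold": t, "fidelity": acc}
```

A readable rule like "flag when feature 1 exceeds 2.0, which agrees with the model 100% of the time on probes" is something a radiologist can sanity-check, unlike raw network weights.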
Work on the user experience aspect of your tool. Most radiology software available on the market is not user-friendly. If you can pull it off, your product will stand out among the competition.
Suggest organizing regular audits after clients deploy your tools. Machine learning algorithms in radiology continue to learn, and their behavior adapts accordingly. With audits, you will make sure they are still fit for the job.
Monitor updates on relevant regulations and new reimbursement options.
If you want to learn more about AI applications in radiology and how to overcome deployment challenges, feel free to contact our AI experts.
Previously published at https://itrexgroup.com/blog/artificial-intelligence-in-radiology-use-cases-predictions/