Section 1: Technical Background

Artificial Intelligence

Can machines think and act like humans? That’s the question Artificial Intelligence (AI) seeks
to answer. AI has been defined in different ways. It’s been described
as the branch of computer science focusing on creating machines that can
solve problems that would otherwise need human intelligence to tackle.
It’s been differentiated from Natural Intelligence (NI)—found
in humans and animals—as lacking in consciousness and emotionality. At
its core, AI attempts to simulate human and rational thinking and acting
in machines.

Depending on the scope and complexity of the tasks it can handle, AI has been categorized as ‘weak’ or ‘strong’. Weak AI, also known as Narrow AI, specializes in single tasks. While it can perform any one task extremely well, such AI operates under very limited conditions and cannot cross over to functions it’s not programmed to perform. Examples of weak AI are search engines and image recognition software.

Strong AI—or Artificial General Intelligence (AGI)—is what one would call true intelligence in machines on par with humans. This is the elusive goal that AI scientists are after: machines that think, communicate and behave like humans. And like humans, such AI is not limited to a specific set of tasks but can apply its ‘intelligence’ to any number of problems; when presented with a novel problem, it can learn and adapt to deal with it, just as human intelligence does.

Understanding the Technology

Machine Learning

Machine
learning endeavors to create artificial systems that can learn and
evolve autonomously from experience and exposure to data without being
restricted by rigid sets of instructions or programs. In this regard,
machine learning is an attempt to mimic intuitive human learning in
computers. Computers have access to large volumes of data on which they
train and learn automatically and improve their problem-solving ability
over time. There are minimal external constraints in terms of protocols
or instructions. The system is supposed to learn and improve by itself.

In supervised machine learning,
the training data set presented to the system is known and labeled. The
learning algorithms then enable the machine to analyze this data to
make predictions about future outputs. On the other hand, unsupervised machine learning involves
unlabeled data. The underlying algorithms here are more advanced in the
sense that they are handling uncategorized data and learning how to
explore hidden structures and patterns in it.
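
As a toy illustration of the difference, here is a minimal sketch using scikit-learn with made-up data: the supervised model is given labels to learn from, while the unsupervised model is given only the raw features and must find groupings on its own.

```python
# Minimal sketch with scikit-learn; the data is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(100, 2))            # 100 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # known labels for each sample

# Supervised: the algorithm trains on labeled data (X paired with y).
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.5, 0.5]]))         # predicted category for a new sample

# Unsupervised: the algorithm sees only X and discovers structure itself.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:10])                   # cluster assignments it inferred
```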

Algorithms

Algorithms are sets of mathematical instructions—akin to computer programs—that tell a machine what to do and how to do it, step by step. What an AI system is capable of achieving depends on the type of algorithms it’s built on. Of note here, algorithms have evolved over time from rigid sets of parameters to models with inherent flexibility and fluidity that enable computers to learn and discern on their own via a process known as machine learning. Machine learning algorithms are commonly categorized as performing classification, regression or clustering, as summarized in the table below.

Some Examples of Machine Learning Algorithms

Algorithm Type | Application | Examples
Classification (supervised learning) | To compute data category | Logistic regression; decision tree; random forest; support vector machines
Regression (supervised learning) | To forecast or predict | Linear regression; ensemble methods; neural networks
Clustering (unsupervised learning) | To group similar item clusters | K-means; neural networks

Deep Learning

Deep learning is an advanced form of machine learning and the new frontier in AI research. It’s machine learning that harnesses the immense computational power of neural networks. Neural networks—that is, interconnected webs of neurons—are what drive the human brain. Deep learning is an attempt to simulate the brain’s workings in computer systems, using similar networks called artificial neural networks.

Based on such a sophisticated design, deep learning is capable of achieving what traditional machine learning can never aspire to do. Deep learning is hierarchical and layered—hence the term ‘deep’—so it can perform computations at multiple levels at the same time. This is in contrast to traditional machine learning, which handles analyses in a linear fashion; as a result, its performance tends to plateau over time. Deep learning, on the other hand, is immensely scalable and simply grows better as it is fed more data. In fact, deep learning makes the best use of two resources that have become the most important fuels of AI growth: big data and computational power. Big data refers to the availability of, and access to, enormous volumes of data at a magnitude never seen before. Since AI systems learn and train on data, the more data available, the better learning systems become. Similarly, computing power has grown exponentially over the years. Combined, these factors provide a fertile breeding ground for artificial neural networks to thrive and deep learning to prosper.

Another
way deep learning is more sophisticated than traditional machine
learning is that it is extremely capable of unsupervised learning on
unlabeled datasets, so it requires minimal human input. Some areas where
deep learning is making its mark are natural language processing (NLP),
speech recognition and ecommerce.

Neural Networks

Neural
networks—also referred to as artificial neural networks (ANNs) or
simulated neural networks (SNNs)—provide the functional infrastructure
that permits deep learning in AI systems. In this sense, they are a
prerequisite for deep learning in machines. Neural networks are
reminiscent of the webs of neurons found in the human brain and the
countless connections between them, called synapses. Replicating that
structure allows artificial neural networks (ANNs) to simulate the
workings of the human brain. Consequently, ANNs lead the way in most AI
applications today.

An ANN can
be thought of as an interconnected net of nodes—where a node is
analogous to a neuron in the brain. The nodes are arranged in layers and
the greater the number of layers, the more powerful the network
becomes. This hierarchical organization gives rise to the term ‘deep’
learning.

Figure 1: A neural network in the brain (A) vs. an artificial neural network (B).
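
To make the layered-node picture concrete, below is a minimal sketch of such a network in PyTorch. The layer sizes are arbitrary, chosen purely for illustration: each `Linear` layer is a set of nodes fully connected to the previous layer, and stacking more layers makes the network ‘deeper’.

```python
# A tiny artificial neural network in PyTorch; layer widths are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),  # input (16 features) -> first hidden layer of 32 nodes
    nn.ReLU(),          # non-linearity, loosely analogous to a neuron 'firing'
    nn.Linear(32, 32),  # second hidden layer; adding layers makes the net 'deeper'
    nn.ReLU(),
    nn.Linear(32, 2),   # output layer, e.g. scores for two classes
)

x = torch.randn(1, 16)  # one input sample with 16 features
print(model(x))         # raw output scores
```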


Neural
networks are the powerhouses of AI systems. They enable the performance
of complex tasks by computers such as speech and image recognition.
They can be trained on big data to become increasingly efficient and
accurate over time.

A popular type of neural network is the feedforward neural network, or multi-layer perceptron (MLP). The description given above mainly applies to such networks. They can solve extremely complex problems such as natural language processing (NLP) and computer vision, and they provide the basis for other neural networks. Convolutional neural networks (CNNs) are somewhat similar to feedforward networks. They are highly capable at tasks such as image and pattern recognition, which makes them the preferred networks for AI applications in fields such as radiology. Recurrent neural networks (RNNs) contain feedback loops that make them particularly suited to predicting future outcomes. Their applications include stock market forecasting.
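
For a sense of what a CNN looks like in code, here is a toy sketch in PyTorch; the architecture and sizes are illustrative only and not taken from any system discussed here. The convolutional layers learn local filters over the image, which is what makes CNNs effective at image and pattern recognition.

```python
# A toy convolutional neural network (CNN) for 64x64 grayscale images.
# Architecture and sizes are illustrative, not a clinical model.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learn 8 local filters over the image
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample: 64x64 -> 32x32
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # deeper filters pick up larger patterns
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),                  # classify, e.g. 'anomaly' vs 'normal'
)

x = torch.randn(1, 1, 64, 64)  # a batch of one single-channel 64x64 image
print(cnn(x).shape)            # -> torch.Size([1, 2])
```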

A Glance at the Timeline

1950

Alan
Turing explores the possibility of thinking machines and artificial
intelligence in his paper “Computing Machinery and Intelligence.” He
proposes the now famous Turing test, which describes a machine as having
achieved intelligence on par with humans if an observer cannot discern
its responses from that of a real person.

1956

John McCarthy and Marvin Minsky host the historic Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) conference. This was the event for which McCarthy coined the term ‘artificial intelligence.’ The conference spurred substantial interest and activity in the field and is considered a seminal event on the AI timeline.

1974-1980

The AI pursuit sees cycles of achievements and setbacks. While the mathematical models and algorithms exist, computing power has yet to catch up: computers are simply not powerful enough yet. This leads to a slowdown in AI research during these years and funding cuts across the board. The period is called the “First AI Winter.”

1982

Japan
funds its ambitious Fifth Generation Computer Systems (FGCS) project
with the aim to achieve supercomputer performance that could boost
efforts at AI development. The UK and USA respond with enhanced funding
for their own programs. The net effect is a stimulation of activity in
AI research.

1987-1993

Different initiatives and programs across the globe fail to achieve their desired goals, leading to the “Second AI Winter.” Government funding for AI research drops in different countries. Supercomputers seem too costly for widespread use. In the meantime, alternative, affordable computing technologies continue to emerge.

1997

IBM’s Deep Blue computer makes history when it defeats world chess champion Garry Kasparov.

2008

Google
makes available speech recognition for smartphones, which is considered
a major step forward in bringing the power of AI into common use.

2016

Google
DeepMind’s AlphaGo beats Lee Sedol, a world champion, at the ancient
Chinese game of Go, underscoring the immense progress AI research has
made over the years.

Section 2: Practical Applications

Artificial Intelligence in Radiology

As we discussed above, the prowess of AI in image and pattern recognition, the amazing ability of deep learning architectures to adapt and evolve, and the unique proficiency of convolutional neural networks at image-based tasks translate into a huge scope for AI in radiology. Let’s explore further how AI solutions are impacting every aspect of the radiology workflow.

Applications in the Clinical Radiology Workflow

The clinical radiology workflow can be divided into a series of steps, as shown below (Fig. 2):

Figure 2: Clinical Radiology Workflow.

Let’s review the scope and application of AI, particularly deep learning architectures, in each of these steps:

Acquisition

The
clinical radiology workflow begins with acquisition of the images. This
is achieved through different types of hardware, for example, computed
tomography (CT) and magnetic resonance imaging (MRI) scanners. Such
hardware is driven by software for image reconstruction.
Over time, scanning hardware has become increasingly efficient in terms
of quality and resolution. Image reconstruction algorithms, on the
other hand, still need to catch up. Deep learning research in this
context has been aimed at providing the mechanisms to achieve image
reconstruction transformations that match the sophistication of the
hardware.

Preprocessing

An important step in the preprocessing of acquired medical images is image registration.
It is a process that aligns medical images spatially or temporally,
bringing them into a single coordinate system. The result is that fusion
images are created and quantitative analyses can be performed. Image
registration algorithms so far have relied on predefined feature-based
criteria. This limits both their scope and scalability.

Deep
learning techniques have demonstrated that they can overcome these
shortcomings since they are non-rigid, consistent and much faster.
Furthermore, deep learning is multimodal so it can handle multimodal
imaging such as hybrid PET scans for cancer. Deep learning nets that are
based on recurrent neural networks (RNNs) are particularly suited to
temporal image registration solutions.
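
For context, the sketch below shows what a classical, intensity-driven registration pipeline looks like, using the SimpleITK library; the file names and parameter values are placeholders rather than a validated clinical configuration. Deep learning approaches aim to replace or accelerate exactly this kind of iterative optimization.

```python
# Classical intensity-based rigid registration with SimpleITK (a minimal sketch;
# file names and parameter values are placeholders, not clinical settings).
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed_scan.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("moving_scan.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # robust across modalities
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY,
    )
)
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)             # iteratively optimize the alignment
aligned = sitk.Resample(moving, fixed, transform)  # both images now share one coordinate system
```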

Image-based tasks

Clinical image-based tasks include detection, characterization and monitoring:

Figure 3: Image-based tasks.

Detection

In
routine radiology practice, the detection of anomalies on medical
images is a manual and labor-intensive process that requires the
presence of a trained and experienced professional—the radiologist.
Radiologists have to go through a large volume of image slices as part
of their daily workflow to detect anomalies on the basis of minute
differences of texture, intensity or pattern from normal tissue. This is
a time-consuming, subjective and error-prone process.

Computer-aided detection (CAD) has
been an area of active research and development for quite some time
now. However, CAD systems developed so far have exhibited subhuman
performance and have not been able to transition to routine radiology
practice in a significant way. Once again, with the latest boost in deep
learning science, there has been a renewed interest in the possibility
of CAD systems that match or outperform experienced radiologists. The
results have been promising. In a recent study that involved the
detection of lesions on mammograms, CAD built on deep learning
architecture performed better than traditional machine learning CAD
systems and comparably to radiologists.

Characterization

Once
an anomaly has been detected on a medical image, it has to be
characterized. Characterization of the disease involves clinical steps
such as segmentation, diagnosis and staging. Each of these steps
requires the meticulous attention of a radiologist who has to
painstakingly go through each image to discern subtle changes of texture
and structure.

  • Segmentation refers to delineating and demarcating anomalies on medical images. This is required for disease monitoring and treatment planning, such as for radiation therapy. Automated segmentation solutions have been around for several decades. Segmentation algorithms utilize the principles of clustered imaging intensities, region growing around seed points and probabilistic atlases (see the sketch after this list).
  • Diagnosis includes determining whether the lesion is benign or malignant. Computer-aided diagnosis systems have traditionally been fed predefined radiographic tumor criteria, such as those related to size, sphericity, margin and texture. They have mostly served to assist with the work of radiologists and are still not advanced enough to act as standalone solutions. Since deep learning is not dependent on predefined input and is additionally more noise tolerant, computer-aided diagnosis solutions built around it promise to be more effective and generalizable.
  • Staging involves
    ascertaining the extent of the disease. A classic example is the TNM
    (tumor, node and metastasis) staging system that categorizes patients on
    the basis of tumor characteristics, involvement of regional lymph nodes
    and spread (metastasis) to other body areas. Such complex staging
    systems rely on expert opinion and lie beyond the realm of traditional
    machine learning. However, deep learning, with its ability to integrate
    several layers of information simultaneously, has the inherent ability
    to power accurate automated staging solutions in the near future.
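
To illustrate the classical ‘clustered imaging intensities’ idea from the segmentation bullet above, here is a toy sketch that clusters pixel intensities with k-means. The image is synthetic; real segmentation pipelines involve far more preprocessing, 3D context and post-processing.

```python
# Toy intensity-based segmentation: cluster pixel intensities with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
image = rng.normal(loc=100, scale=10, size=(64, 64))  # synthetic background tissue
image[20:40, 20:40] += 60                             # a brighter 'lesion' region

pixels = image.reshape(-1, 1)                  # one intensity value per pixel
labels = KMeans(n_clusters=2, n_init=10).fit_predict(pixels)
mask = labels.reshape(image.shape)             # a rough 0/1 delineation of the 'lesion'

print(int(mask.sum()), "pixels in one of the two intensity clusters")
```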

Monitoring

Monitoring disease is necessary to gauge treatment response and pick up recurrence. Imaging software algorithms assist with monitoring tasks by matching corresponding images across multiple scans over time and highlighting subtle changes in lesion parameters such as texture, size, heterogeneity or cavitation. The process depends on the efficiency of the system at image preprocessing and registration, as well as on the predefined features. Deep learning methods based on recurrent neural networks (RNNs) are particularly efficient at temporal monitoring tasks and are being actively researched for this purpose.
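
As a rough sketch of why recurrent architectures suit temporal monitoring, the toy example below runs a PyTorch LSTM over a sequence of per-scan lesion feature vectors (the features and data are hypothetical) and produces a score per time point, with context carried forward from earlier scans.

```python
# Minimal sketch: an LSTM over a time series of lesion features (hypothetical data).
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=4, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)  # e.g. a per-time-point 'change' score

# One patient, 5 follow-up scans, 4 lesion features per scan (size, texture, ...).
scans = torch.randn(1, 5, 4)

hidden_states, _ = lstm(scans)  # the network carries context forward across scans
scores = head(hidden_states)    # shape (1, 5, 1): one score per time point
print(scores.squeeze(-1))
```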

Reporting

Reporting is an essential component of the radiology workflow, as it is the means to communicate all pertinent clinical findings about the patient. What other physicians and clinical departments determine about the patient depends on the contents of the report. There is still no single universally accepted format for radiology reports. Most are text-based and created from the consultant’s dictation. Such variability in reporting gives rise to gaps in clinical communication that hamper effective patient management. Furthermore, creating reports is often a manual and time-consuming process. Deep learning algorithms are adept at multitasking: they make possible computer vision, image discrimination and voice recognition. By leveraging their diverse power, radiology reporting can become an automated process that is faster as well as more accurate, consistent and interactive.

Integrated Diagnostics

Finally, AI can assist in the integration and evaluation of patient data from multiple sources. Such sources can include radiology reports, laboratory test results, clinical examination and even data from remote monitoring devices such as fitness trackers. This can generate new insights into patients’ health and improve clinical outcomes.

Clinical Use Cases for Various Specialties

Heart Imaging

The principles of medical image segmentation that we discussed above, particularly in light of the role of AI algorithms, have found practical applications in cardiac imaging as well. A recent method, multi-scale deep reinforcement learning, has been able to successfully delineate cardiac anatomical landmarks. When combined with learning protocols based on labeled data, such systems are able to accurately segment heart structures such as the cardiac chambers, valves and coronary arteries. Furthermore, they can assist with the measurement of functional parameters such as the ejection fraction. AI-based heart imaging solutions have demonstrated utility in interventional cardiology and for automated measurements in echocardiography. Hierarchical clustering AI techniques have been employed in the study of heterogeneous heart diseases, including hypertension and heart failure.

Brain Scanning

AI
is having a clear impact on neuroimaging. Aside from the general ways
in which AI improves image acquisition, reconstruction and registration
as well as anomaly detection and characterization, deep learning-based
architectures have been leveraged to create novel neuroimaging tools
that serve distinct purposes. For example:

  • Enhancing image quality and reducing noise and scan acquisition time significantly for MRI and PET-CT neuroimaging.
  • Detecting acute lesions such as intracranial hemorrhages and cervical spine fractures on head and spine CTs.
  • Aiding in the diagnosis and monitoring of brain diseases such as multiple sclerosis and Alzheimer’s disease.
  • Improving the yield of brain MRIs by assisting with quantitative volumetric analyses.

Mammography

Screening mammography is an important tool for the early detection and diagnosis of breast lesions. Yet it can be challenging to manually discern all the minute changes of texture, structure and density in images of breast tissue. AI has been able to assist in identifying breast lesions such as microcalcifications. While traditional AI solutions for mammography have only been meant to support radiologists at basic image interpretation, deep learning AI offers more possibilities in terms of comprehensive mammography software suites. For example, a recent study showed that architectures based on convolutional neural networks (CNNs) performed better than traditional diagnostic systems and comparably to radiologists in the detection of lesions on mammograms.

Chest Radiography

Detecting
and characterizing pulmonary nodules on lung CTs can be a tedious and
meticulous affair. AI can automate the process and aid in determining
whether the nodules are benign or malignant. AI-based radiographic image
biomarkers can help in the detection and surveillance of lung cancer.
Furthermore, AI can assist in differentiating solid lung nodules from
non-solid ground-glass opacity (GGO) nodules, which can be a diagnostic
challenge.

Radiation Oncology

Treatment
planning in radiation oncology rests on the accuracy of segmentation of
tumor and normal tissue. We discussed segmentation as an important step
in the clinical radiology workflow. Traditional segmentation algorithms
have relied on clustered imaging intensities or region growing.
Examples of advanced segmentation systems are probabilistic atlases.
Deep learning techniques have vastly improved the efficiency and
accuracy of segmentation of medical images. This has been of great help
to radiation oncologists. In addition, AI can aid with the tracking of
treatment response after radiation therapy cycles.

Abdominal and Pelvic Imaging

Deep learning has demonstrated a promising role in abdominal organ segmentation, including organs such as the liver, pancreas, stomach, kidneys, spleen and prostate. Convolutional neural networks (CNNs) have been shown to enhance the segmentation of abdominal organs in T2-weighted MR images. AI can be used to explore incidental findings on abdominal scans, such as liver lesions. It can also be used to characterize and track colonic polyps.

How does AI help Radiology and Radiologists?

We
just discussed many practical applications of AI in radiology. Let’s go
over the key ways in which AI helps radiology and radiologists:

Improved Workflow Efficiency

AI
can be likened to any other tool. It enhances our performance and
productivity. There is only so much that radiologists can do manually on
a given work day. On the other hand, imaging data increases
exponentially each year. In the US alone, millions of medical images are
generated each year. It takes many years to train a radiologist—as
opposed to deep learning architectures that can be trained much faster.
There is an evident mismatch between workload and available human
expertise. We have just seen how AI is able to handle tasks at every
step of the clinical radiology workflow. With the continued integration
of AI in radiology, there is an ongoing improvement in clinical workflow
efficiency for radiologists. Each task that AI takes over is one action
less for the radiologist to perform manually.

Enhanced Workflow Quality

There
are limitations to human perception. Some changes in pixel intensities
are too subtle for radiologists to pick up, yet they can still be
detected by machines. Similarly, radiologists need processed and
prepared images to work on, but machines can utilize the raw data from
image acquisition—which contains more information. Deep learning
systems, once trained and optimized, can integrate cues from multiple
input streams to arrive at inferences through processes that work faster
and at a larger scale than is possible for humans. All these factors
indicate that as radiology AI systems continue to mature, they will
improve image workflow quality in ways hitherto considered unattainable.

Radiomics and Imaging Biomarkers

Radiomics is a great example of what AI can achieve in medical image analysis. Radiomics leverages deep learning algorithms to characterize and quantify medical imaging data and give it clinical context. The ever-increasing volume of medical imaging data has provided fertile ground for the training of radiomics algorithms, increasing their efficiency over time. Radiomics has made possible the generation of novel imaging biomarkers. Utilizing such biomarkers, radiomics AI can assist with disease prognostication and the forecasting of clinical outcomes, data visualization, prediction of metastasis risk, assessment of treatment response, and the planning and administration of radiation therapy.
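
As a simplified illustration of what a radiomic feature is, the toy sketch below computes a few first-order statistics over a synthetic lesion mask. Production radiomics tools compute hundreds of standardized features; this conveys only the flavor of the idea.

```python
# Toy first-order 'radiomic' features over a lesion mask (synthetic data).
import numpy as np

rng = np.random.default_rng(seed=0)
image = rng.normal(loc=100, scale=10, size=(64, 64))  # synthetic scan
mask = np.zeros_like(image, dtype=bool)
mask[20:40, 20:40] = True                             # hypothetical segmented lesion

lesion = image[mask]  # intensities inside the lesion only
features = {
    "mean_intensity": float(lesion.mean()),
    "std_intensity": float(lesion.std()),    # a crude heterogeneity measure
    "area_px": int(mask.sum()),              # would be a volume in a 3D scan
    "intensity_range": float(lesion.max() - lesion.min()),
}
print(features)
```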

Clinical Decision Support

AI systems provide clinical decision support to radiologists. This begins right from the triage of patients, where such systems can offer guidance on both the urgency of an imaging procedure and the most suitable scan for a particular patient. Once imaging data has been acquired, AI-based clinical decision support systems can aid along each step of patient management by assisting the radiologist with clinical adjudications such as disease risk stratification and prognostication as well as treatment planning and response monitoring. In addition, deep learning algorithms can detect subtle findings on medical imaging that might otherwise be missed and present them to radiologists, who can then decide on their clinical significance.

Cost Savings

AI solutions achieve cost savings for facilities by improving both workflow efficiency and quality. Since the radiology department is so interconnected with other departments in a hospital, the economic benefit translates to savings for the institution as a whole and not just for radiology. Let’s look at some of the ways in which implementing AI in radiology leads to cost savings:

  • Faster scan reading times mean that radiologists can get more work done in less time. This directly translates to cost savings in terms of radiologists’ hours utilized and compensated. AI algorithms enable this by narrowing down the images with pertinent findings for radiologists to examine further, instead of having them go through every single image. In addition, they can reduce false positives per image. In one study, the investigators estimated that AI-based CAD software reduced reading time per case by 17%.
  • Better workflow organization is
    achieved when AI systems assist in prioritizing images for radiologists
    to review. The end result is enhanced workflow efficiency as
    radiologists do not need to go through all images to catch a few that
    need their immediate attention. Time is saved and thereby costs as well.
  • Improved risk assessment leads
    to more targeted patient management, with patients deemed at greater
    risk of disease spread receiving the most resources. This promotes
    overall health outcomes and conserves precious healthcare resources by
    facilitating their judicious use.
  • Reduced downstream costs are perhaps the most important cost-saving aspect of AI in radiology, yet one that is often underestimated. Deep learning CAD systems can improve the sensitivity of screening scans and detect subtle anomalies that could otherwise have been missed. For almost any disease, but especially for cancers, early detection means better chances of a complete cure with fewer resources spent. When these cases are missed and present later with advanced or complicated disease, the economic burden they represent is orders of magnitude higher. Since such cost savings are hard to measure prospectively, they are often underestimated.

The Fear: Could AI replace Radiologists?

AI has a long way to go before it catches up with radiologists, let alone replaces them. Most current applications in the field can be categorized as narrow AI: good at single tasks but lacking comprehensive application. A general AI on par with human intelligence is still an elusive goal. In this context, AI in radiology is still in its infancy.

Instead of
fearing AI, radiologists should consider it as a tool that is meant to
enhance their performance. AI can take care of mundane repetitive tasks
in radiology. It can assist radiologists in increasing their
productivity as well as accuracy. It can help them tackle imaging big
data that grows by leaps and bounds every year. AI, it seems, is a
friend and not a foe.

While AI is getting better by the day at image-focused tasks, it is still far from giving findings meaningful clinical context. This indicates an expanding, albeit different, role for radiologists in the future, where AI could perform most of the imaging data evaluation while radiologists give the results clinical meaning and decide how they factor into patient management.

In addition, the role of radiologists can expand in novel directions such as training AI and overseeing its development. Most AI algorithms are developed by engineers and software scientists. Radiologists can bring a much needed clinical perspective to the table. Once again, instead of avoiding AI, radiologists can take the driver’s seat in its development and make sure that it grows in a direction that benefits patients the most.

Section 3: Industry Overview and Technical Implementation

AI in Radiology—Industry Overview

According to estimates, the global market for AI in medical imaging stood at USD 21.5 million in 2018 and is projected to reach USD 181.1 million by 2025 and USD 264.85 million by 2026, representing a compound annual growth rate (CAGR) of 35.9%.
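
As a back-of-the-envelope check of the quoted growth rate (assuming the 2018 and 2025 figures above as the endpoints):

```python
# Sanity-check the quoted CAGR from the 2018 (USD 21.5M) and 2025 (USD 181.1M) figures.
start, end, years = 21.5, 181.1, 2025 - 2018
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~35.6%, consistent with the quoted 35.9% CAGR
```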

The market
can be categorized as moderately fragmented. While established players
are driving consolidation through acquisitions and mergers, the sheer
pace of innovation means that new startups are continually popping up
while older systems and solutions fall out of favor. Some recognized
names in the industry are GE Healthcare, Siemens AG, Philips Healthcare,
Samsung Electronics, IBM Watson Health, Medtronic, Enlitic Inc, Nvidia
Corporation, IBM Corporation, Agfa Healthcare, Intel Corporation,
Johnson & Johnson and Microsoft Corporation.

Drivers of industry growth include: the ever-increasing use of medical imaging in healthcare, which generates millions of images each year and exponentially increases the workload for radiologists; a shortage of trained and experienced radiologists, particularly in some geographic regions; the improvement in workflow efficiency and quality associated with reliable AI solutions; and the cost savings that come with autonomous systems. On the other hand, barriers to industry growth include regulatory and ethical issues, a lack of complete mathematical understanding of how deep learning architectures work, slow uptake by healthcare facilities such as hospitals and clinics, and a dearth of practical, comprehensive AI solutions, as most current systems solve single tasks. Regarding the slow uptake by healthcare facilities, there is a need to educate and inform healthcare professionals (HCPs), radiologists and hospital administrators about the availability and benefits of the radiology AI solutions on the market. Increased clinical uptake and utilization can spur the growth and refinement of radiology AI systems.

Among geographic regions, while North America, particularly the US, is set to remain the leader in AI research and development (R&D), future growth is predicted to occur fastest in the Asia-Pacific region. Key countries in this regard are China and India, with their vast populations, increasing disposable incomes and growing focus on healthcare. For example, in China, medical imaging volume expands by 30% each year while the number of radiologists increases by only 4%. No wonder China is at the forefront of AI research in medical applications, with many success stories.

The impact of COVID on the industry deserves a mention here. While COVID has had a dampening effect on the development of medical imaging AI solutions as well as their clinical validation and uptake by healthcare institutions, it has also triggered a flurry of activity in finding AI applications that aid with chest imaging analyses—for the screening, diagnosis and monitoring of COVID-related lung lesions. COVID primarily targets the lungs, and chest X-rays and CTs are indispensable in its clinical management—for tasks such as diagnosis, determining disease severity and monitoring treatment response. Companies and organizations across the globe are putting in the extra effort to quickly come up with effective AI solutions for COVID-related medical imaging evaluation. In this context, COVID has proven a catalyst for the growth of AI in radiology.

AI in Radiology—Technical Implementation

Cloud vs. On-Premises

When
radiology departments decide to try out AI software, they are often
faced with the choice of going with a cloud vs. on-premises solution.
Cloud-based AI software is kept and maintained on the vendor’s own
servers and is accessed by the client via the internet through an
interface such as a browser or dedicated dashboard. The vendor licenses
the application to the client as software as a service (SaaS) paid on a
subscription model that involves recurring payments, often monthly or
annual. Such a license is referred to as a subscription software license.
Conversely, on-premises solutions are installed locally by the client
on their computer systems and usually paid for once and upfront. Costs
involved include the licensing agreement and service charges. The
license in this case is called a perpetual software license.

Big hospitals with established IT departments may prefer on-premises radiology AI software, which gives them more autonomy; they can take care of basic troubleshooting as well as maintenance of the hardware themselves. However, for most clients, cloud-based services seem to offer the most benefits. With cloud radiology AI software, the vendor maintains the hardware such as servers and storage, keeps the software updated and glitch-free, and ensures optimal uptime. When clients do not have to take care of all these tasks at their end, it translates to substantial time and cost savings. Furthermore, without the distraction of IT maintenance, clients can focus on their core mission—helping patients. It is important to mention here that modern AI vendors are HIPAA-compliant and understand the importance of maintaining the privacy of patient data. In fact, today’s cloud-based AI solutions for healthcare are as sturdy and secure as local installations, if not more so. Of course, this assumes that the vendor is a reliable and reputable company.

Purchasing Decisions and Costs

When radiology practices are contemplating the purchase of medical imaging AI software, the costs involved are a crucial consideration. Any expenditure in this context has to be justified from two angles. First and foremost, does the purchase make a business case, that is, would it have a positive enough impact on the workflow to pay for itself? Secondly, would a cloud-based service or an on-premises installation make more sense, in terms of both costs and clinical requirements?

With reference to building a business case, medical imaging AI solutions all aim at making the radiology clinical workflow more efficient and accurate, thereby cutting costs. For example, AI software used in screening mammography clinics to assist with the detection and characterization of breast lesions substantially reduces costs both immediately, by shortening radiologist reading time per case and reducing false positives per image, and in the long term, by decreasing missed cases with their future clinical and financial implications.

As regards cloud vs. on-premises software, it is easy to assume that a one-time payment for an on-premises installation, even if bigger initially, would be cheaper in the long run compared with the recurring payments of a cloud subscription. What is often overlooked is that an on-premises installation requires regular software and hardware maintenance, the presence of dedicated IT staff and the expenses associated with upgrades. All of these represent substantial, if not immediately obvious, costs. A cloud-based solution, as described above, does not come with such ‘hidden’ costs. Cloud-based medical imaging AI solutions are scalable and secure, and maintained and updated by the vendor. More often than not, they mean more bang for your buck.

Integration and Interoperability

Integrated solutions improve workflow efficiency, reduce redundancy and ensure an overall smooth user experience. For new radiology AI software, a big challenge for developers is seamless integration into the existing workflow. Standalone AI systems fragment the workflow. They need independent workstations, and then someone to perform the extra steps of feeding them the data to be analyzed and procuring and transferring the results to the main information architecture of the healthcare facility. Furthermore, such systems face more uptake hesitancy from radiologists, who see them as adding tasks to their already brimming workloads. Therefore, radiology AI developers prioritize building software that integrates easily with the rest of the information architecture of hospitals and clinics.

So,
what is this information architecture of healthcare facilities that we
just mentioned? Hospitals and clinics use a variety of IT applications
to expedite their workflow, hasten their turnaround times and bolster
their productivity. At the hospital level, two indispensable and
mandated tools are the EHR and EMR. The electronic health record (EHR) system
is the backbone of any healthcare facility’s digital communication
strategy. It is a facility-wide network that collects, stores and shares
demographic and clinical data of patients in a digital format. The electronic medical record (EMR) system is linked to the EHR. It mainly deals with creating, maintaining and communicating electronic patient charts.

At the radiology department/imaging clinic level, the key IT tools and standards are the RIS, PACS and DICOM. The radiology information system (RIS) assists with order entry, patient scheduling and report generation. Where the RIS assists with patient management, the picture archiving and communication system (PACS) helps with medical imaging data management. The PACS server organizes medical images, stores them securely and facilitates their retrieval and distribution. When medical images are shared and viewed, this is done in a standardized format: the Digital Imaging and Communications in Medicine (DICOM) standard.
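
For a feel of what DICOM means in practice, here is a minimal sketch using the pydicom library (the file name is a placeholder): a DICOM file bundles the pixel data together with standardized metadata tags, which is what lets systems from different vendors exchange images reliably.

```python
# Reading a DICOM file with pydicom (minimal sketch; the file name is a placeholder).
import pydicom

ds = pydicom.dcmread("ct_slice.dcm")

# Standardized metadata tags travel with the image itself.
print(ds.PatientID, ds.Modality, ds.StudyDate)

pixels = ds.pixel_array  # the image as a NumPy array, ready for analysis
print(pixels.shape, pixels.dtype)
```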
While these are the essential IT systems for any radiology practice, many task-based software programs are available as well. A notable mention here is PowerScribe 360 Reporting, a real-time radiology reporting platform whose cloud-based version is called PowerScribe One. PowerScribe leverages AI-powered speech recognition technology to assist radiologists with radiology reporting and workflow management. Used by almost 80% of radiologists in the US, PowerScribe has become another core tool for radiology practices.

Developers and vendors of radiology AI solutions seek the integration and interoperability of their products through different approaches. Where a medical imaging center specializes in a particular type of imaging, for example screening mammography, a standalone station offering an AI application for a defined set of analyses can be a practical option. In most cases, however, better integration is necessary. One approach is to link the AI software with one of the key IT systems, for instance the RIS or PACS. The choice depends on the nature of the AI software: one dealing with improving patient flow would need to be connected to the RIS, while one that facilitates medical imaging data analyses would require linking up with the PACS server. With such an integration, the AI algorithm can autonomously collect the data it needs from the IT system it is linked to, run its analyses and send back the results. Users access the results through the broader IT system that they are already trained on and used to working with; they do not have to take any extra steps or go through another learning curve to be able to use the new AI software. Finally, platform companies specialize in the integration of AI solutions. When a radiology practice intends to utilize many different AI solutions, a good way forward is to collaborate with a platform company to handle the integration and interoperability of these varied AI applications.
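
As an illustration of the PACS-linked approach, the sketch below uses the pynetdicom library to push a result object to a PACS node over the standard DICOM C-STORE service. The host, port, AE titles and file name are placeholders for whatever the facility actually runs.

```python
# Sending a DICOM object to a PACS node with pynetdicom (a minimal sketch;
# host, port, AE titles and file name are placeholders, not real endpoints).
import pydicom
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage

ds = pydicom.dcmread("ai_result.dcm")  # e.g. an object produced by the AI analysis step

ae = AE(ae_title="AI_NODE")
ae.add_requested_context(CTImageStorage)

assoc = ae.associate("pacs.example.org", 11112, ae_title="PACS")
if assoc.is_established:
    status = assoc.send_c_store(ds)  # C-STORE: push the object to the PACS
    print("C-STORE status:", status.Status if status else "no response")
    assoc.release()
```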

Section 4: State of AI in Radiology

Radiology AI—Current Challenges

A Comprehensive AI

The ultimate goal of scientists and researchers is to create an AI that, like human intelligence, can autonomously learn any task it takes on via inbuilt processes such as trial and error and reiteration—in other words, a ‘strong AI’ or Artificial General Intelligence (AGI). Such an AI would be the closest thing to a radiologist because not only would it excel at image-related analyses, it would also ‘understand’ what those findings mean in a clinical context and correlate patient data from multiple sources to ‘decide’ the appropriate next management steps for a particular patient.

At the
moment, most AI applications in radiology are task-based focusing on
single tasks at a time—that is, they fall under the category of narrow
AI. While such systems may excel at well-defined image-based tasks, they
are incapable of taking into account the bigger clinical situation.
Devising a comprehensive, general AI is a true challenge and when—not
if—achieved, it will dramatically change the radiology landscape as we
currently know it.

Data Volume and Curation

As we discussed before, deep learning loves big data: the more data to train on, the better deep learning systems get over time. Big data in radiology is now a possibility, with millions of medical images generated every year. However, while deep learning systems are being developed at software houses, medical images accumulate at healthcare facilities. One challenge has been to devise methods of imaging data sharing that ensure the continual availability of training datasets. Technologies such as the picture archiving and communication system (PACS) facilitate the sharing of medical imaging data, while legislation such as the Health Insurance Portability and Accountability Act (HIPAA) ensures that patient privacy is maintained. Furthermore, the emergence of large repositories of medical imaging data has been a welcome development in this regard, as they serve as ready sources of imaging data for algorithm training.

Yet the mere availability of large volumes of imaging data is not enough; the data has to be curated. Curation refers to organizing, classifying and annotating data. In radiology, examples of data curation include segmentation of medical images and grouping and annotating images according to the patient cohort they represent. Machine learning requires curated data to train on. For instance, medical images curated to match a specific patient cohort can assist algorithms in learning how to correlate imaging and clinical endpoints. Yet data curation is a manual process that is extremely time-consuming, and hence costly. Data curation has therefore been one of the barriers to the rapid development and deployment of AI applications in radiology. The solution, it seems, lies with AI itself, in the form of deep learning that is not dependent on labeled data. Deep learning architectures have unique data mining capabilities whereby they can learn to discern patterns in raw data on their own. This can eliminate the need to curate data, a laborious and lengthy process. In the meantime, public repositories of medical images and imaging biobanks have a crucial role to play in the advancement of machine learning because the medical imaging data they store is not only available in huge volumes but is mostly curated as well.

Unraveling AI’s Inner Workings

A unique challenge is that scientists still do not fully understand the inner workings of deep learning neural networks. While the input and output of such networks are easier to determine, in terms of the training data and inferential endpoints respectively, what goes on between those two layers—in the ‘hidden’ layers—is not fully known. Neural networks comprise thousands of highly interconnected nodes, and deciphering all the activity that goes on between them is a challenge in its own right. While this may not be such a pressing problem in some other areas of AI application, when it comes to healthcare, understanding ‘what is going on’ becomes imperative, as people’s health and lives are at stake. As researchers get better at unraveling the mathematical logic behind AI’s inner workings, the application of AI in radiology will improve greatly. Not only will this enhance our knowledge of how radiology AI systems solve tasks, it will also improve troubleshooting in such systems as well as their refinement and evolution. AI applications in radiology then wouldn’t be referred to as ‘black-box medicine’ anymore.

Regulatory Issues

The regulation of AI has been a hotly debated topic for a long time. Proponents believe that the power and possibility AI represents mean its growth needs to be kept under close watch. Opponents believe tight regulation will stifle AI research efforts. In radiology, AI regulation looks at problems such as the opaque workings of AI systems, which we discussed above, and patient data privacy. Regulatory bodies include the Food and Drug Administration (FDA) in the US, while the EU has its Medical Device Regulation. Legislation such as the Health Insurance Portability and Accountability Act (HIPAA) calls for the security and privacy of patient data. While ensuring patient data privacy is imperative, it impedes AI training, which relies on the easy and ready availability of data. AI developers look at workarounds such as using encrypted data, using data on-site at hospitals and using publicly available anonymized data from medical image repositories and biobanks.

Ethical Issues

A key ethical question concerning AI is: when it makes a decision, who is responsible for the consequences? While the goal of AI is to create systems capable of thinking and acting like humans, a crucial consideration here is that humans are responsible for their actions. When an AI system decides and acts autonomously and errs, who is to blame? This is an area of active discussion. When it comes to radiology, this ambiguity can be a source of medicolegal issues. Currently, CAD systems only support radiologists with their workflow and do not independently adjudicate clinical decisions. Further down the line, when AI systems become better than radiologists at more and more image-based tasks, they will start making important clinical calls—and then ethical issues surrounding the ownership of actions will take center stage.

Radiology AI—Future Outlook

If
we put the ethical debate aside for a moment, the future of AI looks
extremely promising. Big data continues to grow exponentially each year
and so does computing power. AI research continues at a frenetic pace.
Notably, it is not restricted to a few sectors. Whether it is
self-driving cars, voice and image recognition, space exploration,
finance and ecommerce, gaming and virtual reality, or healthcare and
medicine, AI touches almost every aspect of modern human life. The thing
with such a broadly applied scientific discipline is that if there is a
breakthrough in one area, it translates to gains in all sectors. For
instance, if AI research for self-driving cars, or ecommerce for that
matter, yields new insights, those can be applied to healthcare as
well—since it’s the same underlying deep learning algorithms that are
powering AI all around. The point here is that AI seems bound to make
major breakthroughs in the near future.

For radiologists, this should come as good news and not something to fear. AI can take over routine and laborious image-based tasks while radiologists focus on how to leverage its power to improve patients’ health outcomes. The role of the radiologist may change in scope, but the radiologist is unlikely to be replaced. In fact, radiologists can help shape the future of AI in radiology by participating actively in its development.

References

  1. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer. 2018;18(8):500-510. doi:10.1038/s41568-018-0016-5
  2. European Society of Radiology (ESR), Neri E, de Souza N, et al. What the radiologist should know about artificial intelligence – an ESR white paper. Insights Imaging. 2019;10:44. doi:10.1186/s13244-019-0738-2
  3. Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp. 2018;2:35. doi:10.1186/s41747-018-0061-6
  4. Dey D, Slomka PJ, Leeson P, et al. Artificial intelligence in cardiovascular imaging: JACC state-of-the-art review. J Am Coll Cardiol. 2019;73(11):1317-1335. doi:10.1016/j.jacc.2018.12.054
  5. Mayo RC, Kent D, Sen LC, et al. Reduction of false-positive markings on mammograms: a retrospective comparison study using an artificial intelligence-based CAD. J Digit Imaging. 2019;32:618-624. doi:10.1007/s10278-018-0168-6