Central Line
Episode Number: 169
Episode Title: AI from A to Z (Part Two)
Recorded: August 2025
(SOUNDBITE OF MUSIC)
VOICE OVER:
Welcome to ASA’s Central
Line, the official podcast series of the American Society of Anesthesiologists,
edited by Dr. Adam Striker.
DR. ADAM STRIKER:
Hello, everyone, and
welcome back. This is Central Line, and I'm your host, Dr. Adam Striker. This
is the second and final episode we are presenting in our series on artificial
intelligence and the many ways AI touches our work. We posted the first episode
on this topic at the end of March, and today we are sharing four additional
short conversations with members of the Committee on Informatics and
Information Technology, or CIIT. Some of our guests for this episode are either members of the US military or work for the US government. The opinions and assertions expressed herein are those of our guests and do not reflect the official policy or position of the United States government.
We're going to start with an issue that colors how we understand AI and its limitations: the thorny challenge of addressing bias and ethics in AI algorithms. To help us work through this issue, I spoke with Dr. Christopher Goldstein.
Dr. Goldstein, if you
don't mind, lay out the landscape for us just a little bit. What is the scope
of this particular problem?
DR. CHRISTOPHER
GOLDSTEIN:
Sure. As medical professionals, we are taught to be aware of our own human biases and to not let them negatively affect our care or research studies. And there are about 150 to 250 human biases described, depending on the source. Regarding our own specialty, we have been reminded by several articles in the last couple of years that pulse oximeters can also give us unreliable values depending on skin color. So it is an issue that we have had to be aware of even pre-AI. That examples of biased algorithms keep making the news is unfortunately not surprising, then, as many AI biases have human biases as their underlying cause. Like designing and completing well-conducted research studies, it's difficult to ensure AI models are trained on high-quality and representative data and that their learning pathway is steered in a desirable way. It becomes even harder when multi-layered deep learning algorithms and neural networks are employed, and in some circumstances--not to get too technical--so-called model drift can lead to issues down the road that were not even apparent during validation or early use. So it's definitely a complex issue for developers.
From the clinician user side, I think what we must increasingly watch out for as AI applications become more ubiquitous is automation bias. Using machine intelligence tools implies that the decisions are rational or objective--basically near perfect--which can make it hard for us to question them. Having to deal with automation bias on top of the other time-pressured tasks in a fast-paced environment like the OR can add yet another layer of stress and potentially reverberate negatively into patient care.
DR. STRIKER:
Take us through a real-life
example of how a biased algorithm may impact patient care, and maybe even
broaden that out to an example of the danger that that could present.
DR. GOLDSTEIN:
There's growing
excitement about the rise of AI tools in perioperative medicine, for sure. So
let's zoom in on a couple of examples.
Preoperative AI agents that call patients autonomously--going patiently over questions and providing personalized answers and pre-op instructions, even after hours and on weekends--are becoming a reality, and it's exciting to envision a future when the time-consuming pre-op evaluation and risk stratification process will become more streamlined thanks to these AI agents. But what if, in a worst-case scenario, a biased algorithm labels, let's say, a cancer patient not fit for potentially lifesaving curative surgery, and this doesn't get questioned or overruled? That completely changes this patient's care trajectory and probably the outcome.
Intraoperatively and postoperatively, we will also see more AI tools augmenting us. But imagine a semi-autonomous hypotension avoidance and pressor-fluid management system strongly recommending certain actions in an unstable patient. What if I disagree, despite the machine alarming and suggesting immediate action? Will I doubt myself in the moment and possibly let automation bias ensnare me, maybe thinking, what am I missing? What if the situation turns even more unstable if I deviate from the machine-suggested path? Now put a concerned surgeon wondering why anesthesia is not following these urgent recommendations on top of this. This scenario adds a significant layer of pressure, stress, and maybe even fear of potential medicolegal consequences. Some might even be asked--why did you think the algorithm was wrong and decide to deviate from the recommendations, doctor?--during an M&M or even in a court of law.
So based on the confidence and experience level of the team involved, it's possible that care suggestions are being followed in the heat of the moment that turn out to be suboptimal later. There are already anecdotal reports from radiologists who had a hard time convincing their ER colleagues that the preliminary AI read was wrong and no treatment was needed. So the stakes are definitely high.
DR. STRIKER:
Well, are all patient
populations at risk, and are there ideas out there that might make AI safer?
DR. GOLDSTEIN:
I would consider anyone
at this point at risk, depending on the situation. As we discussed before, high
quality representative data is of course fundamental, but there are often quite
unpredictable factors that determine what an AI learns depending on the
complexity of the algorithm deployed.
Take mass casualty events, or ICU bed triage in a shortage situation like a pandemic, as an example. Here, AI offers the tempting benefit of fast, data-driven prioritization within seconds, at a scale and speed that humans just can't match--again, an exciting use case of AI augmenting humans. But what if the algorithm was trained on data that does not adequately represent all victims? Maybe a US-trained model deployed in a different country to render humanitarian assistance. Good intentions can still result in harm, especially when we defer to autonomous systems because we are overwhelmed by the scope and pace of an event.
To answer your second question: to mitigate the risks of bias and, more generally, facilitate ethical AI, various frameworks have already been developed. Core strategies usually include maintaining human agency--being able to veto and request human review, like in the cancer pre-op patient example that we had earlier; adequate human supervision--anesthesiologists still being there, ready to intervene in the ORs, for example; and transparency. To what extent do we have to, or should we, disclose the use of AI tools to our patients? And importantly, what should manufacturers disclose to us as end users? I like the concept of AI nutritional labels: a sticker on a device, or a pop-up window, declaring key metrics such as data origins, performance, known biases, and the intended use cases and limitations. For instance, label statements--like "algorithm trained on US adults 18 to 79; use on pediatric patients, patients at the extremes of age, and non-US populations is off-label"--can allow for a quick safety check on whether the AI tool is appropriate to use.
DR. STRIKER:
Let's zoom in on
clinicians. What can we, as anesthesiologists, do to help move the needle in
the right direction and protect patients?
DR. GOLDSTEIN:
I'm hopeful that we have what it takes to lead in the age of AI. It's important to remind ourselves that despite AI tools becoming very good and extremely confident, AI remains just another tool that comes with its own set of problems, like the discussed bias; hallucinations--or, a better, maybe less anthropomorphic term, AI misinformation; and even cybersecurity problems like adversarial attack vulnerabilities.
Being vigilant patient advocates, adaptable, and lifelong learners is at the core of our profession. These traits will remain pertinent when it comes to AI. A cornerstone will be fostering awareness and providing ongoing education. The ASA and ABA should continue to take an active role here by adapting educational offerings like this podcast; creating postgraduate AI training certificates, maybe modeled after the successful POCUS pathways; modifying residency and exam content outlines accordingly; and possibly even offering AI fellowship pathways. Some anesthesiologists will continue to be closely involved with industry and R&D, while others will focus on advocacy, policy making, and representation, similar to the aviation industry. The risks of automation bias, deskilling, and overreliance on automated systems can be tackled through courses and simulation, facilitated again by the ASA and ABA. Overall, we have to ensure that we adapt and become dual-competent physicians--I would like to call them anesthesiologists in the loop--capable of harnessing AI to the maximum benefit of our patients while remaining in the loop, ready to intervene when needed.
DR. STRIKER:
Well, Dr. Goldstein,
thank you very much.
DR. GOLDSTEIN:
Yeah, thank you very
much.
DR. STRIKER:
Dr. Matthew Wecksell
shared some thoughts on cybersecurity and how AI is impacting the safety of
care.
Dr. Wecksell, if you
don't mind, first, talk to us a little bit about the risks here. When we're
talking about AI, we're talking in part about large language models and the
impact these models have on patient information. So distilling so much
information across disciplines is great, but there's got to be some security
risks, right?
DR. MATTHEW WECKSELL:
Absolutely. Large language models are really great at amplifying what people can do, especially the people that are below the top of the bell curve in skill. So what you'll hear in the popular media is that LLMs aren't going to replace lawyers or programmers, but they're going to make the lawyers and programmers who use them significantly more effective. And unfortunately, this is also true for the bad guys. In healthcare IT, we've got a real lack of diversity. Epic's got about 35 to 50% of the market, Oracle Cerner has about 25%, Meditech about 15%. And the rest of the IT stack that healthcare is running on tends to be a Windows environment. With very few potential targets, it becomes a lot easier for anybody to say, okay, what are some of the problems and vulnerabilities with this? It used to take a lot of great skill to identify those problems. Now, it's as easy as asking a large language model: hey, how do I exploit vulnerabilities in this vendor's platform? How do I exploit lower in the technology stack? And then, okay, now how do I add a ransomware module to that? So in some sense it almost becomes like plugging Legos together, and the bar for a bad actor becomes significantly lower. Ideally, you don't want a nation state to fund bad actors to target your hospital for ransom. But at the same time, you also don't want drunk teenagers or disgruntled employees being able to target you. And AI, particularly with a large language model behind it, really lowers the skill bar for somebody launching those kinds of attacks.
DR. STRIKER:
Are there real-world examples? I mean, have American anesthesiologists or healthcare organizations faced issues with security, and what does that look like in practice?
DR. WECKSELL:
Absolutely, they have faced those issues. In April of this year, Yale New Haven Health System determined that an unauthorized party had gained access to its network and copied its data, including patient names, birth dates, phone numbers, addresses, email addresses, and Social Security numbers.
Fortunately, Yale's electronic medical records weren't involved in the breach, so the incident didn't impact their ability to provide care. What we have to understand is that in health care, and everywhere else, there are different levels of security. It's one thing for somebody to deface the website of a bank. It's another for somebody to steal a bank's client list. But it's really, really bad if they steal the money. So too in health care: you don't want somebody doing things that are going to shut down your EMR, because then you're dead in the water delivering clinical care. In this attack against Yale, the EMR wasn't affected. But obviously, even if somebody just gets your patient list--if they know that somebody is a patient of an HIV clinic--you really can make some obvious assumptions about why they might be going to that clinic and be able to extract some fee from that without actually having access to the EMR.
On the other hand, AI helps us play defense. And I noted that Yale determined that an unauthorized third party had gained access--well, how do you do that? You get to have an AI looking at your server logs and your computer logs. And so instead of waiting for humans to say, hey, we've got unusual logins, we've got unusual data transfer, you can have an AI tool that's sitting there constantly going through the logs and saying, what's out of the ordinary? What's spurious? Do we need to do something about this? Do we need to alert the humans, who then take a look and do something about it? So AI can help with compliance monitoring. It can help with automated incident response. So it's not all bad.
DR. STRIKER:
Well, obviously
everyone's at risk here from individuals to the organizations. If you don't
mind, take us through how those risks differ depending on the particular target
audience, and then specifically how we as anesthesiologists might be affected.
DR. WECKSELL:
Sure. It absolutely depends on the risk that you're talking about. Individuals are mostly at risk of seeing their data leak out of the health care system. You know, we mentioned earlier, the bad guys can know you're a patient at a cancer center or a patient in an HIV center and extract information just from the patient list, just from the data that's not actually part of the EMR. Obviously, if the EMR information gets breached and leaked, there's significantly more risk to the individual of all of their PHI being shared, either with bad actors or being shared widely. Fortunately, we really haven't seen attackers altering patient data or the behavior of therapeutic devices. Um, security-wise, that's the nightmare: either, you know, changing lab values in the computer so that patients receive care that's not indicated, or so that they receive emergency care that's not indicated, or modifying therapeutic devices. Now that you have CPAP machines on Wi-Fi, insulin pumps, pacemakers--the more networking tools and networking hardware we put into those kinds of devices, the greater the risk that somebody has the opportunity to compromise them and engage in some really bad activity against patients. Uh, fortunately, that hasn't happened so far. Or if it has, it really hasn't gained traction in the media.
For institutions, the risks are going to be both legal and operational. Legally, HIPAA spells out penalties for data breaches; there's just a clear legal responsibility to prevent that from happening. But operationally, ransomware can shut down a hospital. Other attacks that compromise an EMR can shut down a hospital as they go through and verify that they still have a functional EMR, that nothing's been changed, and that they can be secure in the tools that they have. And forcing an institution back to paper for an extended period of time carries with it its own patient risks. We can provide anesthesia records on paper, but obviously the pharmacy is not going to be able to automatically check for drug interactions when people order that. All the safety features we've built into the electronic medical record and our health care workflows go away when we move back to paper. For the operating room, the other risk is just extended downtime. Again, if the H&P is in the EMR, just because you can provide an anesthetic--give the drugs and chart on paper--if you can't read the history and physical, if you can't read any consult notes, if the lab values aren't available, um, that's not an environment in which you probably want to be practicing medicine unless it's emergent. So there's risk there as well.
DR. STRIKER:
Well, how do we protect ourselves and our patients? If you don't mind, share a little bit of advice with our listeners to help them engage in some modicum of protection from these kinds of security risks.
DR. WECKSELL:
Sure. Most of the safeguards that we have are going to exist at the institutional level, which is where the HIPAA Security Rule is focused. So we need to have safeguards about who can access what data, when, and how. Um, HIPAA requires institutions to have security policies, and clinicians really need to be aware of that and understand why hospital IT and healthcare IT departments are doing what they're doing. There's a tendency to want to throw our weight around and say, I'm the doctor, I need this. But it's important to understand why the hospital IT department is often going to say no. We've got a lot of web-based resources that are valuable in everybody's day-to-day practice. You might have a call schedule that you purchased through an online vendor… And all of that stuff on the web is great, but it becomes a bad idea to have internet access on any computers that patients, their families, and visitors have physical access to. So we become limited. You can access those resources in your office, in the operating room. But in the holding area, you've got computers in the patient bays where the patients are sitting, where they're left alone. Those are places where we probably shouldn't have access to stuff, even if those are resources that are valuable for clinical care, even if they're valuable for the administrative running of the department and you're paying for them. There's a time and a place for that. So too, IT is going to want to mandate that different services--the EMR, the PACS, whatever--automatically log out after a certain amount of time that they're not being used. And even if it's more convenient for the clinicians to, you know, log in once and stay logged in forever, there are good reasons why IT is doing what they're doing in order to, at that institutional level, protect everybody.
DR. STRIKER:
Dr. Wecksell, thank you
so much. Appreciate it.
DR. WECKSELL:
You're welcome. Thank
you.
DR. STRIKER:
We wanted to learn how AI can be used to boost revenue in anesthesia practices, so we turned to Dr. Jonathan Tan.
Dr. Tan, when it comes
to perceptions of AI, I think many of us have some mix of hope and fear. Let's
focus on the hope. As the specialty grapples with various monetary challenges,
do you think AI represents some opportunity to win back some revenue?
DR. JONATHAN TAN:
Yeah. This is a great area of focus for our specialty. Now, if we think about how complex billing can be--the complexity of understanding how to properly bill, how to efficiently bill, and just the whole administrative structure around it--it's a great opportunity for disruption. And, you know, what we're seeing is artificial intelligence as a potential tool to be able to streamline these processes for practices and large groups. I think there's a lot of hope--a little bit of hype, but there's a lot of hope--with artificial intelligence to help address billing and revenue cycles.
DR. STRIKER:
Well, talk a little bit about how we can partner with AI to create higher reimbursement rates for the anesthesia community, how it might be used for coding, etc.
DR. TAN:
Yeah, I see two different sides, and it's a really wonderful approach if you think of, you know, two sides of the coin here. The first is ensuring and optimizing billing capture--artificial intelligence being used to scan our charts, to scan the cases, and to understand if we're optimizing our billing practice from the front end. And then on the back end, the entire administrative support around billing, the time it takes to bill, and the staffing required--there's a huge cost right there of real dollars for practices. If you could either reduce that overhead from an administrative standpoint or make it more efficient on the front end, those are two really great opportunities for artificial intelligence to make a huge impact on the finances of clinical practices.
DR. STRIKER:
Well, I imagine these tools exist now. And if they do, do you think the anesthesiology community at large is aware of them and the financial upside? Or, if they do exist currently, is it under the radar?
DR. TAN:
Yeah. You know, I think
it's a mixed bag. I think it depends who you're talking to. Um, certainly there
are early adopters that are out there in our specialty, just like early
adopters in any technology domain we're talking about. Those are the ones that
are going after it. They're looking for the best tools to help address their real-life
problems. And in this case, it's billing. And the great tool is artificial
intelligence.
On the flip side, I do think certainly that our field, our specialty of anesthesiology and all the subspecialties around it, really needs to come around, and we need to move really fast as a specialty to grow and learn about these tools. It's incredibly hard to do so. You know, I think anesthesiology in itself and the practice of medicine is busy enough. It's something that we've spent our entire careers and educational lives investing in. And artificial intelligence is another just large and amazing domain. And it's going to take some education for our specialists, those in training, those out in leadership in our field, and also collaborating with AI experts to bring them into our field to help us. So I think that while there are early adopters, the majority of the field still needs to learn about what AI is outside of things they might use on their phone. And then, of course, there's a lot of hype around AI as well--a lot of conversations, a lot of headlines, but not necessarily real proof out there yet for the value of artificial intelligence. And, you know, I think that has to be shown. And I think that takes time. It's going to take integration into practices to see the return on investment.
DR. STRIKER:
Now, two parts here. One, how can our listeners or anesthesiologists at large tap into this? What are some things that they can do to reap some of these benefits? And then secondly, as a specialty or as a collective, what are things that we should be doing to tap into this?
DR. TAN:
Yeah. You know, I think, to start with individuals, all of us really need to just read and learn and see what's out there--to see beyond some of the failures of artificial intelligence, or at least some of the fear of it, the lack of transparency, and the fear that AI might take our jobs. I think that's always been kind of in the ether of the conversation of AI and anesthesiology. You know, a lot of people talk about how, instead of artificial intelligence, AI actually might better stand for augmented intelligence, where we view artificial intelligence more as a partner in our field. And I think if we have that mindset as individuals, and see how it actually already helps us with our decision-making day to day in non-medicine fields, it allows us to kind of see the opportunities as individuals in the medicine field.
But then as a society, I think that's where the real opportunity is--as a larger group like the ASA, the ABA, and many other organizations. I'm on the board, for example, of the Society for Pediatric Anesthesia. And one of the things we're committed to in our new strategic plan, which we're going to release pretty shortly, is integrating artificial intelligence into every facet of our society, including the educational sessions. You know, wouldn't it be great if, when we're having a conversation about medication safety and all the errors that result from it--the classic conversations about medication safety problems--one of our speakers, or, you know, someone as a part of it, will always bring in: what are the technological tools that exist out there with artificial intelligence, on the cutting edge, that might help reduce our risks of medication errors in the operating room? So I think there's a huge opportunity for conferences and societies to build artificial intelligence into their educational curriculums and conversations. I think that's going to really move the needle forward for our field.
DR. STRIKER:
Excellent. Well, Dr.
Tan, thanks for joining us.
DR. TAN:
Thanks for having me, Dr. Striker. It's a pleasure.
DR. STRIKER:
Finally, because AI is
an ever-shifting landscape of opportunities and challenges, we asked Dr. Hannah
Lonsdale to share her thoughts on future trends.
Dr. Lonsdale, what does
the future look like? Let's start with what we can look forward to. Is there
something you expect AI to help us tackle or any potential future solutions
you're particularly excited about?
DR. HANNAH LONSDALE:
Hi, thanks very much for the question. It's really useful to look forward to what is going to be beneficial about AI, because I think at the moment we're in this, uh, what's known as a trough of disillusionment from the Gartner hype cycle, which is where there's been a lot of publicity around AI, people have heard a lot about it, they've got very excited, and then there's nothing. For most people, in their clinical practice they will not see a lot of AI clinical applications yet. And AI is in this weird place between, oh, look how silly AI is, it doesn't really do anything useful, and the flip side of it, some people thinking, oh my God, AI is coming for my soul. Um, so it's this really weird balance at the moment.
In terms of what I'm excited to see in the near future: large language models have been game changers, so things like ChatGPT, um, are going to be able to give us assistance, for instance, with patient communication. I would love to spend as much time as I possibly could with my patients in the pre-op assessment phase, but with the increase in clinical demands, that's pretty challenging. And so I'm excited about seeing curated large language models that are able to answer patient questions, like chatbots, at all times of day, whenever the patient has a question or if they have an hour's worth of questions--a large language model that can really help with that. They can also summarize past medical histories, and so that will streamline our pre-operative assessments. They can give us literature summaries--that's already out there. So if I have a patient with a rare complex condition, I can go online and look for an AI-guided literature summary, and also AI-augmented teaching. Then, more clinically, AI is going to come into clinical decision support: those tools that are going to provide us with knowledge and patient-specific information that is more intelligently filtered and presented at the right time, so that we can enhance our decision making and also hopefully improve patient outcomes. And clinical decision support gives us suggestions and summaries rather than mandates. A Pew study recently showed that patients are very open to doctors using clinical decision support from AI, but they are uncomfortable with AI replacing the physician's decision making. I'm also uncomfortable with that.
It might give us personalized care. So there's the idea of the human digital twin, where we take all the information we have about a patient and construct a model of that patient--that includes genomic data, laboratory data, past medical history. And it means that we can get increasingly personalized treatment plans for patients: which antiemetic might work best, or whether we should avoid TIVA because there'll be a longer wake-up than there would be with inhalational induction, and many other decisions like that. I foresee that AI will help us forecast complications and patient outcomes, enabling us to better direct our pre-operative optimization. And also it may, for instance, help us decide appropriateness for ambulatory surgery centers by screening patients much faster than any human can. It's also coming in for our workflow and patient flow enhancement. What medical center doesn't need more bed capacity or more OR capacity? Using AI to smart-schedule makes the best use of the resources we have. And in time, we will see more intraoperative applications, although they're the toughest, because they are the highest risk and they need the most testing. Things like automated closed-loop IV anesthesia are in development, and I'm really excited to see where that will take us.
DR. STRIKER:
What concerns do you harbor when it comes to how AI might impact the future of healthcare? I know we're always concerned about the potential negative consequences of AI, but what do you see on the horizon when it comes to negative issues?
DR. LONSDALE:
Absolutely. I mean, we have so many real-life examples from, for instance, Silicon Valley, where the mantra of move fast and break stuff causes all kinds of problems. But as doctors, we can't do that, because move fast and break stuff means harming people, which is absolutely not what we want.
One of the major concerns is data, and bias in data. AI doesn't add new bias to data, but the data we use to train AI models is taken from the real world, and so real-life bias is baked into a model unless we take steps to take that out. And that may perpetuate inequalities in care that currently exist, for example, for marginalized groups. Another big concern is clinician well-being. Some of these new tools may save us time or give us more information. But as a specialty, we need to protect ourselves: to ensure that we aren't forced to accept the decisions of these tools without clinical scrutiny, that we don't lose our clinical autonomy, and that any savings we make in time or cognitive load aren't simply repurposed to further stretch us beyond the current stressors of workforce shortages and ever-growing patient complexity.
Large language models
have their own set of concerns. They can present incorrect information, which
is known as hallucinations. The internet has always contained misinformation,
and I think we regularly see this in our patient conversations, that a patient
has read something on the internet that is perhaps not well screened or written
by a person who is very knowledgeable. And all of that data from the internet
has been used to train publicly available large language models. So it then
becomes baked into the answers, and they may not give completely accurate
answers without curation by knowledgeable clinicians.
Another concern is that a study in radiologists examined the diagnostic performance of doctors who received either correct or incorrect support from AI. Giving these radiologists accurate suggestions drove a modest increase in their correct diagnoses. However, when they were given incorrect suggestions from the AI, their decision-making accuracy decreased from 78% to 28%. So it appears that putting something in text adds an element of legitimacy that may be unwarranted, and we really need to be careful with that.
There's a lot of snake oil out there--a lot of solutions to problems that don't exist. And we can mitigate that by focusing on problem solving and not solution selling, which leads to commissioning of ineffective or untested applications by people who don't necessarily understand the nuts and bolts and the limitations of AI--that is, those many examples from Silicon Valley. And it's my feeling that premature implementation of tools that are not appropriate for what we need will actually harm trust, and ultimately delay implementation of the right AI-based tools. My advice to listeners who are asked to look at these new tools is to treat a clinical AI tool as you would a new drug or a new anesthetic technique. Look at the evidence. Ask for large-scale randomized clinical trials. At the moment, these are few and far between, although there are many excellent physician scientists working to change that. And it will happen.
And my final concern comes from academics. It's becoming increasingly common to see fabricated articles and hallucinated citations in submissions to journals, and without the appropriate skills to detect those things, they may be incorporated into the academic literature. And that's very concerning as well.
DR. STRIKER:
Let's talk about anesthesiology specifically here. I can't imagine our jobs will not look different ten years, even 30 years, down the road for a number of reasons, but particularly because of AI. How do you see this specialty evolving?
DR. LONSDALE:
I think we have to think bigger picture even than just our specialty here. The Pew study I mentioned earlier, which suggested patients are open to clinical decision support, also suggested that they are open to partially autonomous surgery. And so a bigger picture is, how will that change our practice? Even today, everyone loves a nice long robotic case, and partially autonomous surgery, when it is introduced, is not going to be fast either, so we will need to adapt to that big picture. I think it's going to be a slow evolution, though, because compelling evidence of effectiveness needs to exist before we can adopt these tools in a widespread fashion. It's not going to be a sudden and dramatic change. Will things look very different in 30 years? Yes, without a doubt. But every sci-fi novel, every distant-future prediction I've ever read looks kind of adorably quaint when viewed from the present time, so it's hard to see where we're going. A lot of interesting AI tools and developments have been explored in detail by other guests on this podcast. I expect to see more clinical decision support; real-time dynamic adverse event prediction; machine vision, so cameras, and perhaps even autonomous robotic support for things like ultrasound-guided regional anesthesia, invasive lines, intubation, and bronchoscopy. But more importantly, I think we're going to see a reduction in those low-level repetitive tasks. And that means more free time to focus on complex tasks, things that require expert-level cognition. And then we can focus on education of the next generation of anesthesiologists, and on keeping the human in the process of caring for each patient rather than replacing the human clinician.
DR. STRIKER:
Finally, when it gets
right down to it, our top priority is always patient safety. And so distilling
all this down to its essence, do you believe AI will benefit patients?
DR. LONSDALE:
Absolutely. And the reason for that is there is still scope for improvement in safety. A 2020 study showed a pooled rate of preventable in-hospital mortality of over 3%, and reducing that is just the tip of the iceberg. By personalizing care and reducing the low-level tasks, allowing physicians to do what they do best--focus on the patient--we're going to get more right, first time. We're all currently overwhelmed by data: academic literature, vital signs, laboratory results, complex past medical histories, genomics, scheduling challenges. And somewhere in there is a patient who might be scared, or have questions, or be able to tell you something vital that you can't glean from that chart review. Any human brain--yes, including those of attending anesthesiologists--can only retain 3 to 5 chunks of information at any given time. And so we can use AI-based clinical decision support to reduce the conflicting demands on our cognitive processes and help us dedicate more time to that human connection--for our teams, our trainees, and most importantly, for our patients.
DR. STRIKER:
Dr. Lonsdale, thank you
so much for joining us.
DR. LONSDALE:
Thank you very much.
DR. STRIKER:
And thanks to all of you, our listeners, for tuning in to this special short series on AI. If you missed the first AI episode, you can find it--it's episode 157--on ASA's website or your favorite podcast platform, wherever you normally get Central Line. We appreciate all the various members of ASA's Committee on Informatics and Information Technology sharing their expertise, and we certainly look forward to seeing you here again soon.
(SOUNDBITE OF MUSIC)
DR. JEFF GREENE:
Hi, this is Dr. Jeff Greene with the ASA Patient Safety Editorial Board. Medical malpractice statistics are sobering: 1 in 3 providers will be sued over the course of their career. Anesthesiologists experience an annual rate of paid malpractice claims of 11.7 per 1,000 physician-years, with 10% of paid malpractice claims reaching over $1 million. There are increasing anesthesiology claims for MAC, NORA, and office-based surgery procedures. Anesthesiologists can reduce exposure to these increasing risks for medical malpractice claims by, one, understanding the risks of NORA and MAC anesthetics and educating stakeholders; two, recognizing and reducing the incidence of respiratory events in NORA and MAC anesthetics; three, monitoring and detecting complications in the postoperative setting before rescue is required; and finally, by thoroughly examining and documenting dental disease prior to anesthesia. With these safety tips in mind, it is hoped that anesthesiologists can reduce the incidence and severity of malpractice claims in the future.
VOICE OVER:
For more patient safety
content, visit asahq.org/patientsafety.
Get to know your new knowledge
assistant, BeaconBot, just for ASA members with a focus on anesthesiology
content and information not widely or publicly available. Ask a question today
at asahq.org/beaconbot.
Subscribe to Central Line today wherever you get your podcasts, or visit asahq.org/podcasts for more.