Many health IT decision-makers and clinicians are cynical about the potential for artificial intelligence solutions to improve healthcare, particularly in the wake of high-profile setbacks and continued frustration at care recommendations that seem to come from a black box of technological machinations with little transparency or context.
It seems to boil down to a matter of trust: AI developers need to prove the technology’s promise to healthcare end users, many of whom are still (often rightly) skeptical about algorithms’ shortcomings.
Industry groups, whether they’re focused on clinical informatics or on consumer technology, have been keenly focused on this idea recently.
Healthcare IT News decided to dig into the trust factor of AI by interviewing Punit Soni, CEO of Suki, a vendor of AI-powered voice solutions for clinicians.
Soni discusses how transparency is paramount with AI, how iteration is key to getting customers on board, how AI technologies should learn from their customers, how voice is one area where AI can have an immediate impact, and how health system leaders need to be brought on board.
Q: You have said that transparency is paramount in healthcare AI, and that providers will be far less hesitant to adopt AI if they are confident in the results. How can the industry achieve the necessary transparency to assure providers, and what does this transparency look like?
A: There are a few different aspects when it comes to transparency in healthcare AI. First, when it comes to clinical decision-making, there needs to be “explainability of outputs.”
Physicians will be rightfully wary of a black box providing recommendations regarding important treatment decisions, but if the AI can show why it is advising a certain approach, and with what confidence it’s making a recommendation, that is much more helpful.
Explainability, in that sense, allows a complementary relationship between the provider and the technology, which is what we should aim for. After doctors undergo years of extremely specialized training, and further build on their education with real-world experience, it’s neither wise nor realistic to ask them to accept recommendations from AI without any context.
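To make "explainability of outputs" concrete, here is a minimal sketch of how a recommendation might be packaged with its confidence and supporting evidence before being shown to a clinician. The structure and names are hypothetical, not any vendor's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """One piece of supporting context surfaced to the clinician."""
    source: str  # e.g., "lab result", "guideline", "prior note"
    detail: str

@dataclass
class Recommendation:
    """An AI suggestion packaged with its rationale and confidence."""
    suggestion: str
    confidence: float  # model's estimated probability, 0.0 to 1.0
    evidence: list[Evidence] = field(default_factory=list)

    def render(self) -> str:
        lines = [f"Suggested: {self.suggestion} (confidence {self.confidence:.0%})"]
        lines += [f"  - [{e.source}] {e.detail}" for e in self.evidence]
        return "\n".join(lines)

# Hypothetical example output for a clinician to review.
rec = Recommendation(
    suggestion="Order HbA1c test",
    confidence=0.87,
    evidence=[
        Evidence("lab result", "Fasting glucose 132 mg/dL"),
        Evidence("guideline", "ADA screening criteria met"),
    ],
)
print(rec.render())
```

The point is that the clinician sees not just a suggestion, but how confident the model is and what context the suggestion rests on.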
Another aspect of transparency is clarity around the data used to train the program. If the data used to train the AI isn’t representative of how the technology will be used, that’s a non-starter, and that needs to be addressed head-on.
Having honest conversations about this also provides opportunities to improve the program: Data from customers should be integrated into the program whenever possible so the AI can account for customer preferences, treatment policies and other unique or specialized needs.
Additionally, many concerns about AI center on how the technology reinforces bias within the healthcare system. There are two entry points at which bias can seep into an AI tool. The first is in the design itself: The biases of the design team imprint upon how the system makes decisions and learns from its experiences. So bringing together diverse design teams is key.
The second is in the data used to train the system. In healthcare, many of the common data inputs – claims data, clinical trial data – reflect biases in how care has been delivered in the past. And while we as technologists cannot change the history of inequality in healthcare, we can solve for it by assembling more diverse, representative and accurate datasets to inform the AI.
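As a rough illustration of what checking a dataset for representativeness might involve – a toy audit only; real fairness reviews are far more involved – one could compare the demographic mix of the training data against the target patient population:

```python
from collections import Counter

def representation_gap(train_labels, population_shares):
    """Compare a training set's demographic mix against the target
    population; large negative gaps flag under-represented groups.
    Illustrative check only."""
    counts = Counter(train_labels)
    total = sum(counts.values())
    return {
        group: round(counts.get(group, 0) / total - target, 3)
        for group, target in population_shares.items()
    }

# Hypothetical demographic groups A, B and C.
train = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
population = {"A": 0.60, "B": 0.25, "C": 0.15}
print(representation_gap(train, population))
# {'A': 0.1, 'B': 0.0, 'C': -0.1} -> group C is under-represented
```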
Lastly, there needs to be transparency around what an AI program can achieve with confidence now, versus what its potential will be down the road. Expectations should be set for specific use cases, and there can be iterative improvement from where we are today.
Health system leaders must understand that AI is not like other typical software deployments. The ROI is significant, but success requires thoughtful engagement, a lot of data orientation and a commitment to a stepping-stone approach for development and deployment. In fact, a key reason for failure in deploying AI technology in health systems is the mismatch in expectations of developers versus sponsors.
Q: Iteration is key for getting customers on board and getting AI technology right, you’ve argued. What do you mean by successful iteration in AI?
A: Successful iteration requires starting with an intense focus on where you know you can provide value and building from there. A successful AI deployment doesn’t have to be a moonshot. It doesn’t have to cure cancer or take on some other epic challenge. It can solve smaller, simpler problems that improve efficiency and help a health system function better.
It’s far preferable to identify small, specific challenges and successfully address them, slowly expanding from there, than to promise the world and fail to deliver. That latter approach undermines trust and makes it harder to build momentum for AI in healthcare as a whole.
AI’s successful deployment in reducing administrative burden is an excellent example. Electronic health records have significant benefits, but they have historically been very frustrating for doctors: They often are not user-friendly, and properly filling in their required information takes a lot of time.
This turns physicians, one of the most highly trained and important groups of professionals, into glorified data-entry clerks. It wastes their valuable working time that could and should be spent with patients.
So companies have developed voice-enabled AI systems that allow physicians to dictate their notes into the EHR with a high degree of accuracy. This alone was a huge improvement.
We’ve then seen such AI solutions develop iteratively, more fully integrating with the EHR, progressively taking more and more burden from the physician, such as helping with diagnosis coding or pulling information from the EHR by using voice commands.
Now these tools, which started as simple dictation software, can handle a variety of more advanced commands and processes. And they’ve built tremendous trust along the way.
But the wrongheaded “moonshot” approach can make its way even into areas like reducing administrative burden. For example, the development of an “ambient” AI assistant that can produce all necessary clinical documentation, order prescriptions and complete other administrative tasks just by listening in on the physician-patient interaction is a worthy goal. But the tech isn’t there yet.
Q: What do you mean when you say that AI solutions should learn from their customers?
A: Technologists’ success in disrupting and transforming various parts of the economy makes them think, not entirely unreasonably, that they can transform healthcare the same way. But healthcare has a totally different ethos than other industries – a “move fast and break things” attitude simply doesn’t work when patient care is on the line.
Rather than thinking that we know best, technologists need to approach the problems of healthcare with a sense of humility, working collaboratively with clinicians and health system administrators to develop solutions that work for them and with them.
We may have innovative perspectives and ideas that can transform healthcare, but we can’t do it alone. For example, leaving doctors out of the development process is what made EHRs so unwieldy in the first place.
For AI solutions, learning from the customer means prioritizing their input and feedback, working constantly to adapt the product to suit their specific needs. It means realizing that a “one size fits all” approach often won’t work, and we must figure out how to efficiently design programs that can succeed in a wide range of circumstances and workflows.
And that collaboration should happen from the start; if you go too far in the process without really incorporating the thoughts of the end user, it’s a lot harder to adapt and make changes where needed.
One of the benefits of AI and machine learning technologies is that they are capable of learning and adapting with each user interaction. This both helps improve the outputs they generate over time and allows for a degree of personalization for each user.
The more feedback AI products receive from users through regular interactions, the better they can become at serving the unique needs of a particular practice, system or provider.
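As a rough sketch of that feedback loop (hypothetical structure, not any particular product's internals), a dictation tool might log each correction a clinician makes and prefer that clinician's chosen term the next time:

```python
from collections import defaultdict

class UserAdaptation:
    """Tracks per-user corrections so future transcripts prefer each
    clinician's own terminology. Illustrative only."""

    def __init__(self):
        # user -> heard phrase -> corrected phrase -> count
        self.corrections = defaultdict(
            lambda: defaultdict(lambda: defaultdict(int))
        )

    def record_correction(self, user: str, heard: str, corrected: str) -> None:
        self.corrections[user][heard.lower()][corrected] += 1

    def apply(self, user: str, phrase: str) -> str:
        candidates = self.corrections[user].get(phrase.lower())
        if not candidates:
            return phrase
        # Use the correction this user has chosen most often.
        return max(candidates, key=candidates.get)

adapt = UserAdaptation()
adapt.record_correction("dr_lee", "a fib", "atrial fibrillation")
print(adapt.apply("dr_lee", "A fib"))  # -> atrial fibrillation
```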
Q: It seems like voice technology is one area of healthcare where AI can have a serious impact. What needs to happen in the realm of voice for AI to take off here?
A: Voice technology is actually already having a big impact in healthcare. When it comes to reducing the physician administrative burden, voice AI is a shining success. Now, the challenge is effectively expanding the use cases to improve other processes in healthcare using this proven technology.
The underpinnings of voice technology – speech recognition, natural language processing (NLP), intent extractors, understanding of medical terminology – have come a long way. The technology has been battle-tested, and now our focus needs to be identifying other challenges beyond documentation that voice is capable of solving.
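For a sense of what the "intent extractor" piece of that pipeline does, here is a toy sketch; production systems use trained NLP models rather than hand-written patterns, and the intent names here are invented for illustration:

```python
import re

# Toy utterance-to-intent patterns; real systems use trained NLP
# models, and these intent names are hypothetical.
INTENT_PATTERNS = {
    "show_labs": re.compile(r"\b(show|pull up|get)\b.*\blabs?\b", re.I),
    "add_diagnosis": re.compile(r"\badd\b.*\bdiagnosis\b", re.I),
    "start_dictation": re.compile(r"\b(start|begin)\b.*\b(note|dictation)\b", re.I),
}

def extract_intent(utterance: str) -> str:
    """Return the first matching intent, falling back to free-form dictation."""
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(utterance):
            return intent
    return "dictate_freeform"

print(extract_intent("Pull up the latest labs for this patient"))  # show_labs
print(extract_intent("Add a diagnosis of type 2 diabetes"))        # add_diagnosis
```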
Additionally, vendors can fine-tune these applications as they are deployed so that new voice users have as smooth an experience as you or I have interacting with, say, a smart speaker or, for that matter, as physicians have had with digital clinical assistants.
And that’s where learning from the customer comes in again. I and others may think we have fantastic ideas for where voice can go next, but the best approach is to actually speak with these potential users so we can really tailor solutions to fit their needs.
But generally, I think, voice will become a ubiquitous interface for back-end healthcare processes, as well as a means of patient intake and a way for patients to track their own symptoms and interact with their care teams outside of the clinic.
Think of how commonplace digital assistants like Siri and Alexa are in the consumer space. I believe the same will happen in healthcare. The last year saw an explosion in digital health investment and adoption born of necessity. I think using voice to improve that experience and expand access to these applications is the next phase.
Q: C-suite buy-in is a common theme for big IT investments. How does it apply to AI?
A: Healthcare leaders can’t just buy in on the idea. They need to buy in on the process. We need to educate the C-suite so they understand the value, and we also need to ensure that they have the proper collaborative mindset to maximize the ROI.
With AI, a purpose-driven engagement between vendors and customers yields more effective results. Even for easily scalable software-as-a-service solutions, ensuring that individual customers have the depth of understanding they need to successfully deploy the technology makes a huge difference.
For example, choosing pioneering users to try the solution is key. These early adopters should be open-minded about new ways to get things done and be able to commit to a certain level of use in order to provide informed feedback on the solution, which will help it improve over the long term.
Healthcare leaders and vendors must work together to identify the right pioneering users. Otherwise, they may end up in a situation where the feedback is not particularly meaningful, and the value of the solution is diluted.
And this gets back to the issue of iterative improvement. Healthcare leaders need to know that the solution provides value now. But they also must understand how much more value can be unlocked by working with a vendor to improve the product’s capabilities over time.
That’s how the big transformations can occur. That has to be a deliberate approach, and it’s one that builds trust. It all comes back to transparency and offering the end user a real sense of agency in the process.
Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.