November 19, 2012
Documentation in Motion
By Selena Chavis
For The Record
Vol. 24 No. 21 P. 18
Advances in mobile speech recognition expand dictation and documentation options for clinicians.
A growing body of healthcare industry research suggests a direct correlation between the amount of clinician time spent at a patient’s bedside and the quality of care provided. In essence, healthcare professionals are realizing that the more care is provided face to face, the greater the probability of achieving the desired outcomes and efficiencies.
Point-of-care technologies that allow quick mobile access to patient information are becoming an integral part of hospital HIT strategies, which in turn is leading to increased efforts to advance mobile speech recognition technology to support these goals, according to Brian Phelps, cofounder and CEO of Montrue Technologies. In fact, according to research firm RNCOS, physician adoption of mobile devices is expected to reach more than 80% this year, significantly expanding the opportunity to address mobile documentation needs through speech recognition.
“For me, as a physician, the patient’s experience and story is the best way to understand the patient condition,” Phelps says, adding that having an avenue to dictate important clinical narrative into mobile devices at the patient bedside enhances patient-clinician interactions, accuracy of care, and efficiency. “We see speech as integral to mobile delivery going forward.”
Yaniv Dagan, CEO of Integrated Document Solutions, points out that economic pressures also are spurring demand for more “documentation on the go” by physicians. “Because of the economic changes, they have to see more patients,” he notes, adding that when documentation can be done while interacting with a patient, it significantly frees time for more patient visits. “Every physician’s dream is to be able to dictate into a mobile device … such that the output will automatically be inserted into an EHR to meet meaningful use.”
And while mobile speech and EHR technology may not yet have fully reached that level of integration, there have been notable advances in the development and uptake of the technology.
For example, as the winner of the 2012 Mobile Clinician Voice Challenge, Montrue Technologies sought to increase momentum with mobile speech recognition by integrating the technology into its SparrowEDIS, an emergency department information system for the iPad. By incorporating Nuance’s voice recognition technology into the application, doctors and nurses can dictate the clinical narrative—or patient story—at the point of care. An added layer of medical voice recognition allows clinicians to verify potential drug safety interaction issues, create prescription orders, and share discharge instructions at the bedside.
Pointing out that the average emergency department visit runs four hours with only 10 minutes of that time spent face to face with a physician, Jonathon Dreyer, senior manager of mobile solutions marketing at Nuance, says the ability to dictate directly into a mobile device increases interaction. “Patients feel listened to, and doctors get to spend more time with patients, which is what we want,” he notes.
While there are many positive aspects to the deployment and use of mobile speech recognition technologies, some industry professionals believe on-the-go documentation practices pose specific workflow problems. “We think voice recognition is really appropriate for physicians doing dictation in a private office, [but] if you are on a busy floor of a hospital, the last thing you want to do is add to the noise,” says Rob Campbell, CEO of Voalte.
To combat this problem, Voalte has developed an integrated solution that addresses the need for wireless communication through touch-based smartphones and iPads, delivering alerts and other appropriate communication via text messages at the point of care.
In January, Nuance took the core assets of its speech recognition engine, lightened them, and made them available to developers through its 360 Development platform. An enterprise-class development ecosystem that allows independent software vendors and internal development teams at provider and payer organizations to leverage the technology, the platform currently has 200 active developers in the United States and 300 globally, according to Dreyer.
Once accepted to the platform, developers gain access to the company’s cloud-based medical speech recognition application for mobile devices along with an application for analyzing free text and extracting information and a management console for account and user management, licensing, usage, and activity reporting.
According to Dreyer, company research revealed that physicians were looking for alternative methods of data entry on mobile devices. “Physicians are not going to try [to] type into a mobile device,” he explains, adding that the process is too cumbersome and time consuming. “Speech recognition is critical to healthcare from an efficiency and time-sensitive approach.”
As part of the 2012 Mobile Clinician Voice Challenge, developers accessed the 360 Development platform to embed Nuance’s mobile speech recognition services into their healthcare applications. Dreyer notes that the company wanted to “see what developers could do to take integration to the next level” as well as draw more developers into the program to drive uptake and adoption of mobile speech recognition.
Phelps credits much of Montrue’s success in winning the challenge to the company’s partnership with Apple. “We adopted a clean Apple interface,” he says. “We developed a relationship with Apple halfway through the process, and they helped us understand how to best use the space of the iPad. Following their guidelines made a much better product.”
Other innovators recognized as part of the competition include healthcare application developer Remedy Systems and WholeSlide, Inc. Remedy Systems embedded speech recognition into its Deep Query Engine app, essentially creating a medical personal assistant. The app enables physicians to interact with and organize their practice via voice commands.
A technology company that creates novel tools for image viewing and analysis, WholeSlide integrated speech recognition into its educational program app, enabling clinicians and researchers to voice annotate and share regions of the app’s hosted histology, hematology, and pathology data with colleagues.
“It has been exciting to see what developers have done with these technologies,” Dreyer says.
While many industry professionals would not deny the notable advancements to speech recognition accuracy over the past decade, the fact that the technology is not foolproof still causes concern for some, especially in a mobile setting.
When working in a stationary environment, the accuracy of speech recognition tends to be high, Dagan says. On the flip side, when speaking into devices on the go, the environment becomes dynamic and many other variables, such as background noise, come into play. Dagan also points out that heavy narrative tends to be less effective with speech recognition technology as a rule, and some specialties will be better positioned for mobile use than others.
Campbell points to three potential hurdles to the use of mobile speech recognition technology:
• Because speech recognition relies on the spoken word, the process needs to be conducted in private to avoid any potential HIPAA fallout.
• No matter how hard vendors work, the technology is still not 100% accurate. Campbell notes that 10% to 15% of the time, speech recognition will not accomplish what a clinician needs.
• If speech recognition is being used as a call to action, the spoken voice itself makes the process inherently disruptive.
“Anytime there are disruptions in healthcare, more errors occur,” Campbell says. “We think that a smartphone that presents information via text is more effective.”
According to Dagan, user uptake and demand for mobile applications—including speech recognition—are driving development. He notes that despite the challenges presented with the use of the technology, Integrated Document Solutions is actively involved with several larger organizations to explore viability and opportunity.
Phelps suggests that the use of mobile speech technology is a natural progression for most physicians. “Doctors have been using speech recognition for a long time. They are very comfortable with it,” he explains. “I can’t imagine why you would have mobile documentation without speech recognition.”
From an evolution standpoint, Dreyer notes that the first wave of mobile speech applications was essentially speech to text—software offered basic hands-free navigation and converted voice to text.
Recently, Nuance released a more advanced offering that adds more voice control and conversational elements to the technology. “The user can now interact with the device by voice,” Dreyer notes. Physicians can query the device by asking “What’s my schedule for today?” or “What are Mary Smith’s vitals?” The technology then provides a targeted response to the user.
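Conversational queries like these are typically handled by mapping a recognized utterance to an intent and then fetching the matching data. The sketch below illustrates that pattern in miniature; the intents, patterns, and sample records are invented for illustration and do not reflect Nuance’s actual implementation.

```python
import re

# Hypothetical sample data standing in for an EHR or scheduling back end.
SCHEDULE = ["9:00 rounds", "11:30 clinic"]
VITALS = {"mary smith": {"bp": "118/76", "pulse": 72}}

# Intent matching: map utterance patterns to handler functions.
INTENTS = [
    (re.compile(r"what'?s my schedule", re.I),
     lambda m: "; ".join(SCHEDULE)),
    (re.compile(r"what are (?P<name>[\w ]+?)'?s vitals", re.I),
     lambda m: str(VITALS.get(m.group("name").lower(), "no record"))),
]

def answer(utterance: str) -> str:
    """Return a targeted response for a recognized voice query."""
    for pattern, handler in INTENTS:
        match = pattern.search(utterance)
        if match:
            return handler(match)
    return "Sorry, I didn't understand that."
```

A production system would replace the regexes with a trained language-understanding model, but the dispatch structure—utterance in, intent matched, targeted response out—is the same.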
As the market moves forward, Phelps believes the vendor market will continue to refine the existing technology. “Right now, there are certain elements of documentation needs that are a good fit for speech recognition,” he says. “In some areas, text entered is better.”
One focus area for Montrue is to move deeper into effective navigation via speech that better directs documentation and patient flow. On a basic level, a physician could command the technology to open a chart. On a deeper level, questions could be asked for guidance in patient care activity such as “Is there anything else I need to consider?”
According to Dreyer, as the vendor community continues to refine existing applications, hospitals can expect better integration between speech recognition engines and EHRs. Also, current advances in clinical language understanding will expand capabilities for extracting needed data elements. The technology leverages advances in natural language processing, medical artificial intelligence, and speech recognition.
Dreyer says at one point the market was moving too fast, introducing a lot of apps that were seemingly useful, but physicians had no time to differentiate between products. Now, he says the industry has a better handle on user needs. “This stuff is just evolving so rapidly,” he says. “With all the developers involved … that’s really driving this forward.”
— Selena Chavis is a Florida-based freelance journalist whose writing appears regularly in various trade and consumer publications covering everything from corporate and managerial topics to healthcare and travel.
Making Technology Smarter
Following the success of its Mobile Clinician Voice Challenge, Nuance recently kicked off an Understanding Healthcare Challenge. The contest asks developers to create strategies for integrating the company’s 360 Understanding Services, a clinical language understanding software development kit, into healthcare applications.
The initiative’s goal is to identify the best ways to use the technology to impact patient care and provider efficiencies. In contrast to the Mobile Clinician Voice Challenge, Jonathon Dreyer, senior manager of mobile solutions marketing at Nuance, says this competition is more about “sharing ideas surrounding a complex topic.”
A secure portal and code are provided to developers, enabling them to embed clinical language understanding in their mobile, Web-based, or desktop healthcare applications. Clinical language understanding enables the extraction of key data from dictated or free-text patient notes, such as medications, allergies, and vitals. This information is crucial to meeting future regulatory initiatives such as meaningful use.
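As a rough illustration of what such extraction involves, the sketch below pulls a few data elements out of a free-text note with simple pattern matching. The note text, function name, and patterns are invented; real clinical language understanding relies on natural language processing and curated medical vocabularies, not regexes.

```python
import re

def extract_clinical_data(note: str) -> dict:
    """Pull medication, allergy, and blood pressure mentions from free text.

    Regexes are only a stand-in here; production systems use NLP models
    and medical terminologies to find these concepts reliably."""
    return {
        "medications": re.findall(r"taking (\w+)", note, re.I),
        "allergies": re.findall(r"allergic to (\w+)", note, re.I),
        "blood_pressure": re.findall(r"\b(\d{2,3}/\d{2,3})\b", note),
    }

note = ("Patient is taking lisinopril, allergic to penicillin. "
        "Blood pressure 132/84 on arrival.")
print(extract_clinical_data(note))
```

The output—structured medications, allergies, and vitals—is exactly the kind of discrete data that meaningful use reporting requires from what began as narrative dictation.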
“We wanted to give developers the opportunity to put thought into how to improve healthcare through this technology,” Dreyer says.