
July 2013

CPOE Anytime, Anywhere
By Selena Chavis
For The Record
Vol. 25 No. 10 P. 10

Frustrated with productivity losses associated with EHR use, many physicians are keeping close tabs on advancements in mobile, speech-enabled computerized physician order entry.

Physicians are striving for workflow efficiency as they balance patient care with an ever-evolving HIT environment.

Early on, many industry experts pointed to EHRs as catalysts for creating more efficient workflow for physicians. And while EHRs provide a necessary and tactical foundation for the health care industry’s goals for better care delivery and lower costs, a growing body of research suggests they haven’t necessarily lived up to physician productivity expectations.

A 2012 survey by the American College of Physicians and AmericanEHR Partners revealed that EHR satisfaction and physician usability ratings have decreased since 2010, with 34% indicating that they were very dissatisfied with EHRs’ ability to decrease workload. The survey results also indicated that nearly 40% would not recommend EHRs to a colleague because of productivity challenges, difficult software interfaces, and a lack of improvement in patient care.

Those findings came on the heels of a 2011 Medical Group Management Association survey that found that more than 30% of responding physicians felt their productivity decreased with EHR use.

“Many physicians are very concerned about loss of productivity,” notes Brian Zimmerman, MD, an emergency department physician and the EPIC physician champion for Premier Health Miami Valley Hospital in Dayton, Ohio. “Providers are all about efficiency, and if anything takes additional time away from my day, it can be frustrating.”

As providers look for the best possible way to meet the computerized physician order entry (CPOE) requirements of meaningful use, many are considering the potential of speech recognition advances to help overcome productivity obstacles—and vendors are responding.

Industry-leading speech recognition companies M*Modal and Nuance both have innovative products in the works that may advance the concept of speech-enabled CPOE. “What we are doing is [introducing an application] to allow physicians to follow their natural workflow, and the application takes care of data entry,” explains Keith Belton, senior director of product marketing at Nuance, regarding the company’s health care virtual assistant project affectionately dubbed Florence. “One of the real frustrations is that CPOE essentially turns physicians into data-entry personnel.”

According to M*Modal Chief Medical Information Officer Jon Handler, the company is teaming with Utah-based Intermountain Healthcare to prototype and develop a mobile app designed to address speech-enabled CPOE. “This provides the opportunity to meet clinicians where they are,” he says, adding that the mobile app is designed to not only tackle workflow issues but also help eliminate errors.

Leveraging Existing Opportunities
While many physicians have high hopes as to where speech-enabled CPOE advances may lead, Zimmerman says his department found a way to take advantage of speech recognition technology through existing interfaces. “We learned very early how to leverage speech recognition,” he says. “Normally, if you wanted to place a lab order, you would have to search for the order. We found you could use voice commands and have the order automatically placed.”

Miami Valley Hospital had previously deployed front-end speech recognition through Nuance to reduce or eliminate the need for transcription. While the initial purpose for implementation was related to dictation workflows, physicians found that the system could be expanded to work for orders as well. “We started creating all of these voice commands,” Zimmerman recalls. “We ended up with something like 2,000 voice commands within the EHR.”
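In concrete terms, the pattern Zimmerman describes amounts to a lookup from recognized phrases to prebuilt orders. The Python sketch below illustrates the idea under that simple assumption; the phrases, order names, and function are hypothetical examples, not Miami Valley Hospital’s actual command set.

    # Hypothetical mapping of spoken phrases to prebuilt orders.
    VOICE_COMMANDS = {
        "order cbc": {"order": "Complete Blood Count", "category": "lab"},
        "order basic metabolic panel": {"order": "Basic Metabolic Panel", "category": "lab"},
        "order chest x-ray": {"order": "Chest X-Ray, 2 views", "category": "imaging"},
    }

    def handle_voice_command(phrase):
        """Return the order mapped to a recognized phrase, or None if unmatched."""
        return VOICE_COMMANDS.get(phrase.strip().lower())

    command = handle_voice_command("Order CBC")
    if command:
        print(f"Placing {command['category']} order: {command['order']}")
    else:
        print("No matching command; fall back to searching for the order manually.")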

The resulting productivity gains were dramatic. “If you can voice automate those processes, you can save a tremendous amount of time,” says Zimmerman, who believes all EHR vendors need to open the door to speech-enabled capabilities. “I’m excited as a physician to actually see this taking place. Many physicians want an efficient and better way to interact with EHRs.”

Advancing Speech-Enabled CPOE
As EHR vendors consider how to support integration with third-party applications such as speech recognition, speech vendors are trying to push the speech-enabled CPOE concept to its full potential. Nuance is responding through its Florence virtual assistant concept, creating a more intuitive way for physicians to coordinate care and improve efficiency through dialogue-driven intelligent systems that hear, understand, and respond.

Essentially, the prototype application works as an intelligent, voice-driven virtual assistant that interprets the intent of a physician’s request, prompts for all necessary information, asks for clarification, and manages a course of action. “I believe most physicians will interact with the EHR by voice,” Zimmerman notes, pointing to the potential he believes exists for the virtual assistant concept. “Not only will physicians speak into systems, but systems will speak back.”
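In practice, the prompt-and-clarify behavior described here resembles a slot-filling loop: the assistant parses what it can from the request, then asks for whatever is still missing before filing the order. The Python sketch below shows the idea under simple assumptions; the required fields and prompts are hypothetical, not Nuance’s actual dialogue model.

    # Hypothetical list of details the assistant must collect before filing
    # a medication order.
    REQUIRED_SLOTS = ["medication", "dose", "route", "frequency"]

    def complete_order(parsed_request):
        """Ask follow-up questions until every required detail is filled in."""
        order = dict(parsed_request)
        for slot in REQUIRED_SLOTS:
            while not order.get(slot):
                # In the envisioned workflow, the assistant would speak this
                # prompt aloud and listen for the answer.
                order[slot] = input(f"What {slot} would you like? ").strip()
        return order

    # A spoken "start the patient on lisinopril" parses to only a medication,
    # so the assistant prompts for dose, route, and frequency before filing.
    order = complete_order({"medication": "lisinopril"})
    print("Ready to file:", order)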

A recent survey conducted by Nuance supports this assertion, revealing that eight of 10 physicians believe virtual assistants will drastically change how they interact with and use EHRs in the next five years, ultimately making them more efficient and freeing up time to spend on patients. “The way to really attack the CPOE issue is to create a natural interface of how a physician would dictate before CPOE,” Belton says. “Florence provides a way for physicians to have a natural workflow without having to look at a keyboard.”

CPOE will be one of many workflows offered through Florence, and the product can be embedded in various health care apps on mobile devices, Web browsers, and desktops. Zimmerman points out that the EPIC EHR has several applications that help physicians become more mobile, and the Nuance technology already is integrated into those offerings.

Although still in the development phase, the plan is for Florence to be able to capture 95% of common orders, according to Belton, who says one challenge is completing the technical work required to get the output into the EHR. “We’re going to be using industry standards,” he says, adding that he hopes more EHR vendors move in that direction to make integration easier.

The current M*Modal prototype Apple iOS app allows physicians to speak orders into an iPhone or iPad. Once orders are picked up by speech recognition, they are sent to the M*Modal cloud server, where the company’s speech-understanding engine translates the information into text, Handler explains. The information then is sent back to the clinician—reappearing on the phone—allowing him or her to review and edit the order.
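In rough terms, that round trip looks like the Python sketch below, in which captured audio is posted to a cloud speech service and a draft order comes back to the device for review. The endpoint, payload fields, and function names are assumptions for illustration only, not M*Modal’s actual interface.

    import requests

    SPEECH_ENDPOINT = "https://speech.example.com/v1/orders"  # hypothetical URL

    def submit_spoken_order(audio_bytes, clinician_id):
        """Send captured audio to the cloud speech engine; return draft order text."""
        response = requests.post(
            SPEECH_ENDPOINT,
            files={"audio": ("order.wav", audio_bytes, "audio/wav")},
            data={"clinician_id": clinician_id},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["draft_order_text"]

    def review_and_sign(draft):
        """The draft reappears on the phone for the clinician to edit and confirm."""
        print(f"Draft order: {draft}")
        return input("Sign this order? (y/n) ").strip().lower() == "y"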

Developed in cooperation with Intermountain Healthcare, the product may be tweaked or enhanced based on experience. “That’s the beauty of having a great partnership for developing,” Handler says. “There will be unexpected things we will uncover that will make the product better.”

Currently, the app is designed to address three primary classes of errors often connected to verbal orders that get lost in translation. The first involves transcription errors, in which a physician gives a verbal order to another party but the order is transcribed incorrectly; for example, a nurse listening to a physician on the phone may misunderstand what is being said and write down the wrong order.

The second class of errors includes delayed or dropped orders. “We are targeting folks on the go,” Handler says. “It’s incredibly common that people forget.” An example is a physician trying to dictate an order into a CPOE system who gets pulled away or distracted by something else happening nearby. “This technology provides a fast and easy way for clinicians to execute a plan of care at the time they think of it,” Handler says.

The final group of errors involves orders that simply do not make sense. To help combat such occurrences, the app will be designed to offer a decision-support component that questions certain kinds of orders.

Initially, Intermountain will use the prototype for medication orders, then expand the tool’s scope to include testing for laboratory, imaging, and observation orders. Handler explains that the app will be designed first to address common orders and workflows that make sense for mobile use. “When you are working on a phone, there is a limited amount of space. Some sets of orders are very complex and incredibly cumbersome by voice,” he says, adding that using a mobile speech app would not likely be a time-saver in some instances. “We’re picking the low-hanging fruit: things that happen all the time, every day.”

Handler says that during the first phase, the amount of risk will be limited. For example, an oncologist likely would not want to place a complex order for chemotherapy regimens via the mobile app. “Let’s start with obvious, simple wins and build on that success,” he says.

Clinical Decision Support
The availability of clinical decision support within the CPOE process has been identified as critical to the industry’s ability to standardize best practices and meet the aggressive quality expectations laid out in federal initiatives. Both M*Modal and Nuance acknowledge the need for clinical decision support, but they are introducing the capability one step at a time.

According to Belton, the first phase of the Florence initiative will entail capabilities to capture what a clinician says, format it, and confirm it based on Nuance’s Dragon technology. Phase 2 will include a decision-support component, possibly in the form of alerting. In the early planning stages, feedback will be gathered from panels composed of physicians and clinical users to ensure Florence meets clinician needs. Nuance will provide the dialog hooks—speech to text, text to speech—and manage the conversations, while its third-party partners ultimately will be responsible for providing the clinical decision-support content.
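Conceptually, that division of labor resembles the Python sketch below: the speech hooks capture, format, and confirm the order, while an optional callback stands in for third-party decision-support content. The names and signatures are illustrative assumptions, not Nuance’s actual design.

    def process_utterance(speech_to_text, text_to_speech, audio, decision_support=None):
        # Phase 1: capture what the clinician says and format it as an order.
        order_text = speech_to_text(audio).strip()

        # Phase 2 (optional): third-party content reviews the order and may
        # return an advisory message, such as a dosing alert.
        if decision_support is not None:
            advisory = decision_support(order_text)
            if advisory:
                text_to_speech(advisory)  # the system speaks back to the physician

        # Confirm the formatted order before it is filed to the EHR.
        text_to_speech(f"Confirming order: {order_text}")
        return order_text

    # Stub hooks for illustration; a real deployment would plug in speech
    # services and licensed decision-support content.
    process_utterance(
        speech_to_text=lambda audio: "aspirin 81 mg by mouth daily",
        text_to_speech=print,
        audio=b"",
        decision_support=lambda order: None,
    )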

The first phase of the M*Modal project will include rudimentary clinical decision support to reduce a set of common, straightforward errors. For example, if an order is made for a patient to take 72 tablets of aspirin when a typical order would be two, the system would recognize that there could be a potential error and alert the physician.
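A rudimentary check of that sort might look like the Python sketch below; the typical-dose table and the threshold used to flag an order are hypothetical illustrations, not M*Modal’s actual rules.

    # Hypothetical table of typical per-dose tablet counts.
    TYPICAL_TABLETS = {"aspirin": 2, "acetaminophen": 2}

    def check_tablet_count(drug, tablets):
        """Return an alert message if the ordered quantity looks implausible."""
        typical = TYPICAL_TABLETS.get(drug.lower())
        if typical is not None and tablets > 10 * typical:
            return (f"Ordered {tablets} tablets of {drug}; a typical order is "
                    f"{typical}. Please confirm.")
        return None

    print(check_tablet_count("aspirin", 72))  # flags the order for review
    print(check_tablet_count("aspirin", 2))   # None: no alert raised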

While alerts can help reduce errors and aid physicians in decision making, many industry studies suggest that too much alerting results in a phenomenon known as alert fatigue. In a white paper titled “The Checkbox Is Not the Patient,” Handler notes that the industry needs to improve its approach to alerting. Medical literature has identified that only a few alerts actually improve patient outcomes because most are ignored.

“We know that clinicians who are overwhelmed with false or trivial alerts get alert fatigue, which causes them to ignore important alerts,” Handler wrote. “There is published data on which alerts actually improve patient outcome and published data on how to design effective alerts.”

While well-intentioned efforts to be liberal with alerts “just to be safe” are commonly implemented, Handler says they are “unsupported by the data, ineffective, costing money, and delaying care.”

Plenty of decision support that is not delivered in the form of alerts already exists in the industry and could serve as an effective alternative to alert-heavy systems. Handler says common examples include the following; a rough sketch of both ideas appears after the list:

• Provide the normal range and an abnormal indicator (high, low, critical) next to the lab result to help clinicians properly analyze the result and ensure they notice the abnormal ones.

• Provide a cost indicator next to each order in the CPOE system to help clinicians incorporate finances into their decision making.
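The Python sketch below roughly illustrates both passive approaches: flagging a lab result against its reference range and attaching a cost indicator to an order. The reference ranges and prices are hypothetical.

    REFERENCE_RANGES = {"potassium": (3.5, 5.0)}  # mmol/L, illustrative
    ORDER_COSTS = {"Basic Metabolic Panel": 25.00, "CT Abdomen": 600.00}  # USD, illustrative

    def annotate_lab_result(test, value):
        """Show the result next to its reference range with an abnormal flag."""
        low, high = REFERENCE_RANGES[test]
        flag = "LOW" if value < low else "HIGH" if value > high else "normal"
        return f"{test}: {value} (ref {low}-{high}) [{flag}]"

    def annotate_order_cost(order_name):
        """Append an approximate cost to the order name when one is known."""
        cost = ORDER_COSTS.get(order_name)
        return f"{order_name} (~${cost:.2f})" if cost is not None else order_name

    print(annotate_lab_result("potassium", 6.1))       # potassium: 6.1 (ref 3.5-5.0) [HIGH]
    print(annotate_order_cost("Basic Metabolic Panel"))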

In the end, some form of alerting or decision-support component will be an important consideration for speech-enabled CPOE workflows. Belton points out that one challenge is making sure physicians take the time to give orders a second look for accuracy. Many of the same challenges associated with deploying front-end speech recognition for dictation—where health care organizations have found that it is difficult to get busy physicians to take the time necessary to do their own editing—will have to be overcome to ensure accuracy.

One option would be to build a third party into the workflow, Belton says, pointing to physician assistants as potential candidates to provide that assurance of a second set of eyes.

— Selena Chavis is a Florida-based freelance journalist whose writing appears regularly in various trade and consumer publications covering everything from corporate and managerial topics to health care and travel.