Can Generative AI Finally Enable Healthcare’s Dynamic User Interface?

Chatbots and generative AI speech models offer hope for improving human-machine interaction in healthcare and for making software tools like EMRs more useful to the people who use them. For years, this goal has been described as the search for a dynamic user interface, one that improves effectiveness, experience, and efficiency.

Software has always been an imperfect tool. It requires human users to adapt to its structure and rules while remaining unable to adapt to them in turn, often imposing long learning curves before its benefits are realized.

Beyond being an annoyance, these traits make software tools much less useful. Whether the task is problem-solving, analyzing data and information, making decisions, or triggering an action, users must always perform the same number of clicks and motions, even after they have learned how to do the task better.

Human users must adapt to the software when it should be the other way around. Software is frequently conceived, designed, and written by non-users who may or may not understand the domain the software is meant to serve, where design-build trade-offs are acceptable, or where they undermine fitness for purpose.

Two people will frequently use the same data in different ways, presentations, or orders to arrive at the same conclusion. Forcing everyone to use software in the same way, or with only limited variation, makes some of us less efficient and more annoyed, or drives us to forgo the software altogether. Consider avoidance: the rise of human scribes in the EMR world.

Adding complexity, the same person will likely want to use the same data in different ways the next time they confront a problem or activity they successfully solved yesterday.

Until now, software has not learned to adapt to human users; it simply assumes humans will adapt to it. And we do, all the time: we find shortcuts and workarounds, or add third-party functionality missing from the original. Once we have learned one way to use a tool, we will almost always seek to improve our capability and efficiency, especially for repetitive tasks and problem-solving.

Software also suffers from bloat as vendors chase broad markets, producing the other 80-20 rule: 80% of the functionality in the software is functionality I don’t need, and 20% of what I need isn’t there.

When electronic medical records were undergoing forced adoption in the early 2000s, work we did at the time showed that the main problem with EMR acceptance wasn’t the promise, the data, or the services. It was the user interface, which forced users to work with information in ways that were unfamiliar or less efficient. Adding to the pain, we took the most highly skilled and highly paid individuals and made data entry a primary part of their work, instead of bypassing it through automation and data interoperability.

When physician users were asked what they would prefer, the idea of the dynamic user interface appeared, even though at the time it wasn’t clear how to build what they asked for. They described a capability to present the data precisely the way they wanted it today, which might differ from how they wanted it yesterday to solve the same problem, because of the learning effect.

Breaking this request down further led to the understanding that both entering data into and using an EMR require multiple modalities for inputting and retrieving information. One key capability would be natural language speech. At the time, we didn’t have it in the form needed.

But now, the latest iteration of automation and interoperability, able to reach whatever datum is needed to enter information into or elicit it from these software behemoths, might be a genuinely useful step toward a dynamic user interface. If chatbots and generative AI work as advertised, they should bring us closer to more efficient, more effective use of medical information for the benefit of patients and users.
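To make the idea concrete, here is a minimal sketch of what such a loop might look like: a clinician’s spoken request is translated by a generative model into a standard FHIR query, the data are retrieved through the interoperability layer, and the same model presents the results however that user asked for them today. The endpoint URL, the ask_model helper, and the example request are all hypothetical placeholders, not any vendor’s actual API.

```python
# Hypothetical sketch of a "dynamic user interface" loop over an EMR.
# Assumptions: the EMR exposes a FHIR R4 endpoint, and ask_model() is a
# stand-in for whatever generative AI service is wired in.

import json
import requests

FHIR_BASE = "https://emr.example.org/fhir"  # hypothetical EMR FHIR endpoint


def ask_model(prompt: str) -> str:
    """Placeholder for a call to a generative AI model (not a real API)."""
    raise NotImplementedError("connect your model provider here")


def handle_request(spoken_request: str, patient_id: str) -> str:
    # Step 1: let the model turn the free-form request into a FHIR search
    # path, e.g. "Observation?patient=123&code=http://loinc.org|4548-4
    # &_sort=-date&_count=3" for recent hemoglobin A1c results.
    query = ask_model(
        "Translate this clinician request into a single FHIR R4 search "
        f"path (no base URL) for patient {patient_id}: {spoken_request}"
    )

    # Step 2: fetch the data over the standard interoperability layer.
    bundle = requests.get(f"{FHIR_BASE}/{query.strip()}", timeout=10).json()

    # Step 3: let the model present the results the way this user asked,
    # which may differ from how the same user wanted them yesterday.
    return ask_model(
        "Summarize these results exactly as the clinician requested "
        f"('{spoken_request}'):\n{json.dumps(bundle)[:4000]}"
    )


# Example (hypothetical):
# handle_request("show me the last three A1c values as a trend", "123")
```

The point of the sketch is that the presentation layer is no longer fixed at design time: the same data can be rendered as a trend today and a table tomorrow, simply because the user asked differently.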