Dianne Williams1, Hong Ding1, John Awebwa1, Drew Williams1, Sheikh I. Ahamed1, Rochelle Mendonca2
and Roger O. Smith3
1Math, Stat. and Comp. Sci. Department, Marquette University, Milwaukee, WI
2Occupational Therapy, Temple University, Philadelphia, PA
3R2D2 Center, University of Wisconsin-Milwaukee, Milwaukee, WI
Introduction
It can be challenging for individuals to choose a medical device for personal use, given the diversity of designs and the sheer quantity of available devices. Often, medical devices, including devices used in clinical settings such as scales and dental chairs, are designed without people of all abilities in mind. [1] For people with disabilities, the choice becomes more onerous: no database of accessibility features exists for medical devices. [2] Currently, a user who wishes to purchase a medical device appropriate for their needs must check existing reviews on websites such as Amazon.com [3] or consumer review sites. [4] However, the former requires individuals with disabilities to voluntarily comment on their experiences with devices, good or bad, while the latter relies on the consumer review agency to evaluate the accessibility-related features of a device. Neither of these mechanisms is currently in place.
Med-Audit, an application for assessing the accessibility features of medical devices, was developed to address this problem. [5] Med-Audit allows a user, such as an assistive technology specialist or a clinician, to assess a medical device for accessibility by checking its usability. The process involves taking a computerized, dynamic assessment characterized by a trichotomous, tailored, sub-branching scoring system (TTSS) that relies on a “Yes”, “No”, or “Maybe” response set. [1, 6] By branching into greater detail only when a “Maybe” option is selected, Med-Audit ensures that a user answers only those questions that are pertinent to the device being audited. [6] Med-Audit allows a user to understand general medical device accessibility and identify potential problems with medical devices, [1] but it does not address the problem of comparing a user’s abilities and needs to device accessibility results.
To fill this gap, we present MediRank. MediRank is designed to be used by a clinician or a seller of medical devices, and allows users to visualize the comparison between medical device data gathered by Med-Audit and consumer needs captured in a user profile. MediRank also aims to be understandable by all through a clean and accessible user interface design. In this paper, we discuss the initial design and development of MediRank. We begin by discussing our motivation for working on MediRank in more detail and the problem we seek to solve. We then describe the architecture of the application and the development process. Finally, we discuss our results and offer some examples of future directions MediRank could take.
Motivation
People with disabilities face unique challenges when choosing medical devices: features highlighted as beneficial for users without disabilities might be problematic for them. Take the example of a glucose meter: a person with weakness in their hands may find a small glucose meter difficult to manipulate, which could carry grave consequences. Yet labels for small glucose meters might advertise great portability, a desired trait in these devices. How can a user with a disability find a portable glucose meter that they are able to operate easily?
In addition to those with chronic conditions, consider a person just diagnosed with a condition requiring a lifelong medical apparatus. Research has shown that four unique and decisive customer moments determine the choice made; a new diagnosis heightens the intensity of the first customer moment, “Now or Never”. [5] This can result in a hasty purchase, which is not always in the best interest of the individual. [5]
MediRank could also greatly benefit individuals with disabilities who find themselves in a time-sensitive position, such as children with medical complexity (CMC), children who experience medical instability after being born premature. [7] This group frequently needs life-saving and stabilizing devices on a large scale, and their daily functioning depends on these required medical devices. [7] Choosing a well-matched device is of utmost importance and must be done quickly. [7] By requiring only answers to standard device questions, MediRank can benefit all involved in an efficient and simple manner, thereby improving care. [7]
Methodology
Our goal with MediRank is to offer a simple, easy-to-use, and easy-to-understand solution for making educated medical device selection decisions quickly and efficiently. MediRank is designed to work with preexisting data and as such does not include any data collection tools itself: it focuses on the visualization of previously obtained data on a user’s desktop computer. Currently, MediRank is compatible with both Windows and Macintosh computers, with Linux compatibility in testing. Targeting a desktop release allows users in a corporate environment, such as a doctor’s office or a device point of sale, to take full advantage of MediRank.
MediRank comprises three components: a Microsoft Azure database, a data access layer, and the LiveCode application. The Microsoft Azure database holds the data collected by Med-Audit for each question of the assessment. The data access layer facilitates communication between the SQL server and the LiveCode application. Finally, the LiveCode application displays information from the database for the consumer, surfacing potential concerns by comparing what a consumer desires with what a device offers. By using a service-oriented approach, we are able to keep the design modular in case an element changes with future advancements in technology.
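As a minimal sketch of this modular, service-oriented separation, the front end might reach the data access layer through a single wrapper handler, so that only one place in the LiveCode code knows where the service lives. The endpoint URL, query parameter, and handler name below are illustrative assumptions, not MediRank’s actual interface.

```
-- Illustrative sketch only: the endpoint URL, query parameter, and
-- handler name are assumptions, not MediRank's actual interface.
function getDeviceAssessmentXML pDeviceID
   local tEndpoint, tResponse
   -- only this handler needs to know where the data access layer lives,
   -- which keeps the rest of the LiveCode front end modular
   put "https://example-medirank-api.azurewebsites.net/device" into tEndpoint
   put URL (tEndpoint & "?id=" & urlEncode(pDeviceID)) into tResponse
   if the result is not empty then
      return empty -- network or server error; the caller decides how to report it
   end if
   return tResponse -- XML describing the device's Med-Audit scores
end getDeviceAssessmentXML
```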
Programming Environment
For the purpose of cross-platform implementation of our front end, we chose LiveCode as our development language. LiveCode is a multi-platform programming language that allows applications to run on Windows, Mac, Linux, and mobile devices, with preliminary support for the web. [8] This choice was also influenced by the fact that the xFACT framework, on which Med-Audit runs, was designed using LiveCode.
Data Storage
Data is centralized in a relational database after being collected by Med-Audit. MediRank accesses this database for both device data and user data. For optimum security and HIPAA compliance, MediRank will initially work with Microsoft Azure and Microsoft SQL Server 2016. A dedicated API loaded with a SQL connector and hosted in the cloud facilitates communication between Azure and the LiveCode application. The API can execute the desired SQL queries and procedures on collected Med-Audit data and return XML-formatted data. The supplied XML can then be parsed with the XML extraction and reader libraries of LiveCode. [9]
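The following LiveCode sketch shows how XML returned by such an API could be parsed with the revXML library, following the general pattern of the cited LiveCode lesson [9]. The XML shape assumed here, an assessment element containing question elements with id and score attributes, is an illustration only and not MediRank’s actual schema.

```
-- Illustrative sketch: the XML element and attribute names are assumed
-- for this example and are not MediRank's actual schema.
function parseAssessmentScores pXML
   local tTree, tScores, tNodePath
   put revXMLCreateTree(pXML, false, true, false) into tTree
   if tTree begins with "xmlerr" then return empty -- malformed XML
   -- revXMLChildNames returns one child per line, e.g. "question[1]"
   repeat for each line tChild in \
         revXMLChildNames(tTree, "assessment", return, "question", true)
      put "assessment/" & tChild into tNodePath
      -- map each question id to its TTSS score (0, 1, or 2)
      put revXMLAttribute(tTree, tNodePath, "score") into \
            tScores[revXMLAttribute(tTree, tNodePath, "id")]
   end repeat
   revXMLDeleteTree tTree -- free the parsed tree
   return tScores
end parseAssessmentScores
```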
As mentioned, all data collected by Med-Audit follows the TTSS format: each question is answered with a “Yes”, “No”, or “Maybe”, translating to a score of 2, 0, or 1, respectively. A “user profile” is a Med-Audit assessment taken by a user with what they would like to see from a device in mind, rather than what a device offers. While Med-Audit may ask a user to evaluate whether “People who are hard of hearing are able to prepare and perform all steps for device use including select appropriate device, understand device use, receive training to use the device,” [10] a user profile asks the user to score this question according to how important such a feature is to them. If a user has good hearing, for example, the feature may not be as important to their particular device needs.
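A minimal sketch of the response-to-score mapping described above is shown below; the handler name is ours and not part of Med-Audit or MediRank.

```
-- Minimal sketch of the TTSS response-to-score mapping described above;
-- the handler name is ours, not part of Med-Audit or MediRank.
function ttssScore pAnswer
   switch pAnswer
      case "Yes"
         return 2 -- feature supported, or (in a profile) essential to the user
      case "Maybe"
         return 1 -- uncertain; Med-Audit branches into sub-questions here
      case "No"
         return 0 -- not supported, or not important to this user
      default
         return empty -- unanswered question
   end switch
end ttssScore
```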
User Interface
Our intention is to keep the interface simple, provide only relevant information, and exploit visual options using images and icons. The login page is clean and composed only of essential features; upon logging in, the user can choose whether to update a profile, manage other users (for administrators), edit system settings, or select and compare devices. The profile/device selection screen displays an image of the comparable devices, information about their model and type, and their function. The comparison screen shows both device assessments on the left-hand side of the screen and allows a device question to be selected so that a user can understand what particular differences exist between a profile and a device, or between two devices.
Data Visualization
Once a series of devices and/or people has been selected, the user is taken to the comparison screen. Here, the screen is divided into several components. On the left side, the outlines of the assessment questions answered for both devices are available: toggling any question with an arrow displays its sub-questions, if available. Below the outlines is a zoomed view showing the scores for questions in the same section as the selected one, as well as the score for the entire parent section. Note that this view only displays scores for the selected device outline.
On the right, a snapshot view displays images of the devices being compared, and a quick overview of differences between the selected section’s scores. The sub-properties view displays the sub-sections of the selected section for each compared device/profile, and scores for each. Differentials between profiles and/or devices are shown in the third column. An illustration of the setup can be seen in Figure 3.
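A sketch of how the differentials in the third column could be computed is shown below, assuming profile and device scores are held in LiveCode arrays keyed by the same section identifiers; the handler name and sign convention are illustrative assumptions.

```
-- Sketch of the third-column differential: desired (profile) score minus
-- audited device score per sub-section. Assumes both arrays hold numeric
-- scores under the same section keys; all names are illustrative.
function sectionDifferentials pProfileScores, pDeviceScores
   local tDiffs
   repeat for each key tSection in pProfileScores
      -- positive: the user wants more than the device offers;
      -- zero or negative: the device meets or exceeds the stated need
      put pProfileScores[tSection] - pDeviceScores[tSection] into tDiffs[tSection]
   end repeat
   return tDiffs
end sectionDifferentials
```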
At this time, the interpretation of scores and score comparisons is left to the user; however, differences between what a user desires and what a device offers are easily spotted. Planned future features include an algorithm for computer-aided decision making, highlighting of large differentials in the outline (so that problem areas of a device can be understood at a glance), and report export (so that a clinician can print a report of the device comparison for a patient to take home).
Discussion
Work on MediRank is ongoing. In its current state, MediRank allows a user to visualize the differences between a consumer profile (i.e., what a user desires in a device) and what a device offers in terms of accessibility features. Further testing is required to stabilize application speed and refine the workflow. User studies can help us improve the application interface, and field testing would be beneficial for understanding any pitfalls of our initial design. Once completed, however, we expect MediRank to be of substantial benefit to potential customers.
Relying on cloud storage means that our application could potentially be ported to a mobile application, enabling on-the-go device comparisons or user notifications about newly audited devices in Med-Audit that may benefit a particular user. If this route were taken, the desktop application could target health professionals, who can assist their patients in choosing the most suitable medical devices, while individuals with disabilities could primarily use the mobile application.
Additionally, by displaying scores for devices and profiles side by side, we achieve our goal of determining whether a device meets the needs of a user: a practitioner can discuss any differentials with their patient and help them conclude whether or not the device meets their needs. However, as mentioned, we would like to augment the current application with a recommendation engine that could determine a user’s best choice. Such an engine would require knowledge of a user’s wants and needs and would weigh these against device features appropriately. Machine learning may be a good method for solving the recommendation problem: by learning how wants and needs play into the selection of devices, we can construct an algorithm that devises appropriate solutions based on scores from Med-Audit data.
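As a placeholder illustration of weighing wants against device features, the sketch below ranks a device by penalizing shortfalls on questions the profile marks as important. This hand-written weighting is only a stand-in for the learned model discussed above, and all names and the weighting scheme are our own assumptions.

```
-- A deliberately naive stand-in for the future recommendation engine:
-- shortfalls are weighted by the importance the user assigned in their
-- profile, and devices with the lowest total penalty rank highest.
-- A learned model would replace this hand-written weighting.
function matchPenalty pProfileScores, pDeviceScores
   local tPenalty, tShortfall
   put 0 into tPenalty
   repeat for each key tQuestion in pProfileScores
      put pProfileScores[tQuestion] - pDeviceScores[tQuestion] into tShortfall
      if tShortfall > 0 then
         -- a shortfall on a "must have" (profile score 2) costs twice as
         -- much as the same shortfall on a "nice to have" (profile score 1)
         add pProfileScores[tQuestion] * tShortfall to tPenalty
      end if
   end repeat
   return tPenalty -- lower is a better match
end matchPenalty
```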
Conclusion
In conclusion, MediRank has the potential to greatly assist individuals with disabilities with medical device selection. By visualizing a device profile alongside other devices or user profiles, a clinician can pick out potential problems with a device and notify a patient of them in a timely fashion. This can improve patients’ satisfaction with their care and their overall quality of life. The initial application was designed with a clinical user in mind, helping a patient select an at-home medical device, but the possibility exists for a consumer application to be developed that uses many of the ideas stated here. Bringing device accessibility into the public eye could start a broader conversation about the best methods of creating and distributing accessible devices, improving healthcare for all.
References
[1] Smith, R. O., Barnekow, K., Lemke, M. R., Mendonca, R., Winter, M., Schwanke, T. D., & Winters, J. M. (2007). Development of the MED-AUDIT (Medical Equipment Device-Accessibility and Universal Design Tool). In J. M. Winters and M. F. Story (Eds.), MEDICAL INSTRUMENTATION Accessibility and Usability Considerations (pp. 283-296). Boca Raton: CRC Press.
[2] Mendonca, R. & Smith, R. O. (2006). MED-AUDIT (Medical Equipment Device-Accessibility and Universal Design Information Tool): Usability analysis. Proceedings of the RESNA 29th International Conference on Thriving in Challenging Times: The Future of Rehabilitation Engineering and Assistive Technology. Accessed on 16 Feb. 2018 at https://www.resna.org/sites/default/files/legacy/conference/proceedings/2006/Research/Outcomes/Mendonca.html
[3] Amazon.com, Inc. (2018) Amazon.com. Retrieved from https://www.amazon.com/.
[4] Salvatore, D. (2018) Consumer Reports. Retrieved from https://www.consumerreports.org/cro/index.htm.
[5] Bloom, R.H. (2010). Your 4 Decisive Customer Moments. In The New Experts: Win Today's Newly Empowered Customers at Their 4 Decisive Moments (pp. 25–38). Austin, TX: Greenleaf Book Press.
[6] Williams, D., Johnson, N., Saha, A.K., Spaeth, N., Snyder, T., Tomashek, D., Ahamed, S.I., and Smith, R.O. (2016). xFACT: Developing Usable Surveys for Accessibility Purposes. Proceedings of the RESNA 39th International Conference on Promoting Access to Assistive Technology. Accessed on 16 Feb. 2018 at https://www.resna.org/sites/default/files/conference/2016/cac/williams2.html
[7] Cohen, E., Kuo, D. C., Agrawal, R., Berry, J. G., Bhagat, S. K. M., Simon, T. D., and Srivastava, R. (2011). Children With Medical Complexity: An Emerging Population for Clinical and Research Initiatives. Pediatrics: The Official Journal of the American Academy of Pediatrics. 127(3), 528–538.
[8] LiveCode, Ltd. (2018) LiveCode. Retrieved from https://livecode.com/.
[9] LiveCode, Ltd. (2018) Livecode Lessons: How to read in data from an XML file. Retrieved from http://lessons.livecode.com/m/4071/l/7011-how-to-read-in-data-from-an-xml-file
[10] Smith, R.O., Lemke, M., Mendonca, R., Schwanke, T., and Winters, J. (2005) MED-AUDIT Expert User System (EUS) Taxonomy. Retrieved from http://www.r2d2.uwm.edu/rerc-ami/archive/eustaxonomy.html
Acknowledgements
We would like to thank our advisors, Dr. Sheikh Iqbal Ahamed, Dr. Rochelle Mendonca, and Dr. Roger O. Smith, for their input during our research and development process; all feedback received was truly invaluable.