Usability Study of a Computer-Based Self-Management System for Older Adults with Chronic Diseases
Calvin Or, PhD; Da Tao, BEng
Department of Industrial & Manufacturing Systems Engineering, The University of Hong Kong, Pokfulam, China (Hong Kong)
Corresponding author:
Department of Industrial & Manufacturing Systems Engineering
The University of Hong Kong
8/f., Haking Wong Building
Pokfulam, China (Hong Kong)
Phone: 852 2859 2587
Fax: 852 2858 6535
Background: Usability can influence patients’ acceptance and adoption of a health information technology. However, little research has been conducted to study the usability of a self-management health care system, especially one geared toward elderly patients.
Objective: This usability study evaluated a new computer-based self-management system interface for older adults with chronic diseases, using a paper prototype approach.
Methods: Fifty older adults with different chronic diseases participated. Two usability evaluation methods were used: (1) a heuristics evaluation and (2) end-user testing with the think-aloud method, audio recording, videotaping, and interviewing. A set of usability metrics was employed to determine overall system usability, including task incompletion rate, task completion time, frequency of error, frequency of help, satisfaction, perceived usefulness, and perceived ease of use. Interviews were used to elicit participants’ comments on the system design. The quantitative data were analyzed using descriptive statistics, and the qualitative data were analyzed for content.
Results: The participants were able to perform the predesigned self-management tasks with the current system design and they expressed mostly positive responses about the perceived usability measures regarding the system interface. However, the heuristics evaluation, performance measures, and interviews revealed a number of usability problems related to system navigation, information search and interpretation, information presentation, and readability. Design recommendations for further system interface modifications were discussed.
Conclusions: This study verified the usability of the self-management system developed for older adults with chronic diseases. It also demonstrated that our usability evaluation approach, applied to a paper prototype, can quickly and effectively identify usability problems in a health care information system at an early stage of the system development process. Conducting a usability evaluation is an essential step in system development to ensure that the system features match the users’ true needs, expectations, and characteristics, and to minimize the likelihood of users committing errors and having difficulty using the system.
(JMIR Res Protoc 2012;1(2):e13)
Keywords: usability evaluation; self-management; patient participation; chronic disease
With the advent of advanced technology, a number of health information systems have been developed and employed to increase support for patient self-management of chronic disease. However, many of those innovations are not regularly used in care management and some have been abandoned. This non-adoption issue is significant and can largely be attributed to problems with the usability of the technology, such as ineffective system design, lack of ease of use and convenience of access, and a mismatch between the system features and the needs, expectations, and characteristics of the users [1,2]. Even when a technology is adopted, these usability barriers are likely to result in frustration and irritation for the user, in inefficiency and disruption in the care management process, and in a higher likelihood of committing errors.
To avoid these negative outcomes, designers should evaluate and verify system usability during the early stages of system development. This is especially important for health care technologies because their usability can have implications for the quality and effectiveness of health care [5-7]. In fact, researchers have directed their efforts at improving the usability of their new health information technology (IT) applications to avoid unintended consequences at rollout [8-14]. For example, Tang and colleagues applied the heuristics evaluation, a usability engineering method, to examine the usability of a digital emergency medical service system designed for paramedics to input patient data. They uncovered a number of heuristic violations in the user interface design. In another health care IT project, Rose and colleagues conducted a qualitative study to assess the usability of a Web-based electronic medical record and used the findings to recommend design changes to the system. Similarly, Yen and Bakken performed a heuristics evaluation and think-aloud test to study the usability of a Web-based communication system for nurse scheduling. They demonstrated that their study was effective in identifying system design problems and obstacles to task performance.
Usability is also important for elderly and disabled people for the following reasons. First, most older adults and others with disabilities are experiencing a decline in their physical and cognitive abilities [15,16]; as a result, they may have more difficulty interacting with technology [17,18]. Second, many technologies are not made to be accessible for these people, making it difficult to use them [18,19]. Third, many of the design guidelines are established for developing products for people with no functional limitations; thus, it is necessary to pay special attention to the usability of the products that are specifically designed for the elderly and disabled. Indeed, a number of researchers who are interested in aging, disability, and technology demonstrate the effectiveness of usability evaluation in technology development [20-25].
Most of these previous works cover either Web sites or health care provider technology, but our study focuses on the usability evaluation of a patient-centered interactive self-management system for older adults with chronic illnesses. We focus on this because we acknowledge the high prevalence of chronic diseases among the elderly and the potential for using health IT to improve disease self-management and health outcomes of elderly patients [27,28].
Usability evaluation includes a set of techniques for improving the usability of a system through the identification of potential difficulties and problems in using the system [4,29]. Among the various techniques, end-user testing and heuristics evaluation are prevalent and prominent [30,31]. End-user testing examines how effectively and efficiently a task or process is carried out using the system and explores users’ opinions based on their experience with the system. Heuristics evaluation is performed by usability specialists and focuses on the assessment of the system against a set of human factors design guidelines and heuristics [4,32]. These two methods can be implemented together in a usability evaluation to increase the likelihood of uncovering more design problems [30,33].
Conducting a usability evaluation during the early stages of the development process for a new design is highly recommended. In addition, using paper prototypes to study usability is practical due to their low cost and comparable effectiveness with computer-based prototypes in identifying usability problems [34-38]. This study, which was part of a larger project to develop a computer-based self-management system for older adults with chronic diseases, evaluated the usability problems and weaknesses of the system using a paper prototype test. We first conducted a heuristics evaluation and then end-user testing using the think-aloud method. The objective of the heuristics evaluation was to determine whether the system design characteristics met the human factors design guidelines and principles. The aim of end-user testing was to examine use performance and satisfaction with the system interface among a group of elderly patients with chronic diseases. This usability study analytically discovered design weaknesses in the self-management system and provided directions for system design modifications and for conducting future system analyses.
Materials and Methods
Self-management System Paper Prototype
Our research team has been working on the development of a computer-based, interactive, touchscreen self-management system designed for patient use in their homes. The system allows patients to assess, record, and track their vital signs, including weight, blood pressure, blood glucose level, temperature, and oxygen saturation (SpO2). The assessment records can be saved in the system and retrieved for review. The system can also remind the patients to take their prescribed medications at predetermined times. Figure 1 shows the measurement page for blood pressure. The page displays the blood pressure readings and includes the history data page button. By pressing the button, the users can access the history page and retrieve past blood pressure values from the two-dimensional line chart (see Figure 2). The design of the interface and functions of the other measurement modules (eg, blood glucose and weight) is similar to that of the blood pressure module. The intended users of the system are older adults with common chronic illnesses, such as diabetes, hypertension, and heart disease. The creation of the system interfaces was guided by a set of human factors design principles [4,39-41]. Examples of the principles are (1) match the system to the real world, (2) use recognition rather than recall, (3) reduce short-term memory load, (4) strive for consistency, (5) use the compatibility of proximity principle, (6) ensure conceptual compatibility, (7) avoid sound effects, (8) eliminate distracting features, and (9) have a clear and simple page. In this study, we used a paper prototype that consisted of a collection of color-printed screenshots of the system interface to conduct our usability evaluation. This study protocol received the approval of the institutional review board of The University of Hong Kong. Informed consent was obtained from all of the participants.
Figure 1. The blood pressure measurement page of the self-management system.
Figure 2. The blood pressure history data page presents the past blood pressure values on a two-dimensional line chart.
Three important considerations were addressed in our heuristics evaluation to ensure study quality: the evaluators, the heuristics, and the evaluation process.
Our heuristics evaluation required the evaluators to be knowledgeable about usability and human factors engineering, be comfortable with health information system design and evaluation, be aware of the characteristics of older people (as the system end users would be elderly patients), and be familiar with the scenarios and environment in which the system would be used. In this study, we employed one “double expert,” who had a background in both usability and the application domain, and two “single experts” with experience in usability and human factors design. All of the evaluators were familiar with the heuristic principles. Nielsen and Mack recommend using 3-5 “single experts” or 2-3 “double experts” in heuristics evaluations. We believed the number of evaluators in this study and their expertise level to be sufficient for this evaluation.
We evaluated our system interfaces for their conformity to a set of 26 human factors design heuristics (see Table 1) that were identified based on Nielsen, Shneiderman and Plaisant, Czaja and Lee, and Demiris and colleagues. Because the heuristics of Nielsen and those of Shneiderman and Plaisant were general human-computer interface design heuristics, our evaluation also included the principles reported by Czaja and Lee and by Demiris and colleagues, who developed heuristics specifically for older adults and elderly patients. The heuristics evaluation was conducted on December 15, 2011.
Three human factors researchers independently evaluated the conformity of the interface design to the 26 heuristics. They determined the conformity by responding “yes” or “no” to each heuristic. A comment section was also provided to collect their specific comments on the design issue associated with each heuristic. The three evaluators then met to discuss all of the comments received, identify the design problems, and give recommendations for system modifications prior to end-user testing.
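The consolidation step described above can be sketched in code. The following is a minimal, hypothetical example: the heuristic names and the yes/no votes are invented for illustration and do not reproduce the study's Table 3; it only shows how per-heuristic judgments from three evaluators might be tallied to flag items for the discussion meeting.

```python
# Hypothetical tally of three evaluators' yes/no conformity judgments.
# Heuristic names and votes are illustrative, not the study's actual data.
votes = {
    "Consistency and standards":   ["yes", "yes", "no"],
    "Match to the real world":     ["yes", "yes", "yes"],
    "Visibility of system status": ["no", "no", "yes"],
}

# Flag any heuristic that at least one evaluator judged violated,
# so the team can discuss it in the consolidation meeting.
flagged = [h for h, v in votes.items() if "no" in v]
```

A stricter rule (eg, flag only majority-"no" heuristics) would trade sensitivity for fewer discussion items; flagging on any "no" matches the study's practice of discussing all comments received.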
The end-user testing was performed between January 16 and February 9, 2012, according to the three stages proposed by Nielsen: preparation, testing, and follow-up. Each test lasted approximately 30-40 minutes. The procedures implemented in these three stages are described below.
The preparation stage included participant selection, task design, and data collection.
The study participants were recruited from a non-profit medical organization in Hong Kong that provides medical services to the community in the Hong Kong East Cluster. The inclusion criteria for study participation included the following: (1) age 55 or older, (2) diagnosis of any chronic disease, (3) normal vision or corrected-to-normal vision, (4) no cognitive or physical impairment, and (5) the ability to read Traditional Chinese.
The participants performed two practice tasks followed by 11 experimental tasks related to disease self-management (see Table 2). The tasks included a set of navigation tasks (tasks 1, 4, 5, 7, and 10) and a set of information search and simple cognitive tasks (tasks 2, 3, 6, 8, 9, and 11). In the navigation tasks, the participants were asked to access the measurement modules. To do this, they needed to search for and “press” the button associated with the module. In the information search and simple cognitive tasks, the participants were required to visually search for the measurement values (eg, blood glucose level) and to determine whether the values were normal based on the general “normal value range” presented on the interface.
Several performance measures were collected, including task incompletion rate, task completion time, frequency of error, and frequency of help. Task incompletion rate was defined as the percentage of participants who went through the task but were not able to complete it. Task completion time was the mean time it took to complete the task. The amount of time the participants had to complete the tasks was not limited, but they were instructed to try their best to perform the tasks. They were also asked to report to the research assistant (RA) if they were unable to complete the tasks. Frequency of error (n_error) was defined as the total number of errors made on the task by all of the participants who went through the task (errors included choosing a wrong button, failing to find or interpret the information correctly, etc). The participants were corrected and were asked to try again when they made an error. Frequency of help (n_help) was defined as the total number of times that all participants needed help on the task.
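The four performance measures defined above can be computed mechanically from per-participant task logs. The sketch below is hypothetical: the record format (`completed`, `time_s`, `n_errors`, `n_helps`) is an assumed schema, not the study's actual data structure, and the sample values are invented.

```python
# Minimal sketch: computing the study's four performance measures from
# per-participant task logs. The record format is an assumption.

def summarize_task(records):
    """records: one dict per participant who attempted the task."""
    attempted = len(records)
    incomplete = sum(1 for r in records if not r["completed"])
    return {
        # Share of attempting participants who could not finish the task
        "incompletion_rate": incomplete / attempted,
        # Mean time to work through the task (completers and non-completers)
        "mean_time_s": sum(r["time_s"] for r in records) / attempted,
        # Total errors across all participants who went through the task
        "n_error": sum(r["n_errors"] for r in records),
        # Total number of times help was requested
        "n_help": sum(r["n_helps"] for r in records),
    }

# Invented example with three attempts:
logs = [
    {"completed": True,  "time_s": 12.0, "n_errors": 0, "n_helps": 0},
    {"completed": True,  "time_s": 20.0, "n_errors": 2, "n_helps": 1},
    {"completed": False, "time_s": 45.0, "n_errors": 3, "n_helps": 2},
]
summary = summarize_task(logs)
```

Note that, as in the study, n_error and n_help are totals over all participants rather than per-participant rates, so they are comparable across tasks only when the number of attempting participants is similar.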
In addition, a questionnaire was administered in a face-to-face interview to examine the following variables: participant satisfaction with the system design (17 items), the perceived usefulness of the system (4 items), the perceived ease of use of the system (4 items), and the intention to use the system (1 item). The questionnaire was developed based on previous usability and technology acceptance studies [43,44]. Except for intention to use (which was a yes/no item with a follow-up question asking the participants to explain their responses), all other items were rated on 7-point Likert scales ranging from 1 = very bad to 7 = very good, 1 = strongly disagree to 7 = strongly agree, 1 = very unclear to 7 = very clear, 1 = very inappropriate to 7 = very appropriate, or 1 = very difficult to 7 = very easy. At the end of the interview, two open-ended questions were also asked to elicit the opinions of the participants about the interface design (eg, use of font size, color, and complexity) and about what they liked or did not like with the design.
End-user testing was conducted in a community health service center by two trained RAs. Prior to the start of testing, one RA explained the study objective and research protocol to the participants. After the participants gave informed consent, the RA provided detailed information about the test procedures, described the purpose of the computer-based self-management system, and collected their basic demographic information. During the test, the participants were given two practice tasks to become familiar with the self-management system and the think-aloud method. Following the practice trials, the participants were asked to perform the experimental tasks. They were told to vocalize whatever they saw, did, and felt when performing the tasks. The participants did not go through the information search and simple cognitive tasks if they failed to complete the associated preceding navigation tasks. In this study, all end-user testing was recorded on video. The RAs also took field notes about the participants’ performance and comments. After the completion of the end-user testing, the RAs collected the questionnaire data and participant feedback on the difficulties they noticed when using the system.
In the follow-up stage, the study data were analyzed by two RAs using descriptive statistics and simple content analysis. Data from the performance measures were extracted from the videos, and the means/frequencies were examined. Central tendency and distribution of the questionnaire item scores were determined. Audio interview data were transcribed and the content was analyzed. Practice task data were excluded from the data analysis.
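The descriptive analysis of the 7-point Likert items amounts to computing a mean and standard deviation per item. The sketch below uses Python's standard library; the item labels and response values are invented for illustration and are not the study's data.

```python
# Illustrative descriptive statistics for 7-point Likert items.
# Item names and scores are made up; they do not reproduce Table 7.
from statistics import mean, stdev

responses = {
    "Sat3 (graphics quality)": [7, 6, 6, 5, 7, 6],
    "PEOU1 (ease of use)":     [6, 7, 5, 6, 6, 7],
}

# Central tendency (mean) and spread (sample SD), rounded for reporting.
summary = {item: (round(mean(s), 1), round(stdev(s), 1))
           for item, s in responses.items()}
```

For ordinal Likert data, medians and frequency distributions are often reported alongside means; the study reports both central tendency and distribution.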
Table 1. The 26 human factors design heuristics used in the heuristics evaluation.
Table 2. Self-management tasks used during end-user testing.
The evaluation results (Table 3) and comments of the three evaluators were discussed and compiled into four categories (Table 4). The evaluation identified some strengths in the system design, such as consistent information presentation and organization, low demand on user short-term and spatial memory, clearly labeled keys, and consistent operating procedures within and across the system modules. Two types of usability problems were also identified. The first was general usability problems related to deficiencies in the interface design, including unfamiliar terminology, confusing and inconsistent button design, lack of informative feedback for user actions, and a lack of online support and instruction. The second type was age-related usability problems that were more problematic for older adult patients, due in part to small text and buttons, inappropriate use of serif fonts and gradient color, low contrast between information and background, and too much information on the interface. Based on these findings, changes were made to the system design before end-user testing.
A total of 57 eligible older adult patients participated. The first seven were pilot participants who tried out the testing procedures, through which experimental design problems were identified and fixed prior to the main test. The other 50 participants completed the main test; only their data were used for analysis. Table 5 presents the characteristics of the participants.
The performance data were analyzed with descriptive statistics. Table 6 shows the results.
Table 3. Heuristics evaluation results.
Table 4. Interface design strengths and usability problems identified in the heuristics evaluation.
Table 5. Study participant characteristics (N = 50).
Table 6. Performance measures as assessed via the 11 tasks.
Task Incompletion Rate
All participants completed all of the navigation tasks due to the nature of our experimental design; however, not everyone completed all of the information search and simple cognitive tasks, because some failed to complete the preceding navigation tasks. For instance, only 42 participants completed task 6 because 8 participants failed to complete task 5. In the navigation tasks, tasks 1 and 4 yielded a task incompletion rate of 0%. Task 7 yielded a low incompletion rate (2%, 1/50). However, task 10 had an incompletion rate of 44% (22/50). Task incompletion rates were moderate to high for the information search and simple cognitive tasks, ranging from 17% (8/48) to 50% (14/28). For example, half of the 28 participants were unable to complete task 11 (50% incompletion rate).
Task Completion Time
Among all of the navigation tasks, tasks 1, 4, and 7, which required the participants to access the measurement modules, yielded the shortest task completion times. The “access the history data page” task and “select the breakfast test time” task appeared to be difficult to perform, with fairly long task completion times. Among the 11 experimental tasks, tasks 2, 3, and 6, which required the participants to indicate a vital sign value and determine whether it was normal, had the shortest completion times. Task 9 (indicate the BMI and determine its normality), task 11 (read the history data chart and find the diastolic pressure value), and task 8 (indicate the weight value) yielded longer completion times.
Frequency of Error
Both navigation errors (eg, choosing wrong navigation buttons, incorrectly recognizing icons and symbols as buttons, and failing to follow the navigation paths) and information processing errors (eg, failing to locate and explain information; being unable to retrieve the measurement values, such as the blood glucose value; and being unable to obtain and comprehend the reference values of normal blood pressure levels) were observed. Overall, 93 errors (highest occurrence among all tasks) were made by 45/50 participants in the “access the history data page” task. The task that required the participants to select (by “pressing”) the “before breakfast” test time for measuring their blood glucose levels yielded the second highest number of errors (39 errors made by 19/50 participants). The information search and simple cognitive tasks yielded a moderate frequency of errors.
Frequency of Help
Similar to the frequency of error finding, tasks 5 and 10 yielded the highest frequency of help, indicating that the tasks were difficult based on our current design. For instance, 28 participants (56%) needed help a total of 60 times when doing the “access the history data page” task.
Satisfaction, Perceived Usefulness, and Perceived Ease of Use
Table 7 presents the central tendency and distribution of the questionnaire responses. The mean scores for satisfaction, perceived usefulness, and perceived ease of use were at least 4.9 (SD 1.4), 6.0 (SD 1.2), and 6.0 (SD 1.2), respectively. All of these were above the midpoint of the scale, indicating that the participants exhibited a positive impression of the system design. Of the 17 satisfaction items, 14 had a mean score of 6.0 or higher. The mean ratings of two satisfaction items (Sat2: the information on the interfaces is overloaded, and Sat9: finding information on this system requires a lot of mental effort) were relatively low, showing that the amount of information on the interface might be excessive and that finding this information required a large amount of mental effort.
Table 7. Descriptive statistics for satisfaction, perceived usefulness, and ease of use items (1 = negative to 7 = positive).
Intention to Use the System
Thirty-one (74%) participants expressed their intention to use the actual system for chronic disease self-management in the future, if the system were available. The reasons listed for wanting to use the system were that it could facilitate their self-management of chronic diseases, such as by providing them with specific and updated health information; by automatically recording the health information for easy retrieval later, saving time on their self-management; and by improving communication with their health care providers. For those who said that they would not use the system, cost, unfamiliarity with the technology, and limited space at home for the system were the major reasons cited for non-use.
Comments from Open-Ended Questions
All participants expressed a fondness for the system. They commented that the overall system interface was effective and appealing, the system was simple to use, the information on the interfaces was clearly presented, and using the system for self-management would allow them to obtain useful health information and improve their health conditions. However, comments related to usability problems were also mentioned. They were grouped into four categories and are presented in Table 8. Although some of the problems were similar to those identified in the heuristics evaluation (eg, unfamiliar terminology, small characters and texts, and inconsistent button design), the comments offered more details about the design that enabled us to develop specific directions for system redesign.
Table 8. Usability problems identified from the open-ended questions.
This study assessed the interface design of a computer-based chronic disease self-management system using a set of design heuristics and evaluated the performance and perceptions of users about the system. Using the paper prototype, our evaluations quickly and effectively identified the system’s strengths and usability weaknesses.
System Interface Design
Overall, our findings indicated that the participants were basically able to perform the study tasks using the current design. However, we also identified a number of design problems and areas that could be improved to further enhance usability. Moreover, based on our findings, we drew several implications for usability design guidelines for health IT systems intended for the elderly.
First, all four performance indicators showed that the “access the history data page” task (task 10) was difficult to perform. This was likely due to a design inconsistency where the appearance and position of the history page button was completely different from that of the six main measurement module buttons, as indicated by the findings of the heuristics evaluation and end-user test (ie, ambiguous design of the history page button). Because of this difference, when the participants performed the task, many of them attempted to find the button in the area where the six module buttons were grouped; therefore, the participants did not notice that the button was actually located in a different area of the interface. This inconsistency led to confusion and resulted in additional search efforts that would not be necessary if the location was changed. This finding confirms the design principle that the appearance, position, and configuration should be consistent across objects/displays (eg, buttons, icons) that serve the same basic functions (eg, going to a new page/module).
Second, the blood glucose test time selection task (task 5) was also challenging because it had a similar inconsistent design problem. Furthermore, in the blood glucose module menu, there were a total of six alternative test times available for selection because the timing of the test could be before or after breakfast, lunch, or dinner (see Figure 3). Based on the participants’ comments about the end-user test and our observation, it appeared that the menu offered too many choices, which added decision complexity (see the Hick-Hyman Law [46,47]). Additionally, older adults may experience declines in cognitive abilities and eyesight that can make it more difficult to process complex information and locate information on complex interfaces. Our menu, with its six options, likely required more visual search and cognitive effort for information processing and may have contributed to the lower task performance of the elderly patients. From this observation, we suggest that the number of choices in a menu/interface be kept to an essential minimum. Therefore, we modified our design such that the system would automatically record the test time.
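The Hick-Hyman Law cited above states that mean decision time grows with the logarithm of the number of equally likely alternatives, T = a + b * log2(n + 1). The sketch below illustrates the relationship; the coefficients a and b are arbitrary placeholders, not values fitted to this study's data.

```python
# Hick-Hyman law: decision time grows logarithmically with the number
# of equally likely choices. Coefficients a (base time) and b (time per
# bit of decision information) are illustrative placeholders.
import math

def decision_time(n_choices, a=0.2, b=0.15):
    """Mean decision time in seconds for n equally likely choices."""
    return a + b * math.log2(n_choices + 1)

t_six = decision_time(6)  # the six-option test-time menu
t_one = decision_time(1)  # a single, automatically recorded test time
```

Even under this simple model, collapsing the menu to an automatically recorded test time removes the decision step entirely, which is consistent with the design change described above.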
Third, although the history data chart followed a simple two-dimensional line chart design in which the measurement dates were displayed along the x-axis and the measurement values were plotted along the y-axis, most elderly participants could not easily comprehend the chart and retrieve the values, as indicated by the performance data and the participants’ comments (ie, poor pairing design between the measurement value and its measurement date in the history data chart). This type of graphical representation can be especially difficult for older adults to read and comprehend. This finding suggests that when a graphical representation of measurement data is employed, it should be designed to help improve the older adults’ ability to pair the measurement dates with the corresponding measurement values.
Fourth, readability was another design weakness identified. The font size (all of the Chinese characters) was set at 18 points in the original design. Although the literature recommends that font sizes be at least 14 points (eg, Demiris and colleagues), the findings of the end-user test showed that when Chinese characters were used, an 18-point font size was too small for the older adults to read, due to the crowded strokes in the characters. Moreover, the sizes of the icons and symbols were too small. These findings suggest that the fonts, icons, and symbols should be larger for the elderly population. While the mean score of the satisfaction item that examined graphic quality (Sat3: overall quality of graphics) was high, the participants’ comments about the end-user test indicated that the picture quality of the icons and symbols was inadequate, which affected the overall readability. Therefore, high image resolution should be used for icons and symbols.
Fifth, regarding the presentation of information, a number of participants expressed their confusion about the pictures that were used to describe the functions of the buttons (eg, a picture of a scale was used to represent weight measurement). The selection and use of these icons should be revisited, and meaningful pictures should be used to enhance conceptual compatibility. Additionally, both the heuristics evaluation and the end users’ comments indicated that the abbreviations and some of the medical terms used in the interfaces (eg, SpO2 and BMI) were unclear and too technical. Older adults in particular may not have the knowledge to understand the meanings of these terms. Therefore, they should be replaced with plain, non-technical terms that are less ambiguous to users.
Figure 3. The blood glucose module menu includes six buttons for the selection of the six test times of blood glucose.
Usability Test Methodology and Design
A number of research methodology and design issues are worth discussing because these provide important implications for health information technology usability research. First, although using computer-based interactive system prototypes in the usability test can allow researchers to measure realistic user interactions, evidence shows that paper prototypes are as effective as computer-based prototypes in uncovering usability problems and understanding the users’ subjective evaluation of a system [36,48,49]. Moreover, paper prototypes are less costly and can be created quickly.
Second, many previous studies adopted a single usability testing method. In our study, end-user testing revealed a number of usability issues that were not detected in the heuristics evaluation. Furthermore, our heuristics evaluation identified only high-level structural usability problems (eg, font size and information grouping problems), whereas the end-user testing allowed us to discover a large number of usability weaknesses at more detailed levels. Our study showed that using multiple evaluation approaches can help identify more potential problems and should be a more reliable practice for conducting usability studies (as also noted in the literature; see [30,33]).
Third, one of the main criticisms of previous studies on health IT usability has been the lack of a theoretical basis for the development of the study methodology. Our study method and procedures were carefully set up based on systematic usability study guidelines and models, as well as empirical research, such as those of Nielsen and Mack and of Nielsen. These guidelines provided valid direction for our experimental design and helped prevent erroneous testing protocols and data collection.
Fourth, effective disease self-management systems have the potential to improve care quality and safety [51,52]. However, one cannot meaningfully examine, and then be certain of, the true value of a newly developed health information system (such as the one in this study) without first discovering and eliminating its usability and design problems. For instance, a system with an unpleasant and ineffective interface may be found to have no impact on health outcomes, yet it could actually have been beneficial had its design weaknesses been overcome before the examination. Our study suggests that usability testing is a step that should be performed during the system development process to avoid drawing mistaken conclusions about system effectiveness.
Strengths and Limitations
Our study had several strengths: (1) careful and systematic procedures were adopted in the heuristics evaluation and end-user testing; (2) context-specific consideration was exercised to generate heuristics for the heuristics evaluation and to develop performance measures for the end-user testing; and (3) compared to many other usability studies, our study involved a relatively large sample size, which may have helped identify more usability problems and design weaknesses of our system. However, a few limitations should be noted. First, no alternative system interfaces were assessed; therefore, no comparative results were available to indicate a better interface design approach. This may limit our findings and ideas on system interface improvement. Second, although it was not the focus of this study, it may be worth considering the effects of user characteristics (eg, age, severity of the chronic conditions, and computer experience) on the measurement outcomes. Third, it may be valuable to verify the effectiveness of our design recommendations by conducting iterative usability evaluations. To this end, we are planning another round of usability studies using a computerized prototype that incorporates the design recommendations.
An inadequately designed health information system increases the likelihood of users committing errors and having difficulty using the system. These issues can be mitigated by identifying a system’s usability problems through heuristics evaluations and end-user tests, and the results of these evaluations can be used for design refinement. Importantly, special attention should be given to the selection of design heuristics when evaluating systems for elderly patients, because general human factors design guidelines may be insufficient to address the unique characteristics and capabilities of elderly patients. Furthermore, the design problems discovered in this study allow for the formulation of new design guidelines that are of particular importance for the elderly and can be generalized to other health information systems designed for older adult patients.
We thank Priscilla Chan, social service manager of the United Christian Nethersole Community Health Service, and Professor Agnes Tiwari, Head of the School of Nursing at The University of Hong Kong, for their help in the project. This study was supported by the Seed Funding Programme for Basic Research of the University of Hong Kong (Project title: Usability analysis of a home-based electronic monitoring system in elderly patients with chronic disease using a theory-based think-aloud testing protocol; PI: Calvin Or).
Conflicts of Interest
- Jimison H, Gorman P, Woods S, Nygren P, Walker M, Norris S, et al. Barriers and drivers of health information technology use for the elderly, chronically ill, and underserved. Evid Rep Technol Assess (Full Rep) 2008 Nov(175):1-1422. [Medline]
- Or CK, Karsh BT. A systematic review of patient acceptance of consumer health information technology. J Am Med Inform Assoc 2009;16(4):550-560 [FREE Full text] [CrossRef] [Medline]
- Kaufman D, Roberts WD, Merrill J, Lai TY, Bakken S. Applying an evaluation framework for health information system design, development, and implementation. Nurs Res 2006;55(2 Suppl):S37-S42. [Medline]
- Nielsen J. Usability Engineering. San Francisco, CA: Morgan Kaufmann; 1993.
- Gosbee J. Human factors engineering and patient safety. Qual Saf Health Care 2002 Dec;11(4):352-354 [FREE Full text] [Medline]
- Kushniruk A. Evaluation in the design of health information systems: application of approaches emerging from usability engineering. Comput Biol Med 2002 May;32(3):141-149. [Medline]
- Karsh BT. Beyond usability: designing effective technology implementation systems to promote patient safety. Qual Saf Health Care 2004 Oct;13(5):388-394 [FREE Full text] [CrossRef] [Medline]
- Edwards PJ, Moloney KP, Jacko JA, Sainfort F. Evaluating usability of a commercial electronic health record: a case study. Int J Hum-Comput St 2008;66:718-728. [CrossRef]
- Linder JA, Rose AF, Palchuk MB, Chang F, Schnipper JL, Chan JC, et al. Decision support for acute problems: the role of the standardized patient in usability testing. J Biomed Inform 2006 Dec;39(6):648-655. [CrossRef] [Medline]
- McDaniel AM, Hutchison S, Casper GR, Ford RT, Stratton R, Rembusch M. Usability testing and outcomes of an interactive computer program to promote smoking cessation in low income women. Proc AMIA Symp 2002:509-513 [FREE Full text] [Medline]
- Rose AF, Schnipper JL, Park ER, Poon EG, Li Q, Middleton B. Using qualitative studies to improve the usability of an EMR. J Biomed Inform 2005 Feb;38(1):51-60. [CrossRef] [Medline]
- Tang Z, Johnson TR, Tindall RD, Zhang J. Applying heuristic evaluation to improve the usability of a telemedicine system. Telemed J E Health 2006 Feb;12(1):24-34. [CrossRef] [Medline]
- Yen PY, Bakken S. A comparison of usability evaluation methods: heuristic evaluation versus end-user think-aloud protocol - an example from a web-based communication tool for nurse scheduling. AMIA Annu Symp Proc 2009;2009:714-718. [Medline]
- Yen PY, Gorman P. Usability testing of digital pen and paper system in nursing documentation. AMIA Annu Symp Proc 2005:844-848. [Medline]
- Aguero-Torres H, Hilleras PK, Winblad B. Disability in activities of daily living among the elderly. Curr Opin Psychiatry 2001;14:355-359.
- Atkinson HH, Cesari M, Kritchevsky SB, Penninx BW, Fried LP, Guralnik JM, et al. Predictors of combined cognitive and physical decline. J Am Geriatr Soc 2005 Jul;53(7):1197-1202. [CrossRef] [Medline]
- Fisk AD, Rogers WA, Charness N, Czaja SJ, Sharit J. Designing for Older Adults: Principles and Creative Human Factors Approaches. Hoboken, NJ: CRC Press; 2009.
- Vanderheiden GC. Design for people with functional limitations. In: Salvendy G. editor. Handbook of Human Factors and Ergonomics. New York, NY: John Wiley & Sons; 2006:1387-1417.
- Czaja SJ, Lee CC. Information technology and older adults. In: Sears A, Jacko J. editors. The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. New York, NY: Lawrence Erlbaum and Associates; 2007:777-792.
- Strothotte T, Fritz S, Michel R, Raab A, Petrie H, Johnson V. Development of dialogue systems for a mobility aid for blind people: initial design and usability testing. New York, NY: ACM; 1996 Presented at: the Second Annual ACM Conference on Assistive Technologies; April 11-12, 1996; Vancouver, BC, Canada. [CrossRef]
- Holzinger A, Searle G, Kleinberger T, Seffah A, Javahery H. Investigating usability metrics for the design and development of applications for the elderly. Lecture Notes in Computer Science 2008;5105:98-105. [CrossRef]
- Mikkonen M, Vayrynen S, Ikonen V, Heikkila MO. User and concept studies as tools in developing mobile communication services for the elderly. Personal and Ubiquitous Computing 2002;6:113-124. [CrossRef]
- Ballinger C, Pickering RM, Bannister S, Gore S, McLellan DL. Evaluating equipment for people with disabilities: user and technical perspectives on basic commodes. Clinical Rehabilitation 1995;9:157-166. [CrossRef]
- Demiris G, Oliver DP, Dickey G, Skubic M, Rantz M. Findings from a participatory evaluation of a smart home application for older adults. Technol Health Care 2008;16(2):111-118. [Medline]
- Good A, Stokes S, Jerrams-Smith J. Elderly, novice users and health information web sites: issues of accessibility and usability. J Healthc Inf Manag 2007;21(3):72-79. [Medline]
- Hung WW, Ross JS, Boockvar KS, Siu AL. Recent trends in chronic disease, impairment and disability among older adults in the United States. BMC Geriatr 2011 Aug;11(47) [FREE Full text] [CrossRef] [Medline]
- Jackson CL, Bolen S, Brancati FL, Batts-Turner ML, Gary TL. A systematic review of interactive computer-assisted technology in diabetes care: interactive information technology in diabetes care. J Gen Intern Med 2006 Feb;21(2):105-110. [CrossRef] [Medline]
- Goldberg LR, Piette JD, Walsh MN, Frank TA, Jaski BE, Smith AL, WHARF Investigators. Randomized trial of a daily electronic home monitoring system in patients with advanced heart failure: the Weight Monitoring in Heart Failure (WHARF) trial. Am Heart J 2003 Oct;146(4):705-712. [CrossRef] [Medline]
- Rubin J, Chisnell D, Spool JM. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. Indianapolis, IN: Wiley Pub; 2008.
- Tan WS, Liu D, Bishu R. Web evaluation: Heuristic evaluation vs. user testing. Int J Ind Ergon 2009;39(4):621-627. [CrossRef]
- Gosbee J, Gosbee LL. Usability evaluation in health care. In: Carayon P, editor. Handbook of Human Factors and Ergonomics in Health Care and Patient Safety. Boca Raton, FL: CRC Press; 2011:543-556.
- Nielsen J, Molich R. Heuristic evaluation of user interfaces. New York, NY: ACM Press; 1990 Presented at: CHI Conference on Human Factors in Computing Systems; April 01-05, 1990; Seattle, WA p. 249-256. [CrossRef]
- Jaspers MW. A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence. Int J Med Inform 2009 May;78(5):340-353. [CrossRef] [Medline]
- Snyder C. Paper Prototyping: The Fast and Easy Way to Design and Refine User Interfaces. San Francisco, CA: Morgan Kaufmann; 2003.
- Rudd J, Stern K, Isensee S. Low vs. high-fidelity prototyping debate. interactions 1996;3(1):76-85. [CrossRef]
- Walker M, Takayama L, Landay JA. High-fidelity or low-fidelity, paper or computer? Choosing attributes when testing web prototypes. 2002 Presented at: the Human Factors and Ergonomics Society 46th Annual Meeting; Sep 30-Oct 4, 2002; Baltimore, MD p. 661-665. [CrossRef]
- Virzi RA, Sokolov JL, Karis D. Usability problem identification using both low- and high-fidelity prototypes. 1996 Presented at: the SIGCHI Conference on Human Factors in Computing Systems; April 13-18, 1996; Vancouver, BC, Canada p. 236-243. [CrossRef]
- Olmsted-Hawala EL, Romano JC, Murphy ED. The use of paper-prototyping in a low-fidelity usability study. 2009 Presented at: IEEE International Professional Communication Conference; July 19-22, 2009; Waikiki, HI p. 1-11. [CrossRef]
- Shneiderman B, Plaisant C. Designing the User Interface: Strategies for Effective Human-Computer Interaction. Boston, MA: Addison Wesley; 2005.
- Sanders MS, McCormick EJ. Human Factors in Engineering and Design. New York, NY: McGraw-Hill; 1993.
- Demiris G, Finkelstein SM, Speedie SM. Considerations for the design of a Web-based clinical monitoring and educational system for elderly patients. J Am Med Inform Assoc 2001;8(5):468-472 [FREE Full text] [Medline]
- Nielsen J, Mack RL. Usability Inspection Methods. New York, NY: Wiley; 1994.
- Grenier AS, Carayon P, Casper GR, Or C, Burke LJ, Brennan PF. Usability evaluation of an Internet-based health information/communication system for CHF patients. 2006 Presented at: the 16th Triennial World Congress of the International Ergonomics Association; July 10-14, 2006; Maastricht, the Netherlands p. 10-14.
- Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 1989;13(3):319-340. [CrossRef]
- Krippendorff K. Content Analysis: An Introduction to Its Methodology. Thousand Oaks, CA: Sage Publications, Inc; 2012.
- Hick WE. On the rate of gain of information. Q J Exp Psychol 1952;4(1):11-26. [CrossRef]
- Hyman R. Stimulus information as a determinant of reaction time. J Exp Psychol 1953 Mar;45(3):188-196. [Medline]
- Catani MB, Biers DW. Usability evaluation and prototype fidelity: users and usability professionals. 1998 Presented at: Human Factors and Ergonomics Society 42nd Annual Meeting; October 5-9, 1998; Chicago, IL p. 1331-1335. [CrossRef]
- Sefelin R, Tscheligi M, Giller V. Paper prototyping - what is it good for? A comparison of paper- and computer-based low-fidelity prototyping. 2003 Presented at: CHI Conference on Human Factors in Computing Systems; April 5-10, 2003; Fort Lauderdale, FL p. 778-779. [CrossRef]
- Yen PY, Bakken S. Review of health information technology usability study methodologies. J Am Med Inform Assoc 2012;19(3):413-422 [FREE Full text] [CrossRef] [Medline]
- Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006 May;144(10):742-752. [Medline]
- Jamal A, McKenzie K, Clark M. The impact of health information technology on the quality of medical and health care: a systematic review. HIM J 2009;38(3):26-37. [Medline]
|BMI: body mass index|
|IT: information technology|
|RA: research assistant|
|SpO2: oxygen saturation|
|Edited by G Eysenbach; submitted 25.05.12; peer-reviewed by R Holden, A Roundtree; comments to author 19.06.12; revised version received 18.07.12; accepted 23.08.12; published 08.11.12|
Please cite as:
Or C, Tao D
Usability Study of a Computer-Based Self-Management System for Older Adults with Chronic Diseases
JMIR Res Protoc 2012;1(2):e13
Copyright © Calvin Or, Da Tao. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 08.11.2012.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on http://www.researchprotocols.org, as well as this copyright and license information must be included.