Keywords

Clinical dashboard, Home care services, Nursing informatics, Usability evaluation

 

Authors

  1. Dowding, Dawn PhD, RN, FAAN
  2. Merrill, Jacqueline A. PhD, MPH, RN, FAAN, FACMI
  3. Barron, Yolanda MS
  4. Onorato, Nicole BS
  5. Jonas, Karyn MSN
  6. Russell, David PhD

Abstract

The introduction of electronic health records has produced many challenges for clinicians. These include integrating technology into clinical workflow and fragmentation of relevant information across systems. Dashboards, which use visualized data to summarize key patient information, have the potential to address these issues. In this article, we outline a usability evaluation of a dashboard designed for home care nurses. An iterative design process was used, which consisted of (1) contextual inquiry (observation and interviews) with two home care nurses; (2) rapid feedback on paper prototypes of the dashboard (10 nurses); and (3) usability evaluation of the final dashboard prototype (20 nurses). Usability methods and assessments included observation of nurses interacting with the dashboard, the System Usability Scale, and the Questionnaire for User Interaction Satisfaction short form. The dashboard prototype was deemed to have high usability (mean System Usability Scale score, 73.2 [SD, 18.8]) and was positively evaluated by nurse users. It is important to ensure that technology solutions such as the one proposed in this article are designed with clinical users in mind, to meet their information needs. The design elements of the dashboard outlined in this article could be translated to other electronic health records used in home care settings.

 

Article Content

Health information technology systems are widespread across healthcare settings. In the United States, while most hospitals have electronic health record (EHR) systems,1 their use is less common (although increasing) in other settings, such as home care2 and residential care.3 The value of EHRs remains an open question, and their introduction has produced significant challenges for clinicians, particularly regarding effective integration of the technology into their workflow.4,5 Nurses, for example, often have to mentally integrate information derived from technology systems in different locations6 and express a need for support in synthesizing the available information to paint a "picture of the patient" over time.7 In an effort to help clinicians integrate data regarding diagnoses and treatment interventions for patients across time, some studies have developed techniques for the visual display of patient data at both the individual and group levels.8-11 These systems typically integrate data from a variety of electronic data sources in hospital settings to provide visualizations of varying complexity to clinicians. The assumption of such initiatives is that the visualizations will help clinicians retrieve data and make clinical decisions. However, to date, there has been limited evaluation of the impact of such systems in practice settings.11

 

To an increasing extent, dashboards are being deployed to visually summarize data relevant to individual clinicians to support decisions at the point of care. Dashboards can reduce cognitive overload and improve users' ability to interpret and remember relevant data.12,13 Dashboards may be used for many purposes in healthcare settings, for example, to provide feedback on how well clinicians adhere to clinical practice guidelines,14 to encourage clinicians to carry out evidence-based care,15 or to combine and display information about a patient's condition to support clinical decisions.6,16,17 Initial evaluations of dashboards have documented reduced response times for finding relevant information,6,12,18,19 improved accuracy in information retrieval,6,18,19 and increased adherence to evidence-based care interventions.12

 

A significant number of studies have been conducted in hospital settings to explore the use of visualized data to support clinical care.6,9,14-16 However, to our knowledge, only one previous study has focused on home healthcare.18 Approximately 4.9 million individuals in the US received care at home in 2014,20 and this number is likely to increase as the population ages.21 Home care nurses face challenges similar to those in an acute care environment when accessing relevant information at the point of care to enable appropriate decisions. However, they also face specific challenges: they see patients less frequently (there can be considerable gaps of time between home care visits), and there is often very little continuity among the nurses who make patient visits across an episode of home care.22 In a companion study, we asked a group of home care nurses in the US to identify the information they required to help them provide care for patients with heart failure,23 who are at increased risk of hospital readmission from a home care setting,24,25 and the importance of this information to their practice. The results of that study were used to design a prototype dashboard focused on tracking and monitoring of patient information (ie, weight and vital signs) over time. In this article, we briefly describe the design process used to develop the dashboard prototype and present the findings from our usability evaluation of the dashboard.

 

METHODS

Study Context

The study was conducted in a large not-for-profit home care agency in the Northeastern US. In 2016, at the time of the study, the agency served more than 142 000 patients and employed more than 1400 RNs and licensed practical nurses. The agency's internally developed EHR system was used by all home care nurses at the point of care. Data collected via the electronic system included the Outcomes and Assessment Information Set (OASIS), which is a mandated assessment carried out by all Medicare-certified home care agencies on admission, at multiple points during care, and at discharge. In addition, nurses documented their interventions and observations associated with every home care visit.

 

Participants

Home care nurses working in the agency participated in both the dashboard design process and the usability evaluation. Nurses were volunteers recruited via email or after clinical team meetings. Two nurses were recruited as codesigners for the study, participated in contextual interview sessions, and completed the usability evaluation. Ten nurses provided rapid feedback on the paper prototype dashboards, and 20 nurses completed the usability evaluation only (32 nurses in total across all three study components). In addition, three experts were recruited externally to conduct a heuristic evaluation of the dashboard prototype. Criteria for selection as an expert were being a nurse, having published in the field of informatics, and having expertise in data visualization. Dashboard design commenced in September 2016, and the usability evaluation took place between January and February 2017.

 

All elements of the study were approved by the relevant institutional review boards.

 

Dashboard Design Process

In a companion study, nurses participated in focus groups to identify what information was required to assist them in delivering effective evidence-based care to patients with heart failure (the detailed methods and results of the focus group study are published separately).23 We also conducted a study to explore nurses' ability to understand visualized information, such as line graphs and bar graphs (full details of the methods and results of that study are also published separately).26 The results of these two studies informed the design of the dashboards presented in this article. Based on the focus group findings, dashboards were developed to assist nurses with the monitoring of patient weights and vital signs between visits. The dashboards incorporated different display options to accommodate individual variation in comprehension of graphical information, based on the results of our study.26

 

We used a design science framework approach to develop the prototype dashboards.27,28 This approach emphasizes collaboration with the end users of the technology (in this case, home care nurses) and is iterative in nature. Two home care nurses worked as "codesigners" with the research team to develop the final prototype dashboard, as described in the following paragraphs.

 

Contextual inquiry was used to observe nurses conducting normal home care visits, with follow-up interviews that explored how they did their work. A researcher accompanied each nurse on two patient visits (two nurses, for a total of four visits and observations), observed them performing care, took notes, and then asked questions to verify the researcher's understanding of the nurses' work and to document the nurses' views and comments.

 

Using information from the contextual inquiry, paper prototypes of potential dashboard designs were developed and evaluated in an iterative cycle using rapid feedback from a sample of 10 nurses. The paper prototypes were formatted on poster boards. Nurses provided written consent to participate in the study. They were asked to write down their thoughts and feedback on the prototype design (including layout, colors, and information presented in the dashboard) and suggestions for improvements. In general, the feedback sessions lasted approximately 5 to 10 minutes.

 

Based on the observations, interviews, and rapid feedback on paper prototypes, a final version of the dashboard was developed. The prototype was Web based, built with a widely available tool, InVision (InVision, New York, NY; http://invision.com), and designed to look as if it were integrated into the agency's EHR system. All data for the prototype dashboard were fabricated (based on data that nurses would see in a home care agency) and preprogrammed into the dashboard design. The interface was programmed to provide interactivity (to enable switching between data displays).

 

Usability Evaluation Methods

We used the Tasks, Users, Representations, and Functions (TURF) framework29 to structure the usability evaluation. The interactive Web-based prototype dashboard was evaluated for function (how useful is it?), users (how satisfying do users find it?), and representations and tasks (how usable is it?). Participants (n = 20) were given a written series of tasks to complete using the prototype. The first set of tasks focused on extracting information from the data display (eg, providing a value for the blood pressure measurement on a specific date). A second set of tasks focused on dashboard functionality (ie, switching between graph types, selecting different data to display, and navigating between the front patient case screen and the dashboard located in the patient's notes to record vital signs). Participants completed the tasks on the worksheet, writing down information they extracted from the dashboard where appropriate. While they completed the tasks, their interactions with the dashboard interface were recorded using Morae software (TechSmith, Okemos, MI), capturing the time participants started and finished each task, their verbalizations, and on-screen activity such as mouse and keyboard input.

 

Usefulness was assessed by analyzing participant data (n = 20) recorded as participants interacted with the dashboard during the evaluation session and comparing how they used the dashboard with the functionality built into it. For example, we recorded whether participants used radio buttons to navigate among the different vital signs measures, used the mouse to hover over graphs to identify the value associated with a specific data point, or moved between bar and line graph displays. Whether participants completed each task with ease (without assistance), with difficulty (defined as requiring assistance from the research team), or failed to complete it was also noted as part of the evaluation.

 

Satisfaction (how satisfying the users found the system) was evaluated using two validated questionnaires: the 10-item System Usability Scale (SUS)30 and the short-form 50-item Questionnaire for User Interaction Satisfaction (QUIS).31,32 The SUS is a flexible questionnaire designed to assess any technology and is relatively quick and easy to complete. It consists of 10 statements scored on a 5-point scale of strength of agreement, with final scores (after transformation of the scores as described in the Data Analysis section) ranging from 0 to 100; a higher score indicates better usability. As a general rule, a system with a score above 70 has acceptable usability; a lower score means that the system needs more scrutiny and continued improvement.33 The QUIS was developed by researchers at the University of Maryland and is specifically designed to measure user satisfaction with various components of a technology system, including both overall system satisfaction and specific interface factors such as screen design and system terminology.34 The questionnaire can be configured to fit the needs of a particular user interface analysis. The original full QUIS has 11 subcomponents, with more than 120 questions covering aspects of usability including learning, terminology, and system capabilities. It has extremely high reliability (Cronbach's α = .95)34 and has been used successfully for user evaluations across a variety of technology systems in healthcare.35-38 For this study, the QUIS short form was used to reduce burden on study participants. We used the QUIS subcomponents relevant to the dashboard evaluation: part 3, overall user reactions (five questions); part 4, screen (four questions); part 5, terminology and dashboard information (two questions); part 6, learning (three questions); and part 7, system capabilities (two questions). For each subcomponent, participants rated the dashboard on a scale from 1 to 9 (a summary of each question and its response anchors is provided in Table 2).

  
Table 2. Summary of SUS and QUIS Scores

Usability was evaluated using heuristic evaluation and task analysis.29 Heuristic evaluation was conducted by experts who were provided with an extended task list designed to enable them to explore the full functionality of the dashboard, together with a heuristic evaluation checklist developed for the study.39 The checklist consisted of seven general usability principles (visibility of system status, match between system and the real world, user control and freedom, consistency and standards, recognition rather than recall, flexibility and efficiency of use, aesthetic and minimalist design/remove the extraneous) comprising 40 usability factors (eg, Does every screen have a title or header that describes its contents? Do the selected colors correspond to common expectations about color codes?) and three visualization-specific usability principles (spatial organization, information coding, orientation) comprising nine usability factors (eg, Are symbols appropriate for the data represented?). If a factor was present, the evaluator gave a score of 1 (yes); if not, a score of 0 (no).

 

Task analysis compared user performance in terms of time on task. The actual time that home health nurses spent using the dashboard was compared to the total time it took a group of expert users (three of the study co-authors) to complete a set of tasks. The lower the difference between the two groups, the more usable the system was considered to be.

 

Usability Evaluation Data Analysis

Usefulness: The audio recordings of participant interactions with, and feedback regarding, the dashboard underwent content analysis. Three members of the team independently categorized statements from the audio recordings before meeting to reach consensus on the categorization for each comment.

 

Satisfying: The SUS was scored by converting responses to a 0- to 4-point scale (4 was the most positive response). The converted responses were added and multiplied by 2.5 as per the scoring instructions, giving a range of possible values from 0 to 100. Descriptive statistics were used to summarize the SUS scores across all evaluators of the system. The means and standard deviations for each item on the QUIS were calculated and then graphically displayed, providing an overall profile of areas that participants identified as being particularly good or bad.
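To make this transformation concrete, the sketch below (in Python) implements the standard SUS scoring rule just described; the responses shown are hypothetical, not data from this study.

```python
def sus_score(responses):
    """Convert ten raw SUS responses (1-5 Likert) to a 0-100 score.

    Odd-numbered items are positively worded (score = response - 1);
    even-numbered items are negatively worded (score = 5 - response).
    The ten converted values (each 0-4) are summed and multiplied by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    converted = [
        (r - 1) if (i % 2 == 0) else (5 - r)  # i = 0 is item 1 (odd-numbered)
        for i, r in enumerate(responses)
    ]
    return sum(converted) * 2.5

# Hypothetical responses from one participant (not study data):
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 3]))  # -> 80.0
```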

 

Usability: The output from a heuristic evaluation is a summary list of usability problems identified by the group of evaluators. The scores for each heuristic were calculated by dividing the total number of factors (points) awarded by the total number available. The higher the score, the more usable the system was considered to be.
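A minimal sketch of this calculation, assuming each heuristic's binary factor scores have been pooled into a simple list (the heuristic names and values below are illustrative only, not the study's data):

```python
def heuristic_scores(checklists):
    """Compute the proportion of usability factors satisfied per heuristic.

    `checklists` maps each heuristic name to a list of binary factor
    scores (1 = present, 0 = absent) pooled across evaluators. Returns
    the fraction of available points that were awarded per heuristic.
    """
    return {
        heuristic: sum(points) / len(points)
        for heuristic, points in checklists.items()
    }

# Illustrative scores for two of the ten heuristics (not study data):
example = {
    "visibility of system status": [1, 1, 1, 0, 1],
    "aesthetic and minimalist design": [1, 1, 1, 1],
}
print(heuristic_scores(example))
# -> {'visibility of system status': 0.8, 'aesthetic and minimalist design': 1.0}
```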

 

Three experts who did not participate in the heuristic evaluation were recorded interacting with the dashboard to complete the task list created for the usability evaluation. The total time to complete the task list for each expert was calculated. The mean of the expert times was then used as the expert model. Total time using the dashboard was identified for each participant in the usability evaluation. Descriptive statistics were used to summarize these data, including the average, maximum, and minimum completion times for the task list, and compared to the expert model. Users' ability to accurately extract information from the dashboard was also recorded.
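A sketch of this time-on-task comparison, assuming per-person completion times in minutes (all values below are invented for illustration, not the study's recordings):

```python
from statistics import mean, stdev

def compare_to_expert_model(nurse_times, expert_times):
    """Summarize participant task times against the expert model.

    The expert model is the mean completion time of the expert users;
    a smaller average deviation from it suggests a more usable system.
    """
    expert_model = mean(expert_times)
    deviations = [t - expert_model for t in nurse_times]
    return {
        "expert_model": expert_model,
        "mean": mean(nurse_times),
        "sd": stdev(nurse_times),
        "min": min(nurse_times),
        "max": max(nurse_times),
        "mean_deviation": mean(deviations),
    }

# Hypothetical completion times in minutes (not study data):
print(compare_to_expert_model([4.2, 6.8, 5.1, 7.9, 3.6], [1.2, 1.5, 1.4]))
```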

 

RESULTS

Prototype Dashboard Design Features

Following the contextual inquiry observation, interviews, and two rounds of rapid feedback on paper prototypes, the Web-based interactive prototype was developed (an example of the dashboard display for weight is provided as Figure 1). Suggestions extracted from the data resulted in the development of the following areas of functionality for the prototype:

  
Figure 1. Example of prototype dashboard display.

* movement enabled between different visualizations of the data (switching between a line graph and a bar graph);

 

* data displayed for different vital signs (using the radio buttons);

 

* information displayed in both the front screen of the patient EHR and in the patient notes for vital signs (where nurses commonly document measurements); and

 

* measurements color-coded (eg, red for measurements outside of guidelines, yellow for weight measurements above initial weight but not outside guidelines, green for measurements inside guidelines; a minimal sketch of this logic follows the list).
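As a rough sketch of the color-coding logic in the last item above (the function name, thresholds, and baseline rule are hypothetical; the prototype used guideline values and physician-set parameters from the agency):

```python
def classify_measurement(value, low, high, baseline=None):
    """Return the dashboard color for a vital-sign or weight value.

    red    -> outside the guideline range [low, high]
    yellow -> within guidelines but above the patient's initial weight
              (only applies when a baseline weight is supplied)
    green  -> within guidelines
    Thresholds and the baseline rule are illustrative, not the agency's.
    """
    if value < low or value > high:
        return "red"
    if baseline is not None and value > baseline:
        return "yellow"
    return "green"

# Hypothetical weight readings against a 150-170 lb guideline range,
# with an initial weight of 155 lb (not study data):
for w in (148, 158, 172):
    print(w, classify_measurement(w, low=150, high=170, baseline=155))
# -> 148 red / 158 yellow / 172 red
```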

 

 

Usability Evaluation

Participants

A total of 22 nurses completed the usability evaluation (the two codesigners plus the 20 participants recruited for the usability evaluation). They were predominantly female (n = 20 [91%]), with a mean age of 51 (SD, 10.0) years. They were ethnically diverse (32% white non-Hispanic, 32% Asian, 23% African American, and 14% Hispanic/other race/ethnicity) and experienced (78% had ≥10 years of nursing experience).

 

Usefulness

The majority of participants (91%) were able to use the dashboard immediately, and easily used radio buttons to switch between data elements and icons to navigate between the line and bar graphs (96%) (Table 1). Participants liked the functions that enabled them to see trends for vital signs over time, without having to search back through previous notes. "It has, like, past vital signs, which is really helpful, instead of going back and forth" (Nurse 32). "Because sometimes you're like, oh, what was the weight the other day, but if you have the graph in front of you, then you can see the change right away" (Nurse 40).

  
Table 1. Task Completion

The color-coded data (red, yellow, green) were received positively; participants noted that this helped them to pay attention to the data. The built-in decision support (highlighting with red color when a measurement was outside recommended guidelines) was also identified as an important element of dashboard functionality. "Just looking at numbers, numbers, numbers, there's something robotic, you don't pay attention, but with colors it makes it stand out more, you pay attention" (Nurse 36). "It's like ok now I see everything I need to know and the fact that it has the blood pressure parameters on there when you go into the blood pressure graph, that's good too because now I can see, ok, well this one was a little off, let me call the doctor" (Nurse 52).

 

More than half of participants had difficulty (50%) or failed to complete (16%) the task that required navigating from the initial front dashboard display screen to the patient notes area of the EHR to document vital signs. This was partly due to the functionality of the Web-page interface used in the study, which did not behave in the same way as the agency's EHR. Of the participants who did navigate to the charting screens, two were unable to complete documentation (again, because of a lack of congruence between the prototype Web page and the agency's EHR screen).

 

There was some disagreement in the verbal feedback on the overall usability of the system. Some participants reported that they found it easy to use without training, and others stated a need for training and practice. "I don't think you need any experience, I thought I navigated it quite well" (Nurse 48). "For someone like me it was easier, but for some of the older nurses it might take more time to kind of get the hang of it but I think it should get easier for them too" (Nurse 52). "I'm not the most computer savvy so I would just have to practice" (Nurse 39). "Just teach us like how to navigate and where we are supposed to find this and that because, you know the only thing I know with computers is the one I do with my notes and my visit, that's it" (Nurse 41).

 

In the course of the usability evaluation, participants suggested further functionality to be built into the dashboard before deployment. This included the ability to change the font size of the display, the addition of blood glucose readings to the charts, and edits to the graphs to make the relationship between vital sign records and dates clearer. "If it can be done with blood sugars too, that would be good" (Nurse 52). "I think for diabetic patients that might be helpful, to see what the sugars were last week instead of going back" (Nurse 32). "But seeing like over here, like the 23rd, the graph went like that and to kind of see down here, it's kind of hard to see, from this side of the graph over to where the number was" (Nurse 48).

 

Satisfying

The dashboard had a mean SUS score of 73.2 (SD, 18.8) and a median score of 70. Mean QUIS ratings of the dashboard prototype (scale from 1 to 9: word on the left = 1, word on the right = 9) are summarized in Table 2. Dashboard elements given the highest ratings were terminology and dashboard information (mean, 7.8 [SD, 1.5]) and the dashboard screen (mean, 7.7 [SD, 1.2]). Elements with the lowest ratings were overall user reactions (mean, 6.1 [SD, 1.0]) and learning the dashboard (mean, 7.5 [SD, 1.3]).

 

Usability

The usability issues identified by the three experts who participated in the heuristic evaluation were classified as either minor usability issues (n = 5) or cosmetic problems only (n = 12), in the areas of flexibility and efficiency of dashboard use and user control and freedom (Table 3). Specific issues raised included placing the icons for graph selection (line graph or bar graph) closer to the graph (cosmetic problem only) and using a contrasting color to highlight which graph is selected (minor issue). The heuristic evaluators also made suggestions regarding the icon for the line graph (cosmetic problem only), graying out the legend to convey that it is not interactive (cosmetic problem only), and the ability to (1) plot more data on one graph and (2) show a broader or narrower date range (minor issue).

  
Table 3. Heuristic Evaluation Ratings

Participants' average time to complete all dashboard tasks was 5.7 (SD, 2.4) minutes compared with 1.4 (SD, 0.6) minutes for the expert users. All nurses took more time than the average expert user; the average time deviation between nurses and expert users was 4.3 (SD, 2.4) minutes. Participants' ability to navigate the dashboard and extract accurate data from the displays was assessed via the task form. Accuracy on questions specifically related to interpreting data in the dashboards ranged between 55% and 100%. All participants (100%) were able to identify a patient's weight; however, only 55% (n = 12) were able to accurately identify a patient's temperature. The actual patient temperature was 99.7°F, which was displayed when the mouse pointer hovered over the reading on the chart. Participants who read the graph without using the hover feature gave a reading of either 100°F (n = 6 [27%]) or 99°F (n = 2 [9%]); one participant gave a reading of 99.5°F.

 

DISCUSSION

This article reports the results of a usability evaluation of a point-of-care dashboard prototype developed for use by home care nurses. The dashboard was designed for integration into an existing, internally developed EHR to support decision making in the care of patients with heart failure. The final dashboard prototype displays data trends for a patient's vital signs and weight across home care visits and incorporates built-in alerts (decision support) to indicate to the home care nurse when a patient's measurements fall outside the guidelines40 or parameters established by the physician. Usable products are generally considered to have SUS scores above 70; our prototype scored 73, suggesting acceptable usability.41 Nurses who participated in the usability evaluation indicated that elements of the dashboard design (such as trends over time in a patient's weight and vital signs, the use of visual displays for the data, and color coding to indicate measurements outside guidelines or recommendations) were perceived to be extremely useful as support for clinical practice.

 

However, before deployment in clinical practice, the dashboard would need considerable refinement, taking into account the results of the usability evaluation and feedback from the expert evaluators regarding design and layout of functions, as well as the nurses' need for training and education on how to use the dashboard effectively. Other functionality, such as providing data on a patient's blood glucose measurements (requested by several nurses in feedback), enhancing the usability of the graph function for temperature readings, and providing the facility to graph multiple indicators on one display (not specifically mentioned by these nurses but accepted practice in other settings), could also be considered.

 

While our study is not the first to develop visualizations that summarize patient trends for home care nurses,18 it is the first to specifically address both the information needs of the nurses and individual differences in the comprehension of visualized information. Because of the nature of home care nursing, where visits are less frequent and continuity of personnel is lower than in acute care settings, providing nurses with information over time to enable efficient monitoring of the patient's self-management is vital. With a visual dashboard such as the one developed in this study, nurses do not have to search for these data across a large number of previous visit notes and are readily alerted if the patient's condition deteriorates.

 

Study Limitations

This study was conducted at a single home care agency located in the Northeastern region of the US, which currently has its own internally developed EHR system. The dashboard developed in this study was specifically designed to be integrated into the existing EHR, located in the patient's notes when a nurse opens the first visit documentation screen and in the visit notes area where nurses document vital signs and weight. Although this is a limitation, the principles underlying development of the dashboard are transferable across different EHR systems. The dashboard should be integrated into nurses' workflow, located at points in the EHR where the information would be most useful; display data according to user preference, such as a bar graph or line graph; and provide decision support and guidance, indicated through appropriate use of color when a patient's measurements fall outside guidelines and parameters.

 

Other limitations include the design method for the prototype dashboard; it utilized an interface based on Web pages, which was not fully integrated into, and lacked the functionality of, the agency's EHR. This caused difficulties for several participants in the usability evaluation when they were asked to navigate from the front display dashboard to the area of the EHR in which they would normally chart a patient's weight and vital signs. In addition, the usability evaluation did not explore the dashboard in actual practice, meaning that it is not clear whether it would integrate effectively with a nurse's actual workflow.

 

Future Research

As noted, before the dashboard developed in this study could be implemented in a practice setting, it would need refinement and the addition of a training module. Whether and how the dashboard integrates with nurses' workflow and its potential impact on nurses' ability to retrieve information for use in decision making also need to be explored. Finally, studies to evaluate the long-term impact of the dashboard implementation on patient healthcare outcomes are needed.

 

CONCLUSION

The use of visualization techniques such as dashboards is increasing in response to clinician needs for summarized, easily interpreted patient information at the point of care. In this study, we developed a dashboard for use by home care nurses that included nurses in all aspects of the design process. It is important to ensure that technology solutions, such as the one proposed here, are designed with clinical users in mind, to meet their information needs. The design elements of the dashboard, which was rated as usable by nurses, could be translated to other EHRs used in home care settings.

 

References

 

1. Charles D, Gabriel M, Searcy T. Adoption of Electronic Health Record Systems Among U.S. Non-Federal Acute Care Hospitals: 2008-2014. Washington, DC: Office of the National Coordinator for Health Information Technology; 2015.

 

2. Bercovitz AR, Park-Lee E, Jamoom E. Adoption and Use of Electronic Health Records and Mobile Technology by Home Health and Hospice Care Agencies. Hyattsville, MD: National Center for Health Statistics; 2013.

 

3. Davis JA, Zakoscielna K, Jacobs L. Electronic information systems use in residential care facilities: the differential effect of ownership status and chain affiliation. Journal of Applied Gerontology. 2016;35(3): 331-348.

 

4. National Research Council; Stead WW, Lin HS, eds. Computational Technology for Effective Health Care: Immediate Steps and Strategic Directions. Washington, DC: The National Academies Press; 2009.

 

5. Armijo D, McDonnell C, Werner K. Electronic Health Record Usability: Interface Design Considerations. Rockville, MD: Agency for Healthcare Research and Quality; October 2009.

 

6. Koch SH, Staggers N, Weir C, Agutter J, Liu D, Westenskow DR. Integrated information displays for ICU nurses: field observations, display design and display evaluation. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting; San Francisco. 2010;54(12): 932-936.

 

7. Jeffrey AD, Novak LL, Kennedy B, Dietrich MS, Mion LC. Participatory design of probability-based decision support tools for in-hospital nurses. Journal of the American Medical Informatics Association. 2017;24(6): 1102-1110.

 

8. Wang TD, Plaisant C, Quinn AJ, Stanchak R, Murphy S, Shneiderman B. Aligning temporal data by sentinel events: discovering patterns in electronic health records. In: Proceedings of the 2008 Conference on Human Factors in Computing Systems; Florence, Italy. New York, NY: ACM; 2008: 457-466.

 

9. Bui AA, Aberle DR, Kangarloo H. TimeLine: visualizing integrated patient records. IEEE Transactions on Information Technology in Biomedicine. 2007;11(4): 462-473.

 

10. Plaisant C, Milash B, Rose A, Widoff S, Shneiderman B. LifeLines: visualizing personal histories. In: Tauber MJ, ed. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; Vancouver, Canada. New York, NY: ACM; 1996: 221-227.

 

11. Hirsch JS, Tanenbaum JS, Gorman SL, et al. HARVEST, a longitudinal patient record summarizer. Journal of the American Medical Informatics Association. 2015;22: 263-274.

 

12. Dowding D, Randell R, Gardner P, et al. Dashboards for improving patient care: review of the literature. International Journal of Medical Informatics. 2015;84: 87-100.

 

13. Wilbanks BA, Langford PA. A review of dashboards for data analytics in nursing. Computers, Informatics, Nursing. 2014;32(11): 545-549.

 

14. Zaydfudim V, Dossett LA, Starmer JM. Implementation of a real-time compliance dashboard to help reduce SICU ventilator-associated pneumonia with the ventilator bundle. Archives of Surgery. 2009;144(7): 656-662.

 

15. Linder JA, Schnipper JL, Tsurikova R, et al. Electronic health record feedback to improve antibiotic prescribing for acute respiratory infections. The American Journal of Managed Care. 2010;16(12 Suppl HIT): e311-e319.

 

16. Effken JA, Loeb RG, Kang Y, Lin Z-C. Clinical information displays to improve ICU outcomes. International Journal of Medical Informatics. 2008;77: 765-777.

 

17. Lee S, Kim E, Monsen KA. Public health nurse perceptions of Omaha system data visualization. International Journal of Medical Informatics. 2015;84: 826-834.

 

18. Mamykina L, Goose S, Hedqvist D, Beard DV. CareView: analyzing nursing narratives for temporal trends. In: CHI '04 Extended Abstracts on Human Factors in Computing Systems. New York, NY: ACM; 2004: 1147-1150.

 

19. Koopman RJ, Kochendorfer KM, Moore JL, et al. A diabetes dashboard and physician efficiency and accuracy in accessing data needed for high-quality diabetes care. Annals of Family Medicine. 2011;9(5): 398-405.

 

20. Harris-Kojetin L, Sengupta M, Park-Lee E, et al. Long-term care providers and services users in the United States: data from the National Study of Long-Term Care Providers, 2013-2014. National Center for Health Statistics. Vital and Health Statistics. 2016;3(38).

 

21. Vincent GK, Velkoff VA. The Next Four Decades: The Older Population in the United States: 2010 to 2050. Washington, DC: U.S. Census Bureau; 2010.

 

22. Russell D, Rosati RJ, Rosenfeld P, Marren JM. Continuity in home health care: is consistency in nursing personnel associated with better patient outcomes? Journal for Healthcare Quality. 2011;33(6): 33-39.

 

23. Dowding DW, Russell D, Onorato N, Merrill JA. Perceptions of home care clinicians for improved care continuity: a focus group study. Journal for Healthcare Quality. 2017; in press.

 

24. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. The New England Journal of Medicine. 2009;360(14): 1418-1428.

 

25. Fortinsky RH, Madigan EA, Sheehan TJ, Tullai-McGuinness S, Kleppinger A. Risk factors for hospitalization in a national sample of Medicare home health care patients. Journal of Applied Gerontology. 2014;33(4): 474-493.

 

26. Dowding D, Merrill JA, Onorato N, Barron Y, Rosati RJ, Russell D. The impact of home care nurses' numeracy and graph literacy on comprehension of visual display information: implications for dashboard design [published online ahead of print April 27, 2017]. Journal of the American Medical Informatics Association. 2017.

 

27. Hevner A, Chatterjee S. Design science research in information systems. In: Hevner A, Chatterjee S, eds. Design Research in Information Systems. Integrated Series in Information Systems. Heidelberg, Germany: Springer-Verlag; 2010: 9-22.

 

28. Hevner A, Chatterjee S. Design science research frameworks. In: Hevner A, Chatterjee S, eds. Design Research in Information Systems. Integrated Series in Information Systems. Heidelberg, Germany: Springer-Verlag; 2010: 23-31.

 

29. Zhang J, Walji M. TURF: toward a unified framework of EHR usability. Journal of Biomedical Informatics. 2011;44: 1056-1067.

 

30. Brooke J. SUS: a "quick and dirty" usability scale. In: Jordan PW, Thomas B, Weerdmeester BA, McClelland IL, eds. Usability Evaluation in Industry. London, England: Taylor & Francis; 1996: 189-194.

 

31. Harper BD, Norman KL. Improving user satisfaction: the Questionnaire for User Interaction Satisfaction version 5.5. In: Proceedings of the 1st Annual Mid-Atlantic Human Factors Conference; Virginia Beach, VA. 1993: 224-228.

 

32. Chin J, Diehl V, Norman K. Development of an instrument measuring user satisfaction of the human-computer interface. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; Washington, DC. New York, NY: ACM; 1988: 213-218.

 

33. Bangor A, Kortum P, Miller J. An empirical evaluation of the System Usability Scale. International Journal of Human-Computer Interaction. 2008;24(6): 574-594.

 

34. Chin J, Diehl V, Norman K. Development of an instrument measuring user satisfaction of the human-computer interface. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; Washington, DC. New York, NY: ACM; 1988: 213-218.

 

35. Harper B, Slaughter L, Norman K. Questionnaire administration via the WWW: a validation and reliability study for a user satisfaction questionnaire. In: Proceedings of WebNet 97; Toronto, ON, Canada. Association for the Advancement of Computing in Education; 1997.

 

36. Hortman P, Thompson C. Evaluation of user interface satisfaction of a clinical outcomes database. Computers, Informatics, Nursing. 2005;23(6): 301-307.

 

37. Johnson TR, Zhang J, Tang Z, Johnson C, Turley JP. Assessing informatics students' satisfaction with a web-based courseware system. International Journal of Medical Informatics. 2004;73: 181-187.

 

38. Moumane K, Idri A, Abran A. Usability evaluation of mobile applications using ISO 9241 and ISO 25062 standards. SpringerPlus. 2016;5: 548.

 

39. Dowding D, Merrill JA. Heuristics for evaluation of dashboard visualizations. Presented at the American Medical Informatics Association Annual Symposium; November 4-8, 2017; Washington, DC.

 

40. Yancy C, Jessup M, Bozkurt B, et al. 2013 ACCF/AHA guideline for the management of heart failure: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. Journal of the American College of Cardiology. 2013;62(16): e147-e239.

 

41. Bangor A, Kortum P, Miller J. An empirical evaluation of the System Usability Scale. International Journal of Human-Computer Interaction. 2008;24(6): 574-594.