Keywords

clinical reasoning, nursing students, confirmatory factor analysis

 

Authors

  1. HUANG, Hui-Man

ABSTRACT

Background: There is no instrument currently available to assess the essential nursing competency of clinical reasoning (CR).

 

Purpose: The purpose of this study was to develop and test the psychometric properties of a CR assessment instrument appropriate for use with nursing students across different types of programs.

 

Methods: H. M. Huang et al.'s (2018) Framework of Competencies of Clinical Reasoning for Nursing Students was used to guide this study. Two rounds of Delphi study and confirmatory factor analysis (CFA) were conducted to test content and construct validity. Internal consistency was tested for reliability.

 

Results: The four-domain, 16-item Likert-scale Clinical Reasoning Scale (CRS) was developed. One thousand five hundred four nursing students currently enrolled in three different types of nursing programs completed the CRS. The content validity index was .85-1.0, the CFA indicated goodness of fit, and the Cronbach's α values ranged from .78 to .89.

 

Conclusion: The CRS is a valid and reliable tool for assessing CR in nursing students in different types of nursing programs.

 


Introduction

Clinical reasoning (CR) is a "complex cognitive process that uses formal and informal thinking strategies to gather and analyze patient information, evaluate the significance of this information and weigh alternative actions" (Simmons, 2010, p. 1155). CR is a required and essential nursing competency emphasized in nursing professional guidelines (National Organization of Nurse Practitioner Faculties, 2017), certification/licensure requirements (National Council State Boards of Nursing, 2019), and educational standards (American Association of Colleges of Nursing [AACN], 2021). Moreover, CR is a vital attribute that distinguishes professional nurses from ancillary care providers (AACN, 2021; Simmons, 2010). Good CR in nurses has been associated with high-quality nursing care, patient well-being, and positive patient outcomes (Jang et al., 2021; Kao et al., 2022; Manetti, 2018). The importance of CR in nurses is especially amplified in complex healthcare environments characterized by rapidly changing system/patient needs, high-stakes practice, technological innovation, and the explosion of healthcare information (AACN, 2021; Kavanagh & Szweda, 2017; Kim et al., 2022).

 

The recent COVID-19 pandemic has further highlighted the critical need for CR competency in nurses (Mariani, 2021). To meet both professional and ethical requirements, nurses should have CR skills before entering practice settings. An instrument that accurately identifies the gaps in CR is critical for faculty and students to reach consensus on CR learning needs. Therefore, the purpose of this study was to inductively develop an instrument based on H. M. Huang et al.'s (2018) Framework of Competencies of Clinical Reasoning for Nursing Students to assess the CR of nursing students studying in different types of nursing programs and to test the psychometric properties of this instrument using confirmatory factor analysis (CFA).

 

Literature Review

In the nursing literature, CR is often used synonymously with critical thinking and clinical judgment (Klenke-Borgmann et al., 2020). However, Klenke-Borgmann et al. contended that CR, critical thinking, and clinical judgment are three distinct terms. Specifically, critical thinking has been described as a broad term "that includes reasoning both outside and inside of the clinical setting. Clinical reasoning and clinical judgment are key pieces of critical thinking in nursing" (Alfaro-LeFevre, 2019, p. 6). Critical thinking has been defined as a knowledge-based, situation-independent cognitive process used to analyze empirics to make clinical judgments and solve problems (Mohammadi-Shahboulaghi et al., 2021), whereas CR has been defined as a specific concept addressing the clinical thinking processes nurses use at points of care (Alfaro-LeFevre, 2019). CR has been further described as a nonlinear "cyclical nursing process within the limits of patients' circumstances and nurses' knowledge or experience" (Hong et al., 2021, p. 1). The term "clinical judgment" has been defined as the result and end point of clinical thinking and CR (Klenke-Borgmann et al., 2020) and is thus the context-specific result and outcome achieved at the point of care. Despite the efforts of scholars to distinguish these three concepts, they remain intertwined, with overlapping elements that address the process of reaching a context/circumstance-specific clinical decision at the point of care.

 

Although the cultivation of CR in nursing education is required by nursing accreditation organizations (AACN, 2021) and critical to promoting patient outcomes, many new nursing graduates and nursing students still lack the preparation and confidence necessary to apply CR in practice settings (Kavanagh & Szweda, 2017; Killam et al., 2011). In Kavanagh and Szweda's study of 5,000 newly graduated nurses, 23% were unable to identify a change in patient condition or distinguish levels of urgency most of the time, whereas 54% were unable to manage patient problems. Nursing students have been found to lack practice-ready competency (Al-Moteri et al., 2019; Jarvelainen et al., 2018). Al-Moteri et al.'s systematic review synthesized findings on nurses' recognition of and responses to patient deterioration in practice across seven countries. Nurses' failure to recognize the antecedents of patient deterioration and related judgment errors was found to be an issue in all seven countries, highlighting the importance of adequate CR preparation for nursing students before they enter practice. Studies on nursing students' CR ability have also identified significant deficits in their abilities to manage deteriorating patients in practice settings (Jarvelainen et al., 2018). The effective application of CR requires nursing students to possess an ability "to gather the right cues, based on the right reason to execute the right action for right patient at the right time" (Levett-Jones et al., 2010, p. 515). The CR literature highlights the necessity of having valid and reliable CR assessment tools to accurately identify students' learning needs in CR and effectively guide the design of focused nursing curricula (Menezes et al., 2015).

 

Tanner's (2006) clinical judgment model and Levett-Jones et al.'s (2010) clinical reasoning model are theoretical models that have been used to develop nursing curricula and andragogical approaches that facilitate growth in CR and promote improvements in clinical decision making. Both models have been used extensively to guide the development of CR instruments. The Lasater Clinical Judgment Rubric (LCJR; Lasater, 2007) is one of the most frequently used tools in CR assessment. The LCJR was developed within the framework of Tanner's clinical judgment model and based on the responses of 24 baccalaureate nursing students to simulated scenarios designed to assess their clinical judgment. The resultant LCJR is a 5-point Likert-scale instrument comprising 11 dimensions, four developmental phases (exemplary, accomplished, developing, and beginning), and 44 descriptors (one for each dimension at each developmental phase). The instrument has been translated into Dutch (Vreugdenhil & Spek, 2018), Chinese (Yang et al., 2019), Spanish (Roman-Cereto et al., 2018), Korean (Shin et al., 2015), and other languages to assess nursing students' clinical judgment competency in simulation experiences. Although the LCJR has been shown to be effective in assessing the clinical judgment competency of students in simulated experiences, researchers have noted that rater training is essential to ensure interrater reliability (K. A. Adamson et al., 2012). Other CR instruments are relatively more limited in their validity for student self-evaluation (e.g., Nurses' Clinical Reasoning Scale [CRS; Liou et al., 2016]; LCJR [Lasater, 2007]). Furthermore, CR instruments have been used for diverse and broad purposes such as skills testing in specific content areas (e.g., key-feature questions [Nayer et al., 2018] and script concordance tests [Aubart et al., 2021]), nursing admission (e.g., reasoning skills test [Vierula et al., 2021]), and simulations (e.g., LCJR [Lasater, 2007]).
However, assessment of the inherent CR process in nursing has been limited. Nursing education systems in different countries are diverse in terms of both program lengths and structures. A clinically validated, nursing-discipline-specific instrument for assessing CR in students of diverse types of nursing programs is critically needed (Griffits et al., 2017).

 

Conceptual Framework

H. M. Huang et al.'s (2018) framework of CR competencies for nursing students was used to guide this study. CR competency in nursing students is described as "an ongoing, interactive and dynamic process that continuously undergoes adjustments and modifications depending on changes in the clinical context" (H. M. Huang et al., 2018, p. 115). The conceptual framework used in this study consists of four domains of CR, including awareness of clinical cues, confirmation of clinical problems, determination and implementation of actions, and evaluation and reflection. The two to four indicators in each domain result in 13 interconnected and interwoven indicators of CR competencies that together may be used to assess the CR competency of the respondent. The first domain "awareness of clinical cues" includes four indicators: "possession of keen observation, application of past life experiences, possession of professional healthcare knowledge and skills, and willingness to facilitate patients with problem-solving" (H. M. Huang et al., 2018, p. 112). The second domain "confirmation of clinical problems" consists of four indicators: "search for clinical cues, interpret the meaning of clinical cues, connects theories with practice, and recognize important clinical problems" (H. M. Huang et al., 2018, p. 112). The third domain "determination and implementation of actions" encompasses three indicators: "determination of priority, verification of hypothetical answers, and solution to patients' problems" (H. M. Huang et al., 2018, p. 112). The fourth domain "evaluation and reflection" includes two indicators: "evaluation of the effectiveness of problem-solving and self-evaluation and improvement" (H. M. Huang et al., 2018, p. 112).

 

Methods

Development of the CRS involved four phases (see Table 1), including (a) developing the CR domains and the CRS items, (b) testing content validity, (c) testing construct validity, and (d) testing reliability. The data collection period was from November 2016 to June 2018.

  
Table 1. Development of the Clinical Reasoning Scale (CRS)

Phase 1: Developing the CR Domains and CRS Items

The CRS items were developed based on H. M. Huang et al.'s (2018) framework of competencies of clinical reasoning for nursing students.

 

Phase 2: Testing Content Validity

Two Delphi study rounds involving seven experts in nursing education were conducted. All of the experts were seasoned clinicians and educators with 14-30 years of experience in nursing higher education (six full professors and one assistant professor). Six of the seven experts held a doctorate in nursing. The experts were asked to evaluate whether the CRS items accurately reflected the four domains, with the content validity index (CVI) showing the extent to which they collectively agreed upon the "representativeness" (items appropriately reflect the CR domains), "suitability" (items suitably measure nursing students' CR), and "clarity" (items are clearly stated and easy to understand) of the CRS items (Polit & Beck, 2017). The experts ranked the representativeness, suitability, and clarity of each CRS item on a 5-point Likert scale (1 = irrelevant and should be deleted; 2 = seemingly relevant but large-scale revision required; 3 = relevant but in need of small adjustments; 4 = relevant, but needs rewording; 5 = relevant, clear, and precise). Items with a mean score of 4.0 or above were retained, items with a score range of 3.1-3.9 were modified, and items with a mean score less than 3.0 were deleted.
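As a worked illustration of the scoring rules above, the following sketch (with hypothetical ratings; `item_cvi` and `delphi_decision` are illustrative names, not from the study) computes an item-level CVI as the proportion of experts rating an item 4 or 5 and applies the retain/modify/delete thresholds:

```python
# Hypothetical illustration of the Delphi scoring rules described above.
# The item-level CVI is computed as the proportion of experts rating the
# item 4 or 5; retention follows the study's mean-score thresholds.

def item_cvi(ratings):
    """Proportion of experts rating the item 4 or 5 on the 5-point scale."""
    return sum(1 for r in ratings if r >= 4) / len(ratings)

def delphi_decision(mean_rating):
    """Retain if mean >= 4.0, modify if 3.1-3.9, delete if below 3.0."""
    if mean_rating >= 4.0:
        return "retain"
    if mean_rating >= 3.1:
        return "modify"
    return "delete"

# Seven hypothetical expert ratings for one candidate item.
ratings = [5, 4, 4, 5, 3, 4, 5]
print(round(item_cvi(ratings), 2))                   # 6 of 7 experts -> 0.86
print(delphi_decision(sum(ratings) / len(ratings)))  # mean ~4.29 -> "retain"
```

This mirrors the decision process the seven experts' ratings fed into, not the study's actual data.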

 

Phase 3: Testing Construct Validity

The factor structure of the instrument was tested using CFA run in LISREL (Linear Structural Relations) software, Version 9.30 (Scientific Software International, Inc., Lincolnwood, IL, USA). CFA may be used to verify whether an instrument's factor structure conforms to theory, with results providing stronger evidence than exploratory factor analysis in support of the construct validity of the factor structure (Brown, 2015). CFA was performed to assess the structure of the CRS and identify the optimal model. The main goodness-of-fit indicators for the CRS were the goodness-of-fit index (GFI), adjusted GFI (AGFI), root mean square error of approximation (RMSEA), and Akaike information criterion (AIC). Goodness of fit is indicated when the GFI and AGFI are > .90, the RMSEA is < .05, and the AIC is close to zero (Grave & Cipher, 2017). Items were deleted if their factor loadings were less than .4 or greater than .75 (F. M. Huang, 2009).
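A minimal sketch of this fit-index screen (the helper names are illustrative; LISREL itself reports the indices) could be:

```python
# Hypothetical helpers applying the cutoffs cited above:
# GFI and AGFI > .90 and RMSEA < .05 indicate goodness of fit;
# AIC is compared across competing models (smaller is better).

def fit_acceptable(gfi, agfi, rmsea):
    """True when the GFI/AGFI/RMSEA cutoffs are all satisfied."""
    return gfi > 0.90 and agfi > 0.90 and rmsea < 0.05

def prefer_by_aic(aic_by_model):
    """Return the model label with the smallest AIC."""
    return min(aic_by_model, key=aic_by_model.get)

print(fit_acceptable(gfi=0.97, agfi=0.95, rmsea=0.049))                 # True
print(prefer_by_aic({"model_1": 930, "model_3": 552, "model_4": 272}))  # model_4
```

The example values are the fit statistics reported later in the Results; the screening logic, not the software, is what is being illustrated.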

 

Sample size was determined as at least 10 cases per variable based on Nunnally's (1967) principle for adequate sample size. Thus, 440 students from each nursing program type, including 5-year associate degree in nursing (ADN) programs, 4-year bachelor's degree in nursing (BSN) programs, and 2-year RN-to-BSN programs, were necessary to ensure good reliability and validity. Considering an estimated 20% attrition rate, the researchers intended to recruit 528 students from each nursing program type (1,584 participants in total). Ultimately, a convenience sample of 1,550 nursing students was recruited across the three types of nursing programs from 10 universities in Taiwan. The inclusion criteria were nursing students who were (a) enrolled in the last semester of their nursing program, (b) of full-time status, and (c) aged 20 years or older.
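The sample-size arithmetic above can be reproduced in a few lines (variable names are illustrative):

```python
# 10 cases per variable for the 44 candidate items, inflated by an
# estimated 20% attrition, recruited from three program types.
n_items = 44
per_program = n_items * 10                     # 440 students per program type
per_program_target = round(per_program * 1.2)  # 528 after adding 20% for attrition
total_target = per_program_target * 3          # 1,584 participants in total
print(per_program, per_program_target, total_target)  # 440 528 1584
```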

 

Phase 4: Testing Internal Consistency and Reliability

The researchers tested internal consistency reliability using Cronbach's α and item-total correlations. A Cronbach's α of >= .8 indicates good reliability, and an item-total correlation of >= .3 indicates high internal consistency (Burns et al., 2020).
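For readers unfamiliar with the statistic, a self-contained sketch of Cronbach's α on hypothetical response data (not the study's data) is:

```python
# Cronbach's alpha on hypothetical Likert responses
# (rows = respondents, columns = items):
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores).
import statistics

def cronbach_alpha(data):
    """data: list of respondent rows, each a list of item scores."""
    k = len(data[0])
    item_vars = [statistics.variance([row[i] for row in data]) for i in range(k)]
    total_var = statistics.variance([sum(row) for row in data])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(responses), 2))  # 0.92 -> above the .8 criterion
```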

 

Ethical Considerations

This study was approved by the Research Ethics Committee of National Taiwan University (201509ES002) before data collection. The study was conducted in accordance with ethical principles. The researchers explained the study procedures and obtained informed consent from the participants. Participation was voluntary, and the participants could withdraw at will from the study at any time. None of the participants were enrolled in the researchers' courses.

 

Results

Phase 1

Forty-four items were developed for the CRS, including 11 items under the "awareness of clinical cues" domain, 13 items under the "confirmation of clinical problems" domain, 11 items under the "determination and implementation of actions" domain, and nine items under the "evaluation and reflection" domain.

 

Phase 2

The first round of the Delphi study resulted in 12 items being deleted to avoid abstraction and duplication and two new items being added to comprehensively address the domain of "determination and implementation of actions." Consequently, 34 items were retained in the instrument. The item-level CVI values ranged from .42 to 1.0, and the scale-level CVI was .87.

 

The second round of the Delphi study confirmed that the items accurately reflected the concept and domains of CR for nursing students. Four items were deleted because they lacked specificity. The item-level CVI values ranged from .85 to 1.0, and the scale-level CVI was .98. The resulting CRS included 30 items across four domains (see Table 1 for details).

 

Phase 3

The 30-item CRS was examined for construct validity using CFA. In total, 1,504 nursing students (response rate = 97%) completed the 30-item CRS. The participants were students from 5-year ADN programs (n = 548), 4-year BSN programs (n = 478), and 2-year RN-to-BSN programs (n = 478). A strong majority of the participants were female (91.7%, n = 1,379), and the average age of the sample was 21.25 (SD = 1.05) years. All were full-time students aged 20 years or older. The Kaiser-Meyer-Olkin test result was .95, and Bartlett's test of sphericity was significant for the entire scale (p < .001), indicating an adequate sample for the CFA (Kline, 2015).

 

Four models were tested (Table 2). Model 1, with 30 items, showed poor model fit (χ² = 3871.38, p < .001, GFI = .85, AGFI = .82, RMSEA = .08, AIC = 930). Five items were deleted because of ambiguous wording and factor loadings < .40. The remaining 25 items were tested using a four-factor framework in Model 2, and the results still showed model misfit. Thus, the correlation coefficient was checked for each domain, and two items with factor loadings > .75 were deleted. Next, the remaining 23 items were tested using a four-factor framework in Model 3, which showed a better fit (χ² = 1627.02, p < .001, GFI = .91, AGFI = .89, RMSEA = .07, AIC = 552). On the basis of these results, to correct the variables with the largest modification index values and, on theoretical grounds, to eliminate items with modification index values > 3.84 (F. M. Huang, 2009), seven items and one indicator ("willingness to facilitate patients with problem-solving") were deleted. The final model showed significantly better goodness of fit (Figure 1; χ² = 435.38, p < .001, GFI = .97, AGFI = .95, RMSEA = .049, AIC = 272). The CRS showed goodness of fit for the model and met the required criteria for construct validity. The results of the RMSEA, root mean square residual, GFI, AGFI, nonnormed fit index, normed fit index, and AIC further supported the acceptable fit of Model 4 (Table 2). The final version of the CRS includes 16 items within a four-factor framework. The error variances were approximately .15-.24, with no negative values. All factor loadings were greater than .4, indicating that the items are effective in detecting the latent variables of clinical reasoning (F. M. Huang, 2009). The 16-item scale accounted for 49.03% of the variance in clinical reasoning competence.

  
Table 2. Goodness-of-Fit Statistics for the Comparative Models of the Clinical Reasoning Scale

Figure 1. Measurement Model of the Clinical Reasoning Scale (CRS)

Phase 4

During Phase 4, the researchers tested the internal consistency of each domain. The Cronbach's α for the entire scale (N = 1,504) was .894, and the Cronbach's α values for the four domains were .801, .789, .830, and .839. The item-total correlations were between .627 and .728 (p < .01). The results of the CRS item analysis met the required criteria for internal consistency.

 

Therefore, the final CRS is a four-domain, 16-item instrument that measures nursing students' CR using a 5-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree; Table 3). Each domain consists of four items, and the CRS takes approximately 5-10 minutes to complete. Total scores range from 16 to 80, with higher scores indicating better clinical reasoning readiness. All items are positively worded; the scale contains no negatively worded items.
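The scoring just described can be sketched as follows (`score_crs` is an illustrative name, and the responses shown are hypothetical):

```python
# Total CRS score: the sum of 16 five-point Likert items with no reverse
# scoring, so possible totals run from 16 to 80.

def score_crs(responses):
    assert len(responses) == 16, "the final CRS has 16 items"
    assert all(1 <= r <= 5 for r in responses), "5-point Likert responses"
    return sum(responses)

print(score_crs([4] * 16))  # 64 (all "agree")
print(score_crs([5] * 16))  # 80 (maximum possible)
```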

  
Table 3. Clinical Reasoning Scale for Nursing Students

A total of 1,504 nursing students completed the CRS. The overall item average score was 3.96. The item average scores for students from the 5-year ADN programs, 4-year BSN programs, and 2-year RN-to-BSN programs were 3.81, 3.96, and 4.11, respectively. The total CRS scores for the 2-year RN-to-BSN students were significantly higher than those for the 5-year ADN and 4-year BSN students (F = 10.07, p < .001; see Table 4). Scores for the four CRS domains revealed a consistent pattern across the three types of nursing programs. For all participants, the domain with the highest score was "awareness of clinical cues" (4.01 ± 0.48), followed by "evaluation and reflection" (3.98 ± 0.52), "determination and implementation of actions" (3.93 ± 0.51), and "confirmation of clinical problems" (3.88 ± 0.49). This suggests that the participants perceived awareness of clinical cues as their most proficient CR ability. Notably, "confirmation of clinical problems" was the weakest CR competency for participants in all program types (see Table 4).

  
Table 4. Clinical Reasoning Scale Scores in Nursing Students of Three Types of Nursing Programs

Discussion

In this study, the four-domain, 12-indicator, 16-item CRS instrument was developed to assess nursing students' CR competency. The CR instruments in the literature differ widely in order to address a diverse range of purposes. Because of the unique scope of the nursing profession, only nursing CR instruments were included in this discussion. Among the nursing CR assessment tools currently available, the LCJR is a frequently used instrument in nursing education, particularly for student simulation experiences. There are similarities and differences between the LCJR and the CRS. The four domains of the CRS are similar to the components of the LCJR (noticing, interpreting, responding, and reflecting; Lasater, 2007). However, the indicators used in the two instruments differ. The 11 indicators of the LCJR were derived from observations of students' simulation experiences (Lasater, 2007), whereas the 12 indicators of the CRS were developed inductively through a qualitative study involving clinically and academically experienced nursing educators and validated for goodness of fit with the theoretical model using CFA. One of the challenges of employing the LCJR is the need for extensive rater training (K. Adamson, 2016; Victor-Chmil & Larew, 2013). The LCJR consists of four domains, 11 indicators, and four developmental phases (exemplary, accomplished, developing, and beginning), which together yield 44 rubric cells describing each indicator at each developmental phase. Most cells include more than one behavioral manifestation of the indicator, which may make it difficult to select the cell that best represents a student's CR abilities. Compared with the LCJR, the CRS is clear and succinct and has been validated by CFA. Faculty and students may complete this instrument in 5-10 minutes, with the results immediately applicable to identifying areas for improvement in CR in all nursing courses, especially practicums.

 

For all nursing program types, the lowest mean domain score was reported for "confirmation of clinical problems." In the CR process, "confirmation of clinical problems" allows nursing students to obtain pertinent clinical cues and effectively implement focused interventions to achieve optimal patient outcomes. Related deficits may present challenges to care provision and contribute to unsafe practices. Studies in the literature have reported similar findings. In an integrative literature review conducted by Killam et al. (2011), knowledge- and skill-related incompetence (particularly deficits in cognitive abilities), critical thinking, problem identification, and clinical problem solving were the principal characteristics found to identify unsafe undergraduate nursing students in clinical practice. In Hunter and Arthur's (2016) qualitative exploratory study, graduate nursing students were also found to lack the CR skills necessary for safe practice. Educators must take this and other inadequacies into consideration when designing and implementing nursing curricula. In addition, the CRS may be used to refine pedagogical approaches to match the learning needs of students. For example, students who struggle with "awareness of clinical cues" may be targeted by educators with appropriate approaches that help them identify and recognize important clinical cues, make appropriate clinical judgments, take proper actions, and engage in self-reflection.

 

Limitations

The psychometric properties of the developed CRS instrument were tested on nursing students from three different types of nursing programs in Taiwan. The results may be influenced by cultural differences. Future studies should be conducted that consider the impact of culture on nursing students' CR processes. In addition, pedagogical approaches and modalities that may improve performance in the "confirmation of clinical problems" domain should be investigated to strengthen nursing students' CR ability. Studies that examine the application of the CRS on practicing nurses should also be conducted.

 

Conclusions

The 16-item CRS is a valid and reliable tool for assessing CR in nursing students. Nursing educators may use the CRS to identify strengths and weaknesses in the CR of their students and facilitate student growth with regard to CR competency. Future studies are recommended to investigate educational approaches that cultivate improved clinical reasoning in nursing students.

 

Author Contributions

Study conception and design: HMH, SFC

 

Data collection: HMH

 

Data analysis and interpretation: KCL

 

Drafting of the article: CYH

 

Critical revision of the article: SFC, CHY

 

References

 

Adamson K. (2016). Rater bias in simulation performance assessment: Examining the effect of participant race/ethnicity. Nursing Education Perspectives, 37(2), 78-82. https://doi.org/10.5480/15-1626

Adamson K. A., Gubrud P., Sideras S., Lasater K. (2012). Assessing the reliability, validity, and use of the Lasater clinical judgment rubric: Three approaches. Journal of Nursing Education, 51(2), 66-73. https://doi.org/10.3928/01484834-20111130-03

Alfaro-LeFevre R. (2019). Critical thinking, clinical reasoning, and clinical judgment: A practical approach (7th ed.). Elsevier.

Al-Moteri M., Plummer V., Cooper S., Symmons M. (2019). Clinical deterioration of ward patients in the presence of antecedents: A systematic review and narrative synthesis. Australian Critical Care, 32, 411-420. https://doi.org/10.1016/j.aucc.2018.06.004

American Association of Colleges of Nursing. (2021). The essentials: Core competencies for professional nursing education. https://www.aacnnursing.org/Portals/42/AcademicNursing/pdf/Essentials-2021.pdf

Aubart F. C., Papo T., Hertig A., Renaud M. C., Steichen O., Amoura Z., Braun M., Palombi O., Duguet A., Roux D. (2021). Are script concordance tests suitable for the assessment of undergraduate students? A multicenter comparative study. Revue de Medecine Interne, 42(4), 243-250. https://doi.org/10.1016/j.revmed.2020.11.001

Brown T. A. (2015). Confirmatory factor analysis for applied research (2nd ed.). Guilford Press.

Burns N., Grove S., Sutherland S. (2020). The practice of nursing research: Appraisal, synthesis, and generation of evidence (9th ed.). Elsevier.

Grave S., Cipher D. (2017). Statistics for nursing research: A workbook for evidence-based practice (2nd ed.). Elsevier.

Griffits S., Hines S., Moloney C., Ralph N. (2017). Characteristics and processes of clinical reasoning in nurses and factors related to its use: A scoping review protocol. JBI Database of Systematic Reviews and Implementation Reports, 15(12), 2832-2836. https://doi.org/10.11124/JBIDRIS-2016-003273

Hong S., Lee J., Jang Y., Lee Y. (2021). A cross-sectional study: What contributes to nursing students' clinical reasoning competence? International Journal of Environmental Research and Public Health, 18(13), Article 6833. https://doi.org/10.3390/ijerph18136833

Huang F. M. (2009). Theory and application of structural equation modeling (5th ed., pp. 183-194). Wunan. (Original work published in Chinese)

Huang H. M., Huang C. Y., Lee-Hsieh J., Cheng S. F. (2018). Establishing the competences of clinical reasoning for nursing students in Taiwan: From the nurse educators' perspectives. Nurse Education Today, 66, 110-116. https://doi.org/10.1016/j.nedt.2018.04.007

Hunter S., Arthur C. (2016). Clinical reasoning of nursing students on clinical placement: Clinical educators' perceptions. Nurse Education in Practice, 18, 73-79. https://doi.org/10.1016/j.nepr.2016.03.002

Jang A., Song M., Kim S. (2021). Development and effects of leukemia nursing simulation based on clinical reasoning. International Journal of Environmental Research and Public Health, 18(8), Article 4190. https://doi.org/10.3390/ijerph18084190

Jarvelainen M., Cooper S., Jones J. (2018). Nursing students' educational experience in regional Australia: Reflections on acute events. A qualitative review of clinical incidents. Nurse Education in Practice, 31, 188-193. https://doi.org/10.1016/j.nepr.2018.06.007

Kao C. C., Chao H. L., Liu Y. H., Pan I. J., Yang L. H., Chen W. I. (2022). Psychometric testing of the newly developed competence scale for clinical nurses. The Journal of Nursing Research, 30(2), Article e198. https://doi.org/10.1097/jnr.0000000000000472

Kavanagh J. M., Szweda C. (2017). A crisis in competency: The strategic and ethical imperative to assessing new graduate nurses' clinical reasoning. Nursing Education Perspectives, 38(2), 57-62. https://doi.org/10.1097/01.NEP.0000000000000112

Killam L. A., Luhanga F., Bakker D. (2011). Characteristics of unsafe undergraduate nursing students in clinical practice: An integrative literature review. Journal of Nursing Education, 50(8), 434-446. https://doi.org/10.3928/01484834-20110517-05

Kim S. M., Kim J. H., Kwak J. M. (2022). Psychometric properties of the Korean version of the Nursing Profession Self-Efficacy Scale. The Journal of Nursing Research, 30(2), Article e197. https://doi.org/10.1097/jnr.0000000000000481

Klenke-Borgmann L., Cantrell M. A., Mariani B. (2020). Nurse educators' guide to clinical judgment: A review of conceptualization, measurement, and development. Nursing Education Perspectives, 41(4), 215-221. https://doi.org/10.1097/01.NEP.0000000000000669

Kline R. B. (2015). Principles and practice of structural equation modeling (4th ed.). Guilford Press.

Lasater K. (2007). Clinical judgment development: Using simulation to create an assessment rubric. Journal of Nursing Education, 46(11), 496-503. https://doi.org/10.3928/01484834-20071101-04

Levett-Jones T., Hoffman K., Dempsey J., Jeong S. Y., Noble D., Norton C. A., Roche J., Hickey N. (2010). The 'five rights' of clinical reasoning: An educational model to enhance nursing students' ability to identify and manage clinically 'at risk' patients. Nurse Education Today, 30(6), 515-520. https://doi.org/10.1016/j.nedt.2009.10.020

Liou S. R., Liu H. C., Tsai H. M., Tsai Y. H., Lin Y. C., Chang C. H., Cheng C. Y. (2016). The development and psychometric testing of a theory-based instrument to evaluate nurses' perception of clinical reasoning competence. Journal of Advanced Nursing, 72(3), 707-717. https://doi.org/10.1111/jan.12831

Manetti W. (2018). Sound clinical judgment in nursing: A concept analysis. Nursing Forum, 54(1), 102-110. https://doi.org/10.1111/nuf.12303

Mariani B. (2021). The COVID-19 pandemic and the launch of a new era of education research. Nursing Education Perspectives, 42(2), 68. https://doi.org/10.1097/01.NEP.0000000000000792

Menezes S. S., Correa C. G., Silva Rde C. G., Cruz Dde A. (2015). Clinical reasoning in undergraduate nursing education: A scoping review. Revista da Escola de Enfermagem da USP, 49(6), 1037-1044. https://doi.org/10.1590/S0080-623420150000600021

Mohammadi-Shahboulaghi F., Khankeh H., HosseinZadeh T. (2021). Clinical reasoning in nursing students: A concept analysis. Nursing Forum, 56(4), 1008-1014. https://doi.org/10.1111/nuf.12628

National Council State Boards of Nursing. (2019). Clinical judgment measurement model. https://www.ncsbn.org/14798.htm

National Organization of Nurse Practitioner Faculties. (2017). Nurse practitioner core competencies content. https://cdn.ymaws.com/www.nonpf.org/resource/resmgr/competencies/2017_NPCoreComp

Nayer M., Glover Takahashi S., Hrynchak P. (2018). Twelve tips for developing key-feature questions (KFQ) for effective assessment of clinical reasoning. Medical Teacher, 40(11), 1116-1122. https://doi.org/10.1080/0142159X.2018.1481281

Nunnally J. C. (1967). Psychometric theory. McGraw-Hill.

Polit D. F., Beck C. T. (2017). Nursing research: Generating and assessing evidence for nursing practice. Lippincott Williams & Wilkins.

Roman-Cereto M., Garcia-Mayor S., Kaknani-Uttumchandani S., Garcia-Gamez M., Leon-Campos A., Fernandez-Ordonez E., Ruiz-Garcia M. L., Marti-Garcia C., Lopez-Leiva I., Lasater K., Morales-Asencio J. M. (2018). Cultural adaptation and validation of the Lasater clinical judgment rubric in nursing students in Spain. Nurse Education Today, 64, 71-78.

Shin H., Park C. G., Shim K. (2015). The Korean version of the Lasater clinical judgment rubric: A validation study. Nurse Education Today, 35(1), 68-72. https://doi.org/10.1016/j.nedt.2014.06.009

Simmons B. (2010). Clinical reasoning: Concept analysis. Journal of Advanced Nursing, 66(5), 1151-1158. https://doi.org/10.1111/j.1365-2648.2010.05262.x

Tanner C. A. (2006). Thinking like a nurse: A research-based model of clinical judgment in nursing. Journal of Nursing Education, 45(6), 204-211. https://doi.org/10.3928/01484834-20060601-04

Victor-Chmil J., Larew C. (2013). Psychometric properties of the Lasater clinical judgment rubric. International Journal of Nursing Education Scholarship, 10(1), 1-8. https://doi.org/10.1515/ijnes-2012-0030

Vierula J., Talman K., Hupli M., Laakkonen E., Engblom J., Haavisto E. (2021). Development and psychometric testing of a reasoning skills test for nursing student selection: An item response theory approach. Journal of Advanced Nursing, 77(5), 2549-2560. https://doi.org/10.1111/jan.14799

Vreugdenhil J., Spek B. (2018). Development and validation of a Dutch version of the Lasater clinical judgment rubric in hospital practice: An instrument design study. Nurse Education Today, 62, 43-51. https://doi.org/10.1016/j.nedt.2017.12.013

Yang F., Wang Y., Yang C., Zhou M. H., Shu J., Fu B., Hu H. (2019). Improving clinical judgment by simulation: A randomized trial and validation of the Lasater clinical judgment rubric in Chinese. BMC Medical Education, 19, Article 20. https://doi.org/10.1186/s12909-019-1454-9