

Instrument Development and Testing for Selection of Nursing Preceptors

 

Authors

  1. Cotter, Elizabeth PhD, RN-BC
  2. Eckardt, Patricia PhD, RN
  3. Moylan, Lois PhD, RN

Abstract

The purpose of this pilot study was to develop and test a preceptor selection instrument for validity and reliability. Using a valid and reliable instrument to help identify and select an appropriate nurse preceptor supports the success of both the preceptor and the new nurse graduate. The 14-item Cotter Preceptor Selection Instrument was developed to assess attributes of potential preceptor candidates. Use of a robust and user-friendly instrument can provide nursing leadership with a consistent, measurable, and collaborative process for selecting preceptors.

 


Many hospitals recognize the value of pairing a new graduate nurse, or an experienced nurse new to a specialty, with a competent preceptor to guide them through orientation (Thomas, Bertram, & Allen, 2012). Preparing new graduates to become safe and competent independent practitioners is a responsibility that falls mainly on the preceptor (Boyer, 2008; Horton, DePaoli, Hertach, & Bower, 2012). Because of this responsibility, it is important that healthcare organizations develop and implement effective preceptor programs to prepare experienced nurses to guide the novice nurse entering the profession (Hickey, 2009; Horton et al., 2012; Thomas et al., 2012). The preceptor process begins with the selection of a nurse with the appropriate skill set for the role. Not all nurses are interested in being a preceptor or have the skills and ability to be one. The literature clearly states that preceptors should be selected on the basis of attributes such as clinical competence, effective communication skills, teaching ability, interest in continued professional growth and development, leadership skills, motivation to share clinical experiences, and the ability to objectively evaluate new graduates' performance. In addition, an effective preceptor should be a caring individual (Haggerty, Holloway, & Wilson, 2012; Hilligweg, 1993; Horton et al., 2012; Sandau & Halm, 2011). The right "match" of preceptor and orientee is critical to new hire retention and consequently decreases registered nurse vacancy rates (Barnett, Minnick, & Norman, 2014; Casey, Fink, Krugman, & Propst, 2004). It has also been associated with better outcomes on nurse-sensitive indicators, such as patient falls and medication errors (Cotter & Dienemann, 2016). However, several of these personal attributes, such as motivation, teaching ability, and leadership skills, are latent constructs and cannot always be observed or measured directly. Latent constructs are prone to bias and measurement error unless estimated with psychometrically tested valid and reliable instruments (Allen & Yen, 1979/2002; DeVellis, 2012).

 

The use of a valid and reliable instrument helps to reduce bias and subjective appraisal and assists in accurate identification of appropriate nurse preceptors, which in turn supports the success of the preceptor and ultimately the new graduate/orientee (Whitehead, Owen, Henshaw, Beddingham, & Simmons, 2016). However, there are few established instruments for preceptor identification, selection, and evaluation (Haggerty et al., 2012). A systematic and focused search of the literature using keywords such as "preceptor selection," "preceptor," "selection," "preceptor program," "preceptor instrument," and "nursing" in PubMed, the Cumulative Index to Nursing and Allied Health Literature, Education Resources Information Center, MEDLINE, Health and Psychosocial Instruments, ProQuest Dissertations and Theses, and a gray literature search including the Directory of Unpublished Experimental Measures identified just one psychometrically validated scale, the Proficiency Profile Self-Appraisal (PPSA) instrument developed by Hilligweg (1993). In discussing the psychometric properties of the PPSA, Hilligweg (1993) reported a Cronbach's alpha of .98 for interitem reliability. The PPSA instrument is over 25 years old and does not fully address the American Nurses Association (ANA) 2010 Nursing: Scope and Standards of Practice, potentially limiting its adequacy in representing the domain of content in the current healthcare environment (Allen & Yen, 1979/2002; DeVellis, 2012). In addition, the length of a tool (more than 50 items) is an often-cited limitation of its utility in practice as a screening measure (Lischka, Mendelsohn, Overend, & Forbes, 2012; Maxwell et al., 2015).

 

Though preceptor evaluation instruments abound in academia and in clinical settings, aside from the PPSA, there is currently no psychometrically validated instrument available for selecting preceptors to work with the novice nurse. Even though the psychometric properties of the PPSA are robust, its estimates may not be applicable to today's complex nursing environment, nor are they reflective of the current ANA (2010) Nursing: Scope and Standards of Practice. Reliability and validity estimates do not reflect isolated, fixed properties of a particular scale but, rather, estimate the interaction between the scale, the sample, and the sample setting, limiting the inference that can be derived from the PPSA estimates of 1993 (Streiner & Kottner, 2014). This instrument is discussed in greater detail in the following section.

 

BACKGROUND

The importance of a formal preceptor program in developing nurses to the preceptor role is widely recognized (Horton et al., 2012; Speers, Strzyzewski, & Ziolkowski, 2004). Preceptor teaching methodologies and competencies have also been addressed in the literature (Hickey, 2009; Richards & Bowles, 2012; Sandau & Halm, 2011; Speers et al., 2004). A quality preceptorship depends upon the knowledge and skill set of the preceptor as well as the commitment to the preceptor role (Haggerty et al., 2012). The literature also supports that an effective preceptor experience is critical to the successful transition of the novice nurse (Horton et al., 2012). Last, recommendations in many studies on preceptorship focus on the need for a more clearly defined preceptor selection process (Haggerty et al., 2012; Horton et al., 2012). Research studies that focus on the development and use of preceptor selection instruments are limited. The only psychometrically validated instrument available for use at this time is the PPSA.

 

Speers et al. (2004) described a preceptor selection criteria checklist, derived from a review of the literature, for identifying acceptable potential preceptors. Managers chose nurses for a preceptor development program and completed a preceptor selection criteria form. To begin the preceptor training program, candidates needed to demonstrate the following selection criteria: competent practice, critical thinking, team behavior, a positive attitude, the ability to provide both positive and negative feedback, leadership skills, continued professional growth, outstanding communication skills, being a positive role model, willingness to share expertise, promoting an environment of learning, and a stated interest in being a preceptor. Selection criteria were validated by the nurse manager, the educator, and the prospective preceptor. The literature supports that preceptor selection should not be based on the preceptor's availability alone, but rather on having the appropriate skill set (Horton et al., 2012; Pigott, 2001). The use of a preceptor selection instrument can help nurse managers and educators identify potential preceptors to guide the skill acquisition and role development of the novice nurse.

 

Hartline (1993) described another nonpsychometrically validated preceptor selection instrument, developed by a cardiac stepdown unit's manager, educator, and preceptors. Based on a literature review and unit needs, the instrument identifies 15 qualifications for a preceptor, grouped into five themes: nursing process (35%), interpersonal skills (25%), leadership skills (10%), teaching skills (20%), and professional attributes (10%), with each theme assigned a weight for a total of 100%. The weight assigned by the manager and educator is based on the projected frequency with which the preceptor would use that skill. Nurses who choose to precept are required to complete the instrument along with a narrative statement supporting each theme. The manager also completes a narrative on each prospective preceptor. The preceptor applicant must achieve a total score of 80% or greater to be selected as a preceptor (Hartline, 1993).
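The weighted scoring Hartline describes can be sketched in a few lines of Python. This is a hypothetical illustration only: the five themes, their weights, and the 80% cutoff come from the description above, while the function names and example scores are invented.

```python
# Hypothetical sketch of Hartline's (1993) weighted preceptor scoring.
# Each theme's weight reflects the projected frequency of that skill's use.
WEIGHTS = {
    "nursing process": 0.35,
    "interpersonal skills": 0.25,
    "leadership skills": 0.10,
    "teaching skills": 0.20,
    "professional attributes": 0.10,
}

def overall_percentage(theme_scores):
    # theme_scores: fraction of available points earned per theme (0.0-1.0)
    return sum(WEIGHTS[t] * theme_scores[t] for t in WEIGHTS) * 100

def is_selected(theme_scores, cutoff=80.0):
    # Applicant must reach the cutoff (80% in Hartline's description)
    return overall_percentage(theme_scores) >= cutoff
```

For example, a candidate earning 90% of the available points in every theme would clear the 80% cutoff, whereas a uniform 75% would not.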

 

The PPSA, developed by Hilligweg (1993), was designed to guide nursing leaders in the selection of preceptors. It may be used as a self-appraisal instrument or as a preceptor selection instrument by the nurse manager. The PPSA is a 91-item instrument with a published measure of interitem reliability (Cronbach's [alpha] = .98). Though a valid and reliable instrument, the PPSA has not been widely adopted in the preceptor selection process; the length of the instrument may preclude its use as a standardized screening instrument (Lischka et al., 2012; Maxwell et al., 2015; Suris, Holder, Holliday, & Clem, 2016). Based on criteria identified in the literature as desirable for nurse preceptors, five attributes were selected upon which to develop the self-appraisal instrument: clinical competence, communication skills, teaching ability, interest in professional activities, and leadership ability. These attributes compose the five subscales that a reviewer assesses to determine the proficiency level of a potential preceptor, using a 5-point scale for each item (1 = minimum, 2 = average, 3 = superior, 4 = excellent, 5 = not applicable). Reliability test results concluded that the items in each subscale of the PPSA are discriminatory indicators for identifying high-quality preceptors: the overall Cronbach's alpha coefficient for interitem reliability was .98 (91 items), and the subscale coefficients were as follows: clinical competence .96 (28 items), communication .95 (20 items), teaching ability .93 (14 items), leadership .96 (22 items), and professional activities .90 (7 items). The known-groups technique was employed to test the validity of the tool. Significant differences between new nurses and experienced nurses were demonstrated on the subscales measuring the critical attributes of clinical competence, communication skills, teaching ability, leadership, and professional activities, supporting the validity of the instrument, F(3, 113) = 61.4, p < .001. Content validity of the instrument was evaluated by three nurse managers who served as content experts. The instrument was reported to be easy to use, each item seemed to fit the concept of the preceptor role, the performance level assigned using the instrument was congruent with previous performance appraisals for the selected staff nurse, and no items were deleted (Hilligweg, 1993).

 

PURPOSE

The purpose of this methodological study was to evaluate the psychometric properties of the newly developed instrument, the Cotter Preceptor Selection Instrument (CPSI), and to reestablish the reliability of the PPSA with a pilot study in a new population (nurse preceptors and orientees in 2016).

 

STUDY METHODS

Design

The study was a methodological pilot study design assessing psychometric validity and reliability of the CPSI, an investigator-developed instrument. Institutional review board approval was secured from the study site, a hospital located in the northeastern United States, prior to initiating the study. The institutional review board application included confidentiality of collected data and protection of existing evaluative records.

 

Instrument

The CPSI is a 14-item instrument developed using the ANA's (2010) Nursing: Scope and Standards of Practice to represent the domain of content in the current healthcare environment. The framework used to construct the CPSI followed the scale development guidelines provided by DeVellis (2012) for a similar instrument. The CPSI was created by the preceptor coordinator based on a literature review of positive preceptor attributes (Horton et al., 2012; Myrick & Barrett, 1992; Speers et al., 2004). The 14 items address the areas of clinical competence, nursing process, transformational leadership, collaboration/communication, professional development, conflict resolution, commitment, flexibility, empowerment, and values. The CPSI uses a 3-point Likert scale (1 = needs improvement, 2 = meets expectations, 3 = exceeds expectations). A total score of 35 or greater is required for a nurse to be eligible to become a preceptor (see Table 1). The score of 35 was determined as follows: with a 1-3 Likert-type scale and 14 items, a score of 42 would be a perfect score, representing behaviors above the expected level in all areas, whereas a score of 28 would be the minimal accepted score, representing behaviors just meeting the expected level. The midpoint between the high and low scores is 35. The principal investigator felt that preceptors have such an important role that they should score above the expected level rather than just at the minimum expected level.

  
Table 1. Cotter Preceptor Selection Instrument
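The cutoff arithmetic described above reduces to a few lines of Python. This is a minimal sketch assuming only what the text states (14 items, a 1-3 scale, and a midpoint eligibility rule); the names are illustrative.

```python
# Sketch of the CPSI eligibility cutoff: 14 items scored 1-3, with the
# cutoff set at the midpoint between the minimal accepted and perfect totals.
N_ITEMS = 14
MIN_ACCEPTABLE = N_ITEMS * 2            # all items "meets expectations" -> 28
PERFECT = N_ITEMS * 3                   # all items "exceeds expectations" -> 42
CUTOFF = (MIN_ACCEPTABLE + PERFECT) // 2  # midpoint -> 35

def eligible(item_scores):
    # item_scores: 14 ratings, each 1, 2, or 3
    return sum(item_scores) >= CUTOFF
```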

Sample and Setting

The face and content validity of the CPSI were established by a purposive sample of four nursing professional development (NPD) practitioners. The construct validity of the instrument was estimated with a pilot sample of data from nurses (n = 13) from a 420-bed acute care hospital in a suburban setting using a retrospective record review from January 2015 through December 2015.

 

The pilot sample (n = 13) of nurse preceptors was predominantly White (70%), non-Hispanic (90%) women (84%), with an average age of 44 years.

 

Instrument Validation

Face and content validity

The face and content validity of both the CPSI and PPSA were evaluated before further testing of construct validity and piloting of the CPSI. The purpose of the study was to evaluate the psychometric properties of the newly developed tool, the CPSI, and to reestablish the reliability of the PPSA with a pilot study in a new population of nurse preceptors and orientees. Face and content validity, though not strong psychometric estimates, provide an essential foundation of multiple expert opinions regarding the representativeness of the items of the constructs under investigation and the readability and ease of understanding of items (Polit & Beck, 2006; Streiner & Kottner, 2014; Waltz, Strickland, & Lenz, 2005).

 

Procedure

The four NPD experts were identified through solicitation within the community of nurse educators in Long Island, New York. One of the responsibilities of NPD practitioners in a clinical setting is to assist with the selection of preceptors for new nurses beginning employment at the hospital. The NPD practitioners typically select preceptors based on the nurse's ability to teach, previous positive evaluations, and overall clinical performance. The NPD practitioners were members of the local Education and Practice Council Board, a council made up of NPD practitioners and nursing education directors from local hospitals and nursing professors from the area's academic partners. The NPD practitioners from the board have 10 to 20 years of education experience in acute care hospitals throughout the area. The principal investigator, also a member of the council, asked for volunteers from this pool of experts to evaluate and provide feedback on the two preceptor selection tools.

 

The four NPD practitioner volunteers evaluated the two preceptor selection tools (the PPSA and the CPSI) for face and content validity. Although not an estimate of construct or criterion validity, face validity does provide foundational expert opinion as to whether the sample of questions on an instrument adequately represents the universe of items that could represent the construct of preceptor proficiency (Polit & Beck, 2006; Trochim, 2000). Before the instruments were assessed for face validity, a table of evidence synthesizing the literature on preceptor selection was developed and distributed to the group of experts, along with the two instruments to be examined, to provide the most up-to-date research on preceptor selection. The members were then given a 4-week period to review the material individually before they evaluated the instruments electronically.

 

Using four raters, individual item-level content validity indices (I-CVI), scale-level content validity indices (S-CVI), and kappa interrater agreement statistics were estimated for each tool. Items were rated as "not relevant" (a rating of 1 or 2) or "relevant" (a rating of 3 or 4). The I-CVI estimates the proportion of agreement among raters for each item, the S-CVI provides an overall estimate of content validity for the instrument, and the kappa statistic provides a conservative estimate of agreement because it accounts for agreement occurring by random chance. A Fleiss' kappa was estimated because there were four raters (Gwet, 2016; Nunnally & Bernstein, 1994).

 

Each estimate was included in the final determination of the face validity of the individual items and the scale. All scoring sheets were inspected for missing data and/or ambivalent data (midline circled responses); if either was found, the researcher contacted the expert rater for clarification and a final score on that item. Each item was then scored for the I-CVI and kappa statistics. The I-CVI was obtained by collapsing the original four-level responses into dichotomous responses of "not relevant" (original scores of 1 or 2) and "relevant" (original scores of 3 or 4). Next, a Fleiss' kappa statistic (using agreement on the dichotomous categories of relevant or not relevant) was estimated, as the study included more than two expert raters. The S-CVI universal agreement was calculated as the proportion of items on the scale that received a 3 or 4 rating from every expert, and the S-CVI-average was calculated by summing the I-CVIs and dividing by the number of items. After the kappa estimates were obtained, the four NPD practitioners were called individually to discuss the tools and given an opportunity to add specific questions, suggestions for further item refinement or development, or any evidence they believed would enrich the CPSI.
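These computations can be illustrated with a short Python sketch. It assumes four raters scoring each item 1-4, with a rating of 3 or 4 counted as "relevant"; the function names and the toy ratings are hypothetical, not the study's data.

```python
def i_cvi(item_ratings):
    # item_ratings: one 1-4 rating per expert for a single item;
    # I-CVI = proportion of experts rating the item relevant (3 or 4)
    return sum(r >= 3 for r in item_ratings) / len(item_ratings)

def s_cvi_ua(all_ratings):
    # S-CVI universal agreement: proportion of items rated relevant by every expert
    return sum(i_cvi(item) == 1.0 for item in all_ratings) / len(all_ratings)

def s_cvi_ave(all_ratings):
    # S-CVI-average: sum of the I-CVIs divided by the number of items
    return sum(i_cvi(item) for item in all_ratings) / len(all_ratings)

def fleiss_kappa(all_ratings):
    # Fleiss' kappa on the dichotomized relevant / not-relevant categories:
    # chance-corrected agreement across more than two raters
    n_items, n_raters = len(all_ratings), len(all_ratings[0])
    counts = [[sum(r >= 3 for r in item), sum(r < 3 for r in item)]
              for item in all_ratings]
    # overall proportion of assignments falling in each category
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters) for j in (0, 1)]
    # per-item observed agreement
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_items          # mean observed agreement
    p_e = sum(p * p for p in p_j)       # expected chance agreement
    return (p_bar - p_e) / (1 - p_e)
```

With five toy items rated by four experts, an item rated (3, 2, 4, 3) yields an I-CVI of .75, and the scale-level indices and kappa follow directly from the item-level results.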

 

Construct validity and internal consistency

After the principal investigators established the CPSI's face and content validity, they then evaluated the CPSI for construct validity and concordance with two other evaluation instruments: the preceptor/preceptor candidate's annual performance appraisal and the orientee's evaluation of the preceptor.

 

Procedure. The CPSI's criterion validity was estimated against two other evaluation instruments: the preceptor/preceptor candidate's annual performance appraisal and the orientee's evaluation of the preceptor. These two evaluation tools estimate the perceived competence of a preceptor as rated by the preceptor's direct manager and orientee. Criterion validity is a type of construct validity that provides estimates of an instrument's validity by comparing or contrasting it with other known reliable and valid measures of the same construct (DeVon et al., 2007). The CPSI was tested for criterion validity with a sample that included practicing nurse preceptors and preceptor candidates who were not selected for the role of preceptor by their unit-based council (UBC). The UBC includes unit staff nurses, managers, and the NPD practitioner; one of its responsibilities is to score the CPSI for each preceptor candidate. A sample of n = 13 was used for this pilot study. Though a larger sample size would have added to the precision of the estimates, the sample size is sufficient for a pilot study estimating the validity and reliability of an instrument (Cicchetti, 2001).

 

The NPD practitioners within the hospital conduct a quarterly prescreening of potential preceptors prior to soliciting prospective candidates to apply as preceptors. Though this process excludes nurses who are unlikely to meet preceptor criteria, it also reduces the sample size available for piloting the instrument. Concurrent convergent criterion validity is confirmed when scores on an instrument are positively correlated, with significant magnitude, with a related criterion at the same point in time (DeVon et al., 2007). To assess this, the CPSI score of each preceptor candidate was compared with the candidate's rating on the annual performance record completed by the unit manager. Concurrent convergent criterion validity estimates were obtained as correlations between these measures using archived retrospective records from two distinct time points a minimum of 6 months apart.

 

A second measure of validity was established between the preceptor's evaluation by the orientee (PEbO) and the preceptor candidates' scores on the CPSI. This second comparative instrument reflects the preceptor's rating by their orientees. The PEbO is a 17-item instrument; the first 12 items use a Likert-type scale ranging from 0 = never to 4 = always, and the remaining five items use a Likert-type scale ranging from 1 = needs improvement to 3 = exceeds expectations. Total scores can range from 5 to 63, with higher scores reflecting a higher assessment of the preceptor's abilities. These data were obtained using archived retrospective records from two distinct time points a minimum of 1 month apart. Though these scales are Likert-type scales, there is an assumption of an underlying normal distribution for the constructs being estimated; therefore, Pearson's r estimates were obtained in these analyses (Kampen & Swyngedouw, 2000). Because of the small sample size in this pilot study, a 95% confidence interval was constructed for each estimate of convergent validity (r) to provide more information about the certainty of the parameter estimate (Streiner & Kottner, 2014; Vetter, 2017). The confidence intervals were obtained by applying Fisher's z transformation to the r estimates and then converting the interval limits back to the r metric (Hjelm & Norris, 1962; Mikulich-Gilbertson, Wagner, Riggs, & Zerbe, 2017).
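The Fisher's z procedure can be sketched as follows. This is a minimal illustration, not the authors' code: the function name is invented, and the hard-coded critical value assumes a 95% interval on the standard normal distribution.

```python
import math

def r_confidence_interval(r, n, z_crit=1.959963984540054):
    """Approximate 95% CI for Pearson's r via Fisher's z transformation."""
    z = math.atanh(r)                      # Fisher's z transform of r
    se = 1.0 / math.sqrt(n - 3)            # standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    # back-transform the interval limits to the r metric
    return math.tanh(lo), math.tanh(hi)
```

Applied to a correlation of the size reported in the results (e.g., r = .56 with n = 13), the interval spans roughly .01 to .85, illustrating how little certainty a pilot-sized sample provides around a point estimate; for r = .23 with n = 11, the interval includes 0.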

 

In addition, internal consistency of the CPSI was estimated using the Cronbach's alpha coefficient for interitem reliability; for a pilot instrument, a Cronbach's alpha coefficient of .70 is considered acceptable (Streiner, 2003a). To examine items at the individual level and their contributions to the overall scale, scale properties with each item deleted and item-to-total scale correlations were estimated.
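Cronbach's alpha itself is straightforward to compute from raw item scores. The following is a self-contained sketch; the data layout and names are illustrative, not the study's data.

```python
def cronbach_alpha(item_scores):
    """item_scores: one list per item, each listing every respondent's score."""
    k = len(item_scores)                       # number of items

    def var(xs):                               # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col) for col in zip(*item_scores)]   # per-respondent total scores
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - sum(var(it) for it in item_scores) / var(totals))
```

For a toy three-item, four-respondent data set, the function returns the same value as the textbook formula, and larger values indicate more internally consistent items.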

 

STATISTICAL ANALYSIS AND DATA SECURITY

Systems were in place to ensure the security and integrity of the study data, including a locked file cabinet, accessible only to the principal investigators, to house any raw data. These data were coded and entered into an electronic database that was double password protected. The measures of item and scale validity were calculated with MS Excel and STATA 11 software; de-identified data from the existing evaluative tools were entered into the statistical software package (STATA 11). Correlations between total scores on the three instruments (the CPSI, the annual performance record of the preceptor [APR], and the PEbO) were estimated. For criterion validity to be present, preceptors' scores on the CPSI should be similar to their scores on the other two evaluation instruments. All electronic files were transferred via hard drive or encrypted e-mail.

 

RESULTS

Face and Content Validity

The proportion of items on the PPSA that received a relevant rating of 3 or 4 from each expert was .99, and the estimated average rating across items was 3.636. The proportion of items on the CPSI that received a relevant rating of 3 or 4 from each expert was 1.00, and the estimated average rating across items was 3.79 (see Table 2). The CPSI had a Fleiss' kappa of .64, which indicates substantial interrater agreement while accounting for random chance (Nunnally & Bernstein, 1994; Streiner, 2003a).

  
Table 2. Content Validity Estimates of Instruments

Construct Validity and Internal Consistency

The CPSI demonstrated convergent criterion validity with the APR, r(13) = .56, p = .032; the correlation with the PEbO was small and nonsignificant, r(11) = .23, p = .493 (see Table 3). The 95% confidence intervals constructed around each parameter estimate are wide, with the interval for the CPSI-PEbO correlation including 0. Descriptive statistics for each CPSI item were reported; total scores on the CPSI ranged from 39 to 42 (see Table 4). The individual item means did not vary widely (M range 2.62-3.00), but item dispersion around the mean reflected more heterogeneity in some items than in others (SD range 0.033-0.751).

  
Table 3. Criterion Validity Estimates of Cotter Preceptor Selection Instrument
 
Table 4. Descriptive Statistics of Items of Cotter Preceptor Selection Instrument

Interitem reliability of the CPSI was estimated with a Cronbach's alpha coefficient; the pilot sample estimate was .85, above the acceptable level of .70 for a pilot tool (Streiner, 2003b). Item analyses in relation to the total scale showed some items with a weak item-to-total correlation (see Table 5), but no single item was estimated to reduce the Cronbach's alpha coefficient below the acceptable level of .70. Further analyses of paired items within subscales estimated a strong significant correlation between the nursing process items, r(13) = .79, p = .008, and the collaboration/communication skills items, r(13) = .68, p = .011; a nonsignificant small correlation between the transformational leadership items, r(13) = .23, p = .669; and a nonsignificant negative correlation between the professional development items, r(13) = -.18, p = .552.

  
Table 5. Estimates of Scale Mean if Item Deleted, Scale Variance if Item Deleted, Corrected Item-Total Correlations, and Scale Reliability ([alpha]) if Item Deleted for the Cotter Preceptor Selection Instrument

DISCUSSION

The results for both the translational (face and content) and criterion validity of the CPSI suggest that the tool can be used as a standardized rating system for preceptor selection. The PPSA and CPSI instruments were evaluated by a group of NPD practitioners and scored using the I-CVI and S-CVI; in addition, kappa interrater agreement statistics were estimated for each instrument. The results indicate that both the CPSI and the PPSA represent the construct being measured. However, the burden on preceptor selection committees is often too high because of the length of the PPSA (91 items) and the amount of time needed to complete it.

 

To estimate convergent validity, a retrospective approach was used to evaluate the CPSI for criterion validity against two other evaluation tools: the APR and the PEbO. The CPSI and the APR were strongly correlated, demonstrating convergent validity between the selection appraisal score of the preceptor and the nurse manager's annual evaluation of the preceptor. There was a small to moderate positive correlation between the CPSI and the PEbO, reflecting a positive association between the orientee's evaluation of the preceptor and the selection appraisal score. However, because of the wide confidence intervals around these point estimates, these measures of criterion validity support the need for further testing of the CPSI in larger and more diverse populations, as well as in a larger sample of the initial population.

 

One subscale of the CPSI, professional development, had a poor interitem correlation, r(13) = -.18, p = .552. This may reflect a perception that the two items in the subscale measure two different subconstructs: one item focused on whether the preceptor provides "learning moments" to develop peers, and the other addressed the preceptor's involvement in "learning activities, committees, and/or staff meetings." These items need further testing and refinement if they are to remain grouped under the same subconstruct.

 

LIMITATIONS

The study had limitations. Although this was a pilot study, and there is no consistently recommended minimum sample size for a pilot study, the sample size was small, and replication of the study with a larger sample is needed. In addition, the sample comprised 11 preceptor candidates who achieved a total CPSI score of 35 or greater from their UBC and two preceptor candidates who scored below 35. One reason the number of preceptor candidates not achieving the required total score of 35 was small may be that the NPD practitioners on the unit conducted preliminary screening of potential preceptor candidates before they were evaluated by the UBC. Because this was a pilot study, additional studies would be of use in further validating the instrument.

 

CONCLUSION

The preceptor selection and evaluation process is not a new concept to NPD practitioners. Having a validated instrument available for selection of preceptors to guide the novice nurse in today's healthcare environment is important. Preceptor selection should not be based on the preceptor's availability, but rather on their having the appropriate skill set (Horton et al., 2012; Pigott, 2001). The CPSI was found to have good construct validity both in translational validity (face and content) and criterion validity, making it a valid scale to aid in preceptor selection. The CPSI could be used by nurse managers and NPD practitioners to select the appropriate candidates to guide skill acquisition and role development of the novice nurse. After further testing and validation in larger and more diverse nurse preceptor populations, the instrument may offer a standardized rating system for preceptor selection.

 

References

 

Allen M. J., Yen W. M. (1979/2002). Introduction to measurement theory. Prospect Heights, IL: Waveland. [Context Link]

 

American Nurses Association. (2010). Nursing: Scope and standards of practice. Silver Spring, MD: Nursesbooks.org. [Context Link]

 

Barnett J. S., Minnick A. F., Norman L. D. (2014). A description of U.S. post-graduation nurse residency programs. Nursing Outlook, 62(3), 174-184. [Context Link]

 

Boyer S. A. (2008). Competence and innovation in preceptor development: Updating our programs. Journal for Nurses in Staff Development, 24(2), E1-E6. [Context Link]

 

Casey K., Fink R., Krugman M., Propst J. (2004). The graduate nurse experience. Journal of Nursing Administration, 34(6), 303-311. [Context Link]

 

Cicchetti D. V. (2001). The precision of reliability and validity estimates re-visited: Distinguishing between clinical and statistical significance of sample size requirements. Journal of Clinical and Experimental Neuropsychology, 23(5), 695-700. [Context Link]

 

Cotter E., Dienemann J. (2016). Professional development of preceptors improves nurse outcomes. Journal for Nurses in Professional Development, 32(4), 192-197. [Context Link]

 

DeVellis R. F. (2012). Scale development: Theory and applications (3rd ed.). Los Angeles, CA: Sage. [Context Link]

 

DeVon H. A., Block M. E., Moyle-Wright P., Ernst D. M., Hayden S. J., Lazzara D. J., Kostas-Polston E. A. (2007). A psychometric toolbox for testing validity and reliability. Journal of Nursing Scholarship, 39(2), 155-164. doi:10.1111/j.1547-5069.2007.00161.x [Context Link]

 

Gwet K. L. (2016). Testing the difference of correlated agreement coefficients for statistical significance. Educational and Psychological Measurement, 76(4), 609-637. [Context Link]

 

Haggerty C., Holloway K., Wilson D. (2012). Entry to nursing practice preceptor education and support: Could we do it better? Nursing Praxis in New Zealand Inc., 28(1), 30-39. [Context Link]

 

Hartline C. (1993). Preceptor selection and evaluation: A tool for educators and managers. Journal of Nursing Staff Development, 9(4), 188-192. [Context Link]

 

Hickey M. T. (2009). Preceptor perceptions of new graduate nurse readiness for practice. Journal for Nurses in Staff Development, 25(1), 35-41. [Context Link]

 

Hilligweg U. K. (1993). Selection of preceptors: A reliable assessment tool. Canadian Journal of Nursing Administration, 6(4), 25-27. [Context Link]

 

Hjelm H. F., Norris R. C. (1962). Empirical study of the efficacy of Fisher's z-transformation. Journal of Experimental Education, 30, 269-277. [Context Link]

 

Horton C. D., DePaoli S., Hertach M., Bower M. (2012). Enhancing the effectiveness of nurse preceptors. Journal for Nurses in Staff Development, 28(4), E1-E7, quiz E8-E9. [Context Link]

 

Kampen J., Swynjedouw M. (2000). The ordinal controversy revisited. Quality and Quantity, 34, 87-102. [Context Link]

 

Lischka A. R., Mendelsohn M., Overend T., Forbes D. (2012). A systematic review of screening tools for predicting the development of dementia. Canadian Journal on Aging, 31(3), 295-311. doi:10.1017/S0714980812000220. [Context Link]

 

Maxwell C. A., Mion L. C., Mukherjee K., Dietrich M. S., Minnick A., May A., Miller R. S. (2015). Feasibility of screening for preinjury frailty in hospitalized injured older adults. Journal of Trauma and Acute Care Surgery, 78(4), 844-851. doi:10.1097/TA.0000000000000551. [Context Link]

 

Mikulich-Gilbertson S. K., Wagner B. D., Riggs P. D., Zerbe G. O. (2017). On estimating and testing associations between random coefficients from multivariate generalized linear mixed models of longitudinal outcomes. Statistical Methods in Medical Research, 26(3), 1130-1145. doi:10.1177/0962280214568522. [Context Link]

 

Myrick F., Barrett C. (1992). Preceptor selection criteria in Canadian basic baccalaureate schools of nursing-A survey. The Canadian Journal of Nursing Research, 24(3), 53-68. [Context Link]

 

Nunnally J. C., Bernstein I. H. (1994). Psychometric theory. New York, NY: McGraw-Hill. [Context Link]

 

Polit D. F., Beck C. T. (2006). The content validity index: Are you sure you know what's being reported? Critique and recommendations. Research in Nursing & Health, 29(5), 489-497. [Context Link]

 

Pigott H. (2001). Facing reality: The transition from student to graduate nurse. Australian Journal of Nursing, 8(7), 24-26. [Context Link]

 

Richards J., Bowles C. (2012). The meaning of being a primary nurse preceptor for newly graduated nurses. Journal for Nurses in Professional Development, 28(5), 208-213, quiz 214-215. [Context Link]

 

Sandau K. E., Halm M. (2011). Effect of a preceptor education workshop: Part 2 Qualitative results of a hospital- wide study. Journal of Continuing Education in Nursing, 42(4), 172-181. [Context Link]

 

Speers A. T., Strzyzewski N., Ziolkowski L. D. (2004). Preceptor preparation: An investment in the future. Journal for Nurses in Staff Development, 20(3), 127-133. [Context Link]

 

Streiner D. L., Kottner J. (2014). Recommendations for reporting the results of studies of instrument and scale development and testing. Journal of Advanced Nursing, 70(9), 1970-1979. doi:10.1111/jan.12402. [Context Link]

 

Streiner D. L. (2003a). Starting at the beginning: An introduction to coefficient alpha and internal consistency. Journal of Personality Assessment, 80(1), 99-103. [Context Link]

 

Streiner D. L. (2003b). Being inconsistent about consistency: When coefficient alpha does and doesn't matter. Journal of Personality Assessment, 80(3), 217-222. [Context Link]

 

Suris A., Holder N., Holliday R., Clem M. (2016). Psychometric validation of the 16 Item Quick Inventory of Depressive Symptomatology Self-Report Version (QIDS-SR16) in military veterans with PTSD. Journal of Affective Disorders, 202, 16-22. doi:10.1016/j.jad.2016.05.029. [Context Link]

 

Thomas C. M., Bertram E., Allen R. L. (2012). The transition from student to registered nurse in professional practice. Journal for Nurses in Staff Development, 28(5), 243-249. Doi.org/10.1097/nnd.0b013e31826a009c. [Context Link]

 

Trochim W. (2000). The research methods knowledge base (2nd ed.). Cincinnati, OH: Atomic Dog Publishing. [Context Link]

 

Vetter T. R. (2017). Descriptive statistics: Reporting the answers to the 5 basic questions of who, what, why, when, where, and a sixth, so what? Anesthesia and Analgesia, 125(5), 1797-1802. doi:10.1213/ANE.0000000000002471. [Context Link]

 

Waltz C. F., Strickland O. L., Lenz E. R. (2005). Measurement in nursing and health research (3rd ed.). New York, NY: Springer Publishing. [Context Link]

 

Whitehead B., Owen P., Henshaw L., Beddingham E., Simmons M. (2016). Supporting newly qualified nurse transition: A case study in a UK hospital. Nurse Education Today, 36, 58-63. doi:10.1016/j.nedt.2015.07.008. [Context Link]