Authors

  1. Roe-Prior, Paula PhD, RN

Article Content

In the last issue (Roe-Prior, 2022), I provided the example of a pilot study (DiMattio et al., 2010) that used a survey to evaluate intent to stay at the bedside among baccalaureate-prepared nurses. All the students who had graduated from one northeastern Pennsylvania university nursing program since the program's inception in 1985 were surveyed. Graduates were mailed the Practice Environment Scale (Lake, 2002), a 31-item, five-subscale instrument derived from the Nursing Work Index, a measure of workplace satisfaction, along with a sociodemographic data collection form. In this issue, I would like to describe what to include in the methods or procedures portion of a research study, using the aforementioned study, a cross-sectional, comparative-descriptive design, as a simple example. In subsequent columns, I will demonstrate how the results of a descriptive study can be the basis for future studies; any new research question generated merits further exploration of the literature, requires new problem and purpose statements, and necessitates a design change, including alterations in the methods used to collect data.


Importantly, the design of the study should be described early in the methods section, and the choice of design should be consistent with the purpose of the study (Engberg & Bliss, 2005). Because this was a pilot study (DiMattio et al., 2010), the feasibility of performing a larger study was also being evaluated. A pilot study is advised before launching a larger study whenever the ability to enroll and retain a sufficient sample is unclear (in this example, the survey response rate); the appropriateness and completeness of an instrument is unknown (in this case, a new data collection form); or uncertainty exists about the sufficiency of resources, whether time or money, needed to execute the study. Other feasibility issues must be considered with other study designs (Polit & Beck, 2017, pp. 623-629); these will be addressed in future columns.

 

After the design, the sample, sample size, sampling strategy, setting, and time period, as well as the inclusion and exclusion (eligibility) criteria, are explained. In the DiMattio et al. (2010) study, all the BSN graduates (720, the sample size) of one northeastern Pennsylvania university (the setting) in the 26 years preceding the study (the time period, or context) were mailed a postcard alerting them to the forthcoming survey and inviting their participation (the sampling strategy). Because this was a pilot and a descriptive study, sample size calculations were not applicable; they will be addressed in forthcoming columns with different study designs. Time period and context are important in interpreting results, especially if, for example, a new curriculum had been introduced at the study university or substantial nursing pay raises had been provided at some hospitals during the study period. Such changes could confound interpretation of the study's results.

 

Because the study sample was selected nonrandomly, it is referred to as a nonprobability, or convenience, sample. The inclusion criterion was graduation from the one BSN program; the exclusion criterion was graduation from any master's program. Specifying reasonable eligibility criteria and justifying them based on clinical knowledge and the literature review are important both to establish the legitimacy of the sampling and to provide a measure of researcher control (Bliss & Savik, 2005; Polit & Beck, 2017, pp. 249-250, 252). I remember once reviewing a study, not for this journal, in which the researcher realized, as she had already been cautioned, that her sample would be difficult to recruit given the proposed intervention; she opted instead to pad the sample size by enrolling participants with a diagnosis other than the one for which the study had been designed and institutional review board approval given. Besides the ethical concern, any data collected from participants with another diagnosis would have been uninterpretable. My point: it never hurts to perform a pilot study before launching a full-scale research study. As you know, time and resources are precious. We also have a responsibility to our research participants, who are giving their own time, to ensure that the sacrifices they are making serve a worthwhile purpose: advancing patient care and nursing science.

 

The above is a good segue into what else should be included in the methods section: a statement that both institutional review board approval and participant consent were obtained prior to data collection. In the sample study, participants were informed that returning the survey implied consent. Intervention designs should also describe who obtains consent, the researcher or someone else; under what circumstances; and whether participants will be compensated in some way. It is also helpful to mention that participant data will be deidentified, how and where the data will be stored, and who will have access to the files.

 

Next, the procedures for collecting the data should be described. In the example study, the initial postcard, the mailing of the survey packet, and the follow-up reminder letter all served to improve response rates. Had more resources been available, the ideal would have been to remail the entire packet because, as we all know, things get misplaced. From the review of the literature, the researchers identified several demographic and practice variables thought to influence intent to stay, and questions about these were included in the survey. The type of information requested was described, and the practice patterns of those still in bedside nursing were elicited. An additional question asking participants how satisfied they were in their present jobs, along with a description of the scale used (a 4-point Likert scale, with 1 = very satisfied and 4 = very dissatisfied), was also included.

 

When writing the methods section, it is important to briefly explain each instrument: how it is scored, what a high or low score means, and the reliability and validity of the instrument. Reliability is the consistency of the measurement of a study variable on repeated administration. The reliability coefficient is reported for previous studies and calculated for the proposed study; values closer to 1 imply higher reliability. Validity, very simply, is the ability of an instrument to measure what it claims to measure. There are several ways to determine the validity of an instrument, which are beyond the scope of this discussion.
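As a concrete illustration, one widely reported reliability coefficient, Cronbach's alpha, can be computed directly from item-level responses. The sketch below is not drawn from the cited study; the item responses are made up, and the function is a minimal textbook implementation of the alpha formula:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.

    items: one inner list per survey item, each holding the scores
    given by every respondent (same respondent order in each list).
    """
    k = len(items)            # number of items on the instrument
    n = len(items[0])         # number of respondents

    def variance(xs):         # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(variance(item) for item in items)
    # Each respondent's total score across all items
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))


# Made-up responses: five participants answering three 4-point items
responses = [
    [4, 3, 3, 2, 4],
    [4, 3, 2, 2, 4],
    [3, 3, 3, 1, 4],
]
alpha = cronbach_alpha(responses)  # close to 1 implies high reliability
```

In practice, a statistical package such as SPSS reports this coefficient automatically; the point of the sketch is only to show what "closer to 1 implies higher reliability" is a summary of.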

 

Finally, the statistical analysis plan should be described in the methods section. The research question and study design drive the plan, as does the level of measurement of the study variables. Frequencies, or counts, are reported for categorical variables, that is, discrete variables such as gender or blood type, and descriptive statistics, such as means, for continuous variables such as age (Polit & Beck, 2017, pp. 721, 726). If the continuous variables are not normally distributed, a nonparametric test may be proposed, but the researcher will not know this until after data collection and preliminary data analysis, hence the reference to an analysis "PLAN." This does not mean the researcher is free to alter the analysis if the hoped-for results are not obtained, but rather that a nonparametric test, such as the chi-square or Mann-Whitney test, less powerful but with less stringent assumptions, might need to be substituted for its parametric equivalent, for example, a t test or analysis of variance, when data do not meet the assumption of a normal distribution (Polit & Beck, 2017, p. 384). The cited study (DiMattio et al., 2010) named the statistical program to be used for data analysis (SPSS), although not the version, and stated that frequencies and descriptive statistics would be calculated for whole-group variables and that the independent t test would be used when comparing the two groups, those intending to leave hospital nursing and those intending to stay (Polit & Beck, 2017, p. 384).
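To make the first step of such a plan concrete, the sketch below applies the rule stated above, frequencies for a categorical variable and descriptive statistics for a continuous one, using Python's standard library. The data are invented for illustration and are not from the cited study:

```python
from collections import Counter
from statistics import mean, stdev

# Made-up data for six hypothetical respondents (not from DiMattio et al.)
intent = ["stay", "leave", "stay", "stay", "leave", "stay"]  # categorical
ages = [26, 31, 44, 29, 38, 52]                              # continuous

# Categorical variable: report frequencies (counts per category)
freq = Counter(intent)

# Continuous variable: report descriptive statistics (mean, standard deviation)
age_mean = mean(ages)
age_sd = stdev(ages)
```

Whether `ages` then warrants a t test or a Mann-Whitney test when comparing the "stay" and "leave" groups is exactly the decision that must wait for preliminary analysis of the distribution, which is why the proposal describes a plan rather than a fixed procedure.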

 

Because of the design of this study and the fact that it was a pilot, the methods section was short and sweet yet complete. (A poet, and a biased one at that?) Other, more complex study designs require more detailed methods sections. Regardless of the type of study design, however, the goal of the methods section is to provide enough detail for study replication, give a reader sufficient information to critique the research, and allow an evaluation of the feasibility of incorporating any study findings into practice (Coverdale et al., 2013).

 

References

 

Bliss D. Z., Savik K. (2005). Writing a grant proposal-Part 2: Research methods. Journal of Wound, Ostomy, and Continence Nursing, 32(4), 226-229.

 

Coverdale J. H., Roberts L. W., Balon R., Beresin E. V. (2013). Writing for academia: Getting your research into print: AMEE Guide No. 74. Medical Teacher, 35(2), e926-e934. https://doi.org/10.3109/0142159X.2012.742494

 

DiMattio M. J., Roe-Prior P., Carpenter D. R. (2010). Intent to stay: A pilot study of baccalaureate nurses and hospital nursing. Journal of Professional Nursing, 26, 278-286.

 

Engberg S., Bliss D. Z. (2005). Writing a grant proposal-Part 1: Research methods. Journal of Wound, Ostomy, and Continence Nursing, 32(3), 157-162.

 

Lake E. T. (2002). Development of the practice environment scale of the nursing work index. Research in Nursing & Health, 25, 176-188.

 

Polit D. F., Beck C. T. (2017). Nursing research: Generating and assessing evidence for nursing practice (10th ed.). Wolters Kluwer.

 

Roe-Prior P. (2022). Introduction to research design. Journal for Nurses in Professional Development, 38(6), 378-379.