Authors

  1. Lucasey, Beth, MA, BSN, RN

Article Content

As orthopaedic nurses we know the reality: Imbalance → Fall → Fracture → Surgery → Dependence. And we know from experience that it takes time and effort to identify persons at risk for falling. Time we don't always take. Sometimes it is just easier to lump all persons over the age of 65 together as "potential fallers" and refer them to a physical therapist for instruction.

 

Separating fallers from nonfallers becomes instantly complicated by the multitude of confounding variables within human study groups. It is much easier to study laboratory rat models confined within their cages, exercising on treadmills, eating premeasured caloric meals, and resting at predetermined intervals as the lab lights are dimmed. These rat subjects are free from disability, fast-food diets, the stress of stock market declines, and babysitting grandchildren. But rather than being overwhelmed by all these variables, we can use a study design that allows for them: the quasi-experimental design.

 

We can sort designs into a simple threefold classification by asking two key questions. First, does the design use random assignment to groups?

 

If random assignment is used, we call the design a randomized experiment or a true experiment. If random assignment is not used, then we have to ask a second question: Does the design use either multiple groups or multiple ways of measurement? If the answer is yes, we would label it a quasi-experimental design. If no, we would call it a nonexperimental design.
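For readers who like to see that logic spelled out, here is a minimal sketch of the decision tree in Python; the function name and flags are illustrative inventions, not anything from the article.

```python
# A minimal sketch of the threefold classification described above.
# The function name and parameters are hypothetical, for illustration only.

def classify_design(random_assignment: bool,
                    multiple_groups: bool,
                    multiple_measurements: bool) -> str:
    """Label a study design using the two questions in the text."""
    if random_assignment:                      # question 1
        return "randomized (true) experiment"
    if multiple_groups or multiple_measurements:  # question 2
        return "quasi-experimental design"
    return "nonexperimental design"

# Example: pretest/posttest with treated and comparison groups,
# but no random assignment -> quasi-experimental.
print(classify_design(random_assignment=False,
                      multiple_groups=True,
                      multiple_measurements=True))
```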

 

So a quasi-experimental design is one that looks a bit like an experimental design but lacks the key ingredient - random assignment. Some scientists refer to them as "queasy experiments" because they give the purists a queasy feeling. With respect to internal validity, they often appear to be inferior to randomized experiments. But there is something compelling about these designs: taken as a group, they are implemented far more frequently than their randomized cousins.

 

Probably the most commonly used quasi-experimental design is the nonequivalent groups design. In its simplest form it requires a pretest and a posttest for both a treated and a comparison group. It is identical to the analysis of covariance (ANCOVA) design except that the groups are not created through random assignment.
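As a rough sketch of how such a design is typically analyzed, the snippet below regresses the posttest on the pretest and a group indicator, ANCOVA style. The data and column names are invented for illustration; they are not from the Robinson et al. study.

```python
# Hedged sketch of a nonequivalent groups analysis: adjust the posttest
# for the pretest (the covariate) and estimate the group effect.
import pandas as pd
import statsmodels.formula.api as smf

# Invented example data: falls counted before and after the program.
df = pd.DataFrame({
    "pretest":  [4, 6, 5, 7, 3, 6, 5, 4, 7, 5],   # falls in prior 6 months
    "posttest": [2, 3, 2, 4, 3, 5, 4, 4, 6, 5],   # falls in following 6 months
    "treated":  [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],   # 1 = PT instruction group
})

# Adjusting for the pretest partially compensates for the lack of random
# assignment; it cannot rule out selection bias entirely.
model = smf.ols("posttest ~ pretest + treated", data=df).fit()
print(model.params["treated"])  # adjusted estimate of the treatment effect
```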

 

Authors Robinson et al. chose this design to pursue their research questions and determine risk factors that orthopaedic nurses can include in their client assessments. These identifiers can help the nurse make appropriate referrals to a physical therapist.

 

The jury is still out as to whether quasi-experimental designs can adequately control selection bias. It is safe to conclude that experimental designs are superior in this critical respect. But the many problems associated with experiments render them impractical for many, if not most, evaluations. Quasi-experimental designs are often the best practical approach to take for evaluations of health and education programs.

 

Do we really need more nursing research? Why bother when we have a nursing shortage and are all so frustrated by health care cutbacks and insurance pressures? Maybe the answer is that research gives us data to support cost-effective treatment plans that provide financial benefits in the long run.

 

Research results arm nurse managers with financial negotiating ammunition when dealing with hospital administration. Without these results, the nurse manager is left to defend her patient care plans with anecdotal personal experiences. While these experiences might be true, they are dismissed as war stories in business meetings.

 

The contrived conversation below illustrates the differing perceptions of nurses and outside consultants when it comes to assessing the merits of a program.

 

Nurse manager: Why do we need to evaluate our physical therapy referral program? We have a good handle on what's going on with our program and our patients, and we know we are very successful.

 

Outside Consultant: Because you never know if it was your program or something else that produced the success you are claiming.

 

Nurse manager: Of course we know it's our program. What else would cause all these people to fall less?

 

Outside Consultant: Maybe the more motivated clients are more compliant?

 

Nurse manager: That's nonsense. Anyway, we know we have to have our program evaluated. But why do we need to go through the trouble of finding a comparison group to do an evaluation?

 

Outside Consultant: Let's say that 6 months after clients finish physical therapy, 70% are falling less. Would you consider that proof your program is a success? What proportion of these individuals might be having fewer falls now if they hadn't gone through the physical therapy instruction?

 

Nurse manager: I'm not sure, but it wouldn't be as high as 70%, I can tell you that.

 

Outside Consultant: Well, you don't really know that though. For all you know, 80% might be falling less now if they hadn't taken physical therapy instruction.

 

Nurse manager: No way. We are providing a valuable service and are really helping our clients.

 

Outside Consultant: That may be so, but you haven't proven it. And the insurance companies need to know with certainty how successful you have been. They need to know they are getting bang for their buck.

 

Nurse manager: They are, I assure you.

 

Outside Consultant: Okay, let's say you are doing some good, that individuals who go through physical therapy instruction are indeed falling less than if they hadn't been instructed. How much of an effect are you having? Would half of them have fallen less anyway? One-third? Two-thirds? You can't know that unless you do an evaluation that includes a comparable group of people who haven't taken physical therapy instruction.

 

Nurse manager: Even if half of them had fallen less without the instruction, isn't raising that proportion to 70% worth it?

 

Outside Consultant: I don't know. What did it cost for that incremental 20%? And how long will the effects of training last?

 

Nurse manager: Our program is well worth the money ...
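To make the consultant's arithmetic concrete, here is a small worked example. The 70% figure comes from the dialogue; the 50% comparison rate and the program cost are hypothetical placeholders.

```python
# Worked version of the consultant's incremental-effect arithmetic.
# Only the 70% figure is from the dialogue; the rest is assumed.
improved_with_pt    = 0.70    # share falling less after PT instruction
improved_without_pt = 0.50    # share who would have improved anyway (assumed)
clients             = 100     # assumed program size
cost_of_program     = 30_000  # assumed total cost, in dollars

incremental = improved_with_pt - improved_without_pt  # the "incremental 20%"
extra_successes = incremental * clients               # 20 clients helped by PT
print(f"Cost per incremental success: ${cost_of_program / extra_successes:,.0f}")
```

Without a comparison group, the 50% baseline is unknowable, which is exactly the consultant's point: the cost per incremental success cannot be computed from the treated group alone.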

 

Nurses live and work with the patients every day. They care about their program and work hard to make it a success. They strongly believe they are doing a good job, and they understandably resent any implication that they are not.

 

Evaluators usually have no connection with the program, and, more important, no stake in its survival (which sometimes leads to an underestimation of the threat that evaluations can pose to program management and staff). They know that managers are heavily invested in their program and that a manager's assessment of the program - even one aided by reliable monitoring data - will not be accepted by program sponsors as a valid test of whether the program is meeting its objectives and is worth what it costs. And they know that many different factors unrelated to the design of the program can affect the outcomes of any social program and can easily lead to unwarranted conclusions about the program.

 

Many of us have experienced the "business consultants" who sweep through hospitals to evaluate the health care delivery system. We all know the results: layoffs by attrition and hiring freezes among physical therapy, nursing, laboratory, and radiology personnel. Research data proving cost-effectiveness are the only weapon we have to explain to the consultants how the health care team works. Nursing research results explain the bottom line.