Authors

  1. Surko, Michael PhD
  2. Lawson, Hal A. PhD
  3. Gaffney, Susan MS
  4. Claiborne, Nancy PhD

Abstract

Community-based partnerships (CBPs) focused on youth development (YD) have the potential to improve public health outcomes. These partnerships also present opportunities for the design and implementation of innovative, community-level change strategies, which ultimately may result in new capacities for positive YD. Evaluation-driven learning and improvement frameworks facilitate the achievement of these partnership-related benefits. Partnerships are complex because they embody multiple levels of intervention (eg, youth-serving programs, youth participation as partners or evaluators, network development for collaborative projects and resource sharing, YD-oriented organizational or community policy change). This inherent complexity transfers to evaluations of CBPs. This article provides resources for meeting evaluation-related challenges. It includes a framework for articulating relevant evaluation questions for YD-oriented CBPs, a summary of relevant types of evaluation studies, and practical solutions to common evaluation problems using targeted evaluation studies. Concrete examples of relevant, small-scale evaluation studies are provided throughout.

 

Article Content

Youth development (YD) programs have the potential to improve a variety of adolescent health behaviors. These behaviors correspond to important public health goals, such as reducing drug and alcohol use, cigarette smoking, violence, and high-risk sexual behavior (see, eg, Catalano and colleagues1).

 

This potential of YD programs is enhanced by community-based partnerships (CBPs) focused on positive YD. These partnerships typically consist of a variety of community stakeholders (partners), including social service providers, local government officials, public school representatives, law enforcement officials, representatives from faith-based organizations, community members, and young people performing important youth leadership roles.2 Reciprocally, CBPs' potential is maximized when they are guided by the principles, values, and practices associated with a positive YD perspective.3

 

In brief, positive YD and CBPs constitute a powerful match. Together they have the potential to improve public health outcomes, especially adolescent health outcomes (as other articles in this journal supplement indicate). At the same time, this match may result in new community capacities for healthy development.

 

Clearly, this enormous potential needs to be tapped, and evaluation-driven learning and improvement strategies provide one key for doing so. New evaluation frameworks that respond to the challenges associated with the inherent complexity in CBPs focused on YD are needed.

 

Notably, youth-focused CBPs represent complex interventions. They focus on positive outcomes for young people, including healthy and constructive behavioral habits, which lead to stable, productive lives as family members and adult citizens. In addition to focusing on improved outcomes for young people, these CBPs are designed to produce positive, lasting change in organizations and, at the same time, develop new capacities in entire communities. Sustainable organizational and community changes include the capacity to both support, and be guided by, youth in families, organizations, and neighborhood communities. Policies that facilitate sustainability and systems change constitute another group of important positive impacts. Without such partnerships, it is unlikely that results will be sustained in organizations and communities.

 

To achieve these multiple benefits, CBPs often employ multiple-level improvement strategies. These strategies include implementing youth-serving programs, cultivating youth participation as partners or evaluators, developing resource networks for collaborative projects and resource sharing, and promoting YD-oriented change in community organizations.

 

While this multilevel, complex approach provides rich opportunities for promoting community-level change, it also presents significant challenges for evaluation designs. Evaluating these partnerships can be complex for several reasons. For example, the partnerships have multilevel strategies and multiple goals; partners may not agree on all goals; activities directed toward goals may be poorly specified; some outcomes are long-term and difficult to measure; and the partnership, by itself, may not reach enough young people to change developmental outcomes in the community. Perhaps above all, resources for CBPs are often limited, making large-scale studies by internal or external evaluators impractical.

 

Despite these challenges, evaluation-driven learning and improvement strategies comprise an essential component of any CBP. Evaluation-related learning and improvement are especially important for CBPs designed to yield policy changes and new, sustainable community capacities, which require resource reallocations.

 

How, then, should leaders of YD-oriented CBPs frame and select their evaluations? This question structures the ensuing review. Its aim is to enable CBP partners and public health practitioners who work with CBPs to make informed, solid choices about their evaluation designs. Toward this end, this review provides three evaluation resources: (1) a framework for identifying relevant evaluation questions for YD-oriented CBPs; (2) evaluation designs requiring only modest resources, along with their characteristic strengths and limitations; and (3) practical solutions to common challenges in the evaluation of CBPs, using targeted evaluation studies.

 

First, a word of caution: This review does not provide a comprehensive inventory of every evaluation framework and alternative. As suggested by its title, this review targets selected evaluation frameworks and methods. It also presents key issues that partners need to address in order to reap the potential of evaluation-driven learning and improvement strategies. Foremost among these issues is the need for evaluations to address the distinctive aspects of YD-focused CBPs.

 

Distinctive Aspects of YD-focused Partnerships

As described elsewhere in this journal supplement, a YD-based approach to CBPs differs from more traditional social service approaches to young people's health and risk behaviors. These traditional approaches tend to be deficit-oriented; they focus primarily on needs and problems needing to be fixed. Moreover, these traditional approaches tend to be single-issue-oriented and categorical. Arguably, evaluations are easier to design and conduct under these traditional approaches.

 

What special features of YD-oriented CBPs must evaluations address? The most important of these features may be summarized in bulleted form.

 

* A Strengths-based Approach in which young people are viewed as assets awaiting nurturing development (in lieu of "walking clusters of needs and problems"). Karen Pittman and others have summarized young people's strengths as comprising five Cs: Confidence, Character, Connection, Competence, and Contribution.4 Here, programs, services, and activities offered through the partnership support the development of strengths and skills in young people. With respect to evaluation, youth and adults may receive evaluation training and work together to generate findings and recommendations about a youth-serving program. When youth and adults interact in the evaluation process, youth can develop new skills and feel valued and respected. These skills can also be used to advance the mission of the partnership.

 

* A Focus on Social Settings, with special attention to improvements in the places in which young people spend their time. CBPs work both directly with young people and on settings outside of the partnership (eg, family, school, community) to create opportunities to promote healthy development. For example, youth may survey their peers in a community and identify the need for a place to socialize and study after school, to avoid some of the problems associated with unstructured time and environments. In response, the CBP might leverage the resources of local business and community entities to develop an unused public space for use by young people after school. (This also would be a measurable outcome in the evaluation process.)

 

* Multiple Life Areas and multiple evaluation outcomes representing these areas. These outcomes and the areas they represent connect several behavioral domains, rather than focusing on only one problem to be avoided or rehabilitated.

 

* Interdependence Among Life Areas and Outcomes. All life areas are viewed as related and interdependent, thereby requiring comprehensive, interdependent interventions. For example, the National Research Council/Institute of Medicine report on community YD programs3 identified the following personal and social assets as important for positive development: (a) health habits and health risk management; (b) vocational skills; (c) critical thinking, reasoning, and decision-making skills; (d) emotional self-regulation skills; (e) mastery motivation and positive achievement motivation; (f) sense of personal autonomy and responsibility for self; and (g) attachment to prosocial, conventional institutions such as faith-based organizations, schools, and nonschool youth programs. This range of outcomes spans health, mental health, education, prevention of problem behaviors, job readiness, and other traditional categories of work with young people. As a result, multisector CBPs often incorporate a broad range of resources and capacities, and this must be considered in their evaluation.

 

* Bridging Systems, which avoid and prevent institutional "silos" and enable partners to work together across long-standing boundaries. This cross-boundary work, usually facilitated by intermediary people (boundary crossers), advances the priorities of all partners and the organizations that they represent. These bridging systems derive from the fact that many adolescent health and behavioral issues are closely interconnected and interdependent5 (as indicated above). For example, recognizing that many YD outcomes are interrelated, health department and juvenile justice staff may collaborate to promote civic engagement (along with their agencies' prioritized outcomes). The conditions that pave the way for successful interorganizational collaboration are complex and not always present, so it may be appropriate for the work of partners who do not normally collaborate to be examined separately.

 

* Nontraditional Partners who are not typically viewed as social service providers. These partners include volunteer firefighters, EMTs, law enforcement officials, and businesses. They may partner with traditional youth-serving organizations (after-school program providers, schools, faith-based organizations, and recreation-focused organizations) to create novel YD experiences for young people. For example, volunteer firefighters or EMTs may establish a youth auxiliary to provide young people with opportunities to learn new skills and play a valuable role in the community. A local sheriff may provide opportunities for youth to ride in patrol cars and "shadow" officers as they do police work. Businesses may provide shadowing or apprenticeship experiences through which participating youth can improve their employability. These multiple possibilities signal evaluation challenges.

 

* Young People Actively Promote Their Own Positive Development, rather than being passive recipients of services. From a YD perspective, higher levels of developmental assets (eg, relationships with caring adults) are associated with lower rates of risky behaviors such as unprotected sex, substance use, violence, and school dropout.6 Developmental assets are therefore cultivated to reduce health risks. In the YD approach, youth are viewed as experts and as valuable resources in their families, schools, and communities, making important contributions in each of these social settings. At the same time, they build developmental assets. Several types of roles are commonly created for young people in a community-based YD approach. They may conduct service projects, act as key informants in otherwise adult-driven activities, or serve as coplanners who share power with adults. Sometimes they serve as coleaders of organizational, community, and policy changes. Some youth roles require more experience and time commitment than others; therefore, it is important to ensure that the available roles provide for both more and less intense participation. Not all young people are interested in coleading efforts; less intensive types of involvement, such as time-limited service projects, also need to be offered to engage a broad cross section of young people.7 As noted by Powers and Tiffany in this journal supplement, youth can be trained as researchers, and therefore closely involved in the evaluation process. All these experiences and opportunities serve to build young people's developing competencies, and reduce the likelihood that they will engage in risky behaviors.

 

 

Responding to These Unique Features and Creating a Framework for Evaluation

The first step in the evaluation of a CBP is to define the partnership's aims, especially how these aims are connected to the partnership's structure, resources, goals, bridging mechanisms, and operational plans. Once the partnership's overall aims, structures, and plans have been articulated, the next step is to spell out how all of the partnership's efforts fit together to advance toward its aims. Then specific methods and data sources need to be identified.

 

Evaluation challenges are associated with the development and functioning of a CBP. The complexity of CBPs is revealed in their multiple phases of preparation, planning, project implementation, and organizational development. Importantly, CBP development is not necessarily linear or sequential; some development phases occur simultaneously and recursively.

 

Evaluations need to be able to accommodate this complexity, which means that evaluators need to be prepared for it. First and foremost, it is nearly impossible for one evaluator to capture all of the complexity of a CBP. A team of evaluators, one that employs multiple methods and has access to many data sources, is needed.

 

In addition, evaluators, like the partners they study, must remain flexible and adaptable. As the partnership changes, they also must adapt their strategies and methods. This entails developing meta-evaluation methods (evaluating the evaluation and the evaluators) for self-initiated improvements.

 

Over the last 10 years, planners and evaluators have developed a special conceptual system for partnership evaluations and their accompanying logic. This system increasingly is known as a "theory of change approach." This approach to evaluation planning has been widely used in public health contexts with complex community-based change initiatives focusing on the population, health system, and environment.8-15 It requires evaluators to elicit from community partners their "theory of change," that is, how the partners will get from "here" (the current state of affairs) to "there" (the idealized, future state described by the desired outcomes).

 

In other words, a CBP's change theory is a formal, explicit representation of how the partnership's programs, services, and activities comprise a coherent, comprehensive approach for achieving desired outcomes. This theory of change earns its name as "a theory" insofar as the outcomes it prioritizes and the strategies it designs and implements to achieve them await empirical confirmation via the evaluation. When evaluators work effectively with community partners, the change theory they develop for these partners draws causal links between the CBP's early efforts, activities that take place once the CBP is established and working well, and its short-term and long-term outcomes. This kind of evaluation enables learning and improvement insofar as it yields information that enables a CBP to systematically assess which components of its initiative have been implemented and are working, what activities led to what outcomes, and what contextual conditions have affected the initiative's progress toward its goals.13,14

 

Clearly, this theory of change approach to CBP evaluation is action-oriented and tailored to the local partnership contexts. Although each theory of change has the long-term aim of ultimately contributing to the development of understanding-oriented, scientific theory, theory of change evaluations are not the equivalent of conventional, understanding-oriented, scientific theories. After all, CBPs vary by place, context, and timing; they do not provide the controlled, ideal conditions of the laboratory. More important, the end product of a theory of change evaluation is action (in the form of specific improvements to the CBP) rather than enhanced understanding alone.

 

Evaluators employing theory of change evaluations thus face several related challenges. For example, they must elicit the theory of change from multiple, diverse stakeholders; achieve some consensus on the dominant theory; communicate this theory to the partners; and then seek consensus validation of it.

 

To communicate the change theory to CBP partners and community members and, at the same time, to obtain consensus-oriented validation, evaluation teams typically construct a logic model. A logic model is, in essence, a formal diagram. It illustrates the logical, causal connections among key aspects of the CBP's plan.12

 

Because evaluators construct a logic model to represent the partnership's theory of change, it is not surprising to learn that the terms "logic model" and "theory of change" are used interchangeably in the evaluation and partnership planning literatures.10 In practice, however, evaluators schooled in this approach make a fine-grained, important distinction. The logic model is a simplified, temporary, and ever-evolving construction, which evaluators feed back and forth to the partners. Over time and with successive iterations, what starts out as a simplified logic model evolves into a complex theory of change.11,15 This "back-and-forth," evaluation-related learning and improvement dynamic animates the partnership, the evaluation itself, and their relationship. Above all, it enables the CBP to chart its course deliberately and logically, with the evaluation documenting each successive adjustment and achievement for partners and for other key audiences (such as policy makers).12
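To make the logic-model idea concrete, a minimal sketch might represent the model as named components connected by causal links, so that evaluators can trace which outcomes a given activity is hypothesized to affect. All stage names and links here are hypothetical, not drawn from any actual partnership.

```python
# Hypothetical sketch: a CBP logic model as a simple data structure.
# Component names and causal links are illustrative assumptions only.

logic_model = {
    "preconditions": ["existing resources", "partner relationships"],
    "links": [
        ("preconditions", "partnership structures"),
        ("partnership structures", "youth programs"),
        ("youth programs", "short-term outcomes"),
        ("short-term outcomes", "long-term outcomes"),
    ],
}

def downstream(model, start):
    """Trace every component causally downstream of `start` in the model."""
    reached, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        for src, dst in model["links"]:
            if src == node and dst not in reached:
                reached.add(dst)
                frontier.append(dst)
    return reached

print(sorted(downstream(logic_model, "youth programs")))
# → ['long-term outcomes', 'short-term outcomes']
```

As the partnership's change theory evolves through successive iterations, links can be added, dropped, or revised, and the same traversal shows which expected outcomes each revision touches.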

 

Figure 1 provides a logic model for a hypothetical, YD-focused CBP. This model is hypothetical because specific aspects of partnerships vary on the basis of the local context, developmental status of the partnership, and, of course, partnership aims and goals.

  
FIGURE 1. Generic logic model of a youth development (YD)-oriented community-based partnership.

Note that Figure 1 identifies exemplary preconditions, such as existing resources, partners' preexisting relationships with key community leaders, and the fit between a YD agenda and partners' existing mandates, all of which provide a foundation for the partnership. As the partnership matures, external facilitators may be needed to provide technical assistance and training around YD language and concepts, and to offer lessons from other YD-focused CBPs.

 

The third box in the figure represents CBP structures for governance, operations, and sustainability. In this hypothetical example, the CBP has two main aims: to promote the well-being of young people directly, and to promote a YD approach among community organizations that engage young people.

 

In the two boxes at the top right, short-term outcomes such as confidence, competence, and character are expected to result from participation in the CBP's programs and activities. These outcomes then increase the likelihood that young people experience positive outcomes in early adulthood (eg, stable employment).

 

In the two boxes at the bottom right, the CBP expands and promotes YD-focused organizational change and policy change (eg, formation of a youth advisory group with decision-making power). These community infrastructure changes then lead to increased supports, opportunities, and services for young people in addition to the ones which the CBP creates directly.

 

The three main advantages of articulating an explicit change theory via a simplified logic model are as follows: (1) planning, implementation, evaluation, and continuous improvement are integrated; (2) leaders can determine whether there is support from research, or local evaluation findings, for the approach they are using; and (3) logic-model building activities among the partners facilitate unity of purpose, promoting consensus validation. Thus, the logic model both derives from, and drives, the partnership's activities and priorities.

 

Constructing a change theory and logic model

A change theory and the logic model that represents it are best constructed by planners and/or evaluators with experience using this evaluation methodology. Although one person can accomplish this task, oftentimes a team of people is required, if for no other reason than the practical challenges of interviewing all key partners. Table 1 presents examples of the kinds of probing questions used by planners and evaluators to elicit people's ideas about what ultimate results are intended, and how their work is expected to lead to those results. A vision statement and plan for partnership activities, either ratified by the full partnership or merely in draft form, can also be an excellent source of material. If one does not yet exist, composing a draft can be useful.

  
TABLE 1. Questions used to uncover components of the change theory

In many CBPs, the overall change theory developed by evaluators represents the first, formal presentation of the change-oriented logic for the partnership. Even after the initial articulation, some parts of the logic model may remain ambiguous and somewhat vague. This problem is particularly common if the partnership is new.

 

For example, new partners may be able to specify early activities and the ultimate outcomes that are intended, but they are not able to identify the intermediate steps that will lead to the intended long-term outcomes.16 After these partners have engaged their intended population and implemented their interventions, it can be easier for them to articulate potential intermediate outcomes.17 To wit: a CBP may decide that initial activities and outcomes should focus on increasing the number and scope of after-school and summer opportunities for youth. The ultimate long-term outcomes that the CBP seeks may be increased youth participation in community life and better youth-adult communications. At the start, the intermediate outcomes may be vague, but as more youth become involved, it may become clearer that the intended intermediate outcomes are providing youth with leadership experiences, including making decisions about the direction of activities supported by the CBP.

 

In this way, a CBP's goals and methods often become clearer as partners begin to work together and the partnership begins to "gel." Theory of change evaluators may facilitate this partnership-related learning, development, and improvement.

 

When the change theory is first produced, its aims, outcomes, and causal links may be based on work done elsewhere. Once evidence begins to accumulate from evaluations in the local community, components of the change theory will be either validated, if they are consistent with findings, or revised if they are not. In eliciting components of the change theory from partners, multiple propositions supporting various parts of the model, which may be in conflict with one another, may be identified.

 

For example, one partner may believe that issuing stipends for participating in a civic service project indicates that young people's time should be valued in the same way as adults' time, whereas another partner may believe that stipends undermine civic participation for its own sake. The evaluators circulate the conflicting principles among the partners, with the possibility that consensus will be reached. Weiss18 recommends maintaining multiple possibilities until the accumulation of evidence suggests that one or more are refuted and can be dropped. Even then, evaluators are faced with developing a dominant theory of change, expressed in a single logic model, when in fact multiple versions of this change theory continue to exist.

 

With every iteration of the evaluation, the change theory is updated on the basis of evaluation findings. Each logic model therefore constitutes the most up-to-date ideas about how CBP activities may lead to results for young people. Before evaluation evidence is available, many of the links in the model are based on experience and evidence from other settings. For example, knowledge about the probable impacts of mentoring programs may come from results achieved in other sites. Although useful, this information cannot substitute for evidence that has been collected locally about the CBP's efforts.

 

Key Evaluation Levels for CBPs and Useful Evaluation Designs

Five levels of evaluation are relevant with respect to the evaluation of YD-oriented CBPs. These are (1) partnership development, (2) program implementation, (3) short- and long-term outcomes for youth, (4) organization-level impacts, and (5) community-level impacts. Each merits additional discussion.

 

Partnership development level

When a CBP serves as the forum for planning YD interventions, the group process within the partnership is a key determinant of success, including structures and processes for collaborative leadership. Partners' perceptions of confidence in the leadership, ability to get things done, helpfulness of other partners in getting one's own work done, and importance of the partnership's goals represent important information to obtain from CBP members during partnership development. Perhaps above all, the group's ability to achieve consensus and, at the same time, resolve conflicts determines its efficacy and effectiveness.5

 

As a partnership develops, it is useful to survey members to identify partnership strengths and areas for improvement. Many tools exist for this purpose, including a Web-based product from the New York Academy of Medicine19 (http://www.casch.org) that identifies key issues on which partners agree and those that deserve attention by the CBP leaders. For example, a CBP in which partners report low conflict but also perceive that the CBP's overall goals are not very important is liable to experience low commitment and/or attrition among partners. In such a case, CBP leaders would be well-advised to work with partners to revisit the CBP's overall purpose and goals.
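A small sketch of the kind of screening such survey tools perform might flag items where partners' average agreement is low or where partners are sharply divided. The item wordings, rating scale, and cutoffs below are hypothetical assumptions, not taken from any published instrument.

```python
# Hypothetical partner survey responses (1 = strongly disagree ... 5 = strongly agree).
# Item names, the 1-5 scale, and the flagging cutoffs are illustrative assumptions.
from statistics import mean, pstdev

responses = {
    "leadership is effective":   [4, 5, 4, 4, 5],
    "partnership goals matter":  [2, 3, 2, 2, 3],
    "partners help my own work": [4, 3, 5, 4, 4],
}

def flag_items(data, low_mean=3.0, high_spread=1.0):
    """Flag items with low average agreement or wide disagreement among partners."""
    flags = {}
    for item, scores in data.items():
        m, s = mean(scores), pstdev(scores)
        if m < low_mean:
            flags[item] = "low agreement"
        elif s > high_spread:
            flags[item] = "partners divided"
    return flags

print(flag_items(responses))
# → {'partnership goals matter': 'low agreement'}
```

In the example above, the low average rating of goal importance would be the signal for CBP leaders to revisit the partnership's overall purpose with members.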

 

Evaluation design: Partnership process surveys

Partnership development and process measures describe the functioning of a partnership, identify areas for growth, and target problem areas. As described in detail by Carter and colleagues (in this journal supplement), the Assets Coming Together (ACT) for Youth initiative, established in 2000 by the New York State Department of Health, used multisector CBPs as the vehicle for positive YD-oriented change statewide. To track the process within CBPs, evaluators surveyed CBP partners in the initiative's first and fifth years, to examine perceptions of the development, functioning, and effectiveness of the partnerships, as well as change over time, and contributors to sustainability.

 

A limitation of partnership process surveys is the potential for bias. Bias is especially likely if potentially negative results would be disseminated to funders or community members. Advantages of this design include easy targeting of measures to the specific interests of the stakeholders/evaluators, and the ability to ask consistent questions over time to clarify the developmental process of partnership formation.

 

Program implementation level

Typically, specific programs and services are implemented as a result of a CBP's work. Often there is a desire to evaluate their success in creating positive results immediately. However, such an outcome study is inappropriate without first performing an implementation evaluation. That is, if the way in which the program has been implemented is not known, then positive, negative, or nonexistent impacts cannot be attributed to the program. Before effects can be attributed to a program, the intervention actually delivered to participants must be known: the duration and intensity of involvement in program activities, the training of program staff (if required), and the use of program materials must be determined first. Only after a rigorous description of the program as implemented exists can the connection between program participation and participant outcomes be explored.

 

Therefore, a lack of desired outcomes may not indicate a flawed program design. This "outcome problem" may stem from the fact that the design was not fully implemented, or it was not delivered with the required frequency, intensity, and duration (ie, sufficient "dosage" was not achieved).

 

For example, if young people participating in a community service project are found to have not gained new skills or benefited the community, but the majority of them attended less than a quarter of intended sessions, the program may nonetheless have promise. Increasing program participation by youth, either by selecting participants differently or by interacting with them differently once they enter the program, may be all that is required to achieve the intended results.
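The dosage logic of the example above can be sketched as a simple pre-analysis check: before interpreting outcomes, determine what fraction of participants actually received enough of the program. The record fields, the 25% attendance cutoff, and the 50% coverage threshold below are all hypothetical assumptions for illustration.

```python
# Illustrative sketch: check program "dosage" before attributing outcomes.
# Record fields and both thresholds are hypothetical assumptions.

participants = [
    {"name": "A", "sessions_attended": 3,  "sessions_offered": 20, "gained_skill": False},
    {"name": "B", "sessions_attended": 18, "sessions_offered": 20, "gained_skill": True},
    {"name": "C", "sessions_attended": 4,  "sessions_offered": 20, "gained_skill": False},
]

def sufficient_dosage(p, cutoff=0.25):
    """True if the participant attended at least `cutoff` of offered sessions."""
    return p["sessions_attended"] / p["sessions_offered"] >= cutoff

dosed = [p for p in participants if sufficient_dosage(p)]
coverage = len(dosed) / len(participants)

# Outcomes are interpretable only for participants who actually received the program.
if coverage < 0.5:
    print(f"Only {coverage:.0%} reached the dosage threshold; "
          "an outcome study would mostly measure non-participation.")
else:
    success = sum(p["gained_skill"] for p in dosed) / len(dosed)
    print(f"Skill gain among adequately dosed participants: {success:.0%}")
```

With the illustrative data above, most participants fall below the cutoff, so the appropriate response is to improve participation (the "dosage") rather than to conclude that the program design is flawed.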

 

Evaluation design: Embedded evaluation studies

Nearly every CBP has informal or formal systems for collecting information about its programs, services, and activities. The data embedded in these systems can be used to address two common problems with evaluation of CBPs: modest resources for evaluation, and evaluation that begins after YD work has been under way for some time. If information systems can "bank" evaluation information from the beginning of the partnership, fewer resources are needed for data collection, and more time is freed up for analysis of the information and utilization of the findings by partners. This information can be as simple as an enrollment form for program participants, a feedback form completed by participants at the end of each program cycle, or partnership meeting minutes with attendees listed. More elaborate measures can be collected routinely.

 

It is important that planners and evaluators give some thought to how each piece of information will be used. This entails knowing something about the users' needs, aspirations, requirements, and accountabilities. Then data gathering and report writing can proceed responsively, and, at the same time, the evaluation truly becomes embedded in meaningful ways in the partnership.

 

Systems for reporting program progress to funders represent information-collection systems that are embedded in the CBP's usual operations and can provide useful evaluation information. For example, the ACT for Youth initiative requires CBPs to report program activities and progress on five YD-oriented outcomes. When CBPs designed interventions tailored to their specific context (eg, a 100-square-block urban neighborhood, or a rural county), the activities and goals were diverse, so comparing across CBPs using a uniform reporting system posed challenges. However, the reports allowed tracking of progress and activities over time, both for individual CBPs and for the initiative as a whole. The routine collection of such data also helped CBP leaders reinforce YD language and concepts within their partnership.
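One way the "banked" data from routine reporting systems can be put to evaluative use is illustrated below: aggregating enrollment records over time, both per CBP and for the initiative as a whole. The record layout and numbers are hypothetical, not ACT for Youth data.

```python
# Hypothetical "embedded" records banked from routine program paperwork
# (eg, enrollment forms). Field names and counts are illustrative assumptions.
from collections import defaultdict

records = [
    {"cbp": "urban", "year": 1, "youth_enrolled": 40},
    {"cbp": "urban", "year": 2, "youth_enrolled": 65},
    {"cbp": "rural", "year": 1, "youth_enrolled": 15},
    {"cbp": "rural", "year": 2, "youth_enrolled": 22},
]

def enrollment_trend(rows):
    """Summarize enrollment over time for each CBP and for the initiative overall."""
    by_cbp = defaultdict(dict)
    for r in rows:
        by_cbp[r["cbp"]][r["year"]] = r["youth_enrolled"]
    overall = defaultdict(int)
    for years in by_cbp.values():
        for yr, n in years.items():
            overall[yr] += n
    return dict(by_cbp), dict(overall)

per_cbp, initiative = enrollment_trend(records)
print(initiative)  # year -> total enrollment across CBPs
# → {1: 55, 2: 87}
```

Because the underlying forms are collected anyway, this kind of trend summary costs little beyond the analysis itself, which is the central appeal of embedded evaluation.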

 

Short-term and long-term outcome level

Once implementation dynamics have been assessed and the delivery of a sufficient "dosage" of program activities and involvement has been assured, the program is ready for outcome measurement, guided by a logic model representing the developing theory of change.

 

For example, if the program is designed to provide young people with experiences that help them develop leadership, positive self-identity, responsibility, and positive relationships with adults, these outcomes should be measured first because they are the first link in the causal chain to more distal, long-term outcomes. If young people in the program realize these positive short-term developmental outcomes, then it is appropriate to later measure longer term outcomes, such as reduction in problem behaviors or improvement in life outcomes such as high school graduation rates. If they are not achieving these shorter term outcomes, then changes in problem behavior cannot be attributed to the program; the theory of change may need to be revised to include other aspects of the program, not captured in the original model, that may be helping to reduce problem behavior.

 

Whenever possible, shorter term outcomes such as demonstrations of new skills should be incorporated into the evaluation. These "small wins"20 can help bring to life for partnership members the sometimes uneven and wandering trajectory of ultimately positive cognitive, social, and behavioral development.

 

In cases where program implementation and short-term outcomes for young people have been documented, links between short-term and long-term outcomes can be explored. CBPs should look for the linkage between the outcomes that they achieve with youth in the present (both intended and unintended) and the likelihood of continued positive outcomes in the future. For example, young people involved consistently for several years in a youth-adult council could be surveyed 5 years later to determine their likelihood, in comparison with peers who were not involved in YD activities, to go on to higher education, be employed full-time, or report satisfaction with the direction of their life. An evaluation of this kind should not be undertaken unless the young people have received the interventions consistently and with sufficient intensity over time to change long-term outcomes. Young people who receive interventions in multiple settings (eg, both during school and in out-of-school-time programs) may be the best candidates for this kind of study.

 

Getting good data is just part of the challenge. Making solid, warranted attributions is equally challenging. Twin problems loom in the background of every evaluation. One is the problem of false positives, whereby evaluators attribute program effects when, in fact, none exist. The other is the problem of false negatives, whereby evaluators determine that the program did not have positive effects when, in fact, it did.

 

For example, although a program may ultimately reduce teen pregnancy or substance abuse, a young person's status on those outcomes might not be attributed to the program. After all, youth have lives outside CBPs and their programs and services. And, some youth come to these programs and services because of the fit between their aspirations and characteristics and what the partnership offers. Called "selection effects," this special recruitment power of some partnerships makes it dangerous to attribute youth's positive health behaviors to the partnership (because youth with positive behaviors were attracted to the partnership).

 

The ideal standard is long-term, longitudinal evaluations of CBPs. However, longitudinal studies are costly and complex. Therefore, they are less commonly used in evaluations of CBPs.

 

In addition, because the selection of an appropriate comparison group can be difficult, such studies should be pursued only in consultation with an evaluator experienced in this type of research. However, when extant data can be used to document outcomes over time, the opportunities may be worth considering. For example, if families tend to stay in a community, program participation and youth outcomes in the middle-school years could be compared with rates of high school completion and/or GED attainment.

 

Evaluation design: Program participant outcome studies

Program participant outcome studies involve measuring short-term, and sometimes long-term, outcomes with youth in program settings and capturing information about program dynamics and their impact on the participants. If an implementation evaluation has been completed, the results can also be used to inform program providers and funders about the success and impact of the program. For example, the national mentoring program Big Brothers/Big Sisters conducted a participant outcome study to measure the program's effect on at-risk youth, using a combination of surveys, focus groups, and one-on-one interviews. The participant group was composed of those who were immediately matched with a volunteer mentor, and the control group was composed of young people who were assigned to an 18-month waiting list before receiving a mentor. After 18 months, participants had less drug use, better school attendance, better academic achievement, and a greater sense of self-competence than did members of the control group. However, when the evaluators compared improvements in communication with adults, visits to colleges, or time spent reading, they did not find significant differences between the two groups.21 As a result, program staff renewed their focus on those elements of the program that contributed to achieving desired outcomes, and reconsidered their focus on areas where little impact was evident.
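The core comparison in a participant outcome study of this kind can be illustrated with a short calculation. The sketch below applies a standard two-proportion z-test to invented counts (these are not the actual Big Brothers/Big Sisters figures) to ask whether a binary outcome, such as any drug use during the study period, differs between program participants and waiting-list controls.

```python
# Hypothetical sketch of the statistical core of a participant outcome study:
# a two-proportion z-test comparing a binary outcome between a participant
# group and a waiting-list control group. All counts are invented for
# illustration and are not the actual Big Brothers/Big Sisters data.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two independent proportions."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)   # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented example: 46 of 487 participants vs. 70 of 472 controls
# reported any drug use during the study period.
z = two_proportion_z(46, 487, 70, 472)
print(round(z, 2))  # a negative z favors the participant group on this outcome
```

A |z| greater than about 1.96 corresponds to the conventional .05 significance level; an evaluator would typically also report confidence intervals and interpret results cautiously when many outcomes are compared at once.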

 

Overall, the main advantages of participant outcome studies are the use of a control or comparison group and the application of quantitative measures, both of which strengthen the causal connections that can be drawn between program participation and outcomes. The incorporation of both quantitative and qualitative information from multiple perspectives (eg, youth, family, and case managers) enhances the meaning of outcomes for a wide variety of audiences. A limitation of these studies is the difficulty in linking specific program elements to specific outcomes. For example, in the evaluation of Big Brothers/Big Sisters, the design focused on the experiences of program participants and less on the program's elements. Therefore, additional research would be needed to identify the components of a program to add, or modify, if desired outcomes were not achieved (eg, if school attendance increased, but drug use did not decrease).

 

Organization-level impacts

Because a YD approach requires the adoption of new language, concepts, short-term goals, and ways of working, CBP member organizations may need to build capacity in one or more of these areas. When capacity-building efforts are successful in a CBP organization, the impacts on other organizations in the community can be profound. For example, when youth are added to planning councils and governing boards of local organizations, this change can have a significant impact on the kinds of programs and services offered and the staff who provide them. Programs and services tend to model YD principles, and youth often are trained and employed as program leaders and service resources. Documenting these organizational changes related to YD-informed work can structure capacity-building efforts and disseminate successes. On the other hand, the evaluation must be tailored for this purpose, and typically this involves some trade-offs regarding alternative evaluation priorities and possibilities.

 

Evaluation design: Organizational self-assessment

Organizational self-assessment tools are used to look inward to understand an organization's readiness to incorporate YD language, concepts, and ways of working, or its progress toward these goals (see, eg, the article by Schulman in this journal supplement for case studies of organizational self-assessment). Members of the organization, sometimes in conjunction with young people or other outside stakeholders, review organizational structures, resources, and practices to identify possible improvements or organizational changes.

 

For example, the Promising and Effective Practices Network (PEPNet) Self-Assessment for Organizations, developed by the National Youth Employment Coalition and available on-line at http://www.nyec.org/, helps programs evaluate how successfully they are promoting YD within their organizations. This detailed assessment helps participants frame questions and develop guidelines, such as how to measure goals, successes, and outcomes. The tool also helps organizations self-evaluate using categories such as purpose and activities, organization and management, YD, and workforce development. The YD component assesses youth and adult relationships, family and peer support, and supportive services and opportunities for youth.

 

The benefits of this type of study are as follows: (1) organizational leaders can use the assessment to communicate a commitment to change and mobilize participation from multiple levels in the organization; (2) the process can be repeated at multiple points in time to make comparisons and track progress; (3) many tools are available at little or no cost and can be used as is or adapted for specific settings; and (4) acting on the results of an organizational self-assessment can increase the investment of staff in change efforts because it communicates the value the organization places on staff feedback. A potential limitation is that if an assessment is conducted and the results are neither reported back to participants nor used for decision making, disillusionment and mistrust among the stakeholders involved may follow.

 

Community-level impacts

When CBPs are successful, YD-informed language and concepts are incorporated into wider community infrastructure (eg, community policies and governance structures, relationships between organizations, funding streams). Sometimes formal coalitions develop. Although credit for some changes can be hard to attribute because of the multiple, often simultaneous contributions of many partners, YD-related community infrastructure changes are important to document, from the standpoint of both tracking successes and planning further CBP infrastructure-building efforts.

 

Evaluation design: Population monitoring

If YD data are collected about young people across a community, county, or state, a CBP may be able to make use of the data for planning and monitoring purposes. One strength of such data collection and management systems is that they generate information about a specific population, while allowing comparison with other similar communities elsewhere.

 

For example, Maine Marks is a project of the Children's Cabinet of Maine's Governor's Office (http://www.mainemarks.org/). Since 2001, this initiative has collected data on social indicators in order to track the well-being of Maine's young people. Students in grades 9-12 are surveyed by telephone each year. The survey uses questions developed by the Search Institute (http://www.search-institute.org/) to explore the attitudes and experiences of youth and to connect these with longer term indicators of success in life. For example, because dietary and exercise habits established early in life are significant predictors of obesity in adulthood, the survey measures these indicators in the category entitled "children and youth are respected, safe and nurtured in their communities."

 

Population monitoring has several advantages. These advantages include (1) combining positive YD indicators with traditional prevention indicators to create a fuller picture of youth outcomes; (2) making data accessible to multiple audiences (policy makers, direct service providers, parents) in a Web-based format; and (3) producing trend data that allow tracking of long-term outcomes and improvements for the future.

 

Legislators use Maine Marks data to guide policy-making and identify spending priorities, and state agencies use the information for strategic planning and performance budgeting. Social service and other nonprofit agencies use Maine Marks for program planning and grant planning. Maine citizens use the system to learn about Maine's youth and how they are cared for via state systems. The method's strengths are that partnerships can gain access to high-quality tools constructed by experts, that the partnership does not need expertise in data analysis to utilize the results, and that such systems may allow communities to compare their characteristics with those of similar communities elsewhere. These comparisons can be helpful in executing programs or in attracting attention to problems and successes in the community. Limitations of large-scale indicator projects like Maine Marks include a limited number of indicators and, sometimes, an inability to generate community-specific information about youth outcomes (eg, by zip code or school district).

 

Evaluation design: Community assets and needs assessments

Community assets and needs assessments are tools used to describe the environments of partnerships and the populations they serve. A good assessment allows leveraging of positives for greater success and targeting of weak areas needing support and investment. On the basis of the research of Kretzmann and McKnight, the Asset-Based Community Development Institute has developed tools that are available on-line and can be used by lay people to assess a community's assets for use in planning and directing change-oriented projects (http://www.northwestern.edu/ipr/abcd.html). Using a self-report questionnaire, the primary tool gathers information from participants about individual skills, community skills, interests, and experience, along with basic personal information. The results are used to produce a picture of the community's strengths and challenges. Workbooks guide leaders in using assessment results for action planning.

 

Limitations include costs for the planning tools, and a time- and labor-intensive planning process. The time investment can cut both ways, however; participating in an extensive assessment process can also contribute to buy-in from stakeholders who remain engaged throughout. And there is another advantage: whereas many study designs give only a retrospective picture, this type of model creates a picture for the future to guide planning. Strengths of this evaluation design include (1) involving the wide variety of audiences that make up a community (eg, children, the elderly, those of limited education), (2) providing a method for generating plans from the study's findings, and (3) allowing for repeated assessments as new perspectives and audiences are identified.

 

Youth as Coevaluators

Youth engagement and leadership are important evaluation enhancements for each of the evaluation designs described above. Including young people in planning, implementing, interpreting, and disseminating evaluations is both useful pragmatically and consistent philosophically with the YD approach. As detailed by Powers and Tiffany in their case studies of youth as YD researchers (in this journal supplement), engaging young people in evaluating CBPs serves several YD-related purposes and offers several advantages. These advantages include (1) reinforcing the value of young people as resources for the community; (2) adding the perspective of an important stakeholder group to the evaluation that may increase its ecological validity; and (3) allowing young people to develop new technical and marketable employment skills, leadership experiences, and decision-making capacity while engaging in civic activities.

 

Trade-offs are involved. A significant constraint of youth-led evaluation is the requirement for training and follow-up for both youth and the adults who are working with them. Youth need training in the required planning, evaluation, and presentation skills; adults need training in how to effectively share responsibility and decision-making power with youth.

 

One of the CBPs in the ACT for Youth initiative, the Partnership for Youth and Community Empowerment (PYCE), conducted a youth-led evaluation project in 2002. Youth involved in the partnership's programs were employed as evaluators to identify key issues for youth in their urban neighborhood. Youth received ongoing training in the design and implementation of a large-scale evaluation project, and then conducted surveys with young people and adult community members about issues related to health and education. The results of engaging youth in collecting data about themselves and their communities included highly credible and actionable data about the strengths and needs of young people in the community as well as public visibility for a youth-led community service project. The participating youth increased their analytical and problem-solving skills, and gained experience working in project teams. A second youth-led evaluation project from the same partnership trained youth in logic modeling for the purpose of program management and design, with similarly positive results.22

 

Another youth-as-coevaluators example can be seen in the work of the California-based Adolescent Health Work Group (AHWG) and the San Francisco Health Plan (SFHP). In 2001, the AHWG (http://www.ahwg.net/) and the SFHP spearheaded a project called Healthy Realities, a youth-led assessment of clinics serving adolescent patients aged 12-21. Over the course of several months, a group of young adults played central roles in designing each survey instrument and conducted a full-scale assessment of more than 10 San Francisco-based clinics to evaluate their overall accessibility and youth friendliness. The evaluation focused on the ease of making appointments, staff friendliness, waiting room atmosphere, and confidentiality of health-related information. Evaluation results were communicated back to clinics through report cards that rated each center on clinic policies, atmosphere, and ease of making appointments. Following the evaluation, young people and staff of AHWG worked one-on-one with clinic staff, providing them with intensive consulting and training in order to improve youth friendliness in several identified areas.

 

Applying and Using Evaluation Results

CBP development takes time, resources, and a long-term commitment from its key stakeholders. A well-planned evaluation strategy can support planning, implementation, and continuous quality improvement of the wide range of strategies that CBPs employ. When evaluation studies are conducted, results from each component should be used to reinforce, revise, or expand elements of the CBP's change theory. We conclude with an illustration of this approach.

 

A successful example can be seen in one CBP's efforts to engage youth and expand youth services in a medium-sized industrial city. The CBP consisted of partners from education, human service, local government, and other sectors. Youth services stakeholders in this community enjoyed preexisting, trusting relationships with each other, built on years of referral and networking. Although these stakeholders had never attempted an organized community-wide change effort, a core group formed a partnership and adopted the Search Institute's positive YD and asset-building approach. Rather than target direct service provision, the CBP focused on two populations, the human service organizations and community institutions that provide services and the youth who benefit from those services, as well as on the communities that provide the environment in which these two populations interact. Their efforts fell into six interrelated categories: (1) visioning, mission, and planning; (2) organizational preparation and readiness; (3) partnership development; (4) identifying small wins and approaches for success; (5) "in-flight" corrections; and (6) publicizing and celebrating successes.

 

Because CBP partners were not able initially to achieve consensus regarding common goals and strategies, they organized a facilitated retreat. The results were an organizational structure (including membership criteria and expectations for participation) and an agreed-upon aim of implementing an evidence-based, data-driven, and outcomes-focused change process to promote community mobilization and capacity building. The CBP promoted the community-wide adoption of a YD "lens" for reviewing and improving supports, opportunities, and services for youth, with an initial focus on education and public awareness of the YD perspective and partnership-building.

 

A central tool in this effort was a community-wide survey about the status of young people that generated tremendous interest and built momentum for change. To promote cohesion and shared purpose within the partnership, partners adopted common YD language and concepts, and agreed on common program outcomes to measure youth assets. Members then began recruiting powerful partners who gave legitimacy to their collaboration and provided access to important political interests, funding sources, and influential institutions. These new members participated in regular meetings, committed support and resources, and provided access for the collaboration's activities. An important benchmark for promoting a YD perspective was realized when important community funding sources linked youth asset outcomes to funding requirements.

 

After evaluating the CBP's change efforts, partners discovered that education about YD alone was not sufficient to meet community-wide change goals (eg, proportion of programs including youth in the planning and delivery of programs, number of community-based programs implemented meeting youth needs, and number of youth on community planning boards). In particular, although many agencies had incorporated aspects of a YD approach, they were having difficulty institutionalizing data-driven programs. In response, the CBP arranged for human service organizations and community institutions to receive additional technical assistance in organizational capacity-building and ongoing mentoring for technology transfer. Youth also received additional training in advocacy and organizational change skills, and community coalitions and faith-based groups received training in community mobilization and advocacy. After this barrier was addressed, cross-systems, intercommunity YD programming began to occur, and the CBP achieved better results.

 

This focus on capacity-building, instead of direct service provision, proved to be successful. In fact, the changes persisted in the face of a subsequent severe fiscal crisis. Celebration of CBP successes was interwoven with school improvement efforts, and the emphasis on a data-driven approach led to favorable media and community attention. Community-wide youth survey results allowed CBP members to speak authoritatively about the strengths and needs of young people in the community. Youth service organizations and community coalitions and groups were able to share measurable program outcomes with collaboration members, community members, and the media.

 

Once evidence of the CBP successes emerged, many partners had strong incentives both individually and organizationally to publicize them. Because many youth programs had a data-collection component built in, there was a continual "supply" of new programs and outcomes that demonstrated successes and community-wide support. Positive publicity was therefore easily generated and momentum was maintained.

 

Conclusion

This review has indicated that embedded evaluations for continuous learning and improvement provide one key to unleashing the considerable potential of CBPs focused on YD. When they start with a theory-of-change logic model elicited from key stakeholders, whether before the CBP is formed, after it has been launched, or both, targeted (specially tailored) evaluations can serve as partnership assets and drivers.

 

Evaluations of YD-focused CBPs mirror the complexity of the partnerships they study and guide. To manage their complexity, teams of evaluators are needed. Evaluators competent to work interactively with community partners, including youth who serve as coevaluators, constitute an important priority. Important meta-evaluation lessons learned are among the benefits of these complex evaluations of equally complex partnerships.

 

The novelty and complexity of these new evaluations do not rule out conventional evaluation methods. To the contrary, this review has provided several examples of conventional methods, and it has illustrated their import. The key is for evaluators and partnership leaders to make good choices about which methods to include.

 

Of course, the question of methods is dependent on fundamental questions of the aims, missions, goals, objectives, and accountabilities of the partnership. As with any evaluation, what matters is what needs to be measured, and this entails sorting out the essential priorities from a multitude of alternatives. Here, collaborative processes and products, emblematic of successful CBPs, also are indicative of successful evaluations.

 

Aside from the generic principles and issues presented in this review, there are no easy recipes, no cookie-cutter evaluation designs that fit every CBP. Because each CBP is at least somewhat distinctive, unique elements are to be expected in each evaluation design. In practical terms, this means that partnership leaders and evaluators must make good decisions. This review has achieved its primary aim if it has provided a context and a foundation for helping planners and evaluators make solid decisions, ones that enable them to develop, implement, and benefit from targeted evaluations of CBPs focused on YD.

 

REFERENCES

 

1. Catalano R, Berglund M, Ryan J, Lonczak H, Hawkins J. Positive youth development in the United States: research findings on evaluations of positive youth development programs. Ann Am Ac Pol Soc Sci. 2004;591:98-124. [Context Link]

 

2. Mitchell R, Agle B, Wood D. Toward a theory of stakeholder identification and salience: defining the principle of who and what really counts. Acad Manage Rev. 1997;22:853-886. [Context Link]

 

3. National Research Council and Institute of Medicine Committee on Community-Level Programs for Youth. Community Programs to Promote Youth Development. Washington, DC: National Academy Press; 2002. [Context Link]

 

4. Pittman K, Irby M, Tolman J, Yohalem N, Ferber T. Preventing Problems, Promoting Development, Encouraging Engagement: Competing Priorities or Inseparable Goals? Washington, DC: The Forum for Youth Investment Impact Strategies Inc; 2003. [Context Link]

 

5. Lawson H. The logic of collaboration in education and the human services. J Interprof Care. 2004;18:225-237. [Context Link]

 

6. Scales P, Leffert N. Developmental Assets: A Synthesis of the Scientific Research on Adolescent Development. Minneapolis, Minn: Search Institute; 1999. [Context Link]

 

7. Clayton S, Bolcoa J, Loots B, Lee C. Involving Youth in Public Policy. San Francisco, Calif: California Adolescent Health Collaborative; 2001. [Context Link]

 

8. US Department of Health and Human Services. Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide. Atlanta, Ga: Centers for Disease Control and Prevention; 2005. [Context Link]

 

9. Spiegel J, Bonet M, Yassi A, Tate RB, Concepcion M, Canizares M. Evaluating the effectiveness of a multi-component intervention to improve health in an inner-city Havana community. Int J Occup Environ Health. 2003;9:118-127. [Context Link]

 

10. National Crime Prevention Council. Embedding Prevention in State Policy and Practice: First Annual Evaluation Report. Vol 1. Washington, DC: Association for the Study and Development of Community; 2002. [Context Link]

 

11. Bloomberg L, Ganey A, Alba V, Quintero G, Alcantara LA. Chicano-Latino Youth Leadership Institute: an asset-based program for youth. Am J Health Behav. 2003;27(S1):S45-S51. [Context Link]

 

12. Barnes M, Sullivan H, Matka E. The Development of Collaborative Capacity in Health Action Zones: A Final Report From the National Evaluation. Birmingham, UK: University of Birmingham; 2004. [Context Link]

 

13. Nilsen P. Evaluation of community-based injury prevention programmes: methodological issues and challenges. Int J Inj Control Saf Promot. 2005;12(3):143-156. [Context Link]

 

14. Marks D, Sykes C. Evaluation of the European Union programme of community action on health promotion, information, education and training 1996-2000. Health Promot Int. 2002;17(2):105-118. [Context Link]

 

15. Lafferty C, Mahoney C. A framework for evaluating comprehensive community initiatives. Health Promot Pract. 2003;4(1):31-44. [Context Link]

 

16. Connell J, Kubisch A. Applying a theory of change approach to the evaluation of comprehensive community initiatives: progress, prospects, and problems. In: Fulbright-Anderson K, Kubisch A, Connell J, eds. New Approaches to Evaluating Community Initiatives: Theory, Measurement and Analysis. Vol 2. Washington, DC: Aspen Institute; 1998:15-45. [Context Link]

 

17. Fulbright-Anderson K, Kubisch A, Connell J, eds. New Approaches to Evaluating Community Initiatives: Vol 2, Theory, Measurement and Analysis. Queenstown, Md: Aspen Institute; 1998. [Context Link]

 

18. Weiss CH. Nothing as practical as good theory: exploring theory-based evaluation for comprehensive community initiatives for children and families. In: Connell JP, Kubisch AC, Schorr LB, Weiss CH, eds. New Approaches to Evaluating Community Initiatives: Concepts, Methods, and Contexts. Washington, DC: Aspen Institute; 1995:65-92. [Context Link]

 

19. Center for the Advancement of Collaborative Strategies in Health. Partnership Self-Assessment Tool 2.0. Available at: http://www.partnershiptool.net/. Accessed November 19, 2005. [Context Link]

 

20. Weick KE. Small wins: redefining the scale of social problems. Am Psychol. 1984;39(1):40-49. [Context Link]

 

21. Tierney J, Grossman J, Resch N. Making a Difference: An Impact Study of Big Brothers/Big Sisters. Philadelphia: Public/Private Ventures; 1995. [Context Link]

 

22. Peake K, Gaffney S, Surko M. Capacity-building with community-based youth workers. J Public Health Manag Pract. 2006;(suppl):S65-S71. [Context Link]