Documenting Progress and Using Feedback

OVERVIEW AND EVIDENCE BASE

What do we mean by this process?

Documenting Progress and Using Feedback is a process of gathering and using "data as an energizer" (Hogan & Murphey, 2002). When groups engage in this process, they measure, communicate, and use early and ongoing indicators of progress to assess and improve an initiative, rather than waiting until the intervention is over to assess what has changed. Because it can take many years to change population-level outcomes, indicators of long-term outcomes are of little use in guiding day-to-day activities and adjustments. This process helps groups engage in early and ongoing communication about goals, their project's theory of change, accomplishments, and their current and potential contribution toward making a difference. Participation in this process can also help initiatives identify and publicize successes that can help in Sustaining the Work.

Documenting Progress and Using Feedback is a key process to help groups understand what they are doing, how it contributes to their goals, and areas for adjustment.

How it works

Community initiatives are complex and dynamic, and present ongoing opportunities for a "continuous learning orientation" (Foster-Fishman, Berkowitz, Lounsbury, Jacobson, & Allen, 2001). As their work unfolds, participants often adapt to shifting conditions, discuss problems, and seek information and expertise to improve their intervention. By engaging in a process of Documenting Progress and Using Feedback, community initiatives consistently seek and respond to information and evaluation data (e.g., on intermediate outcomes or program effects) to improve their functioning and impact. This focus on documenting intermediate outcomes can help to (a) document progress (e.g., community and system changes), (b) celebrate accomplishments, (c) identify barriers to progress, and (d) redirect efforts to potentially more effective activities (Fawcett et al., 1996; Francisco, Paine, & Fawcett, 1993).

Although the functional mechanisms have not been explicitly tested, Documenting Progress and Using Feedback may help groups to:

  • Enhance the involvement of affected groups and other stakeholders in all aspects of the participatory action research process (Boothroyd, Fawcett, & Foster-Fishman, 2004; Fawcett, Boothroyd, Schultz, Francisco, Carson, & Bremby, 2003; Green, XXX; Minkler & Wallerstein, 2003; Whyte, XXXX).
  • Enhance the functioning of a partnership by helping to identify and provide feedback on what is (and is not) working (e.g., Goodman, Wandersman, Chinman, Imm, & Morrissey, 1996; Rowe, 1997).
  • Characterize and enhance program delivery to optimize effects (e.g., Shaw, Rosati, Salzman, Coles, & McGeary, 1997).
  • Overcome barriers as they arise, promote accountability, and achieve targeted goals (Foster-Fishman, Berkowitz, Lounsbury, Jacobson, & Allen, 2001).
  • Use clear benchmarks for progress (as indicated by action and evaluation plans) in an effort to be accountable to the partnership's theory of change and the community's commitment to change (e.g., Ploeg et al., 1996; Roussos & Fawcett, 2000). This process can help to illuminate the partnership's theory or logic model of action, and communicate data on the dynamic process of unfolding change (e.g., Roussos & Fawcett, 2000); see Developing a Framework or Model of Change. It can also help groups to avoid "data warehousing" and instead remain focused, rather than all-encompassing, in their critical reflection and sense-making efforts (Hogan & Murphey, 2002).
  • Characterize, analyze, and make sense of an initiative's often dynamic contribution to complex problems or goals (Fawcett, Boothroyd, Schultz, Francisco, Carson, & Bremby, 2003; Fawcett, Schultz, Carson, Renault, & Francisco, 2003; Paine-Andrews et al., 2002). As such, the process can also illuminate community readiness and capacity to change environments and improve population-level outcomes.
  • Shift attention from less intensive behavior change strategies (e.g., providing information to raise awareness among individuals) to more intensive strategies (e.g., policy changes) that target the broader environment for change (Florin, Mitchell, & Stevenson, 1993; Merzel & D'Afflitti, 2003).
  • Promote the effort's credibility and justify sustainability through the achievement and documentation of "quick wins" or intermediate goals (Roussos & Fawcett, 2000; Foster-Fishman, Berkowitz, Lounsbury, Jacobson, & Allen, 2001).
  • Estimate exposure or dose that might assist in differentiating between intervention failure ("this intervention does not work") and implementation failure ("this intervention is not being implemented as we or previous research had suggested") (Shortell et al., 2002; Sorensen, Emmons, Hunt, & Johnston, 1998; Kreuter, Lezin, & Young, 2000); a brief sketch of this idea follows this list.
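
To make the dose idea concrete, the sketch below (in Python) compares the delivered dose of each intervention component against its planned dose. It is a minimal illustration only: the component names, the planned and delivered counts, and the 80% fidelity cutoff are hypothetical, not drawn from the studies cited.

    # Hypothetical dose check: compare delivered vs. planned dose for each
    # intervention component to help separate implementation failure
    # ("we did not deliver the intervention as planned") from intervention
    # failure ("the intervention does not work"). All names and numbers
    # are illustrative.
    planned = {"cessation_workshops": 24, "smoke_free_policies": 10, "media_messages": 50}
    delivered = {"cessation_workshops": 9, "smoke_free_policies": 2, "media_messages": 48}

    FIDELITY_CUTOFF = 0.8  # assumed threshold for "implemented as planned"

    for component, plan in planned.items():
        dose = delivered.get(component, 0) / plan
        status = "on track" if dose >= FIDELITY_CUTOFF else "implementation gap"
        print(f"{component}: {dose:.0%} of planned dose ({status})")

    # If most components show large gaps, weak outcomes are more plausibly
    # an implementation failure than evidence that the intervention itself fails.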

Summative evaluations at the end of a community intervention's funding often indicate modest impacts due to weaknesses in delivery, such as limited duration and intensity, insufficient scope of activities, and inadequate penetration into the community (Merzel & D'Afflitti, 2003). Accordingly, the regular process of Documenting Progress and Using Feedback enables collaborators to review what they are doing, what it means, and how they can adjust to improve - rather than concluding only at the end, upon review, that their activities were insufficient.

Empirical and Experiential Evidence

An initiative cannot assess its degree of implementation or its impact without criteria and data for documentation, assessment, and evaluation. More importantly, this kind of regular assessment is critical for taking corrective and adaptive action. For example, in the COMMIT randomized trial of tobacco use cessation in 22 communities, smoking behaviors of heavy smokers - the primary target population - did not change (COMMIT Research Group, 1995a, 1995b). In data analyses at project completion, intervention worksites reported an average level of project implementation of only 37% (e.g., implementation of cessation workshops or smoke-free policies). Other summative project evaluations have likewise indicated that participants reported exposure mainly to less intensive forms of behavior change strategies. For example, in the Pawtucket Heart Health Program, 55% of program participants reported receiving screening services only, while only 10% of the city's population participated in exercise programs over the project's seven-year duration (Elder et al., 1986). In the Minnesota Heart Health Program, 60% of adults participated in screening services and 87% reported exposure to mass media health messages, yet only 4.1% of smokers participated in smoking cessation programs (Lando et al., 1995). Early and ongoing documentation and feedback regarding the reach and penetration of intervention components might have revealed the need to improve the quality and intensity of the interventions necessary for changing complex behaviors such as heavy smoking.

Similarly, the Fighting Back Initiative mobilized citizen task forces around a broadly defined, community-driven intervention focused on preventing alcohol and other drug use (Saxe et al., 1997). Communities had considerable latitude in implementing programs, policies, and practices to address outcomes. Summative evaluations indicated that, after ten years, intervention sites were not significantly different from matched comparison communities on 30-day alcohol use, binge drinking, and other indicators (Hallfors et al., 2002). At the project's end, experts in the field, in an attempt to describe the dose of the intervention, assigned implementation scores based on the relative strength of strategy implementation within a site, from 0 (no activity) to 5 (extensive activity). Across 14 communities, average implementation scores ranged from 1.1 to 3.9. Most sites scored minimally (2 or less) on their use of practices such as reducing youth access to tobacco and alcohol. The most used strategy across all sites was public information, one of the weakest forms of behavior change strategies. More regular feedback regarding the intensity of intervention delivery could have prompted sites to shift to more effective program strategies.

By contrast, Kansas communities in the School-Community Sexual Risk Reduction Initiative participated in a comprehensive and ongoing evaluation. Sites tracked community change - new or modified programs, policies, and practices - as a metric reflecting changing conditions in the environment that support widespread behavior change and subsequent population-level outcomes. Each month, sites documented and reflected on the (a) amount (number of changes reported), (b) intensity (use of behavior change strategies beyond providing information), (c) duration (length of time in place), and (d) penetration (delivery through different sectors and geographic areas) of community changes; a sketch of this kind of tracking follows below. Unlike evaluations of previous community initiatives, investigators were able to establish a link between these intermediate outcomes and birth rates. Results indicated that birth rates decreased in Target Area A but not in Target Area B, and the intervention was more intensive and comprehensive in Area A than in Area B in terms of attention to program components, risk factors, and distribution by sector (Paine-Andrews et al., 2002).
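
As a rough illustration of this kind of monthly tracking, the sketch below (in Python) summarizes a log of documented community changes along the four dimensions named above. The record structure, the strategy and sector categories, and the example entries are hypothetical, not the initiative's actual coding system.

    from collections import Counter
    from dataclasses import dataclass

    # Hypothetical record of one documented community change (a new or
    # modified program, policy, or practice). Fields are illustrative.
    @dataclass
    class CommunityChange:
        description: str
        strategy: str         # e.g., "information", "skill_building", "policy"
        sector: str           # e.g., "school", "health", "media"
        months_in_place: int  # how long the change has been in place

    def monthly_summary(changes):
        """Summarize (a) amount, (b) intensity, (c) duration, (d) penetration."""
        n = len(changes)
        return {
            "amount": n,  # (a) number of changes reported
            # (b) share of changes using strategies beyond providing information
            "intensity": sum(c.strategy != "information" for c in changes) / n,
            # (c) average length of time changes have been in place (months)
            "duration": sum(c.months_in_place for c in changes) / n,
            # (d) distribution of changes across community sectors
            "penetration": dict(Counter(c.sector for c in changes)),
        }

    log = [
        CommunityChange("Peer-education program added in middle school", "skill_building", "school", 6),
        CommunityChange("Clinic extends hours for adolescent services", "policy", "health", 2),
        CommunityChange("Awareness posters distributed citywide", "information", "media", 1),
    ]
    print(monthly_summary(log))

A fuller system would also record dates and geographic areas so that penetration could be assessed over places as well as sectors.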

Overall, the study of community initiatives, especially systematic measurement of their complex and dynamic interventions and intermediate outcomes, can aid in better understanding their effectiveness. The process of Documenting Progress and Using Feedback can help to distinguish intervention failure from implementation failure, often called Type III error (Cook & Campbell, 1979). Without data on implementation of the intervention and on intermediate outcomes, it is difficult to establish an association between an intervention and reported effects. Similarly, in the face of null results, it can be unclear whether the study design was without merit, implementation was flawed, or large populations were reached by a weak intervention (Boothroyd, Fawcett, & Foster-Fishman, 2003).

Implications for Research and Practice

At present, much of the available research on Documenting Progress and Using Feedback does not explicitly manipulate or test this process and its effects on community change and improvement. Although this process has been identified in several empirical and experiential reviews as a key ingredient for advancing change, more systematic evaluations of its effects are needed. Such research would provide a better understanding of the factors that enable communities to come together and address shared problems and goals.

Some key research questions include: (a) What is the most effective dose of a community intervention (e.g., duration, penetration, exposure)? (b) How might interactions between intervention dose and population characteristics affect program planning and implementation? (c) What quantitative and qualitative methods best address stakeholders' questions and interests? (d) How, by whom, and under what circumstances are data best communicated to ensure maximum impact? And (e) how can we measure and analyze the contribution of intervention components to best practices for prevention?

Overall Recommendation for Practice

Based on research and experience, we highly recommend Documenting Progress and Using Feedback as a key process to advance community change and improvement.