
12. Evaluating the Initiative

This toolkit aids in developing an evaluation of a community program or initiative.

  1. Identify key stakeholders (i.e., people or organizations that have something to gain or lose from the evaluation) and what they care about. Include:
    1. Those involved in operating the program or initiative (e.g., staff, volunteers, community members, sponsors, and collaborators)
    2. Those prioritized groups served or affected by the effort (e.g., those experiencing the problem, public officials)
    3. Primary intended users of the evaluation (e.g., program or initiative staff, community members, outside researchers, funders).
       
      Related resources:
      Developing a Plan for Identifying Local Needs and Resources
      Understanding and Describing the Community
      Understanding Community Leadership, Evaluators, and Funders: What Are Their Interests?
      Choosing Evaluators
       
  2. Describe the program or initiative’s framework or logic model (e.g., what the program or effort is trying to accomplish and how it is doing so; a simple illustrative sketch of these components follows the resource list below). Include information about:
    1. Purpose or mission (e.g., the problem or goal to which the program, effort, or initiative is addressed)
    2. Context or conditions (e.g., the situation in which the effort will take place; factors that may affect outcomes)
    3. Inputs: resources and barriers (e.g., resources may include time, talent, equipment, information, and money; barriers may include a history of conflict, environmental factors, and economic conditions)
    4. Activities or interventions (i.e., what the initiative will do to effect change and improvement) (e.g., providing information and enhancing skills; enhancing services and support; modifying access, barriers and opportunities; changing the consequences; modifying policies and broader systems)
    5. Outputs (i.e., direct evidence of having performed the activities) (e.g., number of services provided)
    6. Intended effects or outcomes
      1. Shorter-term (e.g., increased knowledge or skill)
      2. Intermediate (e.g., changes in community programs, policies, or practices)
      3. Longer-term (e.g., change in behavior or population-level outcomes)
         
        Related resources:
        Developing an Evaluation Plan
        Proclaiming Your Dream: Developing Vision and Mission Statements
        Developing a Plan for Identifying Local Needs and Resources
        Identifying Community Assets and Resources
        Identifying Targets and Agents of Change: Who Can Benefit and Who Can Help
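
For groups that want to keep their logic model in a structured, reviewable form, the components above can also be written down as a simple data structure. The following is only an illustrative sketch in Python; the component names follow the list above, and all of the example content (a hypothetical tobacco-prevention effort) is invented.

# A minimal, hypothetical sketch of a logic model as a plain data structure.
# Component names follow the toolkit; the example content is invented.
logic_model = {
    "purpose": "Reduce youth tobacco use in the county",
    "context": ["Rural county", "Limited public transportation"],
    "inputs": {
        "resources": ["Volunteer time", "Coalition staff", "Grant funding"],
        "barriers": ["History of conflict among agencies", "Economic downturn"],
    },
    "activities": [
        "Provide information and enhance skills",
        "Modify access, barriers, and opportunities",
        "Change policies and broader systems",
    ],
    "outputs": ["Number of trainings delivered", "Number of policy proposals drafted"],
    "outcomes": {
        "shorter_term": ["Increased knowledge and skills"],
        "intermediate": ["New community programs, policies, and practices"],
        "longer_term": ["Lower youth smoking rates (population level)"],
    },
}

# Print a quick outline so stakeholders can review the framework at a glance.
for component, detail in logic_model.items():
    print(f"{component}: {detail}")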
         
  3. Focus the evaluation design – what the evaluation aims to accomplish, how it will do so, and how the findings will be used.
    Include a description of:
    1. Purpose or uses: what the evaluation aims to accomplish. Purposes may include: 1) Gain understanding about what works, 2) Improve how things get done, 3) Determine the effects of the program for individuals who participate, 4) Determine the effects of the program or initiative on the community
    2. Evaluation questions: Indicate what questions are important to stakeholders, including those related to:
      1. Process measures
        • Planning and Implementation Issues: How well was the initiative planned and implemented? Did those most affected contribute to the planning, implementation and evaluation of the effort? How satisfied are participants with the program?
      2. Outcome measures
        • Attainment of objectives (e.g., How well has the program or initiative met its stated objectives?)
        • Impact on participants (e.g., How much and what kind of a difference has the program or initiative made for its prioritized groups?)
        • Impact on community (e.g., How much and what kind of a difference has the program or initiative made on the community? Were there any unintended consequences, either positive or negative?)
    3. Methods: what type of measurement and study design should be used to evaluate the effects of the program or initiative? Typical designs include case studies and more controlled experiments. By what methods will data be gathered to help answer the evaluation questions? Note the appropriate methods to be used, including:
      1. Surveys about satisfaction and importance of the initiative
      2. Goal attainment reports
      3. Behavioral surveys
      4. Interviews with key participants
      5. Archival records
      6. Observations of behavior and environmental conditions
      7. Self-reporting, logs, or diaries
      8. Documentation system and analysis of contribution of the initiative
      9. Community-level indicators of impact (e.g., rates of HIV)
      10. Case studies and experiments
         
        Related resources:
        Our Evaluation Model: Evaluating Comprehensive Community Initiatives
        A Framework for Program Evaluation: A Gateway to Tools
        Measuring Success: Evaluating Comprehensive Community Health Initiatives
        Providing Feedback to Improve the Initiative
        Gathering and Using Community-Level Indicators
        Rating Member Satisfaction
        Conducting Interviews with Key Participants to Analyze Critical Events
        A Framework for Program Evaluation
        Reaching Your Goals: The Goal Attainment Report
        Constituent Survey of Outcomes: Ratings of Importance
        Rating Community Goals
        Gathering Information: Monitoring Your Progress
        Behavioral Surveys
         
  4. Gather credible evidence – decide what counts as evidence and what features affect the credibility of the evaluation, including:
    1. Indicators of success – specify the criteria used to judge the success of the program or initiative. Translate these into measures or indicators of success, including:
      1. Program outputs
      2. Participation rates
      3. Levels of satisfaction
      4. Changes in behavior
      5. Community or system changes (i.e., new programs, policies, and practices)
      6. Improvements in community-level indicators (a brief indicator sketch follows this list)
    2. Sources of evidence (e.g., interviews, surveys, observation, review of records). Indicate how evidence of your success will be gathered
    3. Quality – estimate the appropriateness and integrity of the information gathered, including its consistency (reliability) and accuracy (validity). Indicate how the quality of measures will be assured.
    4. Quantity – estimate what amount of data (or time) is required to evaluate effectiveness.
    5. Logistics – indicate who will gather the data, by when, from what sources, and what precautions and permissions will be needed.
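
A simple way to look at improvements in community-level indicators is to compare each new estimate with a baseline value. The sketch below, in Python, is purely illustrative; the indicator, years, and rates are invented, and real indicator data would come from the sources of evidence identified above.

# Hypothetical sketch: summarizing a community-level indicator against a baseline.
# The indicator name, years, and rates are invented for illustration only.
indicator = "Teen smoking rate (% of high school students)"
baseline_year, baseline_rate = 2018, 22.0
observations = {2019: 20.5, 2020: 19.1, 2021: 17.8}  # yearly survey estimates

for year, rate in sorted(observations.items()):
    change = rate - baseline_rate
    pct_change = 100 * change / baseline_rate
    print(f"{year}: {indicator} = {rate:.1f} "
          f"({change:+.1f} points, {pct_change:+.0f}% vs. {baseline_year} baseline)")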
       
  5. Outline and implement an evaluation plan. Indicate how you will:
    1. Involve all key stakeholders (e.g., members of prioritized groups, program implementers, grantmakers) in identifying indicators of success, documenting evidence of success, and sensemaking about the effects of the overall initiative and how it can be improved.
    2. Track implementation of the initiative’s intervention components (a simple documentation-log sketch follows this list)
    3. Assess exposure to the intervention
    4. Assess ongoing changes in specific behavioral objectives
    5. Assess ongoing changes in specific population-level outcomes
    6. Examine the contribution of intervention components (e.g., a program or policy) to possible improvements in behavior and outcomes at the level of the whole community or population
    7. Consider the ethical implications of the initiative (e.g., Do the expected benefits outweigh the potential risks?)
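
A simple documentation log can support several of the steps above: tracking implementation of intervention components, assessing exposure, and recording community and system changes as they occur. The sketch below is only an illustration in Python; the event types and entries are invented, and in practice the log would more likely live in a spreadsheet or online documentation system.

from collections import Counter
from datetime import date

# Hypothetical documentation log: each entry records one discrete event.
# All dates, types, and descriptions are invented for illustration.
event_log = [
    {"date": date(2024, 3, 4), "type": "community change", "what": "After-school program adopted"},
    {"date": date(2024, 6, 17), "type": "service provided", "what": "Cessation class delivered"},
    {"date": date(2024, 9, 2), "type": "policy change", "what": "Smoke-free parks ordinance passed"},
]

# Tally events by type to show the unfolding of the intervention over time.
for event_type, count in Counter(entry["type"] for entry in event_log).items():
    print(f"{event_type}: {count}")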
       
  6. Make sense of the data and justify conclusions. Indicate how each aspect of the evaluation will be addressed:
    1. Standards – the values stakeholders hold about what makes a good evaluation. Indicate how each key standard will be assured:
      1. Utility standards: to ensure that the evaluation is useful and answers the questions that are important to stakeholders, including:
        • Information scope and selection: Information collected should address pertinent questions about the program, and it should be responsive to the needs and interests of clients and other specified stakeholders.
        • Report clarity: evaluation reports should clearly describe the program being evaluated, including its context, and the purposes, procedures, and findings of the evaluation.
        • Evaluation impact: evaluations should be planned, conducted, and reported in ways that encourage follow-through by stakeholders, so that the evaluation findings will be used.
      2. Feasibility standards: to ensure that the evaluation makes sense and its steps are viable and pragmatic, including:
        • Practical procedures: the evaluation procedures should be practical, to keep disruption of everyday activities to a minimum while needed information is obtained.
        • Political viability: the evaluation should be planned and conducted with anticipation of the different positions or interests of various groups.
        • Cost effectiveness: the evaluation should be efficient and produce enough valuable information that the resources used can be justified.
      3. Propriety standards: to ensure that the evaluation is ethical and that it is conducted with regard for the rights and interests of those involved, including:
        • Service orientation: evaluations should be designed to help organizations effectively serve the needs of all participants.
        • Formal agreements: the responsibilities in an evaluation (what is to be done, how, by whom, when) should be agreed to in writing, so that those involved are obligated to follow all conditions of the agreement, or to formally renegotiate it.
        • Rights of participants: evaluation should be designed and conducted to respect and protect the rights and welfare of all participants in the study.
        • Complete and fair assessment: the evaluation should be complete and fair in its examination, recording both strengths and weaknesses of the program being evaluated.
        • Conflict of interest: conflict of interest should be dealt with openly and honestly, so that it does not compromise the evaluation processes and results.
      4. Accuracy standards: to ensure that the evaluation findings are considered correct. Indicate how the accuracy standards will be met, including:
        • Program documentation: the intervention should be described and documented clearly and accurately, so that what is being evaluated is clearly identified.
        • Context analysis: the context in which the initiative exists should be thoroughly examined so that likely influences on the program’s effects can be identified.
        • Valid information: the information gathering procedures should be chosen or developed and then implemented in such a way that they will assure that the interpretation arrived at is valid.
        • Reliable information: the information gathering procedures should be chosen or developed and then implemented so that they will assure that the information obtained is sufficiently reliable.
        • Analysis of quantitative and qualitative information: quantitative information (i.e., data from observations or surveys) and qualitative information (e.g., from interviews) should be appropriately and systematically analyzed so that evaluation questions are effectively answered.
        • Justified conclusions: the conclusions reached in an evaluation should be explicitly justified, so that stakeholders can understand their worth.
    2. Analysis and synthesis – indicate how the evaluation report will analyze and summarize the findings (a brief analysis sketch follows this list).
    3. Sensemaking and interpretation – how will the evaluation report communicate what the findings mean? How will stakeholders use the information to help answer the evaluation questions?
    4. Judgments – statements of worth or merit, compared to selected standards. How will the group communicate what the findings suggest about the value added by the effort?
    5. Recommendations – how will the group identify recommendations based on the results of the evaluation?
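
When evaluation questions call for quantitative analysis, even a basic summary of change can support sensemaking, judgments, and recommendations. The sketch below is illustrative only: it assumes one behavioral survey item scored on an invented 1 to 5 scale with made-up pre- and post-program scores, and a real analysis would add appropriate statistical tests and qualitative findings from interviews.

from statistics import mean

# Hypothetical pre/post comparison for one survey item (1-5 scale, invented data).
pre_scores = [2, 3, 2, 4, 3, 2, 3]    # participants' scores before the program
post_scores = [3, 4, 3, 5, 4, 3, 4]   # the same participants' scores afterward

pre_mean, post_mean = mean(pre_scores), mean(post_scores)
print(f"Pre mean:  {pre_mean:.2f}")
print(f"Post mean: {post_mean:.2f}")
print(f"Average change: {post_mean - pre_mean:+.2f}")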
       
  7. Use the information to celebrate, make adjustments, and communicate lessons learned. Take steps to ensure that the findings will be used appropriately, including:
    1. Design – communicate how questions, methods, and findings are constructed to address agreed-upon uses
    2. Preparation – anticipate future uses of findings; how to translate knowledge into practice
    3. Feedback and sensemaking – how communication and shared interpretation will be facilitated among all users
    4. Follow-up – support users’ needs during evaluation and after receiving findings, including to celebrate accomplishments and make adjustments
    5. Dissemination – communicating lessons learned to all relevant audiences in a timely manner
       
 

Featured Toolkit Example

Depression Self-Management for Rural Women with Disabilities

The Center for Research on Women with Disabilities designed and implemented this project at nine centers for independent living.


 

Featured Toolkit Example

The North Karelia Project

In the late 1960s and into the '70s, Finland had the world's highest death rate due to cardiovascular disease (CVD).

