Learn about the importance of evaluating community initiatives and follow step-by-step methods in this chapter to evaluate your group's work.
What does it mean to evaluate a community initiative?
Why should you evaluate a community initiative?
When should you evaluate a community initiative?
How do you begin an evaluation?
Whenever you begin a new job or start a project, you will probably want to evaluate your work. You might ask yourself: "Is my work meeting my expectations and those of my boss, my administrator, or our target audience?"
In a similar fashion, community initiatives need to be "evaluated" along the way. Asking the community if your action plans are actually working is one important example of how to begin an evaluation. If the results are positive, that suggests your initiative is heading in the right direction, and you probably should keep heading there. However, evaluation is also essential in helping one see what "course corrections" might need to be made if the initiative is not heading in the direction you planned.
This chapter will explain the importance of evaluation, and will offer step-by-step methods to evaluate your group's work. Then, when you discover that your efforts are beginning to make a positive difference, you can continue on with even more energy and enthusiasm!
What does it mean to evaluate a community initiative?
Basically, it means to determine the value of the work. You have developed and implemented an initiative in your community, and you want to know how well it's working. Evaluation provides you with this feedback.
In many avenues of life, we get feedback right away. You hit the brakes, the car stops. You shoot a basketball; it either goes in or it doesn't. That's instant evaluation, and it's completely understandable. There's no ambiguity. But with more complex events, such as social interventions, the results are not always as clear. (Sometimes they are--you schedule an event and people are lining up around the block to get in--that's rapid and clear evaluation; but it's rare). That's why you need to put more energy and thought into finding out how you did. And that's basically what evaluation is all about--giving you information on the value of your work.
In this chapter, we'll talk about evaluation as a means of obtaining feedback, data and information about your group and its activities. By using this information, you can decide what aspects of your action plan work, and what areas need improvement. When you evaluate your program, then, you are gathering information to help you draw conclusions about your project and the efforts of your group. After you have drawn conclusions from the information, you can make any necessary changes to your goals and/or action plan.
The sections in this chapter will focus specifically on ways to gather these valuable data. The next chapter will discuss methods to best use the data collected to strengthen and improve the initiative. Resistance to evaluation is common, however, and to gain the greatest benefits from evaluation, you must first overcome some common misconceptions about it.
Fears of evaluation
Evaluation can be frightening to many people. Generally, these fears fall into three types--"I don't know how;" "I don't have time;" or "The results might be negative and hurt us." All of these are valid concerns. But they shouldn't be so discouraging as to outweigh the benefits of doing an evaluation. Here are some responses to these concerns:
I don't know how to do evaluation.
Well, that's why we're here. We'll help you through the steps of how to plan an evaluation of your program, how to do the evaluation itself, and then how to use it to help your group.
I don't have the time.
Evaluation can take some time to do. That's absolutely true. However, doing an evaluation now will save you a lot more time "down the road," as it will point out potential problems while they're still small, rather than after a disaster occurs.
And there is another time-saving benefit. Once you begin to record the kinds of information you will need for your evaluation, it will become just a regular part of your routine operations. It will take less time later on. The hardest part is the beginning.
The results may be negative or hurt us.
This is a possibility. However, it's unlikely to happen if you use evaluation from early on and don't let little problems grow into big problems. And remember that any negative results you may find should actually be helpful to you, at least in the long run. They will help you improve the quality of your program or initiative-- which should be one of your own goals from the start.
Why should you evaluate a community initiative?
Being successful demands careful attention during the beginning, middle, and end of a project. If a violinist wants to learn a new piece of music for an upcoming concert, for example, she would prepare by practicing for many hours each day. But, if she never asks her teacher to listen to her play, she may be playing the music too slowly, too fast, too softly, or too loudly. If she never knows the proper way to play the piece--if she never gets any feedback--all of the practice in the world won't help her sound in tune and in time on the night of the performance.
Like the violinist, community groups need to pay careful attention to feedback during the beginning, middle, and end of their projects. An initiative can devote a great deal of time and energy to working on meeting its goals. But, if the work isn't heading in the right direction, all of those long hours and hard work can lead to frustration instead of a feeling of success. Evaluation tells the group how it's doing and helps identify any necessary changes along the way that will help you stay "in tune" with your own goals and the needs of the community.
There are many reasons why evaluations are valuable. Let's look at a few examples of ways in which evaluation can benefit a community group.
- Success is reinforcing - it brings more resources your way. There is an old adage that says, "Nothing succeeds like success." It stands to reason that the more successful your group's work proves to be, the more support and encouragement you might receive from members of the community and maybe even from funders. Evaluation can document your success, with facts, figures, and examples. If volunteer hours in your organization increased by 100% last year, or if every single child in your community was up-to-date on immunizations, those types of achievements can bring new resources your way. In other words, if evaluation can provide concrete examples of your group's successes, that can only be advantageous. In this way, evaluation can help you "toot your own horn."
- Failure is instructive. Even if your work falls short of its goals - and even if your program falls flat on its face - that knowledge can be helpful too. It may be painful in the short run. Yet negative feedback, or a negative evaluation, can really help you in the longer-range scheme of things. At least you know where you stand, and you have fewer illusions. Once you have dusted yourself off, you will probably learn from the evaluation you received. Chances are you won't make the same mistakes again, and you are now in a better position to make improvements.
- Evaluation can make you feel good. Being able to see your successes and the value of your work will obviously boost your spirits and motivate you to continue with your work. Again, this works both ways, but even negative aspects should be seen as an opportunity to learn about what works, and not as a failure!
- Evaluation raises the chances of further action. Once you have completed your first evaluation, you know what has worked and what has not for your group. You can modify the tactics that didn't work as well as planned, and reinforce those areas that were successful. So now you can take further action with an even greater chance of success! Others will notice this success and may join or help your group, further increasing your chances that your program will make a positive impact.
- The evaluation can help you understand important aspects of the initiative. You've just finished your first evaluation. The results may indicate some part of your initiative worked really well. For example, the free cholesterol screening at the local health clinic you sponsored was jam-packed with people waiting to be tested. On the other hand, no one is showing up to your monthly Healthy Cooking classes. Maybe it's because the classes are only held during the day, or maybe the classroom is located too far away from most of the people in your community. There are many possible reasons your project might not work as planned. Evaluation will help you understand why things worked, or didn't work, as they did.
When should you evaluate a community initiative?
Evaluation should take place during several stages of your group's life.
When your plan is in action:
- Determine baselines for behaviors you wish to change. If you want to know how much change your program has brought about, you'll need to know what was happening before your group got started.
- Focus on the impact your work is having on the community.
- Continue revising and updating action plans.
- Keep the group strong and focused on the goals at hand. You may want to use a survey that appraises your community goals, and use the feedback to change your planned priorities.
When some of your action plans are complete:
- Use the evaluation to help the group continue to measure its impact on the community, and to create plans for continuing helpful programs for the future. For example, community level indicators will tell you if your interventions are having an impact on the bottom line.
How do you begin an evaluation?
Suppose you know you want to conduct an evaluation. But where do you begin? The process of evaluating can sound overwhelming. However, remember that what you've already accomplished may have seemed hard to imagine at the beginning. Evaluation can be broken down into several parts to make it more manageable.
Before we jump into specifics, here is a general thought to share with your group members: your evaluation should address questions that are important to you, members of your community, and those who are providing financial support. Of course, finances and time constraints might limit the kind of information that can be collected. Try to ask the questions that will ultimately help your group succeed. Because each group is unique, the evaluation should be as well. Nonetheless, below are some questions of interest to many groups. Would they be of interest to yours?
- How much did the community participate?
- What programs, policies or practices have changed in the community?
- Have people's behaviors changed? If so, what kinds, and how?
- Are those changes due to your efforts?
Tips for implementing a successful evaluation
You have to want to evaluate.
The first step is internal. You have to be motivated to do the evaluation, or else it will be half-hearted, if it happens at all. And you need to be clear on the purpose of the evaluation. Why do you want to do it? For example, if your project was to reduce smoking among adolescents, you may want to find out if your program has led to stricter enforcement of laws regulating the sale of tobacco to minors.
You need to evaluate in terms of your objectives or goals.
When you planned your initiative, you should have identified your specific objectives. What exactly are you trying to achieve or accomplish? One big advantage of having specific objectives is that they will guide your evaluation. For example, suppose your objective is to reduce the percentage of local high school students who smoke to 5% by May 2002. With an objective that specific, your evaluation standard is easy to identify: it's simply the percentage of students who are smokers on that target date.
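With a measurable objective like that, checking it reduces to a simple comparison. The sketch below illustrates the idea; the target, survey size, and smoker count are all invented for illustration:

```python
# Hypothetical check of a percentage-based objective.
# The objective: at most 5% of students smoke by the target date.
# The survey figures below are made-up example numbers.
target_pct = 5.0
students_surveyed = 800
smokers = 52

observed_pct = smokers / students_surveyed * 100
objective_met = observed_pct <= target_pct
print(f"{observed_pct:.1f}% of surveyed students smoke; objective met: {objective_met}")
```

Whatever the numbers turn out to be, the point is that a specific, dated, quantified objective hands you the evaluation standard for free.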
For each of your objectives, you need to identify criteria, or indicators, that will provide reliable and valid measures of progress.
You will need to develop measures that tell you what is really happening. Once again, your measures (or indicators) should be based on your objectives.
For example, if your objective is to reduce youth violence in your local high school, then some possible measures might be hospital admissions records for violence-related injuries, attendance at the school's conflict resolution training seminar, or police records of arrests of youths for assault or carrying concealed weapons.
If your objective were to increase the availability of heart-healthy foods in your community, some measures might be the number of restaurants that offer a low-fat menu section, the percentage of milk sold in local grocery stores that is skim, or the amount of shelf space used to display lean meat versus higher-fat cuts.
You need to collect data on each of these indicators.
Sometimes you can find the indicators you need from existing sources. For example, if you were interested in increasing library borrowing, or in reducing false fire alarms, you could gather existing data from the library or fire department. But sometimes data on your chosen indicators may not be available. Suppose, for instance, your number one issue was speeding along residential streets. The key information here unfortunately may not exist. In those cases, the local police might be willing to help collect it; or you and your group might need to collect it yourselves.
Either way, if you can assemble "before" and "after" statistics on your chosen indicators, you can use them to help determine whether your program or initiative made a positive difference. Did borrowing go up? Did traffic slow down? The data here will do a lot of the talking.
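As a rough sketch of how such before-and-after comparisons might be tallied, here is a minimal example; the indicator names and all of the figures are hypothetical:

```python
# Compare "before" (baseline) and "after" (follow-up) values for each indicator.
# Indicator names and all figures are hypothetical examples.
baseline = {"library_loans_per_month": 1200, "false_fire_alarms_per_month": 18}
followup = {"library_loans_per_month": 1500, "false_fire_alarms_per_month": 11}

def percent_change(before, after):
    """Percent change relative to the baseline value."""
    return (after - before) / before * 100

for indicator, before in baseline.items():
    after = followup[indicator]
    print(f"{indicator}: {before} -> {after} ({percent_change(before, after):+.1f}%)")
```

Whether a change counts as "positive" depends on the indicator: loans going up and false alarms going down are both improvements, which is why each indicator needs a stated direction of success.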
Use the results to adjust the program or intervention as necessary.
Are you meeting the objectives you had planned? If so, no adjustment may be needed. If you are not meeting those objectives, the data may indicate what changes need to be made to get back on track. For instance, in our example on high school smokers, if the percentage of high school students who quit smoking is not very high, you might want to change the content of the high school tobacco education program now being given or add a new program to help them quit.
Ask yourself, "What questions do I want to answer?" That's a key first step. Now, how do you answer them? Each of the following sections in this chapter will thoroughly explain methods that you can use to evaluate your initiative.
Resources
The Action Catalogue is an online decision support tool intended to help researchers, policy-makers, and others who want to conduct inclusive research find the method best suited to their specific project needs.
Are You Ready to Evaluate your Coalition? poses 15 questions to help your group decide whether your coalition is ready to evaluate itself and its work.
CDC Evaluation Resources provides an extensive list of resources for evaluation, as well as links to key professional associations and key journals.
Written by Heather Weiss, Evaluating Community-Based Initiatives is a special edition of The Evaluation Exchange, a periodical published by the Harvard Graduate School of Education. The issue provides ample information about community initiatives.
Evaluating Your Community-Based Program is a handbook designed by the American Academy of Pediatrics and includes extensive material on a variety of topics related to evaluation.
The Magenta Book - Guidance for Evaluation provides an in-depth look at evaluation. Part A is designed for policy makers. It sets out what evaluation is, and what the benefits of good evaluation are. It explains in simple terms the requirements for good evaluation, and some straightforward steps that policy makers can take to make a good evaluation of their intervention more feasible. Part B is more technical, and is aimed at analysts and interested policy makers. It discusses in more detail the key steps to follow when planning and undertaking an evaluation and how to answer evaluation research questions using different evaluation research designs. It also discusses approaches to the interpretation and assimilation of evaluation evidence.
The Collective Impact Forums podcast episode Measuring What Matters With Community-Led Monitoring speaks with the International Treatment Preparedness Coalition on supporting data gathering and analysis centered on and led by community members.
The Performance Measurement for Public Health Policy was developed by APHA and the Public Health Foundation to help health departments and their partners assess and improve the performance of their policy activities; this tool is the first to focus explicitly on performance measurement for public health policy. The first section of the tool gives a brief overview of the role of health departments in public health policy, followed by an introduction to performance measurement within the context of performance management. It also includes a framework on page 5 for conceptualizing the goals and activities of policy work in a health department. The second section of the tool consists of tables with examples of activities that a health department might engage in and sample measures and outcomes for these activities. The final section of the tool provides three examples of how a health department might apply performance measurement and the sample measures to assess its policy activities.
The Role of Community-Based Participatory Research is a comprehensive website developed by the U.S. Department of Health and Human Services that is dedicated to providing information on CBPR.