Learn how you can involve participants in the scope of the project, including its evaluation, and how that's likely to benefit the project's final outcomes.
What is participatory evaluation?
Why would (and why wouldn't) you use participatory evaluation?
When might you use participatory evaluation?
Who should be involved in participatory evaluation?
How do you conduct a participatory evaluation?
Experienced community builders know that involving stakeholders - the people directly connected to and affected by their projects - in their work is tremendously important. It gives them the information they need to design, and to adjust or change, what they do to best meet the needs of the community and of the particular populations that an intervention or initiative is meant to benefit. This is particularly true in relation to evaluation.
As we have previously discussed, community-based participatory research can be employed in describing the community, assessing community issues and needs, finding and choosing best practices, and/or evaluation. We consider the topic of participatory evaluation important enough to give it a section of its own, and to show how it fits into the larger participatory research picture.
It's a good idea to build stakeholder participation into a project from the beginning. One of the best ways to choose the proper direction for your work is to involve stakeholders in identifying real community needs, and the ways in which a project will have the greatest impact. One of the best ways to find out what kinds of effects your work is having on the people it's aimed at is to include those on the receiving end of information or services or advocacy on your evaluation team.
Often, you can see most clearly what's actually happening through the eyes of those directly involved in it - participants, staff, and others who are involved in taking part in and carrying out a program, initiative, or other project. Previously, we have discussed how you can involve those people in conducting research on the community and choosing issues to address and directions to go in. This section is about how you can involve them in the whole scope of the project, including its evaluation, and how that's likely to benefit the project's final outcomes.
What is participatory evaluation?
When most people think of evaluation, they think of something that happens at the end of a project - that looks at the project after it's over and decides whether it was any good or not. Evaluation actually needs to be an integral part of any project from the beginning. Participatory evaluation involves all the stakeholders in a project - those directly affected by it or by carrying it out - in contributing to the understanding of it, and in applying that understanding to the improvement of the work.
Participatory evaluation, as we shall see, isn't simply a matter of asking stakeholders to take part. Involving everyone affected changes the whole nature of a project from something done for a group of people or a community to a partnership between the beneficiaries and the project implementers. Rather than powerless people who are acted on, beneficiaries become the copilots of a project, making sure that their real needs and those of the community are recognized and addressed. Professional evaluators, project staff, project beneficiaries or participants, and other community members all become colleagues in an effort to improve the community's quality of life.
This approach to planning and evaluation isn't possible without mutual trust and respect. These have to develop over time, but that development is made more probable by starting out with an understanding of the local culture and customs - whether you're working in a developing country or in an American urban neighborhood. Respecting individuals and the knowledge and skills they have will go a long way toward promoting long-term trust and involvement.
The other necessary aspect of any participatory process is appropriate training for everyone involved. Some stakeholders may not even be aware that project research takes place; others may have no idea how to work alongside people from different backgrounds; and still others may not know what to do with evaluation results once they have them. We'll discuss all of these issues - stakeholder involvement, establishing trust, and training - as the section progresses.
The real purpose of an evaluation is not just to find out what happened, but to use the information to make the project better.
In order to accomplish this, evaluation should include examining at least three areas:
- Process. The process of a project includes the planning and logistical activities needed to set up and run it. Did we do a proper assessment beforehand so we would know what the real needs were? Did we use the results of the assessment to identify and respond to those needs in the design of the project? Did we set up and run the project within the timelines and other structures that we intended? Did we involve the people we intended to? Did we have or get the resources we expected? Were staff and others trained and prepared to do the work? Did we have the community support we expected? Did we record what we did accurately and on time? Did we monitor and evaluate as we intended?
- Implementation. Project implementation is the actual work of running it. Did we do what we intended? Did we serve or affect the number of people we proposed to? Did we use the methods we set out to use? Was the level of our activity what we intended (e.g., did we provide the number of hours of service we intended to)? Did we reach the population(s) we aimed at? What exactly did we provide or do? Did we make intentional or unintentional changes, and why?
- Outcomes. The project's outcomes are its results - what actually happened as a consequence of the project's existence. Did our work have the effects we hoped for? Did it have other, unforeseen effects? Were they positive or negative (or neither)? Do we know why we got the results we did? What can we change, and how, to make our work more effective?
Many who write about participatory evaluation combine the first two of these areas into process evaluation, and add a third - impact evaluation - in addition to outcome evaluation. Impact evaluation looks at the long-term results of a project, whether the project continues, or does its work and ends.
Rural development projects in the developing world, for example, often exist simply to pass on specific skills to local people, who are expected to then both practice those skills and teach them to others. Once people have learned the skills - perhaps particular cultivation techniques, or water purification - the project ends. If in five or ten years, an impact evaluation shows that the skills the project taught are not only still being practiced, but have spread, then the project's impact was both long-term and positive.
In order for these areas to be covered properly, evaluation has to start at the very beginning of the project, with assessment and planning.
In a participatory evaluation, stakeholders should be involved in:
- Naming and framing the problem or goal to be addressed
- Developing a theory of practice (process, logic model) for how to achieve success
- Identifying the questions to ask about the project and the best ways to ask them - these questions will identify what the project means to do, and therefore what should be evaluated
What's the real goal, for instance, of a program to introduce healthier foods in school lunches? It could be simply to convince children to eat more fruits, vegetables, and whole grains. It could be to get them to eat less junk food. It could be to encourage weight loss in kids who are overweight or obese. It could simply be to educate them about healthy eating, and to persuade them to be more adventurous eaters. The evaluation questions you ask both reflect and determine your goals for the program. If you don't measure weight loss, for instance, then clearly that's not what you're aiming at. If you only look at an increase in children's consumption of healthy foods, you're ignoring the fact that if they don't cut down on something else (junk food, for instance), they'll simply gain weight. Is that still better than not eating the healthy foods? You answer that question by what you choose to examine - if it is better, you may not care what else the children are eating; if it's not, then you will care.
- Collecting information about the project
- Making sense of that information
- Deciding what to celebrate, and what to adjust or change, based on information from the evaluation
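The school-lunch example above can be made concrete with a small sketch. All of the numbers and dictionary keys below are invented for illustration; the point is only that the metric you choose to compute determines the conclusion you're able to draw:

```python
# Illustrative sketch only: hypothetical before/after lunch-survey averages
# for the school-lunch example. All numbers are invented.
before = {"healthy_servings": 1.2, "junk_servings": 2.5}  # daily averages
after = {"healthy_servings": 2.0, "junk_servings": 2.4}

healthy_gain = after["healthy_servings"] - before["healthy_servings"]
junk_change = after["junk_servings"] - before["junk_servings"]
total_change = healthy_gain + junk_change  # net change in total servings

# Measuring only healthy_gain suggests clear success; measuring
# total_change reveals that students may simply be eating more overall.
print(f"Healthy gain: {healthy_gain:+.1f} servings")
print(f"Net change in total servings: {total_change:+.1f}")
```

If the evaluation team records only the first number, weight gain is invisible by design; recording both keeps the question open for the stakeholders to decide.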
Why would (and why wouldn't) you use participatory evaluation?
Why would you use participatory evaluation? The short answer is that it's often the most effective way to find out what you need to know, both at the beginning of and throughout the course of a project. In addition, it carries benefits for both individual participants and the community that other methods don't.
Some of the major advantages of participatory evaluation:
- It gives you a better perspective on both the initial needs of the project's beneficiaries, and on its ultimate effects. If stakeholders, including project beneficiaries, are involved from the beginning in determining what needs to be evaluated and why - not to mention what the focus of the project needs to be - you're much more likely to aim your work in the right direction, to correctly determine whether your project is effective or not, and to understand how to change it to make it more so.
- It can get you information you wouldn't get otherwise. When project direction and evaluation depend, at least in part, on information from people in the community, that information will often be more forthcoming if it's asked for by someone familiar. Community people interviewing their friends and neighbors may get information that an outside person wouldn't be offered.
- It tells you what worked and what didn't from the perspective of those most directly involved - beneficiaries and staff. Those implementing the project and those who are directly affected by it are most capable of sorting out the effective from the ineffective.
- It can tell you why something does or doesn't work. Beneficiaries are often able to explain exactly why they didn't respond to a particular technique or approach, thus giving you a better chance to adjust it properly.
- It results in a more effective project. For the reasons just described, you're much more apt to start out in the right direction, and to know when you need to change direction if you haven't. The consequence is a project that addresses the appropriate issues in the appropriate way, and accomplishes what it sets out to do.
- It empowers stakeholders. Participatory evaluation gives those who are often not consulted - line staff and beneficiaries particularly - the chance to be full partners in determining the direction and effectiveness of a project.
- It can provide a voice for those who are often not heard. Project beneficiaries are often low-income people with relatively low levels of education, who seldom have - and often don't think they have a right to - the chance to speak for themselves. By involving them from the beginning in project evaluation, you assure that their voices are heard, and they learn that they have the ability and the right to speak for themselves.
- It teaches skills that can be used in employment and other areas of life. In addition to the development of basic skills and specific research capabilities, participatory evaluation encourages critical thinking, collaboration, problem-solving, independent action, meeting deadlines...all skills valued by employers, and useful in family life, education, civic participation, and other areas.
- It bolsters self-confidence and self-esteem in those who may have little of either. This category can include not only project beneficiaries, but also others who may, because of circumstance, have been given little reason to believe in their own competence or value to society. The opportunity to engage in a meaningful and challenging activity, and to be treated as a colleague by professionals, can make a huge difference for folks who are seldom granted respect or given a chance to prove themselves.
- It demonstrates to people ways in which they can take more control of their lives. Working with professionals and others to complete a complex task with real-world consequences can show people how they can take action to influence people and events.
- It encourages stakeholder ownership of the project. If those involved feel the project is theirs, rather than something imposed on them by others, they'll work hard both in implementing it, and in conducting a thorough and informative evaluation in order to improve it.
- It can spark creativity in everyone involved. For those who've never been involved in anything similar, a participatory evaluation can be a revelation, opening doors to a whole new way of thinking and looking at the world. To those who have taken part in evaluation before, the opportunity to exchange ideas with people who may have new ways of looking at the familiar can lead to a fresh perspective on what may have seemed to be a settled issue.
- It encourages working collaboratively. For participatory evaluation to work well, it has to be viewed by everyone involved as a collaboration, where each participant brings specific tools and skills to the effort, and everyone is valued for what she can contribute. Collaboration of this sort not only leads to many of the advantages described above, but also fosters a more collaborative spirit for the future as well, leading to other successful community projects.
- It fits into a larger participatory effort. When community assessment and the planning of a project have been a collaboration among project beneficiaries, staff, and community members, it only makes sense to include evaluation in the overall plan, and to approach it in the same way as the rest of the project. In order to conduct a good evaluation, its planning should be part of the overall planning of the project. Furthermore, participatory process generally matches well with the philosophy of community-based or grass roots groups or organizations.
Along with all these positive aspects, participatory evaluation carries some negative ones as well. Whether its disadvantages outweigh its advantages depends on your circumstances, but whether you decide to engage in it or not, it's important to understand what kinds of drawbacks it might have.
The significant disadvantages of participatory evaluation include:
- It takes more time than a conventional process. Because there are so many people with different perspectives involved, a number of whom have never taken part in planning or evaluation before, everything takes longer than if a professional evaluator or a team familiar with evaluation simply set up and conducted everything. Decision-making involves a great deal of discussion, gathering people together may be difficult, evaluators need to be trained, etc.
- It takes the establishment of trust among all participants in the process. If you're starting something new (or, all too often, even if the project is ongoing), there are likely to be issues of class distinction, cultural differences, etc., dividing groups of stakeholders. These can lead to snags and slowdowns until they're resolved, which won't happen overnight. It will take time and a good deal of conscious effort before all stakeholders feel comfortable and confident that their needs and culture are being addressed.
- You have to make sure that everyone's involved, not just "leaders" of various groups. All too often, "participatory" means the participation of an already-existing power structure. Most leaders actually are just that - people who are most concerned with the best interests of the group, and whom others trust to represent them and steer them in the direction that best reflects those interests. Sometimes, however, leaders are those who push their way to the front, and try to confirm their own importance by telling others what to do.
By involving only leaders of a population or community, you run the risk of losing - or never gaining - the confidence and perspective of the rest of the population, which may dislike and distrust a leader of the second type, or may simply see themselves shut out of the process. They may see the participatory evaluation as a function of authority, and be uninterested in taking part in it. Working to recruit "regular" people as well as, or instead of, leaders may be an important step for the credibility of the process. But it's a lot of work and may be tough to sell.
- You have to train people to understand evaluation and how the participatory process works, as well as teaching them basic research skills. There are really a number of potential disadvantages here. The obvious one is that of time, which we've already raised - training takes time to prepare, time to implement, and time to sink in. Another is the question of what kind of training participants will respond to. Still another concerns recruitment - will people be willing to put in the time necessary to prepare them for the process, let alone the time for the process itself?
- You have to get buy-in and commitment from participants. Given what evaluators will have to do, they need to be committed to the process, and to feel ownership of it. You have to structure both the training and the process itself to bring about this commitment.
- People's lives - illness, child care and relationship problems, getting the crops in, etc. - may cause delays or get in the way of the evaluation. Poor people everywhere live on the edge, which means they're engaged in a delicate balancing act. The least tilt to one side or the other - a sick child, too many days of rain in a row - can cause a disruption that may result in an inability to participate on a given day, or at all. If you're dealing with a rural village that's dependent on agriculture, for instance, an accident of weather can derail the whole process, either temporarily or permanently.
- You may have to be creative about how you get, record, and report information. If some of the participants in an evaluation are non- or semi-literate, or if participants speak a number of different languages (English, Spanish, and Lao, for instance), a way to record information will have to be found that everyone can understand, and that can, in turn, be understood by others outside the group.
- Funders and policy makers may not understand or believe in participatory evaluation. At worst, this can lose you your funding, or the opportunity to apply for funding. At best, you'll have to spend a good deal of time and effort convincing funders and policy makers that participatory evaluation is a good idea, and obtaining their support for your effort.
Some of these disadvantages could also be seen as advantages: the training people receive blends in with their development of new skills that can be transferred to other areas of life, for instance; coming up with creative ways to express ideas benefits everyone; once funders and policy makers are persuaded of the benefits of participatory process and participatory evaluation, they may encourage others to employ it as well. Nonetheless, all of these potential negatives eat up time, which can be crucial. If it's absolutely necessary that things happen quickly (which is true not nearly as often as most of us think it is), participatory evaluation is probably not the way to go.
When might you use participatory evaluation?
So when do you use participatory evaluation? Some of the reasons you might decide it's the best choice for your purposes:
- When you're already committed to a participatory process for your project. Evaluation planning can be included and collaboratively designed as part of the overall project plan.
- When you have the time, or when results are more important than time. As should be obvious from the last part of this section, one of the biggest drawbacks to participatory evaluation is the time it takes. If time isn't what's most important, you can gain the advantages of a participatory evaluation without having to compensate for many of the disadvantages.
- When you can convince funders that it's a good idea. Funders may specify that they want an outside evaluation, or they may simply be dubious about the value of participatory evaluation. In either case, you may have some persuading to do in order to be able to use a participatory process. If you can get their support, however, funders may like the fact that participatory evaluation is often less expensive, and that it has added value in the form of empowerment and transferable skills.
- When there may be issues in the community or population that outside evaluators (or program providers, for that matter) aren't likely to be aware of. Political, social, and interpersonal factors in the community can skew the results of an evaluation, and without an understanding of those factors and their history, evaluators may have no idea that what they're finding out is colored in any way. Evaluators who are part of the community can help sort out the influence of these factors, and thus end up with a more accurate evaluation.
- When you need information that it will be difficult for anyone outside the community or population to get. When you know that members of the community or population in question are unwilling to speak freely to anyone from outside, participatory evaluation is a way to raise the chances that you'll get the information you need.
- When part of the goal of the project is to empower participants and help them develop transferable skills. Here, the participatory evaluation, as it should in any case, becomes a part of the project itself and its goals.
- When you want to bring the community or population together. In addition to fostering a collaborative spirit, as we've mentioned, a participatory evaluation can create opportunities for people who normally have little contact to work together and get to know one another. This familiarity can then carry over into other aspects of community life, and even change the social character of the community over the long term.
Who should be involved in participatory evaluation?
We've referred continually to stakeholders - the people who are directly affected by the project being evaluated. Who are the stakeholders? That varies from project to project, depending on the focus, the funding, the intended outcomes, etc.
There are a number of groups that are generally involved, however:
- Participants or beneficiaries. The people whom the project is meant to benefit. That may be a specific group (people with a certain medical condition, for instance), a particular population (recent Southeast Asian immigrants, residents of a particular area), or a whole community. They may be actively receiving a service (e.g., employment training) or may simply stand to benefit from what the project is doing (violence prevention in a given neighborhood). These are usually the folks with the greatest stake in the project's success, and often the ones with the least experience of evaluation.
- Project line staff and/or volunteers. The people who actually do the work of carrying out the project. They may be professionals, people with specific skills, or community volunteers. They may work directly with project beneficiaries as mentors, teachers, or health care providers; or they may advocate for immigrant rights, identify open space to be preserved, or answer the phone and stuff envelopes. Whoever they are, they often know more about what they're doing than anyone else, and their lives can be affected by the project as much as those of participants or beneficiaries.
- Administrators. The people who coordinate the project or specific aspects of it. Like line staff and volunteers, they know a lot about what's going on, and they're intimately involved with the project every day.
- Outside evaluators, if they're involved. In many cases, outside evaluators are hired to run participatory evaluations. The need for their involvement is obvious.
- Community officials. You may need the support of community leaders, or you may simply want to give them and other participants the opportunity to get to know one another in a context that might lead to better understanding of community needs.
- Others whose lives are affected by the project. The definition of this group varies greatly from project to project. In general, it refers to people whose jobs or other aspects of their lives will be changed either by the functioning of the project itself, or by its outcomes.
An example would be landowners whose potential use of their land would be affected by an environmental initiative or a neighborhood plan.
How do you conduct a participatory evaluation?
Participatory evaluation encompasses elements of designing the project as well as evaluating it. What you evaluate depends on what you want to know and what you're trying to do. Identifying the actual evaluation questions sets the course of the project just as surely as a standardized testing program guides teaching. When these questions come out of an assessment in which stakeholders are involved, the evaluation is one phase of a community-based participatory research process.
A participatory evaluation really has two stages: One comprises finding and training stakeholders to act as participant evaluators. The second - some of which may take place before or during the first stage - encompasses the planning and implementation of the project and its evaluation, and includes six steps:
- Naming and framing the issue
- Developing a theory of practice to address it
- Deciding what questions to ask, and how to ask them to get the information you need
- Collecting information
- Analyzing the information you've collected
- Using the information to celebrate what worked, and to adjust and improve the project
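As one small illustration of the analysis step in the list above, here's a minimal sketch of tallying responses that participant evaluators might bring back from interviews. The question wording and all of the responses are hypothetical:

```python
from collections import Counter

# Hypothetical interview responses gathered by participant evaluators.
# Each entry is one interviewee's answer to "Did the program meet your needs?"
responses = ["yes", "partly", "yes", "no", "yes", "partly", "yes"]

tally = Counter(responses)
total = len(responses)

# Turn raw counts into percentages the whole team can discuss together.
for answer, count in tally.most_common():
    print(f"{answer}: {count} ({100 * count / total:.0f}%)")
```

Even a simple summary like this gives the team a shared starting point for the "making sense of the information" discussion, rather than leaving analysis to a single professional.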
We'll examine both of these stages in detail.
Finding and training stakeholders to act as participant evaluators
Unfortunately, this stage isn't simply a matter of announcing a participatory evaluation and then sitting back while people beat down the doors to be part of it. In fact, it may be one of the more difficult aspects of conducting a participatory evaluation.
Here's where the trust building we discussed earlier comes into play. The population you're working with may be distrustful of outsiders, or may be used to promises of involvement that turn out to be hollow or simply ignored. They may be used to being ignored in general, and/or offered services and programs that don't speak to their real needs. If you haven't already built a relationship to the point where people are willing to believe that you'll follow through on what you say, now is the time to do it. It may take some time and effort - you may have to prove that you'll still be there in six months - but it's worth it. You're much more likely to have a successful project, let alone a successful evaluation, if you have a relationship of mutual trust and respect.
But let's assume you have that step out of the way, and that you've established good relationships in the community and among the population you're working with, as well as with staff of the project. Let's assume as well that these folks know very little, if anything, about participatory evaluation. That means they'll need training in order to be effective.
If, in fact, your evaluation is part of a larger participatory effort, the question arises as to whether to simply employ the same team that did assessments and/or planned the project, perhaps with some additions, as evaluators. That course of action has both pluses and minuses. The team is already assembled, has developed a method of working together, has some training in research methods, etc., so that they can hit the ground running - obviously a plus.
The fact that they have a big stake in seeing the project be successful can work either way: they may interpret their findings in the best possible light, or even ignore negative information; or they may be eager to see exactly where and how to adjust the work to make it go better.
Another issue is burnout. Evaluation will mean more time in addition to what an assessment and planning team has already put in. While some may be more than willing to continue, many may be ready for a break (or may be moving on to another phase of their lives). If the possibility of assembling a new team exists, it will give those who've had enough the chance to gracefully withdraw.
How you handle this question will depend on the attitudes of those involved, how many people you actually have to draw on (if the recruitment of the initial team was really difficult, you may not have a lot of choices), and what people committed to.
Recruit participant evaluators
There are many ways to accomplish this. In some situations, it makes the most sense to put out a general call for volunteers; in others, to approach specific individuals who are likely - because of their commitment to the project or to the population - to be willing. Alternatively, you might approach community leaders or stakeholders to suggest possible evaluators.
Some basic guidelines for recruitment include:
- Use communication channels and styles that reach the people you're aiming at
- Make your message as clear as possible
- Use plain English and/or whatever other language(s) the population uses
- Put your message where the audience is
- Approach potential participants individually where possible - if you can find people they know to recruit them, all the better
- Explain what people may gain from participation
- Be clear that they're being asked because they already have the qualities that are necessary for participation
- Encourage people, but also be honest about the amount and extent of what needs to be done
- Work out with participants what they're willing and able to do
- Try to arrange support - child care, for example - to make participation easier
- Ask people you've recruited to recommend - or recruit - others
In general, it's important for potential participant evaluators - particularly those whose connection to the project isn't related to their employment - to understand the commitment involved. An evaluation is likely to last a year, unless the project is considerably shorter than that, and while you might expect and plan for some dropouts, most of the team needs to be available for that long.
In order to make that commitment easier, discuss with participants what kinds of support they'll need in order to fulfill their commitment - child care and transportation, for instance - and try to find ways to provide it. Arrange meetings at times and places that are easiest for them (and keep the number of meetings to a minimum). For participants who are paid project staff, the evaluation should be considered part of their regular work, so that it isn't an extra, unpaid, burden that they feel they can't refuse.
Be careful to try to put together a team that's a cross-section of the stakeholder population. As we've already discussed, if you recruit only "leaders" from among the beneficiary population, for instance, you may create resentment in the rest of the group, not get a true perspective of the thinking or perceptions of that group, and defeat the purpose of the participatory nature of the evaluation as well. Even if the leaders are good representatives of the group, you may want to broaden your recruitment in the hopes of developing more community leadership, and empowering those who may not always be willing to speak out.
Train participant evaluators
Participants, depending on their backgrounds, may need training in a number of areas. They may have very little experience in attending and taking part in meetings, for instance, and may need to start there. They may benefit from an introduction to the idea of participatory evaluation, and how it works. And they'll almost certainly need some training in data gathering and analysis.
How training gets carried out will vary with the needs and schedules of participants and the project. It may take place in small chunks over a relatively long period of time - weeks or months - may happen all at once in the course of a weekend retreat, or may be some combination of the two. There's no right or wrong way here. The first option will probably make it possible for more people to take part; the second allows people to get to know one another and bond as a team; and a combination might allow for both.
By the same token, there are many training methods, any or all of which might be useful with a particular group. Training in meeting skills - knowing when and how to contribute and respond, following discussion, etc. - may best be accomplished through mentoring, rather than instruction. Interviewing skills may best be learned through roleplaying and other experiential techniques. Some training - how to approach local people, for example - might best come from participants themselves.
Some of the areas in which training might be necessary:
- The participatory evaluation process. How participatory evaluation works, its goals, the roles people may play in the process, what to expect.
- Meeting skills. Following discussion, listening skills, handling disagreement or conflict, contributing and responding appropriately, general ground rules and etiquette, etc.
- Interviewing. Putting people at ease, body language and tone of voice, asking open-ended and follow-up questions, recording what people say and other important information, handling interruptions and distractions, group interviews.
- Observation. Direct vs. participant observation, choosing appropriate times and places to observe, relevant information to include, recording observations.
- Recording information and reporting it to the group. What interviewees and those observed say and do, the non-verbal messages they send, who they are (age, situation, etc.), what the conditions were, the date and time, any other factors that influenced the interview or observation.
For people for whom writing isn't comfortable, where writing isn't feasible, or where language is a barrier, there should be alternative recording and reporting methods. Drawings, maps, diagrams, audio recording, videos, or other imaginative ways of remembering exactly what was said or observed can be substituted, depending on the situation. In interviews, if audio or video recording is going to be used, it's important to get the interviewee's permission first - before the interviewer shows up with the equipment, so that there are no misunderstandings.
- Analyzing information. Critical thinking, what kinds of things statistics tell you, other things to think about.
Planning and implementing the project and its evaluation
There's an assumption here that all phases of a project will be participatory, so that not only its evaluation, but its planning and the assessment that leads to it also involve stakeholders (not necessarily the same ones who act as evaluators). If stakeholders haven't been involved from the beginning, they don't have the deep understanding of the purposes and structure of a project that they'd have of one they've helped form. The evaluation that results, therefore, is likely to be less perceptive - and therefore less valuable - than one of a project they've been involved in from the start.
Naming and framing the problem or goal to be addressed
Identifying what you're evaluating defines what the project is meant to address and accomplish. Community representatives and stakeholders, all those with something to gain or lose, work together to develop a shared vision and mission. By collecting information about community concerns and identifying available assets, communities can understand which issues to focus a project on.
Naming a problem or goal refers to identifying the issue that needs to be addressed. Framing it has to do with the way we look at it. If youth violence is conceived of as strictly a law enforcement problem, for instance, that framing implies specific ways of solving it: stricter laws, stricter enforcement, zero tolerance for violence, etc. If it's framed as a combination of a number of issues - availability of hand guns, unemployment and drug use among youth, social issues that lead to the formation of gangs, alienation and hopelessness in particular populations, poverty, etc. - then solutions may include employment and recreation programs, mentoring, substance abuse treatment, etc., as well as law enforcement. The more we know about a problem, and the more different perspectives we can include in our thinking about it, the more accurately we can frame it, and the more likely we are to come up with an effective solution.
Developing a theory of practice to address the problem
How do you conduct a community effort so that it has a good chance of solving the problem at hand? Many communities and organizations answer this question by throwing uncoordinated programs at the problem, or by assuming a certain approach (law enforcement, as in our example, for instance) will take care of it. In fact, you have to have a plan for creating, implementing, evaluating, adjusting, and maintaining a solution if you want it to work.
Whatever you call this plan - a theory of practice, a logic model, or simply an approach or process - it should be logical, consistent, consider all the areas that need to be coordinated in order for it to work, and give you an overall guideline and a list of steps to follow in order to carry it out.
Once you've identified an issue, for instance, one possible theory of practice might be:
- Form a coalition of organizations, agencies, and community members concerned with the problem.
- Recruit and train a participatory research team which includes representatives of all stakeholder groups.
- The team collects both statistical and qualitative, first-hand information about the problem, and identifies community assets that might help in addressing it.
- Use the information you have to design a solution that takes into account the problem's complexity and context.
This might be a single program or initiative, or a coordinated, community-wide effort involving several organizations, the media, and individuals. If it's closer to the latter, that's part of the complexity you have to take into account. Coordination has to be part of your solution, as do ways to get around the bureaucratic roadblocks that might occur and methods to find the financial and personnel resources you need.
- Implement the solution.
- Carry out monitoring and evaluation that will give you ongoing feedback about how well you're meeting objectives, and what you should change to improve your solution.
- Use the information from the evaluation to adjust and improve the solution.
- Go back to the second step and repeat as much of the cycle as you need to until the problem is solved, or - more likely, since many community problems never actually disappear - continue indefinitely in order to maintain and increase your gains.
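The cyclical nature of the steps above - gather information, adjust the solution, implement, evaluate, and loop back - can be sketched as a simple loop. This is a toy model, not part of any real toolkit: the "severity" measure, the per-cycle effect, and all numbers are hypothetical stand-ins for the real work of data collection and evaluation.

```python
# Toy sketch of the iterative theory-of-practice cycle. Each pass stands in
# for steps 2-7: gather information, adjust the solution, implement it, and
# re-evaluate - repeated until the problem is resolved or a cycle limit hits.

def evaluation_cycle(severity, effect_per_cycle=2, max_cycles=10):
    """Return (remaining severity, cycles run) for a problem whose measured
    severity drops by effect_per_cycle each full plan-implement-evaluate pass."""
    cycles = 0
    while severity > 0 and cycles < max_cycles:
        severity = max(0, severity - effect_per_cycle)  # implement + adjust
        cycles += 1                                     # loop back to step 2
    return severity, cycles

print(evaluation_cycle(7))  # -> (0, 4)
```

The `max_cycles` cap mirrors the point made above: many community problems never fully disappear, so in practice the loop continues indefinitely rather than terminating.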
Deciding what evaluation questions to ask, and how to ask them to get the information you need
As we've discussed, choosing the evaluation questions essentially guides the work. What you're really choosing here is what you're going to pay attention to. There could be significant results from your project that you're never aware of, because you didn't look for them - you didn't ask the questions to which those results would have been the answers. That's why it's so important to select questions carefully: they'll determine what you find.
Framing the problem is one element here - putting it in context, looking at it from all sides, stepping back from your own assumptions and biases to get a clearer and broader view of it. Another is envisioning the outcomes you want, and thinking about what needs to change, and how, in order to reach them.
Framing is important in this activity as well. If you want simply to reduce youth violence, stricter laws and enforcement might seem like a reasonable solution, assuming you're willing to stick with them forever; if you want not only to reduce or eliminate youth violence, but to change the climate that fosters it (i.e., long term social change), the solution becomes much broader and requires, as we pointed out above, much more than law enforcement. And a broader solution means more, and more complex, evaluation questions.
In the first case, evaluation questions might be limited to some variation of: "Were there more arrests and convictions of youthful offenders for violent crimes in the time period studied, as compared to the last period for which there were records before the new solution was put in place?" "Did youthful offenders receive harsher sentences than before?" "Was there a reduction in violent incidents involving youth?"
Looking at the broader picture, in addition to some of those questions, there might be questions about counseling programs for youthful offenders to change their attitudes and to help ease their transition back to civil society, drug and alcohol treatment, control of handgun sales, changing community attitudes, etc.
Collecting the information you need
This is the largest part, at least in time and effort, of implementing an evaluation.
Various evaluators, depending on the information needed, may conduct any or all of the following:
- Research into census or other public records, as well as news archives, library collections, the Internet, etc.
- Individual and/or group interviews
- Focus groups
- Community information-sharing sessions
- Direct or participant observation
In some cases - particularly with unschooled populations in developing countries - evaluators may have to find creative ways to draw out information. In some cultures, maps, drawings, representations ("If this rock is the headman's house..."), or even storytelling may be more revealing than the answers to straightforward questions.
Analyzing the information you've collected
Once you've collected all the information you need, the next step is to make sense of it. What do the numbers mean? What do people's stories and opinions tell you about the project? Did you carry out the process you'd planned? If not, did it make a difference, positive or negative?
In some cases, these questions are relatively easy to answer. If there were particular objectives for serving people, or for beneficiaries' accomplishments, you can quickly find out whether they were met or not. (We set out to serve 75 people, and we actually served 82. We anticipated that 50 would complete the program, and 61 actually completed.)
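The easy case described above - checking numeric objectives against actual results - amounts to simple arithmetic, sketched here with the hypothetical figures from the example (75 people to be served vs. 82 actually served, 50 expected completions vs. 61 actual):

```python
# Minimal sketch: compare each numeric objective with the actual result.
# The objective names and figures are the hypothetical ones from the text.

objectives = {"people served": 75, "completions": 50}
actuals = {"people served": 82, "completions": 61}

for name, target in objectives.items():
    actual = actuals[name]
    pct = 100 * actual / target
    status = "met" if actual >= target else "not met"
    print(f"{name}: {actual} of {target} ({pct:.0f}% of objective) - {status}")
```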
In other cases, it's much harder to tell what your information means. What if approximately half of interviewees say the project was helpful to them, and the other half say the opposite? A result like that may leave you doing some detective work. (Is there any ethnic, racial, geographic, or cultural pattern as to who is positive and who is negative? Whom did each group work with? Where did they experience the project, and how? Did members of each group have specific things in common?)
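The detective work described above often starts with a cross-tabulation: counting responses by each candidate grouping to see whether the positive/negative split follows a pattern. A hedged sketch, using entirely hypothetical data and an invented "neighborhood" field as the candidate grouping:

```python
from collections import Counter

# Hypothetical interview records. The question: does the helpful/not-helpful
# split line up with neighborhood, or any other attribute we recorded?
responses = [
    {"neighborhood": "north", "rating": "helpful"},
    {"neighborhood": "north", "rating": "helpful"},
    {"neighborhood": "north", "rating": "not helpful"},
    {"neighborhood": "south", "rating": "not helpful"},
    {"neighborhood": "south", "rating": "not helpful"},
    {"neighborhood": "south", "rating": "helpful"},
]

# Count each (group, rating) pair; a lopsided table suggests a pattern
# worth following up in further interviews.
crosstab = Counter((r["neighborhood"], r["rating"]) for r in responses)
for (group, rating), n in sorted(crosstab.items()):
    print(f"{group:6} {rating:12} {n}")
```

The same tally can be repeated for any attribute the team recorded (ethnicity, who people worked with, where they experienced the project) until a pattern emerges or is ruled out.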
While collecting the information requires the most work and time, analyzing it is perhaps the most important step in conducting an evaluation. Your analysis tells you what you need to know in order to improve your project, and also gives you the evidence you need to make a case for continued funding and community support. It's important that it be done well, and that it make sense of odd results like the one just described. Here's where good training and good guidance in using critical thinking and other techniques come in.
In general, information-gathering and analysis should cover the three areas we discussed early in the section: process, implementation, and outcomes. The purpose here is both to provide information for improving the project and to provide accountability to funders and the community.
- Process. This concerns the logistics of the project. Was there good coordination and communication? Was the planning process participatory? Was the original timeline for each stage of the project - outreach, assessment, planning, implementation, evaluation - realistic? Were you able to find or hire the right people? Did you find adequate funding and other resources? Was the space appropriate? Did members of the planning and evaluation teams work well together? Did the people responsible do what they were expected to do? Did unexpected leaders emerge (in the planning group, for instance)?
- Implementation. Did you do what you set out to do - reach the number of people you expected to, use the methods you intended, provide the amount and kind of service or activity that you planned for? This part of the evaluation is not meant to assess effectiveness, but only whether the project was carried out as planned - i.e., what you actually did, rather than what you accomplished as a result. That comes next.
- Outcomes. What were the results of what you did? Did what you hoped for take place? If it did, how do you know it was a result of what you did, as opposed to some other factor(s)? Were there unexpected results? Were they negative or positive? Why did this all happen?
Using the information to celebrate what worked, and to adjust and improve the project
While accountability is important - if the project has no effect at all, for example, it's just wasted effort - the real thrust of a good evaluation is formative. That means it's meant to provide information that can help to continue to form the project, reshape it to make it better. As a result, the overall questions when looking at process, implementation, and outcomes are: What worked well? What didn't? What changes would improve the project?
Answering these questions requires further analysis, but should allow you to improve the project considerably. In addition to dropping or changing and adjusting those elements of the project that didn't work well, don't neglect those that were successful. Nothing's perfect; even effective approaches can be made better.
Don't forget to celebrate your successes. Celebration recognizes the hard work of everyone involved, and the value of your effort. It creates community support, and strengthens the commitment of those involved. Perhaps most important, it makes clear that people working together can improve the quality of life in the community.
There's a final element to participatory research and evaluation that can't be ignored. Once you've started a project and made it successful, you have to maintain it. The participatory research and evaluation has to continue - perhaps not with the same team(s), but with teams representative of all stakeholders. Conditions change, and projects have to adapt. Research into those conditions and continued evaluation of your work will keep that work fresh and effective.
If your project is successful, you may think your work is done. Think again - community problems are only solved as long as the solutions are actively practiced. The moment you turn your back, the conditions you worked so hard to change can start to return to what existed before. The work - supported by participatory research and evaluation - has to go on indefinitely to maintain and increase the gains you've made.
Participatory evaluation is a part of participatory research. It involves stakeholders in a community project in setting evaluation criteria for it, collecting and analyzing data, and using the information gained to adjust and improve the project.
Participatory process brings in the all-important multiple perspectives of those most directly affected by the project, who are also most likely to be tied into community history and culture. The information and insights they contribute can be crucial in a project's effectiveness. In addition, their involvement encourages community buy-in, and can result in important gains in skills, knowledge, and self-confidence and self-esteem for the researchers. All in all, participatory evaluation creates a win-win situation.
Conducting a participatory evaluation involves several steps:
- Recruiting and training a stakeholder evaluation team
- Naming and framing the problem
- Developing a theory of practice to guide the process of the work
- Asking the right evaluation questions
- Collecting information
- Analyzing information
- Using the information to celebrate and adjust your work
The final step, as with so many of the community-building strategies and actions described in the Community Tool Box, is to keep at it. Participatory research in general, and participatory evaluation in particular, has to continue as long as the work continues, in order to keep track of community needs and conditions, and to keep adjusting the project to make it more responsive and effective. And the work often has to continue indefinitely in order to maintain progress and avoid sliding back into the conditions or attitudes that made the project necessary in the first place.
The Action Catalogue is an online decision support tool that is intended to enable researchers, policy-makers and others wanting to conduct inclusive research, to find the method best suited for their specific project needs.
Better Evaluation provides a comparison of different types of evaluation. This website also provides an extensive example in order to apply information.
Chapter 6: Research Methods in the "Introduction to Community Psychology" describes the ecological lens in community research, the role of ethics, the differences between qualitative and quantitative research, and mixed methods research.
Evaluation Tips is a succinct resource for participatory evaluation that addresses the key features, advantages, and disadvantages.
Chapter 9 - Participatory Evaluation is an extensive resource for understanding participatory evaluation. It provides the key steps to the process, as well as various tools that can be employed to ensure a successful evaluation.
Chapter 18: Dissemination and Implementation in the "Introduction to Community Psychology" explains why “validated” and “effective” interventions are often never used, effective ways to put research findings to use in order to improve health, and advantages of participatory methods that provide more equitable engagement in the creation and use of scientific knowledge.
Facilitator's Guide for Participatory Evaluation with Young People, by Barry Checkoway and Katie Richards-Schuster, is a publication from the Program for Youth and Community of the University of Michigan School of Social Work.
How to Perform Evaluations is a handbook written by the Canadian International Development Agency that specifically focuses on successfully implementing participatory evaluations.
Issue Topic: Democratic Evaluation from The Evaluation Exchange, vol. 1, No. 3/4, Fall, 1995, Harvard Family Research Project.
Participatory Evaluation is an article written by the University of Washington. It answers the questions of “What is it?”, “Why do it?”, and “What are the challenges?”.
"Participatory Evaluation: How It Can Enhance Effectiveness and Credibility of Nonprofit Work" by Susan Saegert, Lymari Benitez, Efrat Eizenberg, Tsai-shiou Hsieh, and Mike Lamb, CUNY Graduate Center, from The Nonprofit Quarterly, 11, 1, Spring 2004.
"Participatory Evaluation: What Is It? Why Do It? What Are the Challenges?" by Ann Zukoski and Mia Luluquisen, from Community-Based Public Health Policy and Practice, Issue #5, April, 2002.
Participatory Methods is a website that provides resources to generate ideas and action for inclusive development and social change.
Participatory Program Evaluation Manual is a handbook dedicated to providing information on participatory program evaluation approaches.
Eldis, a UK development resource organization, provides a resource guide to participatory monitoring and evaluation.
The Research for Organizing toolkit is designed for organizations and individuals that want to use participatory action research (PAR) to support their work toward social justice. PAR helps us analyze and document the problems we see in our communities, generate data and evidence that strengthen our social justice work, and ensure that we are the experts on the issues facing our communities. In the toolkit you will find case studies, workshops, worksheets, and templates that you can download and tailor to meet your needs.
Who Are the Question Makers? A Participatory Evaluation Handbook is a resource from the Office of Evaluation and Strategic Planning of the United Nations Development Programme.
What is participatory evaluation? is a website provided by the Harvard Family Research Project that provides information on what participatory evaluations are and how they started.
Youth Participatory Evaluation, provided by Act for Youth, is a website designed to address questions of getting youth involved with participatory evaluation.
Fawcett, S., Boothroyd, R., Schultz, J., Francisco, V., Carson, V., & Bremby, R. (2003). Building Capacity for Participatory Evaluation Within Community Initiatives. Journal of Prevention and Intervention in the Community, 26, 21-26.