Learn how to start the process of setting up an evaluation, i.e. choosing the evaluators that will carry it out, and planning what it will look like.


The opening four sections of this chapter have been largely about theory: what evaluations of community-based initiatives and organizations are about, why they're important, and some essential ideas about how they can be conducted and whose interests they need to serve. This section and the next are about actually starting the process of setting up an evaluation, i.e. choosing the evaluators that will carry it out, and planning what it will look like. The two are very closely connected, but it will probably make sense to choose your evaluators first. Then, in consultation with the organization, the community, and the target population, they'll come up with a plan for helping you get the information you need to become, or to continue to be, as effective as possible.

Why pay attention to the selection of evaluators?

An evaluation is not simply a matter of looking at your organization or initiative and saying, "It's doing okay." It should examine what you're doing from a number of different angles and perspectives, and should give you not only a clear sense of whether or not you're accomplishing your goals, but of where your strengths and weaknesses lie, of how to improve what you're doing, and of new directions to explore. No matter how well your evaluation is planned--and it should be well planned--you'll still need evaluators that have the skills and knowledge to look at your particular situation accurately.

The real reason for an evaluation is its usefulness in improving the organization. The more, and the more accurate, the information you get, and the better it's analyzed, the more useful it will be in helping you identify and build on your organization's strengths and pinpoint and correct the areas it needs to work on. A well-conceived evaluation can also help you adjust to changing needs in the community or target population, learn more about the implications of the issue you're working on, provide a base for advocacy, and help with fundraising. As a result, it's important that your evaluators do the best job they can.

You may be able to hire evaluators, or you may be choosing volunteers from your staff or the community you serve. Or you may simply be choosing a planning team that will go on to select evaluators. In any of these cases, the particular individuals you select will influence the shape of the evaluation you get, and what kinds of results you get from it.

Although this section treats both professional and non-professional evaluators, it is understood that most organizations will either conduct evaluations in-house (i.e. using staff members, participants, and/or organizational records to gather and analyze information), use an individual or team from the community, or find a professional to help with an evaluation without charge. We've tried to cover as many possibilities as we can, so that organizations will be able to consider how best to use the resources they have to get the kind of evaluation they want.

The same is true for community or other planning teams. A planning team is only one of many possibilities, depending again on the resources of an organization (and those resources include time). An evaluation that's not perfect but is useful in some ways is better than no evaluation. A planning team is a great idea, but not if the thought of putting together and working with one prevents an organization from attempting an evaluation. The purpose of this and all Tool Box sections is not to create boxes for you to fit your organization into, but to provide as many options as possible.

If you hire professionals, typically either a consulting firm or a group from a university, they'll have particular ways of doing research, particular prejudices about what an evaluation should cover, and particular interests, all of which may or may not coincide with what you need or what your community wants. Community and in-house evaluators also, depending upon the individuals involved and the groups they represent, will have their own prejudices, interests, and needs which will influence how they view the evaluation process and the evaluation itself. The same goes for graduate students or others who might donate their time to conduct or assist with an evaluation. It's extremely important to select people whose agenda matches your own, or who will put their own agendas aside and respond directly to the needs and desires of the organization or initiative and the community. Only then will you wind up with an evaluation that best serves your purposes.

When should you choose evaluators?

The short answer to this question is "as soon as possible." Evaluations should not start at the end of a project. They should be ongoing, so the information can continually be used to improve what you're doing. A planning team should probably be chosen when the organization or initiative begins, or even before, so that an evaluation can look not only at what the organization has done, but at how it changed over the evaluation period.

Many funders require evaluation from the start in their grants. The grant for an adult literacy program run by a Massachusetts employment training agency, for instance, stipulated that 10% of the money be used for evaluation in the first year of the program. A University of Massachusetts graduate student was hired at the start of the program, and he conducted an evaluation which involved his visiting classrooms several times during the year, meeting regularly with the program staff, and interviewing participants. By the end of the year, he had gained not only a complete picture of the program, but a sense of its movement and growth over time as well. The evaluation he presented was extremely helpful in guiding the program over the next year.

As will become apparent below and in succeeding sections, it's important to know your community and its needs, as well as the needs of your organization or initiative, before you set out to evaluate. If your organization or initiative is just starting out, it may sometimes make more sense to wait a while until you have a clear understanding of the context of your evaluation before beginning to pick evaluators.

Context of the evaluation

The context of the evaluation may determine whether an organization, even one with the resources to hire professional evaluators, decides to do so. The context refers to the unique situation of the evaluated organization or initiative and its community: the combination of geographical, historical, political, social, cultural, and other factors that form that situation. An understanding of that context is basic to an understanding of how to approach this particular community to evaluate this particular organization.

An example: An evaluation of an attempt to establish an adult literacy program in a rural area showed that, in the first year, the program failed to attract many students. It looked like a failure, and was in danger of losing its funding. But the staff of the program knew two things that the evaluators and funders didn't: first, that the area's inhabitants were country people, suspicious of and slow to accept "outsiders"; and second, that other programs had been started before, but had always lasted only a year, because they didn't attract enough students. The staff argued, convincingly, that the missing element in earlier efforts was time, and that by leaving, the earlier programs had confirmed area residents' feelings that the outsiders didn't really care about their problems and weren't serious about helping. The program continued, and by the end of its second year was thriving, because the local people believed that it would stay. Now, 15 years later, the program has become a fixture in the community, and is always full to capacity.

How should you decide between professional and community or other volunteer evaluators?

Evaluation is actually a type of research: it entails systematically gathering information and then drawing conclusions from it. There are many people who engage in research of various kinds: students, academics, and consultants who do research for a living. If evaluation is research, it must make sense to hire trained researchers to conduct evaluations, right?

Well, yes and no. In many cases, it is true that a professional researcher will do the best job. But in other situations, that may not be true at all, for a variety of reasons.

Before you decide who's going to do your evaluation, there are a number of things to consider:

The money available

This may be the most important, especially for newer and smaller organizations. Generally, the more grass roots your base is, the less money you have. If you have a funder who requires, or is willing to pay for, evaluation as part of a grant, then hiring professionals is at least an option. If there's no money available and no possibility of getting any, community or other volunteers may be your answer.

In a situation where you believe it's necessary to have professional evaluators, or at least professional guidance, you could try to fundraise what you needed (a difficult alternative, unless you have a lot of lead time and a great community base), or to find a volunteer mentor at a local college or university. Often graduate students, particularly, are willing to consult, or even to conduct the evaluation, without charge, either for the experience, or because the research will fit into their dissertations or some other work they're already doing. Other possible sources of various levels of assistance might include other health or human service agencies or initiatives; local government; or a fundraising drive or proposal (to a local foundation, for instance) specifically aimed at funding an evaluation.

In one instance, the instructor of a graduate course in Program Evaluation solicited proposals from community agencies and organizations for help in getting their programs evaluated. The organizations with successful proposals were each rewarded with a team of grad students who worked on evaluations with them over the course of the semester. At the end, there was a public event to which all agencies were invited, where they learned the overall results, and received copies of "Program Evaluation Tips," written by the grad students. This process produced useful evaluations for the agencies, invaluable hands-on experience for the grad students, and a strengthened connection between the university and the community.

The reality of money aside, if you have the option of considering professionals, the issues below come into play.

The complexity of the evaluation

For some organizations or initiatives, an evaluation may be relatively straightforward. They simply want to measure whether their very specific goals are being met. After a public education initiative, are significantly more children being vaccinated this year than last? Are learners in a literacy program gaining proficiency in reading, writing, and math? What percentage of participants in a substance use treatment program is still substance-free after a given period of time? If these are the types of questions that need to be answered, an evaluation can be fairly simple. But what if the questions asked are more complex? Was it the initiative, or some other factor, or a combination that resulted in increased vaccination rates? What are the environmental factors that influence literacy learners' progress? What do the substance users who backslide have in common? The methods to find the answers to those questions may need to be more complex as well. This issue is muddied, however, by the fact that sometimes a deep knowledge of the community may do more to answer complex questions than a knowledge of research methods. If you have the option of using professionals, you'll have to decide whether or not that's actually the best idea here.

The type of information desired and how it needs to be analyzed

If you're collecting quantitative data (i.e. numbers) it's often not possible to simply look at them and draw conclusions. There are statistical procedures that can be applied to the numbers to tell you what they really mean, how significant they are (i.e. what the odds are that differences or changes were actually caused by what you're measuring), and what they imply about other ways you could operate. If you need this kind of information or if funders or others are specifically asking for it, it would probably be helpful either to hire professional researchers or to find some volunteer professional guidance.
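To make the idea of statistical significance concrete, here is a minimal sketch, in Python, of one such procedure: a two-proportion z-test, which estimates the odds that a year-to-year change in a rate (say, the vaccination rate from the earlier example) happened by chance rather than because of your initiative. The survey numbers are entirely hypothetical, and a real evaluation would need to choose a test appropriate to its data.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(x1, n1, x2, n2):
    """Test whether two proportions differ more than chance would predict.

    x1/n1 and x2/n2 are successes/trials for each group (e.g. children
    vaccinated out of children surveyed, this year vs. last year).
    Returns the z statistic and a two-sided p-value.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # proportion assuming no real difference
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Probability of seeing a gap at least this large by chance alone
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical survey numbers: 130 of 400 children vaccinated last year,
# 180 of 400 this year, after the public education initiative.
z, p = two_proportion_z_test(180, 400, 130, 400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below a conventional threshold such as 0.05 suggests the change probably wasn't chance. Knowing which test fits your data, and what its assumptions are, is exactly the kind of judgment where professional guidance helps.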

Sometimes conducting such statistical procedures can tell you important things you never expected to find out. A researcher trying to determine what had caused a group of women to return to school in their thirties and forties found that a major factor had been their having to care for younger siblings when they were teenagers. He had barely noticed this in interviews; it had been overshadowed by more dramatic factors such as divorce, substance use, and domestic violence. But computer analysis showed it to be extremely important. In fact, when the researcher went back and talked to several of the women about it, they acknowledged that this caregiving role had been a major reason for their not continuing their educations in the first place, and that it still loomed large in their lives.

What you're using the evaluation for

Is your evaluation solely to help you become as effective as possible in what you do? Is it to increase your credibility in the community? Is your continued funding dependent on it? Is it a combination of two or all of these? Your answers to these questions will help to determine what kinds of questions you need to ask in the evaluation, what form the answers need to be in, how complex the evaluation needs to be, and whether it's necessary to try to hire researchers or not. The choice of professionals or community volunteers here will depend not only on the factors above, but on your particular situation as well. Professionals may be able to better analyze the information, but may not have access to the same information that people from the community would have (or may in fact have better access, because they're perceived as neutral). Once again, you need to determine what's best for your situation.

How you want the evaluation to be perceived

Some organizations or initiatives may want to demonstrate the extent of their community-based or grassroots orientation by making sure that their evaluation is community-based as well. Others may want to show that their evaluation is objective by hiring evaluators with no connection to the community. Still others, for philosophical or practical reasons, may want to be sure to involve those they serve or benefit in the evaluation process. There are many variations and possibilities here, all legitimate, all right for some organizations and wrong for others. As with anything you do, it's important to be consistent with your mission and principles when you consider how to choose evaluators.

What should you look for in choosing evaluators?

There are characteristics you'd want in professionals, characteristics you'd want in a community planning or evaluation team, and some characteristics you'd want in all of them. We'll examine each of these possibilities separately.

An evaluation can be structured in a huge number of different ways. The split between professional and non-professional can blur in many of these possibilities. One might get a professional evaluator for free, for instance, because of her interests or her commitment to the work of the organization or to the community. While an evaluation could be carried out solely by either a professional or by a community or in-house group, there are many variations on these themes.

A paid evaluator or team probably will work at least to some extent with a group from the organization or the community. A paid evaluator might work with a community or organization planning team to help plan the evaluation, and then either conduct the evaluation itself without further input from the planning team, or withdraw entirely to have the planning team take over arranging and conducting the evaluation. A local planning team might plan the evaluation entirely on its own, then hire a professional to carry it out, or just to analyze the data. Professionals might work alongside a community team through the whole process. Whether and how you choose to use professionals, community or in-house planning teams or evaluators, etc., depends on what makes sense for your organization.

One evaluator, for instance, tells this story: "I was hired to help community members evaluate HIV/AIDS services in the community. The community members here were a mixed group: mostly agency folks, but some non-agency people living with HIV/AIDS. I asked for some volunteers to plan the evaluation together with me, and was pleased when about half a dozen came forward.

"When we met, I set most of the agenda, raising the basic questions, such as 'What do we want to evaluate?' and 'How do we want to evaluate it?' We decided on a survey. Then the next questions became what kind of survey we needed, how it should be administered, who should administer it, and how the data should be collected. Our work style was to put each of these main questions on newsprint, to discuss them together, to write down ideas, to decide on the best option, and to move on. We ran systematically and very efficiently down the question list. When we finished, after about three one-hour meetings, the evaluation had basically been planned."

The evaluator then went on to conduct the evaluation and analyze the data on his own before presenting it to the community.

Characteristics to look for in all evaluators

Willingness to leave one's own agenda at the door

Often researchers, particularly those attached to universities, may have their own reasons for embarking on an evaluation. It may fit into a doctoral dissertation, or a book that a professor is writing, or a piece of long-term research that will eventually be published. They may also have very strong prejudices about what kind of research they want to use, or what they expect to find, and a need to prove their prejudices correct. And they may have issues about power and about their standing in relation to that of members of a planning team or organizational staff. If the researchers' needs mesh perfectly with yours, then there's no problem. But if they don't quite fit, there can be a very serious problem. If you're paying for a service, you should get what you're paying for, and not simply what the researcher wants to give you. It's important to be clear about this on all sides at the beginning, and to make sure by writing it into a contract or through some other formal mechanism that the professionals are willing to do what meets your needs, not what meets theirs.

Just as professionals may have research priorities that have nothing to do with the evaluation, community members may have personal or political priorities that also have little to do with the business at hand, but can affect their performance on an evaluation team. As much as possible, it's important that people either be willing to put these priorities aside while they're engaged in the evaluation process, or that you screen out those who will try to impose their particular ways of thinking on everyone, or who will use the process to further their own ends.

Ability to communicate with a broad range of people

Evaluators will have to deal with people from all walks of life, of all political, religious, and philosophical persuasions, and probably of many ethnic, language, and racial groups. If they are to gather accurate information, they will have to be perceived as trustworthy by all of those groups, and will have to be able to generate a certain level of comfort with everyone.

Have they worked with groups that included a broad spectrum of community members? Do they have the verbal and interpersonal skills and the patience to explain their methods clearly to people who aren't highly educated and who may never have had any contact with research before (perhaps especially if the evaluators themselves are in that same category)? Can at least some of them speak at least some of the languages of the community?

You can answer at least some of these questions by interviewing potential evaluators before they're hired or chosen. Evaluators need to be able to gather the information needed for the evaluation and to couch the results of the evaluation in terms that the community can understand, and therefore use. The best evaluation plan is worthless if the evaluators' approach means that people can't or won't respond to them, or can't use the results they've come up with.

Cultural sensitivity

Especially in a community that contains residents of many cultures (depending on how you define "culture," that includes virtually every community), mutual respect and some understanding and acceptance of how others see the world is crucial to the functioning of evaluators. Do they understand, or are they willing to learn to understand, the cultures of those in the community? An urban, largely working class community is culturally very different from an upper-middle class suburb; a Haitian neighborhood is different from a Vietnamese or a Puerto Rican one. Evaluators need to respect the cultures of the communities they work with, and not violate them, intentionally or unintentionally.

Ability to treat everyone with the same degree of respect

How evaluators approach people reflects on the organization. If they don't treat everyone respectfully, they're not going to get accurate--or any--information, and they're going to complicate the organization's relationship with its staff, the target population, and the community.

Absolute commitment to keeping all individuals' information confidential

Whether evaluators are paid professionals or not, for ethical, practical, and legal reasons, it is almost always necessary to guarantee that any information gathered in the course of an evaluation will be kept confidential (i.e. used only for the purposes of the evaluation, and not connected to the individual), and that people won't be identified either by name or by other factors that could lead to them. In the case, for instance, of the evaluation of a domestic violence prevention program or women's shelter, confidentiality could be a matter of personal safety. In most instances, it will help evaluators obtain more accurate data. And it will protect the evaluators and the organization from lawsuits that could be brought by individuals injured in some way by the information they provided.

Professional or university researchers often ask informants to read and sign an "informed consent" form that explains exactly what the researchers are doing, what any information will be used for, and how the researchers will protect confidentiality and anonymity. If the researcher violates the terms of the form, the informant might have the grounds for a lawsuit, and the researcher's findings might be questioned. An evaluator, professional or not, who asks for responses from members of the target population or the community might consider using a similar form, both because it helps to explain what the evaluation is about, and because it demonstrates the evaluator's commitment to confidentiality.

Commitment to the evaluation process

For planners and evaluators, whether professionals or otherwise, this means trying to do the best evaluation possible, with an eye toward its actual usefulness for the organization. For the organization, and other community members, commitment means believing enough in the process to take the evaluation seriously and use it to make adjustments to and improve the program, service, or activity. An evaluation, no matter how elegant and informative, is worthless if it's not used.

An important part of a first evaluation is a plan for continued evaluation. To be truly useful, evaluation needs to be ongoing throughout the life of the organization. If evaluators can provide a guide for setting up and conducting evaluations on a regular basis, they'll have performed a service to the organization far beyond the value of the evaluation itself.

Community or in-house planners or evaluators

A word about planning teams vs. evaluation teams: A planning team is meant to plan the evaluation process. If the organization has the option of hiring professional evaluators, a community or staff planning team may decide whether or not to do so, and may serve as the hiring committee for those evaluators. Its main job (which it may do alone or with the help of a hired evaluator) will probably be to examine the possibilities and come up with an actual evaluation plan, detailing what the organization needs to find out, where it can find that information, who will do the information gathering and analysis, and how the information will be used once it's gathered and analyzed.

A community evaluation team is a volunteer team set up to carry out the actual evaluation. In practice, this may be the same group as the planning team, whose function, once the planning is done, is the accomplishment of the task. It's helpful if the evaluation team includes someone with enough understanding of research that the evaluation they conduct will be valid, and its results can be used with some confidence.

With a community planning or evaluation team, you have the opportunity to choose all the members individually. There may be some people who, because of their position in the organization or in the community, have to be on the team. By the same token, some others may not be able to be included. In general, the team should be small enough so that it won't be too difficult for members to meet and to contact one another, and large enough so that there are enough people to do the work. And, in general, the team members should represent the interested stakeholders in the organization or initiative: staff and Board, target population, and community members at large.

The team could include any number of the following possibilities (but, for most organizations, would probably number only four or five):

  • A range of organizational representatives (e.g. the director, one or two line staff members [those who work directly with the target population], and a Board member)
  • Members of the target population, ideally individuals who understand and have participated in the organization or initiative
  • Community members with a stake in the organization or initiative (business people, health and human service agency representatives, school and local government officials, other citizens with an interest in the issue)
  • Youth, if appropriate
  • Members of relevant language and other minority groups
  • Someone with research skills and experience, perhaps from a local college or university.

Characteristics you might look for in those on your planning or evaluation team include:

Ability to understand the purpose of the evaluation

It's important to understand not only what the evaluation is supposed to measure, but also how its form and purpose are related.

Willingness to listen and learn

This covers a wide spectrum of people and behaviors. It includes both the local high school principal being able to listen to and take seriously the opinions of a high school dropout, and that dropout's belief that he's capable of learning what he needs to know to function as a contributing member of the team. It also includes everyone's being aware that they have things to learn about the process, one another, and evaluation in general, and, by the same token, that they have valuable knowledge, skills, and information to contribute.

Ability to work in a group

This can be a particularly sticky issue in a group that encompasses several cultures and classes. The middle-class "meeting skills" that most educated people have been unconsciously learning and practicing since junior high or before are often unknown to welfare recipients or migrant workers, simply because they've never been exposed to them. Thus, they may sit silent, confused by the flow of the meeting, unable to contribute, and feeling foolish; or they may speak or act in ways or at times others deem inappropriate. People without meeting skills need support and encouragement in what is often for them an intimidating situation. One answer may be to pair all team members, so that those who need support will have a mentor. Another possibility is to start the process with a training for everyone in group dynamics and meeting skills, so that no one is singled out and the ground rules are clear for all.


Professional evaluators

Unless you're working with an individual consultant, when you work with professionals, it's likely that a team will come already assembled. Thus, you're looking at the whole package: who the team members are, what they know how to do, and how they're likely to interact with your planning team, if you have one, and with your organization and the community. Most professional teams, as mentioned above, will either be private consultants or university researchers. Probably the most important features to look for are their professional experience and skills, and their expertise and style in working with community groups:

Knowledge of different kinds of evaluation techniques

Can they use both quantitative (numbers and statistics) and qualitative (facts, stories, anecdotes, analysis of situations and events, etc.) research, and do they know when each is appropriate? Have they used them in situations like yours, or do they have good ideas about how to do that?

If your team is from higher education, you should consider carefully what department(s) of a university to contact to find evaluators whose professional and scholarly specialties are a good fit with the work you do. Depending upon what your organization or initiative does, and what information you're looking for, some possibilities are:

  • Public Health
  • Medicine
  • Education
  • Sociology
  • Psychology
  • Environmental Sciences
  • Urban Studies

Past performance

Have they done evaluation before? What kinds of techniques have they used? Do they have good references? Can you see examples of past evaluations they've done?

Just as you probably wouldn't hire someone for a position without checking her references, you shouldn't hire an evaluator or team without checking with those who've employed them before. How well professionals do in an interview and on paper is usually enough to tell you what you need to know about them; but when it's not, the result can be disastrous, costing you not only money, but time, results, and relationships in the community.

You should ask to see some of the results of previous evaluations. Are they readable and understandable? How did the candidate explain them to the organization and the community? Knowing what you're getting can help immeasurably in choosing the right evaluators.

Experience working with community groups

Have they worked in partnership with community groups before? Do they understand the roles in this situation, and can they handle theirs well? Have they been able to work with the community to address the appropriate goals? Are they responsive to suggestion, to the needs of the community, and to the context of the evaluation? Do they collaborate, in other words, rather than telling the community group what it needs?

Ability to really listen

It's important that the evaluator not only is able to listen, but also respects others and doesn't assume the "expert" role except where that's appropriate (you are, after all, hiring them for their expertise). If the evaluation is community-based, the evaluators are going to have to listen first to the planning group about what the organization or initiative most needs to know, and then to the community in order to gather that information. Real listening (listening not only for the meaning of what's being said, but for the meaning behind the meaning) is an indispensable skill for all good evaluators.

In Summary

The individual or team you select to plan and/or conduct the evaluation of your organization or initiative will do much to determine the character and usefulness of the evaluation itself. If you work with professionals, you need to consider their credentials, past experience, and level of expertise. Whether you choose professional evaluators, community volunteers, or some combination of the two, you need to think carefully about how their needs and interests fit in with those of the organization and the community, about the range and quality of their communication skills, about their willingness to work as partners with the organization and the community, and about their fit with one another. Once you've put together a good team, you are well on your way to carrying out an accurate and valuable evaluation.

Phil Rabinowitz

Online Resources

"A Basic Guide to Program Evaluation" by Carter McNamara, PhD from the Management Assistance Program for Nonprofits, a free online management library.

Centre for Research and Education in Human Services is a non-profit resource organization with a thorough definition of community-based research.

Choosing an Evaluator provides information on using inside versus outside evaluators.

The Evaluation Center, from Western Michigan University, provides links, glossary, evaluation reports and descriptions. It provides an example of professional evaluators operating out of an academic setting.

A Framework for Program Evaluation in Public Health is a report that presents a framework for understanding program evaluation and facilitating integration of evaluation throughout the public health system.

Find an Evaluator is a tool provided by the American Evaluation Association to simplify the search for an evaluator.

Finding and Working with an Evaluator is a resource provided by MEERA that offers an extensive list of resources for choosing an evaluator.

The Program Manager's Guide to Evaluation, Chapter 4 is a chapter out of a handbook provided by the Administration for Children and Families with detailed answers to nine big questions regarding program evaluation, one of which addresses selecting evaluators.

W.K. Kellogg Foundation Evaluation Handbook Chapter 5 provides a framework for how evaluation teams can be successful.

Print Resources

Dewar, T. (1997). A Guide to Evaluating Asset-Based Community Development: Lessons, Challenges, and Opportunities. Chicago: ACTA Publications.

Maltrud, K., Polacsek, M., & Wallerstein, N. Participatory Evaluation Workbook for Healthy Community Initiatives. Albuquerque: New Mexico Department of Health, Public Health Division, Healthy Communities Unit.