Section 3. Obtaining and Using Feedback from Participants

Learn what participant feedback is, how to get it, and how to use it to maintain a high-quality program, initiative, or intervention that meets the needs of the community.


A new community health initiative was about to start, funded by anti-smoking money from the state Department of Health. The program was aimed at people in a number of community service programs – employment training and job development, adult basic education, teen pregnancy prevention, etc. Its focus was supposed to be on teaching people why smoking was unhealthy and addictive, and helping them to quit.

Before the program started, however, its organizers decided that they needed to know what health issues potential participants were concerned with.

The health educators interviewed a large number of people involved in the community programs they were targeting, and found that a majority of them both smoked and were interested in quitting. Far more important to these folks, however, were the related problems of neighborhood violence and substance use.  Smoking was way down the list for them.

After conferring with its funder, the community health program began by concentrating on violence and substance use as community health issues. It attracted a large number of participants, who were able, through a variety of strategies, to reduce violence in the area and to bring about the establishment of a substance use treatment center in the neighborhood. In the course of their work on these issues, participants also were introduced to information on tobacco addiction, and to methods for smoking prevention and cessation. As a result, a great majority of the smokers in the program joined smoking cessation groups and were able to quit. Although they had started out focusing on what seemed to be unrelated health issues, the health educators found themselves with one of the most successful smoking cessation programs in the state. They realized that it never would have happened if they hadn’t asked participants for feedback about what they needed, and what would be likely to draw them in.

Health and community service programs almost always have the best interests of the community at heart, but they don’t always know what those best interests are or the best way to approach them. One of the easiest ways to find out is to ask the people who know best – those at whom a program or intervention is aimed. Chapter 40 is about attaining and maintaining quality in your work. In this section, we’ll discuss what feedback from participants consists of, how you can get it, and how you can use it to create and maintain a high-quality program, initiative, or intervention that meets the real needs of the community.

What do we mean by feedback from participants?

In simplest terms, feedback from participants consists of reactions to or opinions about your effort from those who are affected by it. If you’ve read the first section of this chapter, you’ll remember that an important element of building quality into programs is customer-centeredness. That means doing what the customers actually need and want, rather than what you think they want. The customers, in this case, are the folks you’re hoping will benefit from your work. To be customer-centered, you have to involve them in planning and evaluating what you do at all stages, so that you can adjust it to their needs as much as possible.

The idea of customer-centeredness fits nicely with the emphasis in the Community Tool Box on participatory process. We believe that involving participants from the beginning in a program or intervention greatly increases the possibility of its success, largely for the same reasons that W. Edwards Deming, the father of Total Quality Management, believed it was important to consult the customer.  Knowing what people actually need makes it much easier to provide it.

Feedback is not simply criticism, constructive or otherwise. Rather, it is a reflection of the perceptions of the person or group providing it. You may think you’re presenting things in a particular way, or accomplishing a particular purpose by what you’re doing; but if those at whom it’s aimed see it differently, you may have little chance of success, or of achieving what you set out to do.

Obviously, feedback can be positive, negative, or neutral.

Its nature may also be influenced by some other factors:

  • It can be completely or at least somewhat objective, based on facts, observation, and past knowledge; or almost entirely subjective, based on emotional reactions, defensiveness, loyalty (to family, friends, or class), cultural gaps, or distrust of outsiders.
  • It might be bolstered by participants’ knowledge of the history of the community, of specific people and relationships, of culture, or of other factors and information.
  • It might be limited by participants’ limited knowledge of an issue or process, by their lack of information, or by their lack of understanding of the information available.
  • It might be solicited (asked for) or unsolicited. Feedback that’s volunteered often takes a negative – sometimes an extremely negative – form, and may be hard to pay attention to and incorporate into your work.

In addition to its range of possible characteristics, feedback can be delivered in a number of ways, and it’s important to pay attention to as many of them as possible.

  • Direct verbal feedback. The most useful feedback, in most cases, is direct and to the point. That’s the kind you get when you ask someone what they think, and they’re willing to give you an honest answer, either face to face or in writing.  Unfortunately, that kind of feedback isn’t always available.

In general, this section deals with obtaining intentional, direct feedback.  We include some other options immediately below, and it’s important to take them into account, but it’s a lot easier to interpret and use direct feedback if you can get it.

  • Indirect verbal feedback.  This might include remarks that aren’t meant specifically as feedback, parts of conversations overheard in passing, or comments that may not, at first hearing, seem much like useful feedback at all.

A conversation with a participant might include a remark about how great the program has been for her, or how a particular activity or process simply wasn’t helpful. Participants may say things to one another that they’d be reluctant to pass on to a staff member, and occasionally those might be overheard, or reported by the hearer.

Comments that initially do not seem useful may just need some additional interpretation. A common remark, for instance, especially among young participants, is “This is stupid!”  While it sounds like a generalized criticism based on a cynical view of the program or the world, or on a lack of understanding of the situation, it often has a very specific meaning: “This makes me feel stupid.”  Learning that what you’re doing has that effect may be a very important piece of feedback, and can lead to useful changes.  Instead of ignoring the comment, or making light of it, you might ask “Why is it stupid?” or “What would make it less stupid?”   The answer might be tremendously useful to both of you.

  • Direct non-verbal feedback.  When participants fail to keep appointments, never follow through on tasks, or simply disappear from the program, they’re often giving you feedback.  Sometimes – with children or youth, very rarely with adults – this non-verbal feedback may even encompass acting out and outright violence directed toward staff members or other participants.  Such behavior is extreme, and, like other non-verbal feedback, it doesn’t always have to do with the nature of the program or with relationships with the specific people it’s aimed at.

It’s important to draw the distinction between feedback – actions or comments that indicate a real reaction to a real situation or aspect of a program – and symptoms of a larger individual issue. A teen may be hostile or withdrawn because what you’re doing seems insulting or demeaning to him, or because his history of abuse and neglect is so powerful that it might be years – or never – before anything can reach him.  A number of people missing appointments or leaving the program may be a reaction to the way they’re being treated or to the ineffectiveness of the methods you’re using; but it may also be a reflection of other pressures, illness, or domestic situations.

  • Indirect non-verbal feedback.  People send all kinds of messages with their facial expressions, tone of voice, body language (posture, hand or body movements, eye contact, etc.), and their actions.  Participants are often extremely nervous or scared – or defiant – when they start, but that usually fades as they become more comfortable with the situation and the other people – staff members and other participants – involved in it.  If it doesn’t change – if the fluttering hands, shaky voice, and other signs of nervousness don’t ease for most participants, or if they remain hostile or withdrawn – they’re telling you something about their response to what you’re doing.  If they never follow through on tasks or assignments, continually fail to keep appointments, or simply disappear from the program, they’re making a feedback statement that you’d better listen to.

It’s important to remember that feedback is a matter of perception.  Thus, even though it comes from participants, that doesn’t mean it’s accurate.  It must be weighed as any other feedback would.  What is true, however, is that if many participants perceive that they’re not being well served, that perception needs to be changed whether it’s correct or not.  Even if feedback is offered in a negative way, you should pay attention.

Why do you need feedback from participants?

  • It’s part of being customer-centered. Paying attention to what participants need and want is a crucial element in creating a quality program.
  • It gives you tools to improve your program. Understanding what’s needed and what isn’t, and what’s working and what isn’t, helps you to increase the effectiveness and success of your work.
  • It allows you to respond to changes in the community, the population, or the situation. There may be social or cultural changes within the population you’re working with (an increase in immigration, for instance), changes in the community (a downturn in a major industry), or other factors (an increase or decrease in neighborhood violence) that affect your program.  Being able to spot and react to these through participant feedback will keep your program current and effective.
  • It can give you information about the history of the community, the history of your issue in the community, or the history of your population. Hearing from participants that what you’re doing has bad connotations in the community, or that it’s been tried before, and failed as a result of conditions that haven’t changed, can help you create a program without the baggage of past mistakes.
  • It can inform you about personalities and relationships in the community and/or the population you’re working with. This kind of knowledge can be very important when you’re trying to get different sectors of the community or different neighborhoods working together, or when you’re trying to put together a planning or other group.
  • It can tell you when your methods or approach aren’t working. If participants aren’t feeling like there’s any progress, they may be right, unless you have evidence to the contrary.
  • It can tell you when you’re treating people in ways that make them feel uncomfortable, angry, or otherwise disrespected. Honest participant feedback will let you know if your intake procedure is too impersonal and scares potential participants away, or if your attitude toward the teens in your program is that they don’t know anything and are nothing but trouble.
  • It can help you deal with what’s important to participants, even though that may not be your major goal. The story of the smoking cessation program that opens this section is an example of how paying attention to what participants are concerned about can pay big dividends in the long run.
  • It can assure that your objectives make as much sense for participants as they do for you. Regardless of your long-term goals, you have to start where participants are.  If they see the program as irrelevant to them, all you’ll see are their backs.

Who are the participants that can offer feedback?

When we talk about conscious, intentional, reasonably honest feedback, the typical providers are adults and teens with no intellectual, psychological, or social barriers to providing it.  These are the folks who are most likely to be able and willing to take part in interviews, fill out questionnaires or surveys, and be involved in the assessment, planning, implementation, or evaluation of programs in which they participate. That doesn’t mean, however, that these are the only people you can ask for feedback, or from whom you can get it. To a certain extent, it depends on what kind of feedback you’re looking for, and how hard you’re willing to look for it.

Other possible providers of feedback include:

  • Speakers of languages other than the majority language. Obviously, if you can speak their language, or if you have access to an interpreter, these participants can provide the same kind of feedback any other adults and adolescents can.  You have to understand their culture, however, to understand how direct they’ll be with you.  In some cultures, it’s unthinkable to criticize anyone in authority; in some, it’s rude to criticize directly at all.  Others may be afraid to say anything negative, for fear of reprisals; if they’re undocumented, they may be afraid to say anything.
  • People who have left the program without completing goals. Their comments may be the most important.  Why did they leave?  What could have kept them in the program?  What would attract them back?  The effort to track down and follow up with these short-time participants can be rewarding.

In these cases, it’s often not what the program did, but what it didn’t do that proves important.  Participants may have needed child care or transportation, which weren’t available.  They may have needed an extra bit of attention that they felt they weren’t getting.  Understanding these kinds of needs won’t solve everyone’s problem or keep every participant in the program, but it can certainly help many.

  • Children. Children may be limited in their comprehension of how a particular program or service affects them, but they certainly know how they feel about it, and how they respond to it.  An advantage of asking children for feedback is that they are usually willing to give it.  If  you ask the right questions, you can get a lot of useful material from them.  Children are also more likely to give other kinds of direct feedback – acting out, for example.  That’s a lot harder to deal with, but it can convey important information, nonetheless.
  • At-risk or troubled adolescents. Keep in mind that there are many factors at work here.  All teens – but especially those at risk because of violence, school failure, substance use, or other problems – may have to deal with a self-image and a real or imagined reputation among their peers that dictates never saying anything positive about the adult world. You have to win their trust – often a long process – before they’ll open up and tell you what they’re really thinking.
  • People with mental illness. They can often provide good information on the services they’re receiving, or on other programs they’re engaged in, and have a great deal of relevant information to offer if you’re willing to listen and sort it out.
  • Developmentally delayed teens and adults. The level of feedback you’re likely to get from these folks depends to a large extent on their level of functioning. They can certainly tell or show you whether they’re happy or unhappy to be there, but they may or may not be able to explain why.

When might you obtain and use feedback from participants?

In thinking about when feedback is necessary, it’s helpful to refer to the Deming Cycle of continuous improvement – Plan, Do, Check, Act – rounded out here by an Analyze phase that feeds what you’ve learned back into the next round of planning. Feedback from participants can and should be used at every phase of the cycle.

  • Before a program or initiative starts (Plan): Participant feedback is crucial to ensuring accurate assessment of the assets and needs of the community, and to understanding what the appropriate outcomes of an effort should be.
  • During the development of the program or initiative itself (Do): Having members of the population you’re aiming at provide feedback for – or, better yet, engage in – the planning and structuring process will greatly increase the chances that your effort will attract participants and meet their needs.
  • When your program is ready to begin (Check): Asking potential participants for feedback on your plan can identify bad choices, omissions, and strong points, and help you to make your effort as effective as possible before you begin. Be sure to ask if it addresses the needs expressed by potential participants, and if it is likely to achieve the outcomes that meet those needs.
  • During the running of the program (Act): Continuing to ask for and obtain feedback from participants as the program runs will allow you to make adjustments on the fly, and to fine-tune the program as it runs.
  • As part of a formal or informal evaluation (Analyze): You should conduct some sort of evaluation on a regular basis (annually is typical). You really can’t get an accurate picture of the strengths and weaknesses and the effectiveness of your efforts without participant feedback. The evaluation allows you to incorporate that feedback, and other information into an improved effort for the following period, thus starting the cycle over again.

How do you obtain feedback from participants?

Obtaining honest feedback from participants should be simple – just ask them, right? Sometimes that’s all you have to do (although you do have to do it – you’re not likely to get much feedback without asking), and you’ll get all the feedback you want, and more. But in many cases, it’s not true at all.  We’ve mentioned the adolescents who can’t, for reasons of self-image and reputation, bring themselves to say anything good – or sometimes anything at all – about adult-connected activities.  And there are cultural differences – for example, the folks who won’t ever say anything negative about or to anyone they see as an authority figure or a high-status person (a teacher, doctor, or administrator, for example), or to someone above a certain age.

Other cultural issues can arise around gender roles (men who are unused to dealing with independent women, for instance, or women who are fearful of offending men), mistrust of other cultures, or simply a lack of understanding about what kind of information would be useful. Many people, culture aside, are eager to please, and will try to say what they think you want to hear. Language and literacy difficulties can present their own problems, as can distrust of or hostility toward any kind of authority, whether actual or perceived (an especially difficult issue in programs that aren’t voluntary, such as court-mandated substance use treatment). Some participants may fear that they’ll be thrown out of the program if they criticize it; others may simply not want to offend someone they like as a person.

It’s our assumption that obtaining honest feedback is easier if participants have felt, from the beginning, that they have a stake in the program.  If they’ve been invited to participate in planning and implementing their own service or education or treatment, they’ll be far more comfortable providing honest and useful feedback on the process, content, and results of it.  If they see themselves, and are treated, as partners in the enterprise, working with staff toward a common goal, providing feedback will be natural.  If they see themselves, and are treated, as clients who have no important skills or knowledge, it will be like pulling teeth.

We present below several ways to ask for and get good feedback from participants, but we believe that the easiest way to assure it is to invite participants to be full partners from the outset, where possible.

Methods of obtaining feedback

There are a number of methods you might use to obtain feedback.  Which is best will depend on your situation, and you may in fact use more than one, in order to reach different groups of participants.

Surveys. Using a survey to collect feedback is a common practice.  Surveys are usually administered in one of four ways.

  • Written. This often takes the form of a short, multiple-choice questionnaire that people can fill out and send or hand back.  Some surveys are much longer, however, and may ask for comments as well as yes-no answers or ratings on a scale.
  • Face to face.  Here, an interviewer asks specific questions – also usually short and often multiple choice – and records the answers.  If the survey is much longer, it essentially becomes an interview, a method we’ll discuss in a little while.
  • Phone.  This is similar to a face-to-face survey, but, because the two people involved can’t see each other, it is somewhat more impersonal.
  • Online. E-mail and Internet surveys are now quite common, and may be something you can use. Keep in mind, however, that if you’re working with a low-income population, this may not be the best method, simply because many of the people you want to reach may not have reliable computer or Internet access.

Like all the methods described here, surveys have their advantages and disadvantages. Among the advantages are that short surveys are relatively easy to complete, so that people are more likely to actually fill them out.  They are limited to specific questions, and generally don’t ask for too much more than checking a box or circling a number for each one. Written surveys can also easily be administered so that the people filling them out remain anonymous, and thus take no risks.

Getting people to fill out and return surveys can be difficult.  Offering incentives – entering anyone who returns the survey in a prize drawing, for instance – can help, as can presenting the survey so that it has to be filled out and returned on the spot.

Among the disadvantages is the limited amount of information surveys often provide. Long surveys can be very informative, but they’re harder to complete, and fewer of them are returned. Long phone surveys can help with this issue, but they take time – which often translates to money, if you’re paying the phone interviewer or paying phone charges by the minute. People may also be less willing to cooperate, both because of the time involved and because, promises of anonymity aside, the interviewer knows who they are.

Furthermore, participants may have difficulty reading, as is the case for many high-school dropouts and for people in such programs as adult basic education and English as a Second or Other Language (ESOL). In this case, either the surveys have to be administered face to face or by phone, or someone has to read the survey to a group or individual to make sure that they understand each question.
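Once short multiple-choice or scaled surveys come back, the arithmetic for summarizing them is simple. The sketch below (in Python, with made-up answers to a single question rated on a 1-to-5 scale) shows the two numbers usually most worth reporting – the response rate and the average rating:

```python
# Hypothetical responses to one survey question rated 1–5
# (5 = very satisfied). None represents a skipped question.
responses = [5, 4, 4, 3, 5, 2, None, 4, 5, None, 3]

answered = [r for r in responses if r is not None]
response_rate = len(answered) / len(responses)
average = sum(answered) / len(answered)

print(f"response rate: {response_rate:.0%}")   # share of questions answered
print(f"average rating: {average:.1f} / 5")    # mean of the answered ratings
```

A real tally would of course cover every question and break results out by subgroup where numbers allow, but the core calculation is no more complicated than this.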

Interviews. These are face-to-face conversations with a specific purpose – in this case to get feedback about your work.  Interviews can be conducted one-on-one or in a group;  we’ll look at individual interviews here, and discuss group interviews below.

An interview is different from a face-to-face survey in a number of ways. First, although interviews are sometimes standardized (everyone is asked the same questions in the same words), it is also common for the interviewer to have some leeway.  While there may be a specific list of questions that the interviewer wants the answers to, they might not be phrased the same way for every person, the way survey questions are.  The interviewer might be free to ask other questions as well, or to comment on what the interviewee says. Furthermore, interviews usually aren’t designed to be as brief and easy as possible – they may go on for an hour or more.

Most important, perhaps, is that the answers the interviewer seeks are not yes/no or multiple choice.  Just the opposite, in fact: most interviewers try to ask open-ended questions, questions that can’t be answered with a simple yes or no. (Rather than asking “Did you like X?”, for example, an interviewer might ask “What did you like or dislike most about X?  Why?”)  A good interviewer will also ask follow-up questions when she hears what seems like an important or interesting comment. (“It sounds like the other people that you met in the program were really important to you.  What do you think it was about that group that had such an impact?”)

Individual interviews are great vehicles for collecting large amounts of detailed information. They offer the interviewee privacy and the interviewer’s undivided attention, both of which act as invitations to be open and thoughtful, and they give the interviewer the opportunity to really concentrate on one person’s thoughts.

On the minus side, individual interviews are time-consuming, and as a result may limit the number of people who can be consulted. Furthermore, in an individual interview, there’s generally no check on the accuracy of an interviewee’s stories or allegations – no confirmation or denial from another participant.  Finally, for an interview to be really successful, the interviewer and interviewee have to establish a personal connection. When this doesn’t happen, the interview may be less useful.

Small-group interviews and focus groups. A small-group interview, as the name implies, is similar to an individual interview, except that it includes several people instead of one.  Usually limited to perhaps six to eight participants, small-group interviews follow the same informal “rules” as those described above – set questions, with the freedom to ask others, length determined by the interviewees’ interest (within reason – some interviews would last for days if the interviewer allowed them to), and open-ended and follow-up questions. The big difference is that people in a small group play off one another, asking their own questions, volunteering follow-up answers before the questions are asked, and elaborating on others’ points (as well as providing that check on others’ accuracy mentioned above).

Focus groups are more purposeful small-group interviews.  Here, the questions are determined beforehand, and the facilitator/interviewer is trained to coax out as much information as possible.  In general, focus group participants aren’t told exactly what information the facilitator is looking for (although that’s not always the case in a health and community service context), and participants often don’t know one another. The reason behind both these restrictions is to try to get as unbiased a reaction as possible from everyone in the group.  If they don’t know exactly what the interviewer wants to know, they can’t alter their comments to please him (or annoy him); if they don’t know anyone else in the group, they’re less likely to play their accustomed roles, or to edit their comments so that they won’t be repeated to other friends and acquaintances.

Town-meeting or whole program format. In this situation, a large gathering of people – perhaps even all participants in a program – meets together to provide feedback.  How good a method this is depends upon the circumstances. If it’s facilitated well, a town-meeting format, where individuals can speak their minds and where give-and-take is welcomed, can be a generator of useful feedback and new ideas. It can be used as a brainstorming session, it can generate volunteers to take on the tasks that grow out of the feedback provided, and can lead to an explosion of involvement and creativity from participants and staff. If it’s not well facilitated, or if there are serious issues that haven’t been dealt with, it can degenerate into name-calling and anger, or produce little more than silence. It’s hard to decide which is worse.

All this is to say that a town-meeting format can be either a dream come true or a nightmare. If you want to use it, make sure that (a) you have a good facilitator who understands the program and its situation, and who knows how to steer discussion in productive directions; and (b) there isn’t a major problem that has gone unattended, or an individual or group with an axe to grind that wants nothing more than to use this meeting as a grindstone.  In those instances, you may get lots of feedback, but it may produce more bad feeling than you know what to do with.

Journals. A way of obtaining ongoing feedback is to ask participants to keep journals. These may be devoted solely to feedback, or may have a larger programmatic purpose, depending on the nature of the program. Since people write in them on a regular basis – daily, weekly, or some other period – journals can provide a running commentary on participants’ reactions to the program, to individual parts of it, to events, etc.

If the journals are interactive – participants’ journal entries are read and responded to in the journal by the educator or other relevant staff member (a counselor, a mentor, a health professional) – participants can be asked to elaborate or expand on particular comments, and the feedback can be extremely focused and valuable. This method only works, of course, if participants are able and willing to keep a journal, and if they trust the staff member enough to reveal their real thoughts and feelings about the program.  It’s time-consuming, but if the time exists – or if the journals serve another purpose as well – it can be extremely helpful.

There are, of course, other ways of obtaining feedback.  The editor of this section, for instance, recalls a mental health center that asked people to evaluate the quality of the therapy they received.  At the end of each session, they were encouraged to grab a poker chip from a dispenser at the front desk and deposit it in a box – blue for good, white for okay, red for poor.  It was simple, anonymous, and informative.
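To show how little arithmetic a system like that chip box requires, here is a minimal sketch (the chip counts are invented for illustration) that turns a week’s worth of deposits into a quick satisfaction summary:

```python
# Hypothetical weekly count from a feedback box like the one described above:
# blue = good, white = okay, red = poor. All counts are made up.
chips = {"blue": 112, "white": 41, "red": 17}

total = sum(chips.values())
for color, label in [("blue", "good"), ("white", "okay"), ("red", "poor")]:
    share = chips[color] / total            # fraction of all chips deposited
    print(f"{label:>5}: {chips[color]:3d} chips ({share:.0%})")
```

Tracked over time, even a crude count like this can show whether satisfaction is rising or falling, and flag a week that deserves a closer look.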

Which method you decide to use will depend on a number of factors.  If time is an issue, for instance, you might not want to use individual interviews or journals. If you want detailed feedback, including opinions and the opportunity to follow up on people’s ideas, on the other hand, some sort of interview is probably the best way to get it.  A town-meeting format may be particularly appropriate in a situation where large gatherings are a regular part of the organization’s operation. Surveys might work best where participants are scattered geographically, or where you’re polling former as well as current participants.  Journals are particularly appropriate where they’re part of the program (a parenting group or adult basic education class, for instance).

Ways to improve your chances of getting good feedback

Regardless of what methods you use to gather feedback from participants, there are some things you can do to increase the chances that the information and opinions you get will be honest, and meant in the spirit in which they’re requested.  Not all of these are possible for all methods of soliciting feedback, and not all of them are necessary or appropriate for a given group of participants.  It’s up to you to determine what might work best for your effort.

Let people know from the beginning that you want their feedback, and will ask for it

This alerts them that they should be thinking about what they approve and don’t approve of, enlists them as partners in improving the enterprise, and gives them “permission” to look at your effort critically.  You might also explain that you want ongoing feedback, and that their opinions and ideas will be welcomed whether they’re directly requested or not.

Specify what kind of feedback you want

The more definite you are about the kind of information you’re seeking, the more likely you are to get it.

Some of the questions you might ask or things you might want to know about:

Assessment / planning:

  • What’s the need for (your particular service) or importance of (your particular issue) in the community?  Is there a group that’s most in need of/affected by it?
  • Has this service/issue been addressed in the community before?  What was the result?
  • Who are the most important people in the community/a specific population to talk to about or try to get involved in this effort?
  • Are there people on that list who simply can’t be in the same room together?  Who work really well together?
  • What do you think might draw people like you into this effort/program?  Are there things we absolutely shouldn’t try?
  • Please let us know if anything that’s suggested in the planning process is culturally offensive, or simply doesn’t fit into the way you do things.

Ongoing feedback and program evaluation:

  • What do you like about the program?  What works well for you?
  • What would you change about the program?  How?  What doesn’t work well for you?
  • What would ensure that you’d stay in the program?  (What did keep you in the program?)
  • What would absolutely make you leave the program?  (What did make you leave?)
  • Was/is intake welcoming?  Informative – did you get all your questions answered?  Respectful?  Relatively easy (determining eligibility, for instance)?  Did the intake people try to make it as easy and pleasant as possible?
  • Were you placed at the right level?  Was placement flexible?
  • Are you getting the services you were promised?  Are they the services you need?  Do you need services that this program provides that you’re not getting?
  • Do you need support services that aren’t available (child care, transportation, accessibility, etc.)?
  • Are staff supportive, helpful, and respectful?  Do they seem to know what they’re doing?
  • Are you happy with the way you’re treated?  Are you being treated as an equal, with respect?

This question is really about the philosophy of your organization, and its view of participants.  Staff people may be polite, but if their attitude toward participants is that they’re incompetent to run their own lives, that will be communicated, courtesy or no courtesy.  By and large, people respond positively to being treated as equals, with respect, and resent being treated as inferiors.  Part of that respect may be honesty about hard truths (“You really screwed up here.”), but honesty doesn’t preclude a basic assumption of equality. Taking participant feedback on this subject seriously can be hard for an organization.  You may feel you’re doing your best, and getting very little recognition from participants for your efforts.  But your best efforts at providing service can be derailed if they’re not accompanied by an attitude of true respect toward those with whom you’re working.  If you’re hearing that participants aren’t feeling respected, it may be time to honestly reassess the philosophy and guiding principles of your organization.

  • Are program expectations clear and consistent?  Do staff people seem to care?  Do you get help when you need it?
  • How do you get along with other participants?  Do you enjoy the social situation?  If there are problems, have you spoken to anyone (staff or other participants) about them, and are they getting addressed?
  • Has there been any problem about payment (or insurance or eligibility)?  Is it easy to take care of?  Are you comfortable talking to staff about any problems with it?
  • Does the schedule work for you?  What would you need to make it better (evening or weekend hours, for instance)?
  • Is the location one that means a lot of travel for you?  Is there convenient public transportation?  Does the neighborhood/area feel safe?
  • Is the program site comfortable?  Does it serve the purpose of the program well?  Is it big enough, and are there enough separate rooms?  Is it accessible for you and anyone else who needs to use it?  Are the restroom facilities adequate?

Make clear exactly what you’re going to use the feedback for

Whether it’s assessment, planning, ongoing adjustment, or evaluation, let people know what you’re going to do with what they give you.  Some participants might feel that their feedback will be used to judge them, or that someone will be fired if they say anything negative or suggest changes.  If you can explain clearly that the purpose of the feedback is to improve the program and make sure it’s doing what it’s supposed to, and what participants need – and that no one will be fired, but rather that everyone will be helped to do their job better – that may remove some of their reluctance to respond.

Openness about what you’re asking for also fits into the theme of participatory process, enlisting participants as planners or evaluators, and helping them to see their opinions as valuable, and themselves as partners in your effort.

Guarantee anonymity

Many of the reservations or fears participants may have about being negative or straightforward can be overcome if they know that no one will know that they are the authors of their comments.  Written surveys, particularly those that require only circling or checking off answers, are obviously the easiest to hold to this standard.  Those filling them out can drop them off in a neutral spot – a box in the office, for instance – with nothing to identify them as connected to a particular form.  Those participating in face-to-face or phone surveys or individual interviews have to depend upon the promise of the interviewer that their identity won’t be attached to their comments or revealed to anyone.

Where it is obvious, for cultural or other reasons, that fear of being identified is an issue, you might consider using a method that can guarantee anonymity.  Small or large groups pose much more of a problem in this area, but you can sometimes skirt the issue by having a group facilitated by someone unconnected with the organization – a volunteer or outside facilitator, for example.  (This might work if people’s reluctance to be identified only extends to program staff.  If it includes other participants, the group situation would still be a problem.)

Let people know that you’ll share the results of feedback with them, and be sure to do it

Participants should expect that you’ll tell them the overall results of the feedback you got.  You should also explain how you’re using it, and get their feedback on that.  That further cements the partnership, as well as giving participants the opportunity to react if you’re changing a part of the program that most people like, or ignoring what most see as a problem.

How do you use feedback from participants?

So now that you’ve gotten all this feedback, what are you going to do with it?  The last thing you want to do is to solicit feedback, get it, and then file it away and never look at it again. The whole point is to use it to improve your work.  Good feedback presents you with a broad range of opportunities.

Understand the needs of the community

Feedback from potential participants and the community before you start planning can show you what the real needs and concerns of the community are, what the appropriate levels of service are, and what issues you should concentrate on.

Understand changes in the community, or in the situation of the participant population

Increased immigration, for instance, can have a significant effect not only on the community as a whole, but on the resident population – Cambodians, Hispanics – to which the immigrants belong, and can change the needs or the concerns of that population. A factory closing, the start of a major construction project in a particular neighborhood, gentrification – all of these can cause major changes for participants, and require that you alter the approach or the content of your effort.

You might or might not be aware of these community changes themselves, and participants can obviously tell you about them. The real value of participant feedback here, however, is in what those changes mean for the folks you work with. Their feedback can tell you how to respond to changes in order to keep your program on track, and to make sure it’s as useful as possible to the people it serves.

Improve your program itself

Participant feedback can be tremendously useful, both ongoing and as part of an evaluation, in helping you make your program more effective.

In terms of the actual work of the program, it can alert you to the need to refine your techniques and methods, beef up what’s working, and restructure what’s not. It can let you know when program content isn’t relevant, and has to be rethought. Participant feedback can also help you realize when you need to change schedules, try to provide support services, and otherwise make your program more user-friendly.

The other area where participant feedback can have a real impact on program effectiveness is that of program philosophy.  As we’ve stressed, what participants say about how they are treated and how they feel about coming can be extremely important.

There are really three areas to look at here, all of which have already been referred to in this section:

  • Attitude toward participants. If participants aren’t treated with respect, they’re less likely to remain in a program, and less likely to have success in it if they do.
  • Approach toward provision of services. When participants are accepted as partners and feel ownership of a program, they’re not only more apt to get a lot out of it, but to contribute to its overall success as well.
  • Identifying and addressing barriers to participation or success. Part of being respectful and treating participants as partners is responding creatively to participants’ needs for support services, particularly child care.  This attitude also increases the chances that program staff will realize when, for example, a participant in an employment training program has basic skills needs, or simply can’t see well enough with her present glasses to perform necessary tasks.  Barriers like these can be addressed only if they are perceived.

Develop an organizational culture that embraces and institutionalizes ongoing feedback from participants

So many of the “how-to” portions of Tool Box sections end with the advice “And then keep at it forever.” This is the equivalent of that advice for obtaining and using participant feedback.

Feedback from participants isn’t a one-time thing.  It has to be encouraged continually: participants and staff come and go, conditions change, and institutional memory – the knowledge of what has happened in an organization over time – can be short. An organization, a program, an intervention must continually reinvent itself in order to continue to effectively meet the real needs of those it serves. Without ongoing participant feedback, that reinvention is unlikely to happen, and even less likely to have any real impact if it does.

In Summary

Feedback from those who have the closest perspective on what you’re doing – those you serve, or whom your work is meant to benefit – can be tremendously helpful in making your program, initiative, or intervention as effective as possible.  It’s not always easy to get honest feedback from these folks, because they may be reluctant to criticize, or may be afraid to respond at all for various reasons.  But their feedback is important enough that it’s worth the effort to get around their reluctance.  Perhaps the best way to ensure honest feedback is to involve participants in what you’re doing from the beginning.  If they’re part of the planning, implementation, and evaluation of your work, they’ll feel ownership, and trust you and the work enough to tell you what they really think.

Participant feedback can help you better understand what you need to do and how to do it. It can improve the work you’re already doing, and make sure you adjust to changes in the community or the population that call for changes in your approach or your goals. Overall, the purpose of obtaining feedback is to assure the quality of your work.

You might obtain participant feedback through the use of surveys (written, face to face, or by phone), individual or group interviews, focus groups, town-meeting or whole-program format meetings, or journals. Whatever method you choose, you’re more likely to get honest feedback if you treat participants as partners working with you toward a common goal, rather than as clients who couldn’t function without you.

Ultimately, you should strive toward creating an organizational culture and attitude that encourages participant feedback, and views it as a necessary tool to keep increasing the quality of the work you do.

Phil Rabinowitz

Online Resources

The Consumer Complaints Toolkit guides advocates to understand the consumer complaints process, help consumers file complaints with relevant state agencies, and engage in state-based advocacy to improve the complaints process.