
Section 3. Refining the Program or Intervention Based on Evaluation Research

Learn how to evaluate the process, impact, and outcomes of an intervention and make needed adjustments.

A Community Health Center conducted an evaluation of its program to promote physical activity among those with higher risk for heart disease. The evaluation showed mixed results. A small number of participants (15%) had very good outcomes. They had marked increases in physical activity and improved nutrition. Their fitness improved and they lost weight. As predicted, their blood pressure dropped, their pulse rates went down, and they reported feeling more energized. They reported high levels of satisfaction with the program and results.

A large majority of the original group (70%) exercised, but not as regularly as hoped. The health benefits for this group varied, with several reducing blood pressure at least slightly, and the rest maintaining the levels they had entered with.

A final group (15%) consisted of dropouts – several participants left the program, most within a short time – and other people who simply never managed to exercise on any schedule at all. There was virtually no change in their weight, blood pressure, or sense of well-being...except for a small number who had relatively positive results.

What could the Community Health Center do with these results? It knew that, while the intervention apparently worked if people stuck with it, the program was only partially successful. How could it use the evaluation to improve the program, and so improve the health of those it served?

This chapter so far has discussed the elements of conducting a research-based evaluation. But evaluation itself is only a means to an end: a tool to help you see what is happening so you can improve the effectiveness of your work. In this section, we’ll examine how you can use your research – the results of your evaluation – to do just that.

What do we mean by refining the intervention?

Data allow you and other group members to critically reflect on your work and look for opportunities to improve.

Some key reflection questions that you and your group might consider:

  • What are we seeing? (e.g., amount and kind of activities implemented; results shown – increases, decreases, trends)
  • What does it mean? (e.g., was the introduction of the intervention associated with changes)
  • What are the implications for improvement? (e.g., do the results suggest that the intervention should be sustained, altered, discontinued; what changes are suggested)

The reflection questions you ask will depend on the nature of your intervention, but the above set of questions is a good starting point. Consider holding a meeting or brief retreat where the evaluation results can be presented through graphs and charts, and key questions can be discussed. Such a meeting might benefit from an experienced facilitator to keep the process moving toward consensus for specific recommendations on how to improve.
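
To make the first question concrete, here is a minimal sketch, in Python, of how evaluation data might be tallied ahead of such a meeting. The participant records, field names, and values below are hypothetical, invented only to illustrate the "What are we seeing?" step:

  from statistics import mean

  # Hypothetical evaluation records: outcome group, change in systolic
  # blood pressure (mmHg), and number of exercise sessions attended.
  participants = [
      {"group": "strong results", "bp_change": -12, "sessions": 40},
      {"group": "strong results", "bp_change": -9, "sessions": 38},
      {"group": "mixed results", "bp_change": -3, "sessions": 22},
      {"group": "mixed results", "bp_change": 0, "sessions": 18},
      {"group": "dropped out", "bp_change": 1, "sessions": 4},
  ]

  # "What are we seeing?": counts and averages by outcome group.
  for group in sorted({p["group"] for p in participants}):
      rows = [p for p in participants if p["group"] == group]
      share = len(rows) / len(participants)
      print(f"{group}: {share:.0%} of participants, "
            f"mean BP change {mean(r['bp_change'] for r in rows):+.1f} mmHg, "
            f"mean sessions attended {mean(r['sessions'] for r in rows):.0f}")

A summary like this, turned into graphs and charts, gives the group a shared picture to reflect on before turning to the "What does it mean?" and "What are the implications?" questions.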

Refining the intervention is the process of making your work more effective by using data collected from your evaluation.

Depending on what you’ve learned from these data, you might want to:

  • Increase or strengthen your intervention in certain areas or with particular groups
  • Change or eliminate elements of the intervention that didn’t work well
  • Adjust your intervention to changing conditions or needs in the community

It will be important for you to meet with other members of your group to review the data, identify key areas for improvement, and brainstorm and come to consensus on how to address issues that have been raised. Careful attention to your evaluation results can help inform which courses of action you should take to improve your efforts.

To continue with our example from above, the Community Health Center staff and selected participants met to review the results. They felt that the evaluation had shown that if people exercised regularly, they could lower their blood pressure, lose weight, and improve their overall health.

A key implication of the findings was how to help people establish and stay with an exercise routine.

Further dialogue about results of the evaluation left the Community Health Center with additional questions:

  • How can we increase the number of participants who actually adopt and continue regular exercise and other healthy behaviors?
  • Why did some people who didn’t exercise regularly reduce their blood pressure, and should we add another component (e.g., healthy nutrition) to our program?
  • What other factors, if any, besides exercise seem to help participants exercise regularly and lower their blood pressure (e.g., wellness group, medication)?

By focusing on the key reflection questions – What are we seeing? What does it mean? What are the implications for improvement? – the center should be able to refine its program to get even better results for more participants.

Why should you use your evaluation research to refine the intervention?

Refining the intervention is the primary purpose of an evaluation. If you find out that your intervention wasn’t effective, you have three choices: you can quit; you can blindly try another approach; or you can use your evaluation research to guide you towards a more effective intervention.

Using evaluation results is vital: it points you in the direction that your research tells you is apt to be most helpful. Using research to help you choose your course of action also establishes you as a credible and practical organization, one that’s concerned with what works. That kind of reputation is likely to increase your opportunities for funding and other resources, and to help you gain and sustain community support. Most importantly, it helps your group succeed in addressing the important problems or goals of your community.

When should you refine the intervention?

The short answer to this question is “constantly.” Monitoring and evaluation should go on throughout the life of the program or project, and should be used to adapt and adjust what you do on an ongoing basis. In practical terms, it’s wise to reevaluate your work regularly – once a year is typical – and make any major changes at that time. Of course, you can and should make minor adjustments throughout the year, based on your monitoring and on feedback from participants, staff, and others who implement or experience the intervention.

There are, in addition, some specific times when adjusting your work can be especially helpful:

  • When what you’re doing isn’t working. If it’s obvious that your work isn’t having the desired effect, it’s time to consider what you need to change.

Make sure that you allow enough time for a program or intervention to have an effect before you make a judgment that it isn’t working. Nothing happens overnight, and the more difficult the issue you’re addressing, the longer it’s likely to take to influence intended outcomes. You have to walk a line between cutting a program off before it’s had time to work and letting it go on after it’s shown itself to be ineffective.

  • When participants are dropping out at a high rate. What are you doing – or what external factors are at work – that might be causing participants to leave your program? How can you change the intervention to ensure that people experience it long enough to benefit?
  • Between sessions of a time-limited or sequential program. Some programs – like the exercise program used as an example – are only designed to run for a limited period, but may run again and again, with new participants each time. If such a program is continually evaluated, you’ll get – and should use – information each time that will help you make the next round of the program better.
  • When funders or participants ask you to adjust some aspect(s) of your program. Your evaluation research should be helpful in determining how to respond to the funder’s or participants’ requests.
  • When funding or other resources are reduced. You may be faced with eliminating parts of your program, cutting numbers of participants, or other unpleasant choices. Your evaluation research can help you find the best way to make cuts without losing your effectiveness, by keeping intact the elements of the program that make the most difference.
  • When the issue or goal changes. Sometimes there is a shift in priority issues for the community – following a rise in unemployment or violence, for example. Your research can alert you to such shifts and suggest ways of dealing with the change in conditions.

Who should be involved in refining the intervention?

The best plan here is to involve a number of stakeholders, depending to some extent on who has been involved in the planning and evaluation of the effort.

Some people who definitely should take part:

  • Participants. These are the folks who experience both the intervention itself and its effects, and they are likely to have ideas about what would make it better, easier to participate in, or more relevant for them. Participants should be your partners in refining programs and interventions, since they have an inside perspective on whether those efforts are working.
  • Staff members, paid or volunteer. Like participants, staff members have a unique perspective on the intervention. Not only do they see the way it works every day, but they’ll also have to carry out any changes. If they can claim ownership of those changes by participating in the planning process for them, they’re far more likely to understand them properly and to be eager to make them work.
  • People who are directly or indirectly involved in supporting the work. Depending upon the nature of your issue, these might include educators, government officials, health professionals, employers, funders, or others. Since their contribution is needed to make any changes successful, it’s important that they have input into the planning of those changes. They’ll need to understand and support them if the adjusted intervention is to go well.
  • Those who led and participated in the evaluation. They’ll have a good handle on what the evaluation showed, and a grasp of what might need changing and how.

For example, the Community Health Center put together a team to look at the evaluation results and make some recommendations for changes in the program. The team included a variety of participants who had experienced different outcomes, a health care provider, a Center board member, and a staff member from the university that conducted the evaluation. They went over some of the research that the Center had used in developing the program, and carefully studied participant interviews and other evaluation material, as well as the records kept by program staff.

How do you refine an intervention based on research?

Changes in interventions should be focused on one or more of the three aspects of evaluation: Process (both your process – activities implemented, doing what you intended, etc. – and participants’ process – what did they actually do?), impact, and outcomes. You have to examine each of these separately, and ultimately integrate them to decide what adjustments you need to make in your intervention.

Each aspect of the evaluation builds on what comes before. In order to have the impact you want, you have to put together and run your program well, and that’s a matter of process. If your process didn’t go properly, then you haven’t really conducted the program you planned. If you didn’t get the impact you hoped for, it may be because you simply didn’t do what you planned, and the first adjustments should be to the process, to ensure that the intervention is implemented as intended.

Similarly, to get the outcomes you intend, the program has to have an impact on the appropriate risk and protective factors or other environmental conditions. If the program had the impact you envisioned, but not the outcomes, then adjustments need to take place at the impact level, perhaps in the risk and protective factors and/or conditions that influence outcomes.

Process

An evaluation of the process of your effort compares what you planned to do with what you actually did.

Process has a number of elements to which evaluation might be applied. They encompass both logistics (the handling of details, such as finding space and buying materials) and program implementation (methods, program structure, etc.).

These elements can include:

  • Community participation. Were you able to involve members and sectors of the community that you intended to? Were you able to make good contacts and establish relationships within the priority population?
  • Community assessment. Did you conduct an assessment of the situation in the way you planned? Did it give you the information you needed?
  • Program planning. Was the planning participatory? Did it include research into best practices and successful interventions? Did it result in an approach that everyone felt would work?
  • Staff hiring and/or volunteer recruitment. Did you hire staff and/or recruit volunteers that were the right people for the jobs?
  • Staff and/or volunteer training. Were staff and/or volunteers oriented and trained before they started, so that they knew what they were doing when they began work? Was there ongoing training?
  • Outreach to and recruitment of potential participants. Was outreach successful in engaging the groups you intended to reach? Were you able to recruit the number and type of participants intended?
  • Implementation strategy. Here, you’re determining both what you actually did in implementing the program, and what participants actually did. Did you structure the program as planned? Did you use the methods you intended to? Did you arrange the amount and intensity of services, other activities, or conditions as intended? Did you obtain and use the materials and equipment you expected to? Did relationships develop as envisioned? Did participants actually do or experience what you had intended?
  • Evaluation strategy. Did you conduct the evaluation as planned? Did you gather data related to process, impact, and outcome?
  • Timelines and benchmarks. Did you complete or start each of these elements in the time you planned for? Did you complete key milestones or accomplishments as planned? (A brief sketch of such a check follows this list.)
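
Because a process evaluation is essentially a comparison of planned versus actual, even a simple tally can be revealing. The sketch below checks milestones against the plan; the milestone names and dates are hypothetical, for illustration only:

  from datetime import date

  # Hypothetical planned vs. actual milestone dates.
  planned = {
      "staff hired": date(2024, 2, 1),
      "participants recruited": date(2024, 3, 15),
      "sessions begin": date(2024, 4, 1),
  }
  actual = {
      "staff hired": date(2024, 3, 10),   # hiring ran well behind plan
      "participants recruited": date(2024, 3, 20),
      "sessions begin": date(2024, 4, 1),
  }

  for milestone, plan_date in planned.items():
      done = actual.get(milestone)
      if done is None:
          print(f"{milestone}: NOT COMPLETED (planned {plan_date})")
      else:
          slip = (done - plan_date).days
          status = "on time" if slip <= 0 else f"{slip} days late"
          print(f"{milestone}: planned {plan_date}, actual {done} ({status})")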

If all or most things went as planned, and any that didn’t were trivial, you’ve essentially done what you set out to do. If they didn’t, there are a number of possible reasons for changes in the intended process:

  • It took more time than you expected to complete one or more important tasks (finding and hiring key staff is a typical one here)
  • It was harder than you expected to accomplish a particular task. This may be a matter of time spent, but it may also mean that you simply didn’t have the skills or personnel to do what you needed to
  • Something you had good reason to expect didn’t happen (e.g., funding or support that you expected didn’t come through)
  • Someone or some organization you depended on didn’t come through (e.g., a hired staff member became ill and did not finish the work on time)
  • More participants dropped out than you anticipated
  • More people participated than you anticipated
  • Partway through, you found that the methods you had planned didn’t work well, and you had to make adjustments
  • A funder or community advisory board asked you to change some of what you were doing
  • Partway through, you became aware of a new method that seemed to be extremely effective, and you switched to implement it
  • You discovered a more successful way of doing things in the course of the work, and adopted it
  • You underestimated the resources necessary to carry out your original plan, and had to scale back (or look for more funding/volunteer help/space/other support)
  • You encountered opposition
  • You encountered unexpected difficulties (someone quit, materials/equipment weren’t available from the supplier)
  • You encountered disaster (e.g., the site burned down, the program coordinator became ill, a staff member got arrested or misused your funds)
  • You simply didn’t pay attention to following the plan, and/or didn’t do your job as an organization

The deviation from your plan may have made very little difference, or it may have made all the difference. Some differences might be positive – a delay might make it possible to find a more stable funding source; a change in method might make for a more effective program – but they’re still differences. It’s worthwhile to understand what changed, both to make sense of the evaluation results and to make any needed adjustments.

Perhaps you implemented your process according to plan, and your program ran as intended. Or perhaps the process was filled with difficulties – opposition, lack of community support, trouble recruiting participants, missed deadlines. Does that mean that all the work you put into planning was unnecessary?

Almost certainly not. If you were able to implement the program even though your plans were disrupted, it’s a good bet that a clear vision of what you wanted to do kept you on track.

Taking a close look at how you managed to overcome the obstacles in your way will help you understand how to avoid them in the future. (Avoiding all obstacles is unusual in any community work. The key is learning how to anticipate and overcome them.)

It’s also possible that the process leading up to the program went as planned, but the implementation didn’t turn out as expected. In that case, it was probably your plan that was at fault.

Some possible problems:

  • You didn’t assess the situation or take some important aspects of preparation into account
  • You didn’t properly understand some aspect(s) of what you had to do to be successful
  • You didn’t properly communicate some aspect(s) of what you had to do to staff, participants, funders, or the community
  • You underestimated the amount of money or other resources you would need
  • You didn’t have proper fiscal control
  • You ignored something important (treating participants with respect, for instance)
  • You didn’t involve the community enough
  • You didn’t factor in enough time for some aspect(s) of what you had to do (i.e., you planned for a given time period and carried that out, but it was too short)
  • You didn’t provide some important support for participants (travel, child care, stipend), and a large number dropped out as a result

Finding out why your plan didn’t produce the intervention you expected can be helpful. Understanding what you need to plan for, and how to do it, can make your future work both more efficient and more effective.

Impact

Your program or initiative’s impact is the effect it had on the environmental conditions, events, or behaviors that it aimed to change (increase, decrease, or sustain).

In most – but not all – cases, the immediate impact of the program is not the same as the eventual intended results. Generally, a program aims only to influence one or more particular behaviors or conditions – risk or protective factors. The assumption is that such influence will then lead to a longer-term change, which is the ultimate goal of the program.

The intended impact of the Health Center’s exercise program, for example, is participants’ adoption of regular exercise, a protective factor in reducing risk for chronic diseases. The goals of the program, however, are actually better heart health, and, ultimately, a longer and higher-quality life. Impact is the intermediate step – the influence you have on a behavior or other factor that will in turn lead to the intended results.

Your process might have gone perfectly – you might have done exactly what you set out to do – and might still have had no impact on the risk and protective factors you targeted. By the same token, you may have ended up running a program markedly different from the one you planned, and still have had the impact you hoped for. The results of the process evaluation will tell you how closely you stuck to your plan in setting up and running your program. The results of your impact evaluation will tell you whether your program produced the changes it intended.

Your program worked as you planned if the behaviors and risk and/or protective factors changed in the ways you intended. The big question that remains in this case is whether the changes your program influenced led to the ultimate outcomes you were working toward. We’ll consider that when we look at outcomes a little later in the section.

In all these cases, evaluation should involve feedback from both participants who had good results and those who didn’t. What worked particularly well for those who had success? What were barriers to those for whom the program didn’t work well? It’s not always easy to get participants to describe the positives and negatives – but it’s the best way to find out.

If your program actually had a negative impact on the targeted behaviors or risk and/or protective factors – the intervention aimed to increase childhood immunizations, and fewer children were immunized, for example – it is important to look more deeply into what is happening.

Some possibilities:

  • You failed to communicate your message, or its importance
  • You underestimated or ignored cultural influences that were powerful enough that your methods failed to overcome them
  • You didn’t take into account conditions in participants’ lives that made it difficult to achieve intended results – these factors could include poverty, or competing demands for time, among many others
  • The cultural incompetence of the organization or some staff members worked against your goals
  • The structure and/or methods of the program led to unanticipated negative consequences
  • The program was seen by participants as something that was being imposed on them, and they had little influence on its design or implementation

Just as you might find that your process went well and your program still didn’t influence the risk and protective factors you meant to, it’s possible that you created exactly the changes you intended in risk and protective factors, and the program still didn’t achieve the outcomes intended. We’ll look at outcomes to consider that situation.

Outcomes

The outcomes of an intervention are the changes that actually took place as a result of it. The goal of an intervention is usually not just a change in behavior or circumstances, but the changes in community health and development that occur as a result of that immediate change. A tobacco control program, for instance, aims to help participants avoid or quit smoking: that’s its impact. Its real goals – the hoped-for outcomes of the program – are reduced rates of heart disease, lung cancer, and other smoking-related diseases for participants and their family members.

The ultimate outcomes may take years to assess, but others – like the blood pressure goals of the Health Center exercise program, or the results of a job training course – can be determined at or soon after the end of the intervention. Outcomes are the true measure of the success of the intervention, because they are the reason it was conducted in the first place. However, the impact made – such as changes in community programs or policies – can be an important intermediate outcome, since it can take years to see changes in longer-term outcomes.
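
For outcomes that can be measured directly, the comparison is typically between baseline and end-of-program measurements. The minimal sketch below uses hypothetical systolic blood pressure readings and an assumed 130 mmHg target, purely for illustration:

  # Hypothetical baseline and end-of-program systolic readings (mmHg);
  # the 130 mmHg target is also assumed, for illustration only.
  pre = [148, 141, 152, 138, 145]
  post = [131, 135, 150, 128, 139]

  changes = [after - before for before, after in zip(pre, post)]
  mean_change = sum(changes) / len(changes)
  below_target = sum(1 for reading in post if reading < 130)

  print(f"Mean change: {mean_change:+.1f} mmHg")
  print(f"Participants below 130 mmHg at follow-up: {below_target} of {len(post)}")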

The program produced the intended outcomes

If the program produced the outcomes you intended, congratulations: you’ve achieved the goals of your effort. This isn’t the time to consider your work complete, however. How can you make the intervention even better and more effective?

  • Can you expand or strengthen parts of the program that worked particularly well?
  • Are there evidence-based methods or best practices out there that could make your work even more effective?
  • Would targeting more or different behaviors or risk and protective factors lead to greater success?
  • How can you reach people who dropped out early or who didn’t really benefit from your work?
  • How can you improve your outreach? Are there marginalized or other groups you’re not reaching?
  • Can you add services – either directly aimed at program outcomes or related services such as transportation – that would improve results for participants?
  • Can you improve the efficiency of your process, saving time and/or money without compromising your effectiveness or sacrificing important elements of your program?

Good interventions are dynamic: they keep changing and experimenting, always reaching for something better. Programs can always be improved.

The program only produced some of the intended outcomes

If the intervention produced only some, or some lower level, of the desired outcomes, you may be headed in the right direction. The program may also have greater effects in the long run, as participants incorporate the changes they’ve made into their everyday lives.

Some possible reasons for the program’s effect not being as great as planned:

  • You didn’t target sufficient risk and/or protective factors
  • The program’s message didn’t reach participants or speak to them in a powerful way
  • There were intervening factors – poor attendance or a lack of support services, for example – that made the program less effective than it could have been
  • Particular parts of the program didn’t work well
  • Particular parts of the program weren’t implemented well
  • You overestimated what was possible in the time available
  • The program didn’t approach participants in the right way – it was too formal, the language used posed barriers for some, etc.
  • The program wasn’t culturally adapted for the population
  • There were conflicts among participants or between participants and staff

For example, let’s say that the Health Center’s exercise program wasn’t by any means a failure, but it was only modestly successful. How might the Health Center use its evaluation information to improve the results for program participants?

First, the Center could examine what participants said about the program. What enabled the members of the most successful group to exercise? Why weren’t members of the much larger group able to establish regular effective exercise routines? And for members of the third group – those who didn’t exercise at all or dropped out quickly – what might have gotten them more motivated?

Perhaps those in the first group attended all the sessions and found exercise partners who challenged one another to do a little more (or to eat a little better). Perhaps those in the other groups did not find partners.

Based on the evaluation, the program’s designers decided that they should arrange for exercise partners or groups for everyone. It seemed from the evaluation that both the social situation and the challenge that exercising with others presented made exercise more likely and more fun, and promoted a more vigorous workout. They also decided to develop a much more formal nutrition component to the program, and to incorporate a buddy system into that component as well, in the hopes that participants could help one another develop recipes and stick to a reasonable eating plan.

The program produced no outcomes

If the program produced no outcomes at all, you may have to make big changes.

It can be very difficult to admit that you’ve been taking the wrong direction, especially after investing a lot of time and effort in planning and implementing a program. It’s tempting to believe that if you just work harder, or recruit different participants, or use better materials, you’ll get the results you want. It takes courage to conclude that the results call for a major redesign of the effort.

The program may have produced unintended outcomes, either positive or negative. If they’re positive, you might want to understand how they came about so that you can continue to produce them. If they’re negative, you’ll probably want to learn more so you can seek to eliminate them. Most of the reasons for unintended outcomes are similar to those for lack of outcomes.

A positive unintended outcome in a youth violence prevention program, for example, might be better school performance; a negative example in the same program might be an increase in school dropout. Teens in the program might improve their school performance because they admire a staff member with a college education, and want either to be like him or to impress him. Or they may see college and an escape from the neighborhood as their best way out of the cycle of violence.

Those who drop out of school as a result of the program may also do so because they see it as a way to avoid violence: school – or the trip to and from school – may be especially dangerous because of the presence of youth from other neighborhoods or rival gangs. Conversely, they may see dropping out of school in favor of work as a non-violent road to financial success, as opposed to dealing drugs or other similar violence-prone activities.

Given all this, how do you approach your evaluation research to decide what you need to refine, and how? A good general approach is to work backward from outcomes, asking “but why?” at each phase to understand why it failed to produce the results you wanted. (A brief sketch of this triage logic follows the steps below.)

Using the "But why?" Method to examine outcomes

  • Examine the outcomes. If your intervention achieved the intended outcomes, it has done its job. Now you can consider how to maintain these effects or refine your program (see above). You should still examine the results for process and impact, and make changes where they’ll gain you greater effectiveness or efficiency. But chances are the program doesn’t need major changes, unless you want to enlarge your goals, or unless you’ve found an alternative approach that could lead to even more impressive outcomes.
  • Examine the impact. If your evaluation research shows no outcomes, or outcomes that fall short of what you intended, the next area to examine is the impact of your program on the targeted behaviors and risk/protective factors.

If the program had the impact you expected, but no outcomes, perhaps you’ve chosen the wrong behaviors or factors to target, and need to rethink your problem analysis and related intervention. There are other plausible explanations: your intervention wasn’t in place long enough, the effects are delayed, your measures are insensitive to what is being achieved, etc.

  • Examine the process. The next step here is to understand how well you planned, prepared for, and implemented your intervention. If the reasoning and assumptions behind your planning were accurate, and if you set up and implemented your program based on them, you should have the impact you were aiming for, and that impact should lead to the outcomes you intended. If your program didn’t go as planned, that could be a good part, if not all, of the reason for your lack of outcomes. Your process evaluation can show you where you need to adjust and improve your implementation to have a better chance to get the intended results.

If your program did go as planned – you met your deadlines and did what you intended to do in the way you intended to do it – and you failed to achieve your goals, there’s a good chance that your planning was the problem. You may have aimed at insufficient risk and/or protective factors, as mentioned above, or you may have chosen ineffective methods to influence the right ones...or both. There are other possibilities that could be picked up by a process evaluation as well, many of which have already been suggested – treatment of participants, language or other communication issues, lack of cultural competence, etc. Identifying and correcting such problems can help a program reach success.

  • Keep making adjustments. Make your adjustments and refinements, run and evaluate the intervention, and make further adjustments and refinements to improve your work. This should be a continual cycle for the life of your program.
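
The triage logic described above can be summarized in a few lines. The sketch below is a simplification – real evaluations yield degrees of success, not simple yes/no flags – but it captures the order in which the questions are asked:

  def suggest_focus(outcomes_met, impact_met, process_as_planned):
      """Work backward from outcomes to impact to process."""
      if outcomes_met:
          return ("Goals achieved: maintain effects and look for refinements "
                  "at the process and impact levels.")
      if impact_met:
          return ("Impact without outcomes: revisit the problem analysis; the "
                  "factors targeted, the timeframe, or the measures may be at fault.")
      if process_as_planned:
          return ("Plan carried out but no impact: reexamine the plan itself, "
                  "including the factors targeted and the methods chosen.")
      return ("Process deviated from plan: adjust implementation first, "
              "then reevaluate impact and outcomes.")

  # Example: the program ran as planned and changed the targeted behavior,
  # but the hoped-for outcomes did not follow.
  print(suggest_focus(outcomes_met=False, impact_met=True, process_as_planned=True))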

In Summary

The purpose of an evaluation and the research that goes into it is not just to tell you whether or not your intervention has been a success. The real value of evaluation research lies in its ability to help you identify and correct problems – as well as to celebrate progress. Evaluation can pinpoint the strengths of your program, and help you protect and enhance them.

By examining the three elements of an intervention – process, impact, and outcomes – your evaluation can tell you whether you did what you had planned; whether what you did had the influence you expected on the behaviors and factors you intended to influence; and whether the changes in those factors led to the intended outcomes. That knowledge can show you what you might change to improve your program, as well as the overall effectiveness of the intervention. And, the information can be used to celebrate the accomplishments you are making along the way.

Contributors
Phil Rabinowitz
Stephen B. Fawcett

Online Resources

Chapter 18: Dissemination and Implementation in the "Introduction to Community Psychology" explains why “validated” and “effective” interventions are often never used, effective ways to put research findings to use in order to improve health, and advantages of participatory methods that provide more equitable engagement in the creation and use of scientific knowledge.

Community Tool Box Training Curriculum.

Improve the Program is a resource from My Environmental Education Evaluation Research Assistant (MEERA). It covers how to use evaluation results to benefit a program and how to ensure the evaluation is actually used to improve it.

The Pell Institute provides information on how to Improve the Program with Evaluation Findings as a part of their Evaluation Toolkit. It provides step-by-step information on utilizing evaluation data to improve the program.

Not As Easy As It Seems: The Challenges and Opportunities to Build Community Capacity to Use Data for Decisions and Solutions is from Community Change, Creating Social Change with Knowledge. To achieve success, communities have to be able to access, share, and transform data into actionable knowledge.

Program Evaluation and Social Research Methods, created and maintained by Gene Shackman, Ph.D., links to many free resources for methods in evaluation and social research.

Research Methods Knowledge Base: Introduction to Evaluation provides thorough information on evaluation research.

Youth Excel's Research-to-Change Toolkit guides youth-led and youth-serving organizations to strengthen their positive youth development programs using research and data. This youth-inclusive toolkit offers a step-by-step process to planning and carrying out research and using findings for concrete action.