
This is not the current EPA website. This website is historical material reflecting the EPA website as it existed on January 19, 2021. It is no longer updated, and links to external websites and some internal pages may not work.

Outcome Evaluation

1. Determine What Information the Evaluation Must Provide

  • Relate measures to Communication Objectives. Analyze what the target audience members should think, feel or do as a result of the FCA versus what they thought, felt or did before the FCA was implemented. Determine how these changes can be measured.
  • Answer the questions: “How is the change expected to occur?” and “Will it be slow or rapid?” Determine the measurable intermediate outcomes (steps toward the desired behavior) which are likely to take place before the behavior change can occur.
  • Identify the kinds of changes expected in that time period (e.g., changes in attitudes, awareness, or behavior).
  • Determine which outcome evaluation methods can capture the scope of the change that is likely to occur.
  • Determine which aspects of outcome evaluation best fit with the organization’s priorities. Rarely does a communication program have adequate resources to evaluate all activities. It is possible that FCA managers will have to illustrate the FCA’s contribution to organizational priorities to ensure continued funding. If this is the case, it may be wise to evaluate those aspects most likely to contribute to the organization’s mission (assuming that those are also the ones most likely to result in measurable changes).

2. Define the Data That Needs to be Collected

Determine what can and should be measured to assess progress on meeting objectives. Use the following questions as a guide:

  • Did knowledge of the issue increase among the target audience (e.g., understanding how to differentiate, in a general way, fish that contain more contaminants of a certain type, knowing reasons that overconsumption of contaminated fish is not good)?
  • Did behavioral intentions of the target audience change (e.g., intending to clean and cook fish to reduce exposure to contamination)?
  • Did target audience members take steps leading to the behavior change (e.g., phoning for FCA information, going to the FCA website)?
  • Did awareness of the campaign message, name, or logo increase among target audience members?
  • Were policies initiated or other institutional actions taken (e.g., putting up signs by locations where the target audience is known to fish)?

3. Decide on Data Collection Methods

Consider using expert assistance to conduct the evaluation and interpret results if necessary. Select questions or use an expert evaluator who can help write questions that produce objective results. If the FCA program does not have an evaluator on staff, seek help to decide what type of evaluation will best serve the program. Sources include university faculty and graduate students (for data collection and analysis), local businesses (for staff and computer time), state and local health agencies, and consultants and organizations with evaluation expertise.

4. Develop and Pretest Data Collection Instruments

Select a method (e.g., observation, questionnaire) that best answers the evaluation questions, given access to the target audience and available resources. A variety of data collection instruments and methods may be needed for different target audiences. Data collection instruments should collect data directly related to the evaluation questions. It is important to check the data collection instruments against the questions the evaluation must answer.
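The cross-check described above can be done with a simple coverage map. The sketch below is illustrative only; the question texts and item IDs are invented, not part of an actual FCA instrument.

```python
# Map each evaluation question to the questionnaire items intended to
# address it (hypothetical questions and item IDs, for illustration).
coverage = {
    "Did knowledge of the issue increase?": ["Q3", "Q4", "Q7"],
    "Did behavioral intentions change?": ["Q8", "Q9"],
    "Did awareness of the campaign logo increase?": [],  # no items yet
}

# Flag evaluation questions that no instrument item currently covers.
gaps = [question for question, items in coverage.items() if not items]
print("Uncovered evaluation questions:", gaps)
```

Running a check like this before pretesting helps ensure that every evaluation question will have at least one instrument item supplying data for it.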

5. Collect Data

Collect the data. Baseline data should have been collected during planning, before the FCA program began, to use for comparison with the data collected during outcome evaluation.

6. Process Data

Put the data into a usable form for analysis. This may mean organizing the data to give to professional evaluators or entering the data into an evaluation software package. Some online survey tools, such as SurveyMonkey, have features that allow for analysis and visualization of the data.
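As a minimal sketch of this step, the snippet below tallies a raw survey export into per-question counts ready for analysis. The column names and responses are assumptions for illustration, not an actual FCA data set.

```python
import csv
import io
from collections import Counter

# Illustrative raw survey export (column names are hypothetical).
raw = io.StringIO(
    "respondent_id,aware_of_advisory,intends_to_trim_fish\n"
    "1,yes,yes\n"
    "2,no,no\n"
    "3,yes,no\n"
    "4,yes,yes\n"
)

# Tally responses per question so they are in a usable form for analysis.
tallies = {"aware_of_advisory": Counter(), "intends_to_trim_fish": Counter()}
for row in csv.DictReader(raw):
    for question, counter in tallies.items():
        counter[row[question]] += 1

print(tallies["aware_of_advisory"])  # Counter({'yes': 3, 'no': 1})
```

Counts in this form can be handed directly to an evaluator or loaded into an analysis package.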

7. Analyze the Data to Answer the Evaluation Questions

Use statistical techniques as appropriate to discover significant relationships. The FCA program might consider involving university-based evaluators, providing them with an opportunity for publication and the program with expertise.

As FCA programs prepare the report, they will need someone with appropriate statistical expertise to analyze the outcome evaluation data. Work closely with evaluators to interpret the data and develop recommendations based on the data.
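One common statistical technique for this step is a chi-square test comparing baseline and follow-up proportions. The sketch below uses invented counts for illustration; real data would come from the baseline and outcome surveys, and a program's statistical expert should confirm the test is appropriate for the actual design.

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Expected counts from row and column totals.
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts: 40 of 200 respondents aware at baseline,
# 90 of 200 aware at follow-up.
stat = chi_square_2x2(40, 160, 90, 110)

# The critical value for 1 degree of freedom at the 0.05 level is 3.841.
print(f"chi-square = {stat:.2f}, significant = {stat > 3.841}")
```

A statistically significant result here would indicate that the change in awareness between baseline and follow-up is unlikely to be due to chance alone.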

8. Write an Evaluation Report

A report outlining what was done and why, as well as what worked and what should be altered in the future, provides a solid base from which to plan future evaluations. The program evaluation report explains how the program was effective in achieving its Communication Objectives and serves as a record of what was learned from both the program’s achievements and shortcomings. Include any questionnaires or other instruments in the report so that they can be found later.

9. Consider the Format

Decide the most appropriate way to present information for the report’s audience (which will likely differ from the FCA’s target audience). Consider the following formats:

  • Concise, including hard-hitting findings and recommendations
  • General, including an overview written for the public
  • Scientific, including a methodology section, detailed discussion, and references
  • Visual, including more charts and graphics than words
  • Case studies and other storytelling methods

Depending on the report’s audience and report format, include the following sections:
  • Program results/findings
  • Evaluation methods
  • Program chronology/history
  • Theoretical basis for program
  • Implications
  • Recommendations
  • Barriers, reasons for unmet objectives

10. Disseminate the Evaluation Report

Ask selected individuals to review the evaluation report before it is released so that they can identify concerns that might compromise its impact. When the report is ready for release, consider developing a dissemination strategy for the report.

Letting others know about the program results and continuing needs may prompt them to share similar experiences, lessons, new ideas, or potential resources that could be used to refine the program. In fact, feedback from those who have read the evaluation report, or learned about the findings through conference presentations or journal coverage, can be valuable for refining the FCA program. FCA programs may want to develop a formal mechanism for obtaining feedback from peer or partner audiences. If university-based evaluators are used, the mechanism may be their publication of findings.
