
This is not the current EPA website. This page is historical material reflecting the EPA website as it existed on January 19, 2021. It is no longer updated, and links to external websites and some internal pages may not work.

8 - Assess Program Effectiveness Through Evaluation

[Flowchart: the nine steps of a fish consumption advisory risk communication program — Establish Fish Consumption Advisory (FCA) Program Goals and Communication Objectives; Identify Target Audiences and Channels; Identify Potential Partners; Explore Settings, Channels, and Activities to Reach Target Audiences; Develop Outreach Plans; Develop and Pretest Concepts, Messages, Materials and Activities; Implement and Monitor the Program; Assess Program Effectiveness Through Evaluation; Evaluation and Refining Fish Consumption Advisory As Needed]

The eighth step for developing and implementing a risk communication program for fish and shellfish consumption advisories is to assess program effectiveness through evaluation.


Why Evaluation is Important

Evaluation of the Fish Consumption Advisory (FCA) program involves monitoring the baseline, the implementation processes, and the overall results. Monitoring the FCA program will help determine whether the communications are on track and whether progress is being made toward meeting objectives. Monitoring quantifies what has been done, when and how it has been done, and who has been reached, and it can identify problems so that adjustments can be made. Monitoring answers the question, "How much of what we planned to do did we do as planned?" Evaluation goes a step further and asks whether the FCA program had its intended effect(s).

Evaluation is important for assessing message effectiveness, identifying program strengths and weaknesses, and determining whether different FCA communications are needed to make the FCA more effective and better achieve its goal(s). It is important to determine whether a message is understood, believed, and acted upon. Without a program evaluation, there is no way to know what effect the FCA or related activities are having. Successful communication requires that information be conveyed in an appropriate language, through an appropriate medium, and in a culturally appropriate way that enables it to reach and be understood by those affected. Communication that succeeds in these ways becomes the basis for effective decision making.

Evaluation data can give program staff valuable insight into the program's impact and effectiveness, the target audience, and the role staff can play in contributing to the FCA's success.

Integrating a monitoring and evaluation strategy into the overall design of an FCA program is key: evaluation should be built in from the start. Integrating evaluation throughout planning and implementation ensures that FCA program managers:
  • Understand what is and is not working, and why
  • Tailor messages, materials, and activities to the target audience
  • Help program staff see how their work affects the target audiences
  • Define appropriate, meaningful, achievable, and time-specific program objectives
  • Monitor the program and ensure accountability
  • Expend resources as efficiently and effectively as possible
Evaluations that demonstrate that a program is successful can help:
  • Ensure FCA programs receive continued or new funding
  • Increase support of activities by other researchers, educators, and the greater community
  • Increase interest and participation by the target audience
  • Show the value of the program to interested parties such as partners, funding agencies, and the public

Types of Evaluation

There are three types of evaluation, each implemented at a different point in the FCA process.

  • Formative evaluation is evaluative research conducted during program development. It may include state-of-the-art reviews, pretesting messages and materials, and pilot testing a program on a small scale before full implementation.
  • Process evaluation is research conducted to document and study how the components of program implementation function. It includes assessments of whether materials are being distributed to the right people and in what quantities, whether and to what extent program activities are occurring, and other measures of how, and how well, the program is working.
  • Outcome evaluation is research designed to assess the extent to which a program achieved its objectives.

Determine What to Evaluate

The available budget for evaluation will influence what is evaluated. As FCA program implementers plan the evaluation, they should clarify its purpose and the kind of information that is needed. To do this, think about who, including stakeholders and partners, will use the data and how they will use it.

Clarify the Purpose

Examples of possible questions to ask to help clarify the purpose of the evaluation:
  • Does the FCA program need to understand whether the program strategies used are effective in producing greater knowledge about safely eating fish?
  • Does the FCA program need to provide evaluation data to entities, for example funders?

Identify the overarching questions that need to be answered (who the evaluation is for and how it will be used) to determine exactly what to evaluate and how. For example:

  • Evaluate whether people are “aware” of an advisory
    • whether people are aware that an advisory exists, and
    • whether people are aware of an advisory’s content and recommendations
  • Evaluate whether people have altered their consumption practices as a result of awareness of an advisory
  • Evaluate how well advisories are understood by target audiences
  • Assess which communication channels are more effective than others
  • Assess how the program can improve the message delivery processes


Costs to evaluate the effectiveness of FCAs vary significantly because many different evaluation mechanisms, with correspondingly different costs, can be used. Agencies should weigh the cost of implementing each tool against the type of information that can be obtained and the usefulness of that data.

Evaluate Communication Messages with Budget Considerations

Assess whatever is feasible and affordable. The following are some options for evaluation based on different budget availability.

  • No budget
    First, ask experts to review the materials for accuracy. Second, ask coworkers, colleagues, friends, or family to evaluate and comment on materials.
    • Ask how understandable the material seemed
    • Ask whether the amount of information was right
    • Ask whether they would recommend the material
    • Ask how it could be improved
    • Use settings in word processing programs that identify, for example, complex words and provide simpler options for wording
  • Minimum budget
    • Ask coworkers and colleagues to evaluate messages or materials prior to widespread distribution
    • Assess impact by using surveys or focus groups of accessible volunteers who are similar to the target audience in regard to resources, interest, and literacy
    • Partner with interested academics who can help with program evaluation and service, professional or school groups who can help defray evaluation costs
    • Track easy-to-obtain indicators of reach and exposure, such as web hits, internet-based media coverage, and local person-on-the-street interviews
    • Collect audience responses via short online interviews using free survey programs such as SurveyMonkey©
    • Work with partner organizations to conduct evaluations
  • Modest budget 
    The Minimum Budget (above) measures can be expanded or enhanced by including the following:
    • Providing compensation for volunteers’ time
    • Collecting data from more sources such as internet news; TV and radio coverage; and relevant social media activity
    • Using a simple and inexpensive evaluation design, such as a pretest-posttest design (data are collected before and after the program takes place) or a posttest-only design (data are collected after the intervention only)
      • These survey-based approaches can assess changes in self-reported knowledge, attitudes and behavior.
    • Additional strategies include one-on-one cognitive interviews. During cognitive interviews:
      • Test for comprehension of the materials (e.g., with quizzes after each section)
      • Ask participants to describe any other reactions to different sections. It is especially important to include people with less education and lower numeracy and literacy levels and those of different races and genders
  • Substantial budget
    Additional strategies include:
    • Employing a literacy expert to test the reading and numeracy levels of materials to ensure they are around 6th to 8th grade level (and to provide recommendations for improving the materials if necessary)
    • Testing the materials (and alternatives) with a representative sample
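
Where budget allows for a literacy review, a rough automated check can come first. The sketch below computes the standard Flesch-Kincaid grade level (0.39 × words per sentence + 11.8 × syllables per word − 15.59). The syllable counter is a crude vowel-group heuristic, not a validated instrument, and the sample advisory wording is invented for illustration:

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, minus one trailing silent 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

# Hypothetical advisory wording, invented for this example
advisory = ("Do not eat large fish from this lake more than once a month. "
            "Mercury can harm children and unborn babies.")
print(f"Approximate grade level: {fk_grade(advisory):.1f}")
```

A score well above the 6th to 8th grade range flags the text for simplification; review by a literacy expert should still follow for final materials.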

Cost/Time Cutting Methods

Simplify the evaluation design by:
  • Doing a pretest/posttest comparison
  • Eliminating unnecessary data collection
  • Reducing sample size
  • Reducing costs of data collection
    • Use self-administered questionnaires (online/electronic)
    • Reduce length
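
A pretest/posttest comparison like the one above can be summarized with a simple paired analysis of survey scores. A minimal sketch using hypothetical self-reported awareness data (the 1-5 scale and the scores are invented for illustration):

```python
from statistics import mean, stdev

# Hypothetical 1-5 self-reported awareness scores for the same ten
# respondents, collected before and after the advisory campaign.
pre  = [2, 3, 2, 1, 3, 2, 4, 2, 3, 2]
post = [4, 4, 3, 2, 4, 3, 5, 3, 4, 3]

# Paired differences: each respondent serves as their own control.
diffs = [after - before for before, after in zip(pre, post)]
print(f"mean change: {mean(diffs):.2f} (SD {stdev(diffs):.2f})")
print(f"respondents who improved: {sum(d > 0 for d in diffs)} of {len(diffs)}")
```

Pairing each respondent's before and after scores keeps the design cheap while controlling for individual differences, which is why it suits a reduced-cost evaluation.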

Considerations While Drafting the Outcome Evaluation Plan

The outcome evaluation needs to capture intermediate outcomes and to measure the outcomes specified in the Communication Objectives. Consider the following questions to assess the outcome evaluation plan and to be sure the evaluation will give the information that is needed:

  • What are the Communication Objectives? What should the target audience members feel or do as a result of the FCA in contrast to what they thought, felt, or did before? How can these changes be measured?
  • How is it expected that change will occur? Will it be slow or rapid? What measurable intermediate outcomes (steps toward desired behavior) are likely to take place before the behavior change can occur?
  • What kinds of changes are expected over the duration of the program, e.g., attitudinal, awareness, behavior, policy changes? To help ensure that important indicators of change are identified, decide which changes could reasonably occur from year to year.
  • Which outcome evaluation methods can capture the scope of the change that is likely to occur? Many outcome evaluation measures are relatively crude, which means that a large percentage of the target audience (sometimes an unrealistically large percentage) must make a change before it can be measured. If this is the case, the evaluation is said to “lack statistical power.”
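
The statistical-power caveat can be made concrete with a standard sample-size calculation for detecting a change in a proportion (normal approximation, two-sided alpha 0.05, power 0.80). This is a generic textbook formula, not an EPA-prescribed method, and the awareness percentages are invented:

```python
from math import ceil

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate respondents needed per survey wave to detect a change
    in a proportion from p1 to p2 (two-sided alpha 0.05, power 0.80,
    normal approximation)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Invented example: detecting a modest rise in advisory awareness
# requires a much larger sample than detecting a large rise.
print(n_per_group(0.30, 0.40))  # 30% -> 40% awareness
print(n_per_group(0.30, 0.60))  # 30% -> 60% awareness
```

If the affordable sample is far below the number the calculation returns, the planned measure will likely lack the power to detect the expected change, and a coarser or cheaper indicator may be a better fit.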

Short, Intermediate, Long-term Outcomes

Agencies and their partners can evaluate short-term, intermediate-term, and long-term outcomes and impacts, and FCA program managers may want to identify an outcome for each time frame. When determining outcome objectives, make sure they are SMART:
  • Specific – identify exactly what the outcome is hoped to be and include the five W’s: who, what, where, when, and why
  • Measurable – quantify the outcome and the amount of change the program aims to produce
  • Achievable – be realistic in projections and take into account assets, resources, and limitations
  • Relevant – make sure the objectives address the needs of the target audience and support the overarching mission of the program or organization
  • Time-bound – provide a specific date by which the desired outcome or change will take place
Evaluations assessing the effectiveness of fish advisories can collect information on:
  • Short-term outcomes expected to result from FCAs, such as changes in knowledge, attitudes, and beliefs about fish advisories and fish consumption
    • Evaluations of fish advisory effectiveness often collect data through fish consumption survey tools (FCSTs) to get information about fish consumption behaviors.
  • Intermediate outcomes which can be behavioral shifts in fish consumption practices and locations
  • Long-term impacts resulting from behavioral shifts in fish consumption practices
    • Long-term impacts may translate to lower blood contaminant levels, decreases in infant neurodevelopmental issues resulting from exposure to contaminated fish, and decreases in cancer rates/hospitalizations.

Draft Outcome Evaluation Plans

Outcome evaluation is used to assess the degree to which the Communication Objectives are achieved. Its results can also serve broader purposes, such as:

  • Justifying the program to management
  • Providing evidence of success or of the need for additional resources
  • Increasing organizational understanding of and support for health communication
  • Encouraging ongoing cooperative ventures with other organizations

Many standard evaluation approaches assume a direct cause-and-effect relationship between the stimulus (the program's communication) and the target audience's response to it. However, it can be impossible to isolate the effects of a particular communication activity, or even the effect of a communication program on a specific target audience, because change rarely results from just one specific activity.

Outcome Evaluation

Conduct outcome evaluation by following these steps:
  • Determine what information the evaluation must provide
  • Define the data to collect
  • Decide on data collection methods
  • Develop and pretest data collection instruments
  • Collect data
  • Process data
  • Analyze data to answer the evaluation questions
  • Write an evaluation report (consider its format)
  • Disseminate the evaluation report. It is possible that the evaluation could show that there are aspects of the FCA that could be improved or that the FCA is not effective. For more information, refer to Evaluation and Refining Fish Consumption Advisory.

Selecting an Evaluation Design

The Food and Drug Administration's (FDA) Consumer Food Safety Educator Evaluation Toolbox and Guide has useful information on evaluation.

Chapter 4, Selecting an Evaluation Design contains information about:
  • Different types of evaluation designs (e.g., observational, experimental)
  • When to collect data; options include:
    • Collecting data only once, usually in a posttest
    • Conducting a pretest/posttest, where data are collected before and after the FCA is implemented
    • Collecting data multiple times throughout the evaluation process
    • Conducting a retrospective pretest that is administered at the same time as the posttest
  • Other information on evaluation designs and sampling

Data Collection

Chapter 5, Data Collection contains information about:

  • How to collect data, including data collection methods with benefits and limitations of each
  • Data collection tools
  • Instrument/survey development process
  • Ways to administer questionnaires with benefits and limitations of each
  • Recruitment and retention of participants
  • Obtaining informed consent

Data Analysis

Chapter 6, Data Analysis contains information about:
  • Steps to analyze quantitative and qualitative data
  • Considerations for sharing findings of the evaluation

Research on evaluations was conducted with the assistance of Roselyn Thalathara, an Environmental Health Fellow with the Association of Schools and Programs of Public Health (a 501(c)(3) organization) under EPA Cooperative Agreement ID # ASPPH X3-83555301.

