Adaptation Monitoring and Evaluation Toolkit
Monitoring and evaluation (M&E) are critical steps in the ongoing process of adaptation that allow us to understand what is working and identify areas for improvement. This page contains a number of resources to 1) introduce adaptation professionals to monitoring and evaluation and their potential benefits, 2) support the preparation for and execution of adaptation evaluations, and 3) explain how to work with evaluation consultants. Click through the tabs below to find information and vetted resources that correspond to your level of experience and specific interests.
Benefits of evaluating adaptation projects and programs
Evaluation enables data-driven decision-making and helps organizations assess their work to identify what worked, what didn’t work, and why. Resilience Metrics, a website that provides guidance on monitoring and evaluation throughout the adaptation process, outlines five key reasons to evaluate adaptation projects and programs:
- Effective communication and public engagement
- More deliberate planning and decision-making
- More persuasive justification of adaptation expenditures
- Better accountability and governance
- Deeper learning and better adaptive management
GLISA supports partners and grantees who want to incorporate M&E into their adaptation projects. GLISA also evaluates its own work, including an internal evaluation of its Small Grants Program and external evaluations of its societal impacts, in order to understand and improve project outcomes, training, and mentoring.
- What is Evaluation?
- Preparing for an Evaluation
- Conducting an Evaluation
- Working with an Evaluator
- Stakeholder Engagement & Communication
- Case Studies
What is Adaptation Evaluation?
Adaptation, according to the Intergovernmental Panel on Climate Change, is “the process of adjustment to actual or expected climate and its effects… by [moderating] or [avoiding] harm or [exploiting] beneficial opportunities”.1 Monitoring and evaluation are critical throughout the adaptation process: they provide insight about what strategies are working, what is not working, and who is benefiting or being burdened.
Monitoring is the ongoing collection of information about project indicators, which are traits of interest (e.g., reduced flood risk). It may involve tracking both qualitative and quantitative metrics and can help us understand whether a program or policy is achieving desired outcomes.2, 3
Monitoring would help you answer questions like how often a particular neighborhood experiences nuisance flooding.
Evaluation refers to the process of critically examining a program. It involves collecting and analyzing information about a program’s activities, characteristics, and outcomes. Its purpose is to make judgments about a program, to improve its effectiveness, and/or to inform programming decisions.4 Approaches to evaluation vary (see the Conducting an Evaluation tab for an overview of common designs and methods). Evaluation would help you answer questions like whether representatives from neighborhoods that experience nuisance flooding were included in the planning process.
Adaptation evaluation is similar to standard policy and program evaluation, but is tailored to policies and programs specifically focused on climate change adaptation. Adaptation to climate change requires adjusting policies and programs in response to climate hazards, and most evaluations are localized to reflect the local, tangible impacts of climate change and local adaptation measures. Adaptation evaluation must account for the fact that a policy or program’s impact, measurable indicators, and goals may shift over time as localized impacts unfold. Evaluating adaptation is especially challenging because it is difficult to measure avoided impacts: it is hard to prove that a particular program worked if the only evidence is that something didn’t happen. Adaptation evaluation would help you answer questions like whether nuisance flooding in a particular neighborhood decreased after the installation of new green stormwater infrastructure.
Should my adaptation policy or program be evaluated?
Evaluation can be an advantageous tool, especially given the critical nature of adaptation policies and programs. It can help you understand whether your action is working and for whom, and it empowers you to course correct if something isn’t working. Evaluation also allows you to share lessons learned and demonstrate to funders, taxpayers, and other stakeholders that your program is effective. At the same time, evaluation can be costly. Consider completing an evaluability assessment to ensure that your policy or program is ready to be evaluated. If your project or program is similar to those of other organizations, you may also consider collaborating with other adaptation professionals and completing a collective evaluation.
When is the best time to conduct an evaluation?
Although evaluation is often conducted in the final stages of a program, best practice is to incorporate it throughout all stages of a program’s life cycle. Design your project or program with evaluation in mind, and collect corresponding data on an ongoing basis to continuously improve the program. If this is not feasible, evaluation at the critical stages of a program can still work well. Collecting data as early as possible helps establish a baseline measurement.
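To make the idea of a baseline concrete, here is a minimal sketch in Python of tracking a single indicator, annual nuisance-flooding events in one neighborhood, against a pre-project baseline. The years and event counts are entirely hypothetical and are included only to illustrate the bookkeeping involved.

```python
# Minimal monitoring sketch: compare a hypothetical indicator against its baseline.
# All years and event counts below are invented for illustration only.

baseline_years = {2016: 14, 2017: 11, 2018: 13}   # flood events per year before the project
post_project_years = {2020: 9, 2021: 7, 2022: 8}  # flood events per year after installation

# Baseline = average annual event count before the project began
baseline = sum(baseline_years.values()) / len(baseline_years)

for year, events in sorted(post_project_years.items()):
    change = events - baseline
    print(f"{year}: {events} events ({change:+.1f} relative to baseline of {baseline:.1f})")
```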
Choosing Monitoring, Evaluation, or Both
Monitoring is an important component of evaluation, and both monitoring and evaluation can and should be conducted at a variety of scales throughout all stages of a project or program’s lifecycle. This page focuses on holistic program and policy evaluation, but we recognize that not everyone has the capacity to conduct a full evaluation and may only be interested in learning about monitoring. If you are seeking in-depth guidance on identifying and monitoring indicators and metrics, we suggest exploring Resilience Metrics, a site supported by the National Oceanic and Atmospheric Administration and associated with the National Estuarine Research Reserve System (NERRS) and the NERRS Science Collaborative.
Preparing for an Evaluation
Now that you’re familiar with the definition and benefits of monitoring and evaluation, it’s time to explore how to prepare and design an evaluation for your adaptation program or project. Monitoring and evaluation can take many different forms. This page provides guidance for launching a successful evaluation process.
Why do you want to conduct an evaluation?
Identifying your reasons for conducting an evaluation will help you craft an appropriate evaluation plan. Are you interested in:
- Communicating program successes?
- Accountability to funders or other groups?
- Learning if the program has been implemented as intended?
- Measuring program impacts and identifying success factors?
- Making informed changes to a policy?
What resources do you have?
Taking stock of existing knowledge, resources, and support is critical to the success of your evaluation. Before beginning an evaluation, determine what information and resources are already available:
- Has the program been evaluated in the past? If so, what can you learn from the results?
- Do you have the staff capacity and capabilities to conduct an evaluation in-house?
- Do you have the financial resources to hire an external evaluator?
- Does some data about the program already exist, such as surveys, reviews, or metrics that you or others have collected?
- Do you have support from other members of your organization, supervisors, funders, peers, and target audiences? What can these partners and stakeholders offer in terms of technical expertise, financial resources, or access to partnerships?
What resources do you need?
Determine what gaps you have in terms of time, funding, technical expertise, and institutional support. What do you think will be your greatest constraint during the evaluation process? Answering this question will help you determine to what extent you may want to work with an external evaluator.
- If knowledge about how to conduct an evaluation is your primary constraint, working with an external evaluator may be your best option (see Working with an Evaluator tab).
- If you have limited time to conduct an evaluation, you may want to consult with others, including an external evaluator, to narrow priorities and craft an evaluation plan that will answer your most critical questions in the time you have available. Reviewing completed evaluations of similar programs may provide insight (see Case Studies tab).
- If funding is your greatest constraint, it may help to collaborate with other organizations seeking to answer similar evaluation questions and pool resources. If you are applying for a grant for an adaptation project, make room for evaluation in the budget. Funders usually have expectations for how much money should be allocated to evaluation.
What are your program’s goals and objectives?
Clarifying your project or program’s purpose, goals, and objectives is necessary for a successful evaluation. Creating a logic model can help identify appropriate evaluation questions and designs. A logic model is a diagram that illustrates the rationale behind your program: it shows the relationships between the resources you invest (inputs), the activities you carry out, anticipated outputs, and planned outcomes. If you are working with an external evaluator, this information will help them conduct a successful evaluation. Take a look at this hypothetical adaptation logic model from UKCIP (formerly the UK Climate Impacts Programme) to see one example of how logic models can be used. The logic model resources from the University of Wisconsin-Madison Division of Extension linked below provide additional templates, examples, and tutorials.
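If it helps to see the pieces of a logic model side by side, the short sketch below writes out a hypothetical logic model for a green stormwater infrastructure program as a simple Python data structure. Every entry is an invented placeholder; a real logic model would be developed with your team and stakeholders.

```python
# Hypothetical logic model for a green stormwater infrastructure program.
# All entries are illustrative placeholders, not recommendations.
logic_model = {
    "inputs": ["grant funding", "public works staff time", "rainfall and flood-report data"],
    "activities": ["install rain gardens and bioswales", "train maintenance crews"],
    "outputs": ["20 rain gardens installed", "10 staff members trained"],
    "outcomes": {
        "short_term": ["more stormwater infiltrated on treated blocks"],
        "long_term": ["fewer nuisance-flooding events in the target neighborhood"],
    },
}

# Reading the model top to bottom traces the program's rationale from inputs to outcomes.
for component, items in logic_model.items():
    print(f"{component}: {items}")
```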
How do I prepare an evaluation plan?
Many funders require an evaluation plan as part of a grant proposal. However, a plan can be helpful whether or not you are required to complete one. An evaluation plan ensures that all of the critical aspects of an evaluation have been explored, and that you, your partners, stakeholders, and the external evaluator are on the same page. See the NOAA Office for Coastal Management’s Guide for Planning for Meaningful Evaluation for guidance on creating an evaluation plan.
During the planning phase, ensure your evaluation meets the evaluation standards from the American Evaluation Association regarding utility, feasibility, propriety, accuracy, and accountability. Consider reviewing existing evaluations of adaptation projects and programs, as they may generate additional ideas for how to conduct your evaluation. See the Case Studies tab for examples.
The Conducting an Evaluation tab contains information on different evaluation approaches and methods.
| Resource | Description | Source |
| --- | --- | --- |
| A Guide for Planning for Meaningful Evaluation | An excellent resource that outlines a seven-step process for planning a project or program evaluation, including data collection methods and an evaluability assessment. | NOAA Office for Coastal Management |
| Planning an Evaluation | Includes a sample evaluation plan, an evaluation plan checklist, examples of logic models, and more. | Corporation for National and Community Service |
| Logic Model Examples and Tutorials | Compilation of resources about logic models, including examples, templates, and a self-paced tutorial that guides users through the process of constructing and using a logic model. | University of Wisconsin-Madison Division of Extension |
Conducting an Evaluation
There are many different evaluation approaches that utilize different data collection and analysis methods. Some use a quantitative evaluation design (e.g., randomized controlled trials, experimental designs), while others use a qualitative approach (e.g., surveys, focus groups, interviews). Many evaluations employ a mixed-methods approach. While there is no right or wrong approach, each makes different assumptions and has advantages and disadvantages. The NOAA Office for Coastal Management’s Guide for Planning for Meaningful Evaluation provides an overview of different approaches.
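As one simple illustration of a quantitative design, the sketch below compares a hypothetical outcome indicator between neighborhoods that received green stormwater infrastructure and comparison neighborhoods that did not. The group assignments and event counts are invented, and a real evaluation would need a defensible sampling and comparison strategy; this is only meant to show the shape of such an analysis.

```python
# Illustrative comparison-group analysis with hypothetical data.
# Outcome indicator: nuisance-flooding events per neighborhood in one rainy season.

treated = [5, 7, 4, 6, 5]       # neighborhoods with new green stormwater infrastructure
comparison = [9, 8, 11, 7, 10]  # similar neighborhoods without it

mean_treated = sum(treated) / len(treated)
mean_comparison = sum(comparison) / len(comparison)

print(f"Treated mean:    {mean_treated:.1f} events")
print(f"Comparison mean: {mean_comparison:.1f} events")
print(f"Difference:      {mean_treated - mean_comparison:+.1f} events")
```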
Evaluation can inform and improve all aspects of a project or program life cycle and ideally should be incorporated throughout, as opposed to only after a program is well established. This leads to evaluative thinking, which fosters continuous inquiry and reflective practice. Evaluative thinking can help policies and programs achieve their intended outcomes. For more information about evaluative thinking and integrating its key concepts into your work, see this interactive tutorial produced by the New South Wales Department of Education. Evaluative thinking is especially important in the context of adaptation because expected outcomes may not occur in the short term and may need to be adjusted depending on climate impacts and other socioeconomic and environmental conditions. It is similar to adaptive management, a decision-making strategy used by ecosystem managers in the face of uncertainty.
The following resources provide more information on different approaches to evaluation; you may find that one or more will give you ideas or help refine your plan along the way.
| Resource | Description | Source |
| --- | --- | --- |
| AdaptME Toolkit Report | Also available as a series of webpages, this toolkit provides guidance on evaluation, outlining six main elements of evaluation and key questions to consider within each. UKCIP has also compiled a number of best-practice documents that may be useful. | UKCIP (formerly the UK Climate Impacts Programme) |
| Equitable Adaptation Legal & Policy Toolkit: Data, Metrics & Monitoring Tools | Provides an overview of (1) data sets that provide information about areas most likely to face extreme climate-related weather events; (2) data and mapping tools that identify areas that have historically suffered from exposure to environmental pollution, as well as social and economic factors; and (3) measures that policymakers have adopted to ensure that communities have access to technology to contribute to policymaking. | Georgetown Climate Center |
| Developing Urban Climate Adaptation Indicators | Report that reviews seven adaptation frameworks to help city practitioners understand and assess current approaches for measuring adaptation progress. | Institute for Sustainable Communities, Urban Sustainability Directors Network, and Government of the District of Columbia |
| Monitoring & Evaluation in Climate Change Adaptation Projects: Highlights for Conservation Practitioners | A brief summary of current thinking on the monitoring and evaluation of conservation-related climate adaptation projects, with a collection of additional resources. | The Wildlife Conservation Society Climate Adaptation Fund |
| Resilience Metrics | Comprehensive website with guidance on how to select, use, and monitor indicators and metrics. Includes a searchable resource database. | Supported by NOAA programs |
Working with an Evaluator
Is your organization able to conduct an evaluation in-house, or would it be more advantageous to seek an outside perspective? In this section, we’ll explore the benefits and costs of conducting an internal evaluation versus working with an external evaluation consultant.
Should I use an internal or external evaluator?
There are advantages and disadvantages to both seeking an independent evaluator and conducting an internal evaluation. To decide whether to work with an internal or external evaluator, consider the following as they relate to your organization and the given program or policy:
- Cost
- Availability
- Knowledge of the program
- Ability to collect information
- Flexibility
- Skills and expertise
- Objectivity
- Financial accountability
- Independence
- Dissemination of results
- Ethical issues
- Organizational investment
For example, a common factor in choosing an internal evaluator is cost. An internal evaluator is usually far more cost-effective than an outside consultant, but you may sacrifice objectivity, because someone inside the organization may have a significant stake in the outcome of the evaluation.
How do I choose an external evaluator?
A professional evaluator “collects data and other information to analyze, rate and generally answer questions regarding specific projects, policies and programs offered by organizations, government agencies and businesses.”5 Most professional evaluators belong to a nationally-recognized organization of professional evaluators. The American Evaluation Association is the largest organization, with local chapters across the United States.
- Develop a job description. Include the objective, expected activities, timeline, and responsibilities of the evaluation project.
- Find an evaluator. The American Evaluation Association hosts a database of professional evaluators that is searchable by topic and location. You could also ask for referrals from organizations that have conducted similar evaluations, or explore partnering with an academic institution. Please note that GLISA does not endorse any specific evaluator; it is your responsibility to contact the consultants or firms to decide whether or not they will meet your needs.
- Advertise and solicit applications. You may need to develop a Request for Proposals (RFP) or other methods for soliciting applications, depending on your organization’s protocols.
- Review applications and interview candidates. Consider the candidate’s proposed evaluation plan, familiarity with the subject area, experience conducting similar evaluations, and proposed costs.
Adapted from the U.S. Department of Health and Human Services’ The Program Manager’s Guide to Evaluation.
How can I work effectively with my evaluator?
After selecting an evaluator, the next step is to work with them to identify expectations, objectives, and deliverables for the evaluation. Ongoing communication between management and evaluators can help ensure that the evaluation will ultimately prove timely, relevant, and useful.
| Resource | Description | Source |
| --- | --- | --- |
| Find An Evaluator Database | Search by topic and location for professional evaluators who are members of the AEA. | American Evaluation Association (AEA) |
| The Evaluation Center | Provides evaluation, research, and capacity-building services to a broad array of university, public, community-based, national, and international organizations to assist them in assessing and improving their programs. | Western Michigan University |
| Hiring and Working with an Evaluator | Although designed for juvenile justice program managers, this user-friendly guide provides insight that can be applied to adaptation evaluation. Pages 7–9 describe the steps involved in working with your external evaluator to develop an evaluation plan and specify the evaluation products. Page 9 explains how to maximize collaboration and prevent conflict. | Justice Research and Statistics Association |
Stakeholder Engagement & Communication
Stakeholders are individuals and organizations who are invested in the outcome of the evaluation. They could be members of your organization, partners, funders, and members of the community affected by your program. By including representatives from each stakeholder group throughout the evaluation process, you will obtain insights into how to improve the evaluation as well as build ownership and buy-in, which will lead to greater use of evaluation findings.
Who are the key stakeholders? How can you include them in the evaluation process?
Stakeholder insight is critical for understanding to what extent adaptation policies and programs will be effective for the communities you are working with. Throughout this process it is important to consider not only which stakeholders to involve and when to involve them, but also whether these stakeholders have the funding and resources (e.g., time, transportation options) to participate, and to what extent your organization can support their participation.
Definition of frontline communities from the American Society of Adaptation Professionals
“People and communities on the frontlines of climate change are those that experience the consequences of climate change first and worst. They include people who are both highly exposed to climate risks because of the places they live and have fewer resources, capacity, safety nets, or political power to respond to those risks because of widespread discrimination, promoted by histories of colonialism, white supremacy, domination of nature, and economic exploitation. They include Black people, Indigenous Peoples, people of color, people with low incomes and from low income backgrounds as well as other individuals and communities such as immigrants, those at-risk of displacement, old and young people, people experiencing homelessness, outdoor workers, incarcerated people, renters, people with disabilities, and chronically ill or hospitalized people.”
When engaging with stakeholders from diverse backgrounds, it’s crucial to ensure your language and evaluation approach are culturally responsive and grounded in the principles of cultural competency. This could mean changing the language used to communicate with stakeholders or selecting methods of analysis that reflect relevant cultural values. It is important to acknowledge and incorporate stakeholder knowledge and experiences related to climate change and its impacts. This includes traditional knowledge, which has been critical in better understanding drivers and impacts of climate change. When working with Indigenous groups, consider utilizing the Climate and Traditional Knowledges Workgroup’s Guidelines for Considering Traditional Knowledge in Climate Change Initiatives.
| Resources | Source |
| --- | --- |
| Center for Culturally Responsive Evaluation and Assessment (CREA) | University of Illinois at Urbana-Champaign |
| Practical Strategies for Culturally Competent Evaluation | U.S. Centers for Disease Control and Prevention |
| Guidelines for Considering Traditional Knowledges in Climate Change Initiatives | Climate and Traditional Knowledges Workgroup |
How can I best communicate my results?
After your evaluation is complete, be sure to communicate your findings! In the growing field of adaptation, it’s important to share your work — it can help others learn from your experience.
Some ideas for communicating your results:
- Share through your organization’s website, newsletter, and social media.
- Disseminate to stakeholders, funders, and similar organizations.
- Present your findings at the National Adaptation Forum or a regional adaptation conference.
- Contact the Resilience Metrics team or the Climate Adaptation Knowledge Exchange (CAKE) to explore sharing a case study.
Case Studies
Take a look at these selected examples of completed evaluations and repositories of other case studies that may give you ideas for how to conduct your own evaluation.
| Resource | Description | Source |
| --- | --- | --- |
| Quantifying the Success of Buyout Programs: A Staten Island Case Study | Evaluates the success of an adaptation program through quantitative analysis of participant vulnerability. | CAKE (Climate Adaptation Knowledge Exchange) |
| Michigan Climate and Health Adaptation Program (MICHAP) 2010-2013 Process Evaluation | Process evaluation of the implementation and outcomes of a climate and health program, identifying key successes and areas for improvement. Includes an assessment by MICHAP stakeholders and partners of issue awareness, program usefulness, successes, and areas for improvement. | Michigan Department of Community Health |
| Maricopa County Cooling Center Evaluation Project | Evaluation of a cooling centers project involving three types of surveys and interviews with participants and staff. | Maricopa County Department of Public Health |
| National Adaptation Forum Webinar: Climate Adaptation Evaluation and Monitoring | Webinar featuring examples of climate adaptation evaluation and monitoring efforts in the field, including the City of Boston’s adaptation indicator development and a qualitative analysis of the Florida Reef System Climate Action Plan. | National Adaptation Forum |
| CAKE (Climate Adaptation Knowledge Exchange) | Searchable database of case studies, resources, and opportunities to connect with others in the adaptation community. | CAKE |
| Resilience Metrics Case Examples | Collection of case studies about measuring adaptation success in National Estuarine Research Reserves and their communities. | Resilience Metrics |
References
1. IPCC, 2014: Annex II: Glossary [Mach, K.J., S. Planton and C. von Stechow (eds.)]. In: Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Core Writing Team, R.K. Pachauri and L.A. Meyer (eds.)]. IPCC, Geneva, Switzerland, pp. 117-130.
2. “Category 6: Monitoring and Evaluation.” World Health Organization, December 11, 2010. https://www.who.int/hiv/topics/vct/sw_toolkit/monitoring_and_evaluation/en/.
3. “Exploring & Identifying Indicators.” Resilience Metrics. Accessed February 26, 2021. https://resiliencemetrics.org/indicators-metrics/exploring-identifying.
4. Patton, M.Q. (1987). Qualitative Research Evaluation Methods. Thousand Oaks, CA: Sage Publishers.
5. Professional Data Analysts. “Project Highlights.” March 5, 2013. https://www.pdastats.com/news/archives/391.