Evaluation

Reducing poverty through sustained economic growth is the singular goal of MCC, and independent evaluations are MCC’s chosen means of measuring results. As detailed in MCC’s Policy on Monitoring and Evaluation, evaluations are integral to MCC’s commitment to accountability, learning, transparency, and evidence-based decision making. Independent evaluations, which are conducted by third-party independent experts, help answer the following fundamental questions:

  • Was MCC’s investment implemented according to plan? This is key to transparency.
  • Did the investment produce the intended results? Did it achieve its stated objective in pursuit of MCC’s mission to reduce poverty through economic growth? This is key to accountability.
  • Why did or didn’t the investment achieve certain results? This is key to learning.
MCC’s commitment to independently evaluating every project and publishing those results distinguishes it in the international development community.

MCC’s Independent Evaluation Portfolio

This listing summarizes the status, evaluation type, and expected final evaluation publication date for all planned, active, and completed independent evaluations of MCC projects. The data can be filtered by evaluation type and status. MCC aims to update it quarterly.

Data is as of November 6, 2023.

  • The Evidence Platform contains the official published reports of all active and completed independent evaluations, along with the data collection instruments, results and learning summaries, and the publicly accessible microdata gathered to support these evaluations. The Platform offers various search functions to connect users to the specific knowledge products of interest.
  • This paper builds on prior publications of M&E learning to describe the key drivers of the evolution of evaluation at MCC, the systems for managing evaluation quality, and a refined definition of high-quality evaluation to guide MCC’s evaluation practice going forward.

Evaluating an Investment’s Performance

Impact Evaluations

Impact evaluations are designed to measure changes in outcomes that can be attributed to the MCC investment using a valid counterfactual. The counterfactual represents what would have occurred without MCC’s investment and enables the evaluation to distinguish between changes in outcomes that resulted from MCC’s investment versus those that were driven by external factors, such as increased market prices for agricultural goods, national policy changes, or favorable weather conditions.
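The counterfactual logic described above can be sketched numerically. The following is an illustrative difference-in-differences calculation with entirely hypothetical numbers, not MCC data or tooling; it shows how a control group lets an evaluation separate the investment's effect from external trends:

```python
# Hypothetical example: average household income in communities that
# received an investment (treatment) vs. comparable communities that
# did not (control). All figures are invented for illustration.
treatment_before, treatment_after = 100.0, 130.0
control_before, control_after = 100.0, 112.0

# Change driven by external factors (market prices, policy, weather)
# appears in the control group as well; it represents the counterfactual.
external_change = control_after - control_before   # 12.0

# Total change observed in the treatment group.
total_change = treatment_after - treatment_before  # 30.0

# The change attributable to the investment is what remains after
# subtracting the counterfactual trend.
impact = total_change - external_change            # 18.0
```

In practice the counterfactual is estimated with far more care (randomization, matching, statistical controls), but the core idea is the same: measure the change that would have happened anyway, and subtract it.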

Performance Evaluations

Performance evaluations are designed to measure changes in outcomes and assess the contribution of MCC investments to changes in those outcomes when it is not possible to identify a valid counterfactual. While performance evaluations cannot attribute outcome changes to specific causes, including MCC’s investments, they can still be rigorous. They can build a credible case for the contribution of MCC’s investment to the changes in outcomes by, for example, carefully tracking the theory of change with data and triangulating data sources.
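The triangulation approach mentioned above can be loosely illustrated in code. This is a hypothetical sketch (the data, steps, and function are invented, not MCC methodology): each link in a theory of change is checked against multiple independent data sources, and the contribution case is treated as supported only when enough sources agree at every step:

```python
# Hypothetical theory-of-change steps, each with findings from several
# independent data sources (e.g., admin records, surveys, interviews).
# True means that source supports the step; all values are invented.
theory_of_change = {
    "training delivered": [True, True],         # admin records, site visits
    "practices adopted":  [True, True, False],  # survey, interviews, observation
    "yields increased":   [True, True],         # survey, market data
}

def step_supported(findings, min_sources=2):
    """A step counts as supported if at least min_sources confirm it."""
    return sum(findings) >= min_sources

supported = {step: step_supported(f) for step, f in theory_of_change.items()}

# A credible contribution case requires every causal link to hold.
contribution_case = all(supported.values())
```

Unlike the counterfactual approach of an impact evaluation, this yields a qualitative judgment of contribution rather than a quantitative estimate of attribution.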

How We Choose

MCC considers several critical factors when deciding whether to invest in an impact or a performance evaluation:

  • Learning potential: A strong case for an impact evaluation exists for programs where the assumptions underlying the project logic are based on limited evidence. A rigorous impact evaluation tests assumptions about a project’s effectiveness and contributes substantially to MCC’s future decision-making and the global evidence base.
  • Feasibility: The feasibility of designing and implementing a strong impact evaluation depends on the design of the project being evaluated and the ability of the evaluator to estimate a valid counterfactual. The feasibility of maintaining the integrity of that counterfactual through the duration of the evaluation period is another key consideration.
  • Strong stakeholder commitment: Identifying a control group and ensuring adherence to an impact evaluation design may require significant commitment and collaboration by sector staff, program implementers, and evaluators, both within MCC and in partner countries.
  • Appropriate timing: The evaluation timeline must be informed by the project logic, particularly regarding assumptions about how long it will take for expected impacts to occur. By collecting data at the wrong time, evaluations may misrepresent the impacts on outcomes of interest or miss important lessons.
  • Proper coordination: Evaluations require close coordination between the evaluator and the program implementer. Program designers, implementers and evaluators must work together to understand and define the program logic, estimate how long it will take expected impacts to accrue, and identify what is most important to learn about how the program works. This is particularly true for impact evaluations, which require coordination and commitment among various stakeholders to estimate a counterfactual.

Both impact and performance evaluations can be informative in measuring trends in outcomes, but impact evaluations have the additional benefit of being able to attribute the results they measure to MCC’s investment. Balancing tradeoffs when deciding how best to evaluate a program is not easy, but it is a challenge that MCC embraces to ensure accountability for results and to improve learning about what works. MCC assesses risks to high-quality results measurement across the evaluation portfolio on an annual basis.