Toward Evidence-Based Solutions: Operational Research for More Inclusive Humanitarian Programming

FSN Network
Mar 28, 2024


By: Lloyd Banwart, Activity Director, HAEC, and Hope Schaitkin, Technical Officer for Research and Learning, GAYA

There remains a glaring absence of impact evaluations for gender- and youth-focused topics in humanitarian contexts. This gap hinders progress toward equitable development and obscures the path to improving the impact of inclusive humanitarian programming in emergencies. The Humanitarian Assistance Evidence Cycle (HAEC) and Gender and Youth Activity (GAYA) have joined forces to challenge the sector to expand the evidence base of rigorous evaluations in these critical areas, advocating for causal inquiries that drive improvements in program design and implementation for increased inclusion.

Woman standing in a field smiling up with the sky behind her
Photo Credit: Conor Ashleigh / Save the Children

Impact Evaluations for Program Improvements

When exploring how inclusive programs are, it helps to shift away from "program impact" questions (e.g., did the overall program have an impact?) toward "program change" questions (e.g., which components or modalities are most effective in achieving impact?). This shift enables evaluations to focus on how to improve programming components for the intended outcome(s). For instance:

  • Which combination of activities is most successful in shifting power dynamics and ensuring equal benefits to the most marginalized?
  • In households that include both women and men, which is the more effective strategy for encouraging women’s increased involvement in joint decision-making on the use of cash assistance: household dialogues/decision-making modules or male-to-male engagement?
  • Which delivery modalities are most effective for individuals with different identity characteristics (for example, race, class, ethnicity, national origin, ability, sexual orientation, religion, and gender identity)?
  • What strategies are most effective in preventing incidents of gender-based violence (GBV)? For example, is adult male engagement programming more effective than creating and delivering content through the safe space approach? Or is it more effective to layer both approaches?

By conducting an impact evaluation around a program change question, rather than on the general impact of a program itself, we shift from asking, “Did this program work?” to a more nuanced and operational question: “Which approaches are most effective in achieving inclusive impact?”

Mukta and Pipasha are not only cousins but best friends. When Pipasha heard about Mukta’s experiences in an Adolescent Girls Group set up by the ‘Suchana’ programme, she knew she had to get involved. Together they learnt about health, nutrition, and how to prevent child marriage, and discussed their dreams with their peers.
Photo Credit: Fabeha Monir/Save the Children

Evaluation in Action: Real-World Examples of Program Change Research

HAEC’s experience funding impact evaluations in Niger, Nigeria, Colombia, Guatemala, and Honduras provides tangible examples of how implementer-led research can pave the way for improved programming and greater global understanding. Implementer-led research encourages a culture of reflection and responsible experimentation. Embedding operational research in an iterative process of evaluating different implementation models allows activities to adapt in protracted contexts. Donors will continue to invest in these same contexts in the years to come, enabling current emergency activities to inform and improve future ones.

For example, a current HAEC-funded impact evaluation of a USAID/BHA-funded emergency activity in Nigeria is looking at how outcomes vary for households that receive technical livelihood training compared to those that receive the same training plus an “add-on” life-skills component. The results from this research will inform Mercy Corps’ program designs in future emergency response activities in the region.

For a second example of how this can work in practice, watch HAEC’s video “Are Impact Evaluations in Humanitarian Settings Necessarily Unethical?”

However, the journey of filling evidence gaps can be challenging, particularly in humanitarian contexts.

Constraints to Inclusion and Conducting Impact Evaluations in Humanitarian Contexts

HAEC’s research found that limited donor requirements and enforcement, a lack of funding, and low implementer bandwidth result in impact evaluations being seen as an added burden rather than an integrated tool for improvement. These factors also leave humanitarian programs short on the funding, time, and staff needed to conduct impact evaluations.

These factors additionally discourage inclusive practices, as true inclusion requires dedicated time, people, and resources to ensure diverse perspectives are part of the impact evaluation process. For example, conducting an inclusive impact evaluation involves tailoring and piloting outreach methods and survey tools so they reach and are accessible to the most marginalized, including people with disabilities and members of the LGBTQIA+ community. If evaluations are rushed or underfunded, those extra steps may be the first to get cut, meaning we don’t get to hear from the most marginalized.

HAEC’s research has also found that limited proficiency in impact evaluations and ineffective research partnerships create further inclusion hurdles. Without a clear understanding of how impact evaluations work, implementers may default to methods that perpetuate existing biases. For example, if the research question focuses on individuals rather than households, but the survey instruments only measure impact at the household level, the results provide an incomplete picture.

These factors can lead to research designs that don’t consider the differential impacts on various groups (e.g., women, people with disabilities, minority groups). Where there are questions about the impact and effectiveness of activity design for groups with different identity characteristics, those questions must be embedded in the research design. Unfortunately, many humanitarian awards are unwilling to invest in the research needed to answer them.

Photo Credit: Mustafa Saeed / Save the Children

A Call to Action

The HAEC and GAYA collective call to action is twofold. First, impact evaluations should mainstream relevant identity characteristics into existing research questions and rigorously evaluate the impact of humanitarian programming on economic and social inequalities. This starts with a clear understanding of the current landscape as depicted in HAEC’s Evidence Gap Map (EGM), which identified 163 peer-reviewed and gray literature impact evaluations with an emergency food security focus. The EGM found that, while many evaluations examined age (24) and sex (20), few addressed other intersectional characteristics such as education (15), ethnicity (2), and religion (1). Crucially, only 12 impact evaluations considered how these identity factors intersect, highlighting a significant gap in our understanding of the complexity of identity and how it influences participants’ ability to benefit fully from programming.

Further, only six impact evaluations examined poverty and inequality impacts, one of the lowest counts of any sector the EGM examined. This finding indicates a major knowledge gap: the impact of humanitarian programming on economic and social inequalities, particularly those fueled by power imbalances stemming from gender identity, age, disability, sexual orientation, and ethnicity, is not being rigorously evaluated. Impact evaluations need to mainstream relevant identity characteristics into existing research questions and rigorously evaluate the impact of humanitarian programming on these inequalities.

Second, we call on you to start a dialogue with your colleagues and peers about the constraints to conducting operational research, both broadly within impact evaluations and specifically within inclusive humanitarian programming. By doing so, we not only enhance the effectiveness of our interventions but also contribute to a larger framework that supports informed decision-making across the humanitarian sector. The questions we encourage stakeholders to consider are evolving from “Did our program have an impact?” to more refined probes like “Which iteration of our program delivers the most effective outcomes?” and “Does adding a new program component improve outcomes for women more quickly than for men?”

By prioritizing operational research questions, interventions can be refined over time and across varying contexts, ensuring they are not only impactful, but also economically viable and tailored to the specific needs of inclusive humanitarian programming.

HAEC and GAYA remain dedicated to nurturing a culture of continuous learning and adaptation. By championing this shift in evaluative focus, we are not just enhancing the effectiveness of our programs but also contributing to a broader, evidence-informed framework within the humanitarian sector. Our ultimate goal is to ensure that inclusive humanitarian programming achieves its full potential, driving meaningful change and delivering the transformative impact we all seek.


Written by FSN Network

We engage the food security community to share knowledge and resources to support vulnerable households worldwide. Privacy: www.fsnnetwork.org/privacy-policy
