Reflecting on the Future of M&E in the Age of Big Data

Data-Pop Alliance organized two sessions on Big Data and M&E that I moderated at the M&E and Tech Conference in DC on September 25th and at its sister Deep Dive in NYC on October 3rd, sponsored by The Rockefeller Foundation, FHI360, and the GSMA. Two months and an evaluation later, I am seizing the opportunity offered by the co-organizers, Wayan Vota, from FHI360 and Kurante, and Linda Raftree, from Kurante and many other places, to summarize these discussions and my reflections on the topic.

This is especially timely as I am finalizing, with my co-authors Sally Jackson, M&E specialist at the UN Global Pulse Jakarta Lab, and Ana Aerias, our Program Manager, a chapter on the topic for a forthcoming book on Complex Evaluation, to be published by SAGE next year, the "Year of Evaluation".

The DC session, titled "How to Leverage Big Data for Better M&E in Complex Interventions?", was a panel discussion with Kalev Leetaru, Terry Leach, Andrew Means, Jack Molyneaux and Bridget Sheerin; my objective was to frame the 'big picture' and surface how Big Data could be relevant to M&E activities, starting from the simple (and simplistic) notion that so much 'new real-time data' could surely be 'leveraged' for M&E purposes. The NY session, which I chose to call "M&E of complex interventions in the age of Big Data: obsolescence or emergence?", was designed to be more interactive, with only two panelists, Brian d'Alessandro and Evan Waters, and to get closer to the crux of the applications and implications of Big Data for the M&E of complex interventions, understood here as large-scale and multi-actor.

Both sessions attracted about 50 participants and, despite their differences, raised three main sets of considerations.

The first set can be called definitional-conceptual. As several panelists and participants underscored, it is important to reach a common understanding of what we mean by Big Data, since changing how we define 'it' changes the question of how 'it' may affect M&E. A few participants considered Big Data to be 'just' large datasets; by that token, as one participant noted, 'Big Data' would be old news. And despite a general consensus about the novelty of Big Data and its associated opportunities and challenges, specifics are often lacking.

In both sessions, and in NY in particular, I presented my vision of Big Data, articulated in an article and our recent draft primer: I argued that Big Data should no longer be considered as 'big data' characterized by the intersection of 3Vs (for Volume, Velocity and Variety), but rather as the union of 3Cs, for Crumbs (personal digital data particles), Capacities (new tools and methods) and Community (new actors with incentives), together creating a new complex system in its own right.
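Schematically, and purely as a notational shorthand of my own rather than a formal definition, the contrast can be written as follows:

```latex
% 3Vs: Big Data as datasets exhibiting all three properties at once
\mathrm{BigData}_{3V} = \mathrm{Volume} \cap \mathrm{Velocity} \cap \mathrm{Variety}
% 3Cs: Big Data as a system formed by the conjunction of three components
\mathrm{BigData}_{3C} = \mathrm{Crumbs} \cup \mathrm{Capacities} \cup \mathrm{Community}
```

The intersection casts Big Data as a property of datasets; the union casts it as an ecosystem in which data, tools and actors interact.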

Considering Big Data as a complex system allows, or forces, us to reconsider the whole question of its relevance for and effect on M&E through a systems lens, and to see the question's complexity emerge more clearly. The question is not whether and how Big Data may improve existing M&E systems by providing more data. It is how this new complex system, which reflects and affects the behaviors of complex human ecosystems subject to complex policies and programs, may fundamentally change the theory and practice of monitoring and/or evaluating the effects of interventions. We also spent time 'unpacking' M&E into its 'M' and 'E' parts, and thought about how the question may yield different answers for each.

This steered the discussion towards the question's theoretical dimensions and their practical implications. One hypothesis that stirred great interest in both sessions is that Big Data may mean the advent of a new kind of 'M', a mix of monitoring and (predictive) modeling, and the demise of 'classic' E, tilting the scale of practice towards the former.

“Is program evaluation dying? This question has been swirling around my head the last few months. I don’t mean to imply that programs should stop evaluating their outcomes. I just find that the current framework of traditional, social science driven program evaluation is frankly not embracing the possibilities of today’s world. Put simply, program evaluation was not made for the age of big data.”

Andrew Means, http://www.marketsforgood.org/the-death-of-evaluation/

In a blog post circulated ahead of the Conference, and during the DC session, Andrew Means criticized 'classic' program evaluation for being too reflective and not predictive enough, backward-looking rather than forward-looking, and too focused, in his words, on "proving rather than improving".

Many participants and commentators (including Andrew Means himself) disagreed with the notion that Big Data means the death of evaluation; rather, it may accelerate its adaptation, towards a blending of real-time monitoring, predictive modeling and impact evaluation, depending on the specific context.
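To make this blending concrete, here is a minimal sketch in Python of what such a 'new M' could look like in practice: a simple trend model is fitted to the history of a program indicator, produces a short-term forecast, and flags incoming observations that deviate sharply from it. The indicator, the data and the threshold are all hypothetical, and a real system would use a proper forecasting model; the sketch only illustrates the monitoring-plus-modeling logic.

```python
# A minimal sketch of "M as monitoring + modeling": fit a simple linear
# trend to past values of a program indicator, forecast the next point,
# and flag the new observation if it falls outside a tolerance band.
# Data and thresholds are hypothetical.
import statistics

def fit_trend(values):
    """Ordinary least-squares slope and intercept over time indices 0..n-1."""
    n = len(values)
    xs = range(n)
    x_bar = statistics.mean(xs)
    y_bar = statistics.mean(values)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
             / sum((x - x_bar) ** 2 for x in xs))
    return slope, y_bar - slope * x_bar

def monitor(history, observed, tolerance=2.0):
    """Forecast the next point from history; flag large deviations."""
    slope, intercept = fit_trend(history)
    forecast = intercept + slope * len(history)
    residuals = [y - (intercept + slope * x) for x, y in enumerate(history)]
    sigma = statistics.stdev(residuals)
    return forecast, abs(observed - forecast) > tolerance * sigma

# Hypothetical weekly counts of a service-uptake indicator.
weekly_visits = [120, 132, 128, 141, 150, 155, 149, 162]
forecast, flagged = monitor(weekly_visits, observed=110)
print(f"forecast={forecast:.1f}, anomaly flagged={flagged}")
```

The point is that the same data stream serves two purposes at once: tracking implementation in near real time and anticipating where it is heading.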

These considerations point to the centrality and difficulty of causal inference. In contrast to the ideal-typical case of a perfect Randomized Controlled Trial, it is very difficult to infer causality between a treatment and its effect (to find 'what works') in complex systems marked by feedback loops, recursive causality, and the like.

On the one hand, Big Data may indeed compound the evaluability challenge by adding feedback loops and complexity overall. On the other hand, highly granular data lend themselves to finding or creating natural or quasi-natural experiments, as a recent paper argued. A panelist in DC noted that new methods were being developed, and tools becoming available, to infer causality in time series. This suggests that causal inference, and the related issues of sampling biases and their implications for internal and external validity, will receive significant attention as one of the next frontiers of Big Data's applications to social problems.1
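As an illustration of the kind of quasi-experimental reasoning that granular time series invite, here is a minimal sketch of an interrupted time series (segmented regression) analysis in Python. The data are synthetic and the method is my choice of example rather than one the panelists specifically endorsed; the causal reading of the coefficients rests on strong assumptions, notably no concurrent shocks and a correctly specified pre-intervention trend.

```python
# A minimal sketch of interrupted time series analysis: regress an
# outcome on time, a post-intervention indicator, and the time elapsed
# since the intervention. The indicator's coefficient estimates the
# immediate level shift; the elapsed-time coefficient, the slope change.
# Data are synthetic, generated with a known level shift of 5.0.
import numpy as np

rng = np.random.default_rng(0)
T, T0 = 24, 12                       # 24 months, intervention at month 12
t = np.arange(T)
post = (t >= T0).astype(float)
y = 10 + 0.5 * t + 5.0 * post + rng.normal(0, 1, T)

# Design matrix: intercept, time, post indicator, time since intervention.
X = np.column_stack([np.ones(T), t, post, post * (t - T0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated level change: {beta[2]:.2f} (true value: 5.0)")
print(f"estimated slope change: {beta[3]:.2f} (true value: 0.0)")
```

Those assumptions are exactly where sampling biases and threats to internal and external validity re-enter the picture.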

The third set of issues comprised the question's ethical and institutional aspects. Several participants noted that the term 'Monitoring' itself has taken on a new connotation in the post-Snowden era. As several participants, but also recent controversies and conferences, underlined, the rise of Big Data as a system may require the development of new codes of ethics that reflect current technological abilities. In both sessions, panelists and participants agreed that M&E practitioners, given their experience and expertise in balancing competing demands for privacy (by and of individuals and groups) and for evidence and accountability (by and for citizens, donors, and others), should play a more active role in these debates in the near future. To contribute to this process, comments and feedback are welcome!

 

1. I gave a talk last week at a conference at UC Berkeley on Population Research in the Age of Big Data, where Prof. Maya Petersen presented a paper on causal inference from Big Data in epidemiology.
