EA - How will we know if we are doing good better: The case for more and better monitoring and evaluation by TomBill

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How will we know if we are doing good better: The case for more and better monitoring and evaluation, published by TomBill on February 8, 2023 on The Effective Altruism Forum.

Written by Sophie Gulliver and Thomas Billington

TL;DR: EAs want to do the most good with the resources we have. But how do we know if we are actually doing the most good? How do we know if our projects are running efficiently and effectively? How do we know we are achieving impact? How do we check that we are not causing harm? Monitoring and evaluation (M&E) theories and tools can help EA organisations answer these questions, but they are currently not being applied to a sufficient degree. These theories and practices will help us achieve more impact and value in both the short term and the long term.

Here, we outline a few ways to build M&E knowledge and skills within EA, including:

- Helpful resources
- The EA M&E Slack community group
- Signing up for our pro bono M&E support
- M&E as a career choice

Introduction

EA is "a project that aims to find the best ways to help others, and put them into practice". Specifically, EA distinguishes itself from general do-gooding through its commitment to doing the most good possible with its resources.

To that end, the EA community is only as good at achieving its goal of "finding the best way to help others" as it is at knowing what the best way to help others is. But how can we know what the best ways to help others are? And how will we know if what we are doing is actually helping others in a cost-effective and impactful way?

Monitoring and evaluation (M&E) experts, tools and concepts are some of the best ways to realise effective altruism's philosophy of maximising impact and doing good better.

Monitoring and evaluation are two interrelated concepts that help us track and assess a project or organisation's progress, impact, and value. In essence, if you really care about whether a project is making the world better, then you should also care about M&E.

However, in our experience, the integration of M&E tools and expertise within the EA community has been variable and mostly restricted to the global health and development sector. This post argues that the broader EA movement should engage more with M&E to ensure we are doing the most good possible.

Note: This post aims to be a broad overview of M&E. Our objective is to increase knowledge and awareness of M&E in the EA space, and to offer some first steps for learning more. Don't worry if you feel you can't apply it immediately: future posts will delve into the details, and see our section below for what you can do right now.

What is M&E?

Monitoring and evaluation are two distinct functions that work synergistically. They can be defined as follows:

Monitoring: the systematic and routine collection of information to track progress and identify areas for improvement. Monitoring asks questions like:

- Are we on track?
- Are we reaching the right people?
- Are we using our money and time efficiently?
- What can we improve?

For example, if you were running a project to reduce deaths through improved water quality, you might regularly monitor chlorine availability at water points, or run quarterly surveys asking community members how often they are chlorinating their water (see the sketch below).
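To make the monitoring example concrete, here is a minimal sketch in Python of how such quarterly survey data might be checked against a target. The survey figures and the 80% target are purely illustrative assumptions, not from the original post.

```python
# Hypothetical monitoring sketch for the water-quality example above.
# Each entry: households reporting regular chlorination, out of those surveyed.
quarterly_surveys = [
    {"quarter": "2022-Q1", "chlorinating": 112, "surveyed": 200},
    {"quarter": "2022-Q2", "chlorinating": 131, "surveyed": 210},
    {"quarter": "2022-Q3", "chlorinating": 149, "surveyed": 205},
]

TARGET_RATE = 0.80  # illustrative programme target: 80% of households chlorinating

for survey in quarterly_surveys:
    rate = survey["chlorinating"] / survey["surveyed"]
    status = "on track" if rate >= TARGET_RATE else "below target"
    print(f"{survey['quarter']}: {rate:.0%} chlorinating ({status})")
```

Routine checks like this answer the monitoring questions above ("Are we on track?") on a regular cadence, and flag problems early enough to adjust the project.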
Evaluation: the rigorous assessment of the value of a project or programme to inform decision-making. This assessment usually measures the performance of the project against criteria that define what a 'valuable' project looks like. These could be criteria like 'impactful', 'cost-effective' or 'sustainable'. Unlike monitoring, evaluations answer bigger questions to inform important decisions about a project's future:

- How well is this project doing overall?
- How valuable is it?
- Is it worth it?
- Should we adapt, scale up or scale down?

Given the deeper analysis required, evaluations are t...
