EA - EA is three radical ideas I want to protect, by Peter Wildeford

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is three radical ideas I want to protect, published by Peter Wildeford on March 27, 2023 on The Effective Altruism Forum.

Context note: This is more of an emotional piece meant to capture feelings and reservations, rather than a logical piece meant to make a persuasive point. It is a very personal piece that represents only my own views, not necessarily the views of anyone else or any institution I may represent.

It's been a rough past five months for effective altruism.

Because of this, many people are understandably questioning their connection and commitment to the movement, and questioning whether "effective altruism" is still a brand or set of ideas worth promoting. I've heard some suggest it might be better to promote other brands instead, and perhaps even abandon promotion of "effective altruism" altogether.

I can see ways in which this would be a good move. Ultimately I want to do whatever is most impactful. However, I worry that moving away from effective altruism could make us lose some of what makes the ideas and community so special, and what drew me to the community more than ten years ago.

Essentially, effective altruism contains three radical ideas that I don't easily find in other communities. These are the three ideas I want to protect.

Radical empathy

Humanity has long had a fairly narrow moral circle. Radical empathy is the idea that there are many groups of people, or other entities, that are worthy of moral concern even if they don't look or act like us. Moreover, it's important to deliberately identify all entities worthy of moral concern so that we can ensure they are protected. I find effective altruism to be unique in extending moral concern not just to traditionally neglected farm animals and future humans (very important), but also to invertebrates and potential digital minds. Effective altruists are also unique in trying to intentionally understand who might matter and why, and in actually incorporating this into the process of discovering how best to help the world. "Who might matter that we currently neglect?" is a key question that is asked far too rarely.

We understand that while it's okay to have special concern for family and friends, we should generally aim to make altruistic decisions based on impartiality, not weighing people differently just because they are at a different geographic distance, at a different temporal distance, of a different species, or running cognition on a different substrate.

I worry that if we were to promote individual subcomponents of effective altruism, like pandemic preparedness or AI risk, we might not end up promoting radical empathy, and we might end up missing entire classes of entities that matter. For example, I worry that a more subtle form of misaligned AI might be an AI that treats humans okay but adopts common human views on nonhuman animal welfare and perpetuates factory farming or the abuse of a massive number of digital minds. The fact that effective altruism has somehow created a lot of AI developers who avoid eating meat and care about nonhuman animals is a big and fairly unexpected win. I think only some weird movement that somehow combined factory farming prevention with AI risk prevention could have created that.

Scope sensitivity

I also really like that EAs are willing to "shut up and multiply". We're scope sensitive. We're cause neutral. Nearly everyone else in the world is not. Many people pick ways to improve the world based on vibes or personal experience, rather than through a systematic search for how they can best use their resources. Effective altruism understands that resources are limited and that we have to make hard choices between potential interventions, helping 100 people instead of 10, even if helping the 10 people feels as or m...