EA - Promoting compassionate longtermism by jonleighton
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Promoting compassionate longtermism, published by jonleighton on December 7, 2022 on The Effective Altruism Forum.

This post is in 6 parts, starting with some basic reflections on suffering and ethics and ending with a brief project description. While the post might seem overly broad-ranging, it is meant to set out some basic arguments and explain the rationale for the project initiative in the last section, for which we are looking for support and collaboration. I go into much greater detail about some of the core ethical ideas in a new book about to be published, which I will present soon in a separate post. I also make several references here to Will MacAskill’s What We Owe the Future, because many of the ideas he expresses are widely shared among EAs; while I agree with much of what he says, there are some important stances I disagree with that I will explain in this post.

My overall motivation is a deep concern about the persistence of extreme suffering far into the future, and the possibility of taking productive steps now to reduce the likelihood of that happening, thereby increasing the likelihood that the future will be a flourishing one.

Summary:
- Suffering has an inherent call to action, and some suffering literally makes non-existence preferable.
- For various reasons, there are mixed attitudes within EA towards addressing suffering as a priority.
- We may not have the time to delay value lock-in for too long, and we already know some of the key principles.
- Increasing our efforts to prevent intense suffering in the short term may be important for preventing the lock-in of uncompassionate values.
- There is an urgent need to research and promote mechanisms that can stabilise compassionate governance at the global level.
- OPIS is initiating research and film projects to widely communicate these ideas and the concrete steps that can already be taken, and we are looking for support and collaboration.

1. Some reflections on suffering

Involuntary suffering is inherently bad – one could argue that this is ultimately what “bad” means – but extreme, unbearable suffering is especially bad, to the point that non-existence is literally a preferable option. At this level, people choose to end their lives, if they can, in order to escape the pain.

We probably cannot fully grasp what it’s like to experience extreme suffering unless we have experienced it ourselves. To get even an approximate sense of it requires engaging with accounts and depictions of it. If we don’t, we may underestimate its significance and attribute much lower priority to it than it deserves.
As an example, a patient I have supported who has a terrible condition called SUNCT, and who at one point attempted suicide, described in a presentation we recently gave together in Geneva the utter hell he experienced, and how no one should ever have to experience what he did.

Intense suffering has an inherent call to action – we respond to it whenever we try to help people in severe pain, or animals being tortured on factory farms.

There is no equivalent inherent urgency to fill the void and bring new sentient beings into existence, even though this is an understandable desire of intelligent beings who already exist.

Intentionally bringing into existence a sentient being who will definitely experience extreme, unbearable suffering could be considered uncompassionate and even cruel.

I don’t think the above reflections should be particularly controversial. Even someone who would like to fill the universe with blissful beings might still concede that the project doesn’t have an inherent urgency – that is, that it could be delayed for some time, or even indefinitely, without harm to anyone (unless you believe, as do some EAs, that every instance of inanimate matter in space and time that isn’t being optimally used ...
