“How bad would human extinction be?” by arvomm

EA Forum Podcast (Curated & popular) - A podcast by EA Forum Team

Figure 1 (full caption in the original post).

This post is part of Rethink Priorities' Worldview Investigations Team's CURVE Sequence: "Causes and Uncertainty: Rethinking Value in Expectation." The aim of this sequence is twofold: first, to consider alternatives to expected value maximisation for cause prioritisation; second, to evaluate the claim that a commitment to expected value maximisation robustly supports the conclusion that we ought to prioritise existential risk mitigation over all else.

Executive Summary

Background

This report builds on the model originally introduced by Toby Ord for estimating the value of existential risk mitigation. The previous framework has several limitations, including:

- The inability to model anything requiring time units shorter than centuries, such as AI timelines.
- A very limited range of scenarios: risk and value growth can each take different forms, and each combination represents one scenario.
- No explicit treatment of persistence (how long the effects of mitigation efforts last) as a variable of interest.
- No easy way [...]

A toy numerical sketch of this family of models appears after the episode details below.

---

Outline:
(00:38) Executive Summary
(05:26) Abridged Report
(11:20) Generalised Model: Arbitrary Risk Profile
(13:37) Value
(19:00) Great Filters and the Time of Perils Hypothesis
(21:06) Decaying Risk
(21:55) Results
(21:58) Convergence
(25:35) The Expected Value of Mitigating Risk Visualised
(31:59) Concluding Remarks
(35:00) Acknowledgements

The original text contained 24 footnotes, which were omitted from this narration.

---

First published: October 23rd, 2023
Source: https://forum.effectivealtruism.org/posts/S9H86osFKhfFBCday/how-bad-would-human-extinction-be

Narrated by TYPE III AUDIO.
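The setup the report generalises lends itself to a short numerical sketch. The Python below is a minimal illustration, not the report's actual model or code: the function names (`expected_value`, `mitigated`), the per-period risk and growth values, and the form of the mitigation (a proportional risk cut over a persistence window) are all assumptions chosen for clarity. It shows how expected value depends jointly on a risk profile, a value-growth profile, and how long mitigation persists.

```python
def expected_value(risk, value, horizon):
    """Expected value over `horizon` periods.

    risk(t):  per-period probability of extinction in period t
    value(t): value realised in period t, conditional on surviving it
    """
    ev, survival = 0.0, 1.0
    for t in range(horizon):
        survival *= 1.0 - risk(t)  # probability of still existing after period t
        ev += survival * value(t)  # value accrues only along surviving histories
    return ev

def mitigated(risk, reduction, persistence):
    """Baseline risk cut by `reduction` for the first `persistence` periods,
    then back to baseline (an assumed, simple persistence model)."""
    return lambda t: risk(t) * (1.0 - reduction) if t < persistence else risk(t)

baseline = lambda t: 0.002      # 0.2% extinction risk per period (assumed)
growth = lambda t: 1.001 ** t   # slow per-period value growth (assumed)

for persistence in (1, 10, 100):
    gain = (expected_value(mitigated(baseline, 0.5, persistence), growth, 10_000)
            - expected_value(baseline, growth, 10_000))
    print(f"persistence={persistence:>3} periods: EV gain from halving risk = {gain:.2f}")
```

Note that with these assumed numbers the sum converges because value grows more slowly than survival probability decays; when growth outpaces risk, the infinite-horizon value diverges, which is presumably why convergence gets its own section in the report's results.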
