“Don’t treat probabilities less than 0.5 as if they’re 0” by MichaelDickens

EA Forum Podcast (All audio) - A podcast by EA Forum Team

Example: "[wishy-washy argument that AI isn't risky], therefore we shouldn't work on AI safety." How confident are you in that? From your perspective, there's a non-trivial possibility that you're wrong. And I don't even mean 1%, I mean more like 30%. Almost everyone working on AI safety thinks it has less than a 50% chance of killing everyone, but working on it still has positive expected value.

Example: "Shrimp are not moral patients, so we shouldn't try to help them." Again, how confident are you in that? There's no way you can be confident enough for this argument to change your prioritization. The margin of error on the cost-effectiveness of a given intervention is far larger than the difference in subjective probability on "shrimp are sentient" between someone who does, and someone who does not, care about shrimp welfare.

EAs are better at avoiding this fallacy than [...]

---

First published: February 26th, 2025

Source: https://forum.effectivealtruism.org/posts/hq4oZiCDzggJyGXas/don-t-treat-probabilities-less-than-0-5-as-if-they-re-0

---

Narrated by TYPE III AUDIO.
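The expected-value reasoning in the AI safety example can be sketched numerically. This is an illustrative toy calculation, not from the post: all probabilities, values, and units below are hypothetical, chosen only to show that an outcome with probability well under 0.5 can still dominate the calculation.

```python
def expected_value(p_outcome, value_if_outcome, value_otherwise=0.0):
    """Expected value of an action under a subjective probability of an outcome."""
    return p_outcome * value_if_outcome + (1 - p_outcome) * value_otherwise

# Hypothetical numbers (arbitrary units):
p_risk = 0.10            # subjective probability the bad outcome occurs
harm_averted = 1_000_000 # value of averting it, conditional on it occurring
cost_of_working = 1_000  # cost of doing the safety work

net_ev = expected_value(p_risk, harm_averted) - cost_of_working
print(net_ev)  # 99000.0: positive even though p_risk is far below 0.5
```

Rounding p_risk down to 0 would flip the conclusion to "not worth the cost," which is exactly the fallacy the post's title warns against.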
