EA - Most small probabilities aren't pascalian by Gregory Lewis
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Most small probabilities aren't pascalian, published by Gregory Lewis on August 7, 2022 on The Effective Altruism Forum.

Summary

We routinely act to prevent, mitigate, or insure against risks with P = 'one in a million'. Risks similarly or more probable than this should not prompt concerns about 'Pascal's mugging' and the like.

Motivation

Reckless appeals to astronomical stakes often prompt worries about Pascal's mugging or similar. Sure, a 10^-20 chance of 10^40 has the same expected value as 10^20 with P = 1, but treating them as equivalent when making decisions is counter-intuitive. Thus one can (and perhaps should) be wary of lines of argument which amount to: "The scale of the longterm future is so vast we can basically ignore the probability - so long as it is greater than 10^-lots - to see x-risk reduction is the greatest priority."

Most folks who work on (e.g.) AI safety do not think the risks they are trying to reduce are extremely (let alone astronomically) remote. Pascalian worries are unlikely to apply to attempts to reduce a baseline risk of 1/10 or 1/100. They are also unlikely to apply if the risk is a few orders of magnitude lower (or a few orders of magnitude less tractable to reduce) than some suppose.

Despite this, I sometimes hear remarks along the lines of: "I only think this risk is 1/1000 (or 1/10,000, or even 'a bit less than 1%'), so me working on this is me falling for Pascal's wager." This is mistaken: an orders-of-magnitude lower risk (or likelihood of success) makes something, all else equal, orders of magnitude less promising, but it does not mean it can be dismissed out of hand. Exactly where the boundary for pascalian probabilities should be drawn is up for grabs (10^-10 seems reasonably pascalian; 10^-2 definitely not).
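The expected-value equivalence mentioned above is just multiplication. A minimal sketch in Python, using the figures from the text:

```python
import math

# The two gambles from the text: a 10^-20 chance of a payoff of 10^40,
# versus a payoff of 10^20 with certainty.
ev_long_shot = 1e-20 * 1e40   # expected value = probability * payoff
ev_sure_thing = 1.0 * 1e20

# Identical expected value - yet treating the two as decision-equivalent
# is exactly the move the post flags as counter-intuitive.
assert math.isclose(ev_long_shot, ev_sure_thing)
```

The equality of expected values is the whole setup for the pascalian worry: it holds no matter how extreme the probability, which is why some further principle is needed to say when a probability is too small to act on.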
I suggest a very conservative threshold of '1 in a million': human activity in general (and our own in particular) is routinely addressed to reducing, mitigating, or insuring against risks between 1/1000 and 1/1,000,000, and we typically consider these activities 'reasonable prudence' rather than 'getting mugged by mere possibility'.

Illustrations

Among many other things:

Aviation and other 'safety critical' activities

One thing which can go wrong when flying an airliner is that an engine stops working. Besides all the engineering and maintenance done to make engines reliable, airlines take many measures to mitigate this risk:

- Airliners have more than one engine, and are designed and operated so that they are able to fly and land at a nearby airport 'on the other engine' should one fail at any point in the flight.
- Pilots practice in initial and refresher simulator training how to respond to emergencies like an engine failure (apparently engine failure just after take-off is the riskiest).
- Pilots also make a plan before each flight for what to do 'just in case' an engine fails whilst they are taking off.

This risk is very remote: the rate of (jet) engine failure is something like 1 per 400,000 flight hours. So for a typical flight, the risk is maybe something like 10^-4 to 10^-5. The risk of an engine failure resulting in a fatal crash is even more remote: the most recent examples I could find happened in the 90s. Given the millions of airline flights a year, '1 in a million flights' is a comfortable upper bound. Similarly, the individual risk-reduction measures mentioned above are unlikely to be averting that many micro(/nano?) crashes: a pilot who (somehow) manages to skive off their recurrency training or skip the pre-flight briefing may still muddle through if the risk they failed to prepare for realises.

I suspect most consider the diligent practice by pilots for events they are unlikely to ever see in their career admirable, rather than getting suckered by Pascal's mugging.
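The per-flight risk estimate above is a one-line exposure calculation. A back-of-the-envelope sketch, using the 1-per-400,000-flight-hours rate quoted in the text (the flight durations and two-engine assumption are illustrative, not figures from the post):

```python
# Rate quoted in the text: roughly 1 jet engine failure per 400,000 flight hours.
failures_per_engine_hour = 1 / 400_000

def per_flight_risk(flight_hours, engines=2):
    """Approximate probability of an engine failure during one flight.

    For rates this small, probability ~= rate * exposure, so we just
    multiply the hourly rate by hours flown and number of engines.
    Engine count and durations are illustrative assumptions.
    """
    return failures_per_engine_hour * flight_hours * engines

# Short-haul vs long-haul (assumed 1 h and 5 h):
for hours in (1, 5):
    print(f"{hours} h flight: ~{per_flight_risk(hours):.1e}")
```

The results land around 10^-5 to a few times 10^-5, consistent with the order-of-magnitude range the post quotes, and comfortably above any plausibly pascalian threshold.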
Aviation is the poster child of safety engineering, but it is not unique. Civil engineering di...
