EA - Optimism, AI risk, and EA blind spots by Justis
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Optimism, AI risk, and EA blind spots, published by Justis on September 28, 2022 on The Effective Altruism Forum.

Preface

I'm going to start this post with a personal story, in part because people tend to enjoy writing that does that. If you don't want the gossip, just skip section one, the takeaway of which is: "EA has a strong cultural bias in favor of believing arbitrary problems are solvable". The gossip - and this takeaway - are not the only insights I'm trying to communicate. I don't mean for this post to be a "community" post overall, but rather one that is action-relevant to doing good on the object level.

N=1

I had a two-week work trial with a prominent EA org. There were some red flags. Nobody would tell me the projected salary, despite the job being across the country, in one of the most expensive cities on Earth. But whatever. I quit my job and flew over. It didn't work out. My best guess is that this was for cultural reasons. My would-be manager didn't think I'd been making fast enough progress understanding a technical framework, but the jobs I've had since have involved that framework, and I've received overwhelmingly positive feedback while working on products dramatically more complicated than the job opportunity called for. C'est la vie.

Much later, I was told some of the things in my file for that organization. I was told by the organization's leader in a totally open way - nothing sneaky or "here's the dirt", just some feedback to help me improve. I appreciated this, and welcomed it. But here's the part relevant to the post: one of the negative things in my file was that someone had said I was "a bit of a downer". Much like with my technical competency, maybe so. But it's worth mentioning that in my day-to-day life, my coworkers generally think I'm weirdly positive, and often comment that my outlook is shockingly sanguine.

I believe that both are true. I'm unusually optimistic. But professional EA culture is much, much more so. That's not a bad thing (he said, optimistically). But it's also not all good.

(Why) is there an optimism bias?

If you want to complete an ambitious project, it's extremely useful to presume that (almost) any challenge can be met. This is a big part of being "agentic", a much-celebrated and indeed valuable virtue within the EA community. (And also within elite circles more generally.) The high-end professional world has lots of upside opportunities and relatively little downside risk (you will probably always find a pretty great job as a fallback), so it's rational to make lots of bets on long odds and try to find holy grails. Therefore, people who are flagged as "ambitious", "impressive", or "agentic" will both be selected for and encouraged to further cultivate a mindset where you never say a problem is insurmountable, merely challenging or, if you truly must, "not a top priority right now". But yeah. No odds are too long to be worth a shot!

How is this action relevant?

To avoid burying the lede: this optimism bias is a major part of my reasoning for donating my 10% pledge to the Against Malaria Foundation, rather than to x-risk reduction efforts. I'll trace out the argument, then pile on the caveats. On the 80,000 Hours Podcast, Will MacAskill put the odds of a misaligned AI takeover around 3%.
Many community figures put the odds much higher, but I feel pretty comfortable anchoring on a combination of Will and Katja Grace, who put the odds at 7% that AI destroys the world. Low to mid single digits. Okay. So here's a valid argument, given its premises:

Premise One: There is at least a 6% chance that AI destroys the world, or removes all humans from it.

Premise Two: There exist interventions that can reliably reduce the risk we face by at least 1% (of the risk, not of the total - so 6% would turn into 5.94%, not 5%).

Premise Three: W...
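As a quick aside on the arithmetic in Premise Two, the sketch below (my own illustration, not anything from the post) just spells out the difference between removing 1% of the risk and removing a full percentage point, using the post's 6% figure:

```python
# Illustration only: relative vs. absolute 1% reduction of a 6% baseline risk.

baseline_risk = 0.06  # Premise One: roughly a 6% chance of AI-driven extinction

relative_reduction = baseline_risk * (1 - 0.01)  # remove 1% *of the risk*
absolute_reduction = baseline_risk - 0.01        # remove one percentage point

print(f"After a relative 1% reduction: {relative_reduction:.4f}")  # 0.0594, i.e. 5.94%
print(f"After an absolute 1% reduction: {absolute_reduction:.4f}")  # 0.0500, i.e. 5.00%
```

Premise Two is the weaker, relative claim: the intervention only needs to move 6% to 5.94%.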
