[Linkpost] “Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy” by Garrison
EA Forum Podcast (Curated & popular) - A podcast by the EA Forum Team
If you enjoy this, please consider subscribing to my Substack.

Sam Altman has said he thinks that developing artificial general intelligence (AGI) could lead to human extinction, but OpenAI is trying to build it ASAP. Why?

The common story for how AI could overpower humanity involves an “intelligence explosion,” where an AI system becomes smart enough to further improve its capabilities, bootstrapping its way to superintelligence. Even without any kind of recursive self-improvement, some AI safety advocates argue that a large enough number of copies of a genuinely human-level AI system could pose serious problems for humanity. (I discuss this idea in more detail in my recent Jacobin cover story.) Some people think the transition from human-level AI to superintelligence could happen in a matter of months, weeks, days, or even hours. The faster the takeoff, the more dangerous, the thinking goes. Sam Altman, circa February 2023, agrees [...]

---

First published: February 10th, 2024

Source: https://forum.effectivealtruism.org/posts/vBjSyNNnmNtJvmdAg/sam-altman-s-chip-ambitions-undercut-openai-s-safety

Linkpost URL: https://garrisonlovely.substack.com/p/sam-altmans-chip-ambitions-undercut

---

Narrated by TYPE III AUDIO.