Future of Life Institute Podcast
A podcast by the Future of Life Institute
230 Episodes
Not Cool Ep 8: Suzanne Jones on climate policy and government responsibility
Published: 24.9.2019
Not Cool Ep 7: Lindsay Getschel on climate change and national security
Published: 20.9.2019
Not Cool Ep 6: Alan Robock on geoengineering
Published: 17.9.2019
AIAP: Synthesizing a human's preferences into a utility function with Stuart Armstrong
Published: 17.9.2019
Not Cool Ep 5: Ken Caldeira on energy, infrastructure, and planning for an uncertain climate future
Published: 12.9.2019
Not Cool Ep 4: Jessica Troni on helping countries adapt to climate change
Published: 10.9.2019
Not Cool Ep 3: Tim Lenton on climate tipping points
Published: 5.9.2019
Not Cool Ep 2: Joanna Haigh on climate modeling and the history of climate change
Published: 3.9.2019
Not Cool Ep 1: John Cook on misinformation and overcoming climate silence
Published: 3.9.2019
Not Cool Prologue: A Climate Conversation
Published: 3.9.2019
FLI Podcast: Beyond the Arms Race Narrative: AI and China with Helen Toner and Elsa Kania
Published: 30.8.2019
AIAP: China's AI Superpower Dream with Jeffrey Ding
Published: 16.8.2019
FLI Podcast: The Climate Crisis as an Existential Threat with Simon Beard and Haydn Belfield
Published: 1.8.2019
AIAP: On the Governance of AI with Jade Leung
Published: 22.7.2019
FLI Podcast: Is Nuclear Weapons Testing Back on the Horizon? With Jeffrey Lewis and Alex Bell
Published: 28.6.2019
FLI Podcast: Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi
Published: 31.5.2019
AIAP: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson
Published: 23.5.2019
The Unexpected Side Effects of Climate Change with Fran Moore and Nick Obradovich
Published: 30.4.2019
AIAP: An Overview of Technical AI Alignment with Rohin Shah (Part 2)
Published: 25.4.2019
AIAP: An Overview of Technical AI Alignment with Rohin Shah (Part 1)
Published: 11.4.2019
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.