2. Connor Leahy on GPT3, EleutherAI and AI Alignment

The Inside View - A podcast by Michaël Trazzi


In the first part of the podcast we chat about how to speed up GPT-3 training, how Connor updated on recent announcements of large language models, why GPT-3 is AGI for some specific definitions of AGI [1], the obstacles to plugging planning into GPT-N, and why the brain might approximate something like backprop. We end this first chat with Solomonoff priors [2], adversarial attacks such as Pascal's Mugging [3], and whether direct work on AI Alignment is currently tractable. In the second part, we chat about his current projects at EleutherAI [4][5], multipolar scenarios, and reasons to work on technical AI Alignment research.

[1] https://youtu.be/HrV19SjKUss?t=4785
[2] https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference
[3] https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities
[4] https://www.eleuther.ai/
[5] https://discord.gg/j65dEVp5
