AI Safety Fundamentals: Alignment
A podcast by BlueDot Impact
83 Episodes
Yudkowsky Contra Christiano on AI Takeoff Speeds
Published: 13.5.2023
Why AI Alignment Could Be Hard With Modern Deep Learning
Published: 13.5.2023
Least-To-Most Prompting Enables Complex Reasoning in Large Language Models
Published: 13.5.2023
Measuring Progress on Scalable Oversight for Large Language Models
Published: 13.5.2023
Supervising Strong Learners by Amplifying Weak Experts
Published: 13.5.2023
Summarizing Books With Human Feedback
Published: 13.5.2023
Robust Feature-Level Adversaries Are Interpretability Tools
Published: 13.5.2023
Debate Update: Obfuscated Arguments Problem
Published: 13.5.2023
High-Stakes Alignment via Adversarial Training [Redwood Research Report]
Published: 13.5.2023
AI Safety via Debate
Published: 13.5.2023
Takeaways From Our Robust Injury Classifier Project [Redwood Research]
Published: 13.5.2023
Introduction to Logical Decision Theory for Computer Scientists
Published: 13.5.2023
Red Teaming Language Models With Language Models
Published: 13.5.2023
Toy Models of Superposition
Published: 13.5.2023
Understanding Intermediate Layers Using Linear Classifier Probes
Published: 13.5.2023
Acquisition of Chess Knowledge in AlphaZero
Published: 13.5.2023
Feature Visualization
Published: 13.5.2023
Discovering Latent Knowledge in Language Models Without Supervision
Published: 13.5.2023
Progress on Causal Influence Diagrams
Published: 13.5.2023
Careers in Alignment
Published: 13.5.2023
Listen to resources from the AI Safety Fundamentals: Alignment course! https://aisafetyfundamentals.com/alignment