Best AI papers explained
A podcast by Enoch H. Kang
550 Episodes
Provably Learning from Language Feedback
Published: 9.7.2025
Markets with Heterogeneous Agents: Dynamics and Survival of Bayesian vs. No-Regret Learners
Published: 5.7.2025
Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation
Published: 5.7.2025
Causal Abstraction with Lossy Representations
Published: 4.7.2025
The Winner's Curse in Data-Driven Decisions
Published: 4.7.2025
Embodied AI Agents: Modeling the World
Published: 4.7.2025
Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence
Published: 4.7.2025
What Has a Foundation Model Found? Inductive Bias Reveals World Models
Published: 4.7.2025
Language Bottleneck Models: A Framework for Interpretable Knowledge Tracing and Beyond
Published: 3.7.2025
Learning to Explore: An In-Context Learning Approach for Pure Exploration
Published: 3.7.2025
Human-AI Matching: The Limits of Algorithmic Search
Published: 25.6.2025
Uncertainty Quantification Needs Reassessment for Large-language Model Agents
Published: 25.6.2025
Bayesian Meta-Reasoning for Robust LLM Generalization
Published: 25.6.2025
General Intelligence Requires Reward-based Pretraining
Published: 25.6.2025
Deep Learning is Not So Mysterious or Different
Published: 25.6.2025
AI Agents Need Authenticated Delegation
Published: 25.6.2025
Probabilistic Modelling is Sufficient for Causal Inference
Published: 25.6.2025
Not All Explanations for Deep Learning Phenomena Are Equally Valuable
Published: 25.6.2025
e3: Learning to Explore Enables Extrapolation of Test-Time Compute for LLMs
Published: 17.6.2025
Extrapolation by Association: Length Generalization Transfer in Transformers
Published: 17.6.2025
Cut through the noise: we curate and break down the most important AI papers so you don't have to.
