Best AI papers explained
A podcast by Enoch H. Kang
550 Episodes
Uncovering Causal Hierarchies in Language Model Capabilities
Published: 17.6.2025
Generalization or Hallucination? Understanding Out-of-Context Reasoning in Transformers
Published: 17.6.2025
Improving Treatment Effect Estimation with LLM-Based Data Augmentation
Published: 17.6.2025
LLM Numerical Prediction Without Auto-Regression
Published: 17.6.2025
Self-Adapting Language Models
Published: 17.6.2025
Why in-context learning models are good few-shot learners?
Published: 17.6.2025
Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina
Published: 14.6.2025
The Logic of Machines: The AI Reasoning Debate
Published: 12.6.2025
Layer by Layer: Uncovering Hidden Representations in Language Models
Published: 12.6.2025
Causal Attribution Analysis for Continuous Outcomes
Published: 12.6.2025
Training a Generally Curious Agent
Published: 12.6.2025
Estimation of Treatment Effects Under Nonstationarity via Truncated Difference-in-Q’s
Published: 12.6.2025
Strategy Coopetition Explains the Emergence and Transience of In-Context Learning
Published: 12.6.2025
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
Published: 11.6.2025
Agentic Supernet for Multi-agent Architecture Search
Published: 11.6.2025
Sample Complexity and Representation Ability of Test-time Scaling Paradigms
Published: 11.6.2025
Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators
Published: 10.6.2025
LLMs Get Lost In Multi-Turn Conversation
Published: 9.6.2025
PromptPex: Automatic Test Generation for Prompts
Published: 8.6.2025
General Agents Need World Models
Published: 8.6.2025
Cut through the noise. We curate and break down the most important AI papers so you don't have to.
