109 Episodes

  1. Managing frontier model training organizations (or teams)

    Published: 19.3.2025
  2. Gemma 3, OLMo 2 32B, and the growing potential of open-source AI

    Published: 13.3.2025
  3. Interviewing Eugene Vinitsky on self-play for self-driving and what else people do with RL

    Published: 12.3.2025
  4. Elicitation, the simplest way to understand post-training

    Published: 10.3.2025
  5. Where inference-time scaling pushes the market for AI companies

    Published: 5.3.2025
  6. GPT-4.5: "Not a frontier model"?

    Published: 28.2.2025
  7. Character training: Understanding and crafting a language model's personality

    Published: 26.2.2025
  8. Claude 3.7 thonks and what's next for inference-time scaling

    Published: 24.2.2025
  9. Grok 3 and an accelerating AI roadmap

    Published: 18.2.2025
  10. An unexpected RL Renaissance

    Published: 13.2.2025
  11. Deep Research, information vs. insight, and the nature of science

    Published: 12.2.2025
  12. Making the U.S. the home for open-source AI

    Published: 5.2.2025
  13. Why reasoning models will generalize

    Published: 28.1.2025
  14. Interviewing OLMo 2 leads: Open secrets of training language models

    Published: 22.1.2025
  15. DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs

    Published: 21.1.2025
  16. Let me use my local LMs on Meta Ray-Bans

    Published: 15.1.2025
  17. (Voiceover) DeepSeek V3 and the actual cost of training frontier AI models

    Published: 9.1.2025
  18. The state of post-training in 2025

    Published: 8.1.2025
  19. Quick recap on the state of reasoning

    Published: 2.1.2025
  20. (Voiceover) 2024 Interconnects year in review

    Published: 31.12.2024
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai