109 Episodes

  1. We aren't running out of training data, we are running out of open training data

    Published: 29.5.2024
  2. Name, image, and AI's likeness

    Published: 22.5.2024
  3. OpenAI chases Her

    Published: 16.5.2024
  4. OpenAI's Model (behavior) Spec, RLHF transparency, and personalization questions

    Published: 13.5.2024
  5. RLHF: A thin line between useful and lobotomized

    Published: 1.5.2024
  6. Phi 3 and Arctic: Outlier LMs are hints

    Published: 30.4.2024
  7. AGI is what you want it to be

    Published: 24.4.2024
  8. Llama 3: Scaling open LLMs to AGI

    Published: 21.4.2024
  9. Stop "reinventing" everything to "solve" alignment

    Published: 17.4.2024
  10. The end of the "best open LLM"

    Published: 15.4.2024
  11. Why we disagree on what open-source AI should be

    Published: 3.4.2024
  12. DBRX: The new best open LLM and Databricks' ML strategy

    Published: 29.3.2024
  13. Evaluations: Trust, performance, and price (bonus, announcing RewardBench)

    Published: 21.3.2024
  14. Model commoditization and product moats

    Published: 13.3.2024
  15. The koan of an open-source LLM

    Published: 6.3.2024
  16. Interviewing Louis Castricato of Synth Labs and Eleuther AI on RLHF, Gemini Drama, DPO, founding Carper AI, preference data, reward models, and everything in between

    Published: 4.3.2024
  17. How to cultivate a high-signal AI feed

    Published: 28.2.2024
  18. Google ships it: Gemma open LLMs and Gemini backlash

    Published: 22.2.2024
  19. 10 Sora and Gemini 1.5 follow-ups: code-base in context, deepfakes, pixel-peeping, inference costs, and more

    Published: 20.2.2024
  20. Releases! OpenAI’s Sora for video, Gemini 1.5's infinite context, and a secret Mistral model

    Published: 16.2.2024


Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
