Interconnects
A podcast by Nathan Lambert
109 Episodes
We aren't running out of training data, we are running out of open training data
Published: 29.5.2024
Name, image, and AI's likeness
Published: 22.5.2024
OpenAI chases Her
Published: 16.5.2024
OpenAI's Model (behavior) Spec, RLHF transparency, and personalization questions
Published: 13.5.2024
RLHF: A thin line between useful and lobotomized
Published: 1.5.2024
Phi 3 and Arctic: Outlier LMs are hints
Published: 30.4.2024
AGI is what you want it to be
Published: 24.4.2024
Llama 3: Scaling open LLMs to AGI
Published: 21.4.2024
Stop "reinventing" everything to "solve" alignment
Published: 17.4.2024
The end of the "best open LLM"
Published: 15.4.2024
Why we disagree on what open-source AI should be
Published: 3.4.2024
DBRX: The new best open LLM and Databricks' ML strategy
Published: 29.3.2024
Evaluations: Trust, performance, and price (bonus, announcing RewardBench)
Published: 21.3.2024
Model commoditization and product moats
Published: 13.3.2024
The koan of an open-source LLM
Published: 6.3.2024
Interviewing Louis Castricato of Synth Labs and Eleuther AI on RLHF, Gemini Drama, DPO, founding Carper AI, preference data, reward models, and everything in between
Published: 4.3.2024
How to cultivate a high-signal AI feed
Published: 28.2.2024
Google ships it: Gemma open LLMs and Gemini backlash
Published: 22.2.2024
10 Sora and Gemini 1.5 follow-ups: code-base in context, deepfakes, pixel-peeping, inference costs, and more
Published: 20.2.2024
Releases! OpenAI’s Sora for video, Gemini 1.5's infinite context, and a secret Mistral model
Published: 16.2.2024
Audio essays about the latest developments in AI, plus interviews with leading scientists in the field. Cutting through the hype, understanding what's under the hood, and telling stories. www.interconnects.ai