Weights and Biases on Fine-Tuning LLMs - Weaviate Podcast #68!
Weaviate Podcast - A podcast by Weaviate
Hey everyone! Thank you so much for watching the 68th episode of the Weaviate Podcast! We are super excited to welcome Morgan McGuire, Darek Kleczek, and Thomas Capelle! This was such a fun discussion covering how they see the space of fine-tuning: why you would want to do it, the available tooling, its intersection with RAG, and more!

Check out W&B Prompts! https://wandb.ai/site/prompts

Check out the W&B Tiny Llama Report! https://wandb.ai/capecape/llamac/reports/Training-Tiny-Llamas-for-Fun-and-Science--Vmlldzo1MDM2MDg0

Chapters
0:00 Tiny Llamas!
1:53 Welcome!
2:22 LLM Fine-Tuning
5:25 Tooling for Fine-Tuning
7:55 Why Fine-Tune?
9:55 RAG vs. Fine-Tuning
12:25 Knowledge Distillation
14:40 Gorilla LLMs
18:25 Open-Source LLMs
22:48 Jonathan Frankle on W&B
23:45 Data Quality for LLM Training
25:55 W&B for Data Versioning
27:25 Curriculum Learning
29:28 GPU Rich and Data Quality
30:30 Vector DBs and Data Quality
32:50 Tuning Training with Weights & Biases
35:47 Training Reports
42:28 HF Collections and W&B Sweeps
44:50 Exciting Directions for AI