Day Two Cloud 201: Building A Product That Uses LLMs

Day Two DevOps - A podcast by Packet Pushers - Wednesdays

Today we talk about Large Language Models (LLMs) and developing products and applications that use them. An LLM is a generative AI model, trained on large amounts of text, that can write text in response to questions and prompts. Our guest is Phillip Carter, Principal PM at Honeycomb.io. Honeycomb makes an observability tool for site reliability engineers, and Carter worked on a project called Query Assistant that helps Honeycomb customers get insights from the product via natural language queries. We talk with Carter about how the LLM works, what it can and can't do, how he and his team worked around challenges, and more.

We discuss:

* The challenges of AI Ops
* Prompt engineering and what it is
* Datasets and how to get an LLM to work with a product
* Getting from natural language inputs to JSON outputs
* Ensuring accuracy in responses
* Addressing privacy and security concerns
* More

Takeaways:

* LLMs don't let you hide your bad product experiences.
* You can get an LLM to do 80% of a product MVP in an afternoon. The other 20% is the rest of the month.
* It's the wild west out there. Everything changes weekly. Stick with a tool or methodology and just use that for now. Don't ride hype waves.

Show Links:

* All the Hard Stuff Nobody Talks About when Building Products with LLMs – Honeycomb Blog
* Phillipcarter.dev
* @_cartermp – Phillip Carter on Twitter
* Phillip Carter on LinkedIn
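One topic from the episode, turning natural-language input into JSON that a product can execute, can be sketched roughly like this. Everything here is a hypothetical illustration, not Honeycomb's actual implementation: `fake_llm` stands in for a real model API call, and the query schema is invented. The core idea is that the prompt constrains the model to emit only JSON, and the caller validates the reply before trusting it.

```python
import json

SYSTEM_PROMPT = """You translate a user's plain-English request into a query
for an observability tool. Reply with ONLY a JSON object of the form:
{"dataset": <string>, "calculation": <string>, "filter": <string or null>}"""

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call (hypothetical); returns a canned,
    # well-formed reply so the sketch runs without network access.
    return ('{"dataset": "api-requests", "calculation": "P99(duration_ms)", '
            '"filter": "status_code >= 500"}')

def natural_language_to_query(user_text: str) -> dict:
    raw = fake_llm(SYSTEM_PROMPT + "\n\nUser request: " + user_text)
    try:
        query = json.loads(raw)  # reject replies that are not valid JSON
    except json.JSONDecodeError:
        raise ValueError("model did not return valid JSON")
    required = {"dataset", "calculation", "filter"}
    if set(query) != required:  # reject replies that drift from the schema
        raise ValueError("model JSON does not match the expected schema")
    return query

query = natural_language_to_query("show me slow failing API calls")
print(query["calculation"])  # prints: P99(duration_ms)
```

The validation step matters because, as discussed in the episode, model output cannot be assumed accurate or well-formed; a real product would retry or fall back when parsing fails.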