UNC research team on VL-Adapter for Efficient CLIP Transfer
Weaviate Podcast - A podcast by Weaviate
Weaviate Podcast #14. Thanks for watching the Weaviate Podcast! Our 14th episode welcomes Yi-Lin Sung, Jaemin Cho, and Professor Mohit Bansal, a research team from UNC! Our guests present their work on VL-Adapter, a technique that matches full fine-tuning performance while updating only 4% of the original parameters! This is an incredibly interesting finding for cost-effective tuning of Vision-and-Language models based on CLIP. We additionally discussed compression bottlenecks in neural architectures, V&L datasets, and the tricky question of compositional generalization. If you are curious about using CLIP in Weaviate, please check out this text-to-image search example with Unsplash images and a React frontend!
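To give a flavor of the adapter idea discussed in the episode, here is a minimal sketch of parameter-efficient tuning with bottleneck adapters: small trainable modules are inserted into a frozen backbone, so only a small fraction of the parameters is updated. This is an illustrative assumption of the general approach, not the exact VL-Adapter implementation; the module names, bottleneck size, and toy backbone below are hypothetical.

```python
# Sketch: bottleneck adapters on a frozen backbone (illustrative, not VL-Adapter's code).
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, plus a residual."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 48):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual keeps the frozen backbone's representation as the default.
        return x + self.up(self.act(self.down(x)))


class BlockWithAdapter(nn.Module):
    """Wraps a frozen transformer block and applies a trainable adapter after it."""

    def __init__(self, block: nn.Module, hidden_dim: int):
        super().__init__()
        self.block = block
        self.adapter = Adapter(hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))


if __name__ == "__main__":
    hidden_dim = 768
    # Stand-in for a pretrained backbone (hypothetical toy model).
    backbone = nn.ModuleList(
        nn.TransformerEncoderLayer(hidden_dim, nhead=12, batch_first=True)
        for _ in range(12)
    )

    # Freeze every backbone parameter; only the adapters remain trainable.
    for p in backbone.parameters():
        p.requires_grad = False
    adapted = nn.Sequential(*(BlockWithAdapter(layer, hidden_dim) for layer in backbone))

    trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
    total = sum(p.numel() for p in adapted.parameters())
    print(f"trainable fraction: {trainable / total:.2%}")  # only a few percent of the model
```

Because the backbone stays frozen, the optimizer only needs to store state for the adapter weights, which is what makes this style of transfer so cost-effective compared to full fine-tuning.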