Resolution-robust Large Mask Inpainting with Fourier Convolutions (w/ Author Interview)

Yannic Kilcher Videos (Audio Only) - A podcast by Yannic Kilcher

#lama #inpainting #deeplearning

At the end of the video is an interview with the paper authors! LaMa is a system that is remarkably good at removing foreground objects from images, especially when those objects cover a large part of the image. LaMa is specifically trained to reconstruct large masked areas, and it propagates global information throughout its forward pass by using Fourier convolutions in its layers. This makes it far more effective than regular convolutions at reconstructing periodic structures with long-range consistency. (A minimal code sketch of the Fourier convolution idea appears at the end of this description.)

OUTLINE:
0:00 - Intro
0:45 - Sponsor: ClearML
3:30 - Inpainting Examples
5:05 - Live Demo
6:40 - Locality as a weakness of convolutions
10:30 - Using Fourier Transforms for global information
12:55 - Model architecture overview
14:35 - Fourier convolution layer
21:15 - Loss function
24:25 - Mask generation algorithm
25:40 - Experimental results
28:25 - Interview with the authors

Paper: https://arxiv.org/abs/2109.07161
Code: https://github.com/saic-mdal/lama
Online Demo: https://cleanup.pictures/
Sponsor: ClearML https://clear.ml

Abstract:
Modern image inpainting systems, despite significant progress, often struggle with large missing areas, complex geometric structures, and high-resolution images. We find that one of the main reasons for that is the lack of an effective receptive field in both the inpainting network and the loss function. To alleviate this issue, we propose a new method called large mask inpainting (LaMa). LaMa is based on i) a new inpainting network architecture that uses fast Fourier convolutions (FFCs), which have an image-wide receptive field; ii) a high receptive field perceptual loss; iii) large training masks, which unlock the potential of the first two components. Our inpainting network improves the state of the art across a range of datasets and achieves excellent performance even in challenging scenarios, e.g. completion of periodic structures. Our model generalizes surprisingly well to resolutions that are higher than those seen at train time, and achieves this at lower parameter and time costs than the competitive baselines. The code is available at https://github.com/saic-mdal/lama.

Authors: Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, Victor Lempitsky

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
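As a rough illustration of the Fourier convolution idea discussed in the video (see 10:30 and 14:35 in the outline), here is a minimal PyTorch sketch of the spectral branch of a fast Fourier convolution. The class name, channel sizes, and layer choices are my own illustrative assumptions, not the authors' exact implementation; see the official repo at https://github.com/saic-mdal/lama for that.

# A minimal sketch of the spectral branch of a fast Fourier convolution (FFC).
# Illustrative only, not the authors' exact code (see github.com/saic-mdal/lama).
import torch
import torch.nn as nn


class SpectralTransform(nn.Module):
    """Applies a pointwise convolution in the frequency domain.

    Because every coefficient of a 2D FFT depends on every input pixel,
    even a 1x1 convolution on the spectrum has an image-wide receptive field.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Real and imaginary parts are stacked along the channel axis, so the
        # conv sees 2 * channels inputs and produces 2 * channels outputs.
        self.conv = nn.Sequential(
            nn.Conv2d(channels * 2, channels * 2, kernel_size=1),
            nn.BatchNorm2d(channels * 2),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Real-to-complex FFT over the two spatial dimensions.
        ffted = torch.fft.rfft2(x, norm="ortho")             # (b, c, h, w//2+1), complex
        ffted = torch.cat([ffted.real, ffted.imag], dim=1)   # (b, 2c, h, w//2+1), real
        ffted = self.conv(ffted)                             # mix channels per frequency
        real, imag = torch.chunk(ffted, 2, dim=1)
        # Inverse FFT back to the spatial domain at the original resolution.
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")


if __name__ == "__main__":
    layer = SpectralTransform(channels=8)
    out = layer(torch.randn(1, 8, 64, 64))
    print(out.shape)  # torch.Size([1, 8, 64, 64])

The key point is that the 1x1 convolution is applied to FFT coefficients, so it mixes information from the entire image at once; this is what gives FFC layers a global receptive field that ordinary local convolutions lack.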
