Research Reading: Fine-Tuning Stable Diffusion Models with SVD

Jonathan Bechtel

Details
Stable diffusion models have been a revelation for image generation, but their size and cost have so far kept them out of production. Finding ways to make their training and inference more efficient is therefore of paramount importance for their commercial adoption.
Today we'll go over an intriguing new paper that applies an old-fashioned linear algebra technique to layer weights, making large models easier to fine-tune and, according to the results, yielding models 100x smaller with similar performance. The main insight is that by combining newer data augmentation techniques with fine-tuning only the singular values of the layer weights, you can dramatically reduce model size without much accuracy loss.
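To give a flavor of the core idea before the session, here is a minimal NumPy sketch of singular-value fine-tuning. This is an illustration of the general technique, not the paper's implementation: the names (`adapted_weight`, `delta`) and shapes are assumptions chosen for the example.

```python
import numpy as np

# Toy "pretrained" layer weight. In singular-value fine-tuning, the full
# weight matrix stays frozen; only a shift per singular value is learned.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))

# Factor once: W = U @ diag(S) @ Vt
U, S, Vt = np.linalg.svd(W, full_matrices=False)

# Trainable parameters: one shift per singular value (128 numbers here,
# versus 256 * 128 = 32768 for the full matrix).
delta = np.zeros_like(S)

def adapted_weight(delta):
    # Clamp so the shifted singular values stay non-negative.
    return U @ np.diag(np.maximum(S + delta, 0.0)) @ Vt

# With delta = 0 the adapted weight reproduces W exactly; fine-tuning
# then optimizes only delta, a tiny fraction of the original parameters.
print(W.size // delta.size)  # 256x fewer trainable parameters
```

During fine-tuning, gradients flow only into `delta`, which is why checkpoints produced this way can be orders of magnitude smaller than the full model weights.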
Join us for this casual and fun-loving reading session where we go through the paper, discuss its main insights, and chat about the future of ML.
The paper we'll be discussing is available here: https://arxiv.org/abs/2303.11305
While this event is FREE, tickets are required & space is limited!