Model Shrinking
After training your LoRA model, you can significantly reduce its file size without noticeable quality loss. This process, known as “shrinking,” offers several benefits:
- Reduced file size (especially significant for LyCORIS models)
- Better compatibility with other models
- Improved performance when stacking multiple LoRAs together
Using resize_lora
We’ll use the resize_lora tool for this process.
Installation
First, clone the repository:
git clone https://github.com/elias-gaeros/resize_lora
cd resize_lora
Make sure you have the required dependencies:
pip install torch tqdm safetensors
Running the Shrinking Process
The basic command structure is:
python resize_lora.py -o {output_directory} -r fro_ckpt=1,thr=-3.55 model.safetensors lora.safetensors
Where:
- {output_directory}: Your desired output directory
- model.safetensors: The checkpoint you used to train your LoRA (or the checkpoint you want to use with the LoRA)
- lora.safetensors: Your LoRA file that you want to shrink
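Putting it together, a concrete invocation might look like the following (the checkpoint and LoRA filenames here are hypothetical; substitute your own):

```shell
# Shrink my_lora.safetensors against the checkpoint it was trained on,
# writing the result into ./shrunk (filenames are placeholders).
python resize_lora.py \
    -o ./shrunk \
    -r fro_ckpt=1,thr=-3.55 \
    sdxl_base.safetensors my_lora.safetensors
```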
Understanding the Shrinking Recipe
The -r fro_ckpt=1,thr=-3.55 part is the “recipe” for shrinking:
- fro_ckpt=1: Uses the Frobenius norm of the checkpoint’s weights as the reference for the analysis
- thr=-3.55: Sets the threshold for the SVD (Singular Value Decomposition) truncation
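To build intuition for what a threshold like -3.55 means, here is a minimal sketch that interprets it as a log2-scale cutoff relative to a reference norm: singular values whose log2 ratio to the reference falls below the threshold are dropped. This is only an illustration of the thresholding idea, not the exact scoring used by resize_lora; the array values and the reference choice here are made up.

```python
import numpy as np

# Fake singular values for one layer, sorted largest first (illustrative only).
rng = np.random.default_rng(0)
sigma = np.sort(rng.uniform(0.001, 1.0, size=32))[::-1]

# Hypothetical reference: the Frobenius norm of the layer, i.e. the
# root-sum-square of its singular values.
reference_norm = np.linalg.norm(sigma)

# Keep a singular value only if log2(sigma / reference) clears the threshold.
thr = -3.55
keep = np.log2(sigma / reference_norm) >= thr
print(f"kept {int(keep.sum())} of {sigma.size} singular values")
```

A more negative threshold keeps more singular values (larger file, closer to the original); a less negative one discards more (smaller file, more approximation).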
How Shrinking Works
The shrinking process uses SVD to analyze the importance of different components in your LoRA. It then removes the least important parts while preserving the most influential ones.
This process is somewhat similar to how JPEG compression works for images – it removes information that contributes least to the final result, keeping file sizes smaller while maintaining visual quality.
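The core idea can be sketched in a few lines of NumPy: a LoRA layer stores a low-rank update as two factor matrices, and shrinking re-factors that update at a lower rank by keeping only the largest singular values. This is a toy illustration with random matrices, not the actual resize_lora implementation.

```python
import numpy as np

# A LoRA layer stores a low-rank weight update delta_W = B @ A.
# These shapes and values are illustrative only.
rng = np.random.default_rng(42)
B = rng.normal(size=(64, 16))   # "up" projection, original rank 16
A = rng.normal(size=(16, 64))   # "down" projection
delta_W = B @ A

# Decompose the update and keep only the k largest singular values.
U, S, Vt = np.linalg.svd(delta_W, full_matrices=False)
k = 8
B_small = U[:, :k] * S[:k]      # new "up" factor, shape (64, 8)
A_small = Vt[:k, :]             # new "down" factor, shape (8, 64)

# The halved-rank factors store half the parameters but stay close
# to the original update, because SVD keeps the most influential parts.
error = np.linalg.norm(delta_W - B_small @ A_small) / np.linalg.norm(delta_W)
print(f"relative error after halving the rank: {error:.3f}")
```

The relative error depends on how quickly the singular values decay; in trained LoRAs much of the energy typically sits in the top components, which is why aggressive rank reduction often costs little visible quality.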
Testing the Results
After shrinking, it’s a good idea to compare the original and shrunk models to ensure quality is maintained. Generate some images with both versions using the same seed and prompts to validate that the visual output remains satisfactory.
Most users report little to no noticeable difference in output quality, while enjoying the benefits of smaller file sizes and better model compatibility.
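As a numeric analogue of the same-seed image comparison, you can apply the original and rank-reduced updates to the same input and check how closely the outputs agree. This sketch reuses the toy random-matrix setup above purely for illustration; it is not a substitute for actually generating images with both files.

```python
import numpy as np

# Toy original and rank-reduced updates (illustrative values only).
rng = np.random.default_rng(7)
B = rng.normal(size=(64, 16))
A = rng.normal(size=(16, 64))
delta_W = B @ A

U, S, Vt = np.linalg.svd(delta_W, full_matrices=False)
k = 8
delta_W_small = (U[:, :k] * S[:k]) @ Vt[:k, :]

# A fixed input plays the role of "same seed, same prompt".
x = rng.normal(size=64)
y_full = delta_W @ x
y_small = delta_W_small @ x

# Cosine similarity near 1.0 means the shrunk update behaves like the original.
cos = y_full @ y_small / (np.linalg.norm(y_full) * np.linalg.norm(y_small))
print(f"cosine similarity of outputs: {cos:.3f}")
```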
Additional Recipes
The resize_lora tool offers various SVD recipes for different shrinking approaches. The one recommended here (fro_ckpt=1,thr=-3.55) has been tested extensively and tends to work well for most cases, but you can experiment with other options as described in the project’s README.
For more information about the mathematics behind the shrinking process and additional options, refer to the resize_lora GitHub repository.
Next Steps
After successfully shrinking your model to optimize its size, you might want to explore more advanced training concepts. Continue to the Advanced Training Concepts guide to learn about steps vs epochs and other important training concepts.