# LyCORIS Chopping
LoRAs can pick up unwanted elements or behaviors during training. “Chopping” lets you selectively enable or disable different parts of a LoRA model to fine-tune its effects. This can help control style transfer, character consistency, and other attributes.
## Quick Solution: Block Weighting
You can use block weighting tools during generation:
- ComfyUI Inspire Pack - Includes LoRA block weight functionality
- A1111 LoRA Block Weight extension
## Permanent Solution: Chopping

For a permanent solution, you can use `chop_blocks.py` by Gaeros to modify the LoRA file itself:

```bash
git clone https://github.com/elias-gaeros/resize_lora
cd resize_lora
```
### How to Use

```bash
python chop_blocks.py --model input.safetensors --save_to output.safetensors --vector "1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1"
```

Note that the vector length must match the layout your tool expects: the `chop_blocks.py` vectors in this guide use 21 values, while the block-weighting presets described below follow the 27-value SDXL convention.
## Understanding Block Weights
Block weight vectors control which layers are kept or removed using values from 0 to 1:
- 1.0 = Keep layer completely
- 0.0 = Remove layer completely
- Values between = Partial effect
### SDXL Block Structure (27 Values)
SDXL models use a 27-element vector with this mapping:
- Indices 0-8: UNet Down Blocks
- Indices 9-11: UNet Mid Block
- Indices 12-20: UNet Up Blocks
- Indices 21-26: CLIP Text Encoder
Setting an index to 1 keeps that block’s contribution (or enables training for it); setting it to 0 removes or disables it.
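To make the mapping concrete, here is a minimal Python sketch that turns the index ranges above into a 27-value vector string. The `make_vector` helper is hypothetical, purely for illustration; it is not part of `chop_blocks.py` or LyCORIS:

```python
# SDXL block index ranges, as listed above.
DOWN = range(0, 9)    # UNet down blocks  (indices 0-8)
MID = range(9, 12)    # UNet mid block    (indices 9-11)
UP = range(12, 21)    # UNet up blocks    (indices 12-20)
TE = range(21, 27)    # CLIP text encoder (indices 21-26)

def make_vector(keep):
    """Return a 27-value vector string: 1 for kept indices, 0 otherwise."""
    kept = set(keep)
    return ",".join("1" if i in kept else "0" for i in range(27))

# Example: keep only the UNet up blocks and the text encoder.
print(make_vector(list(UP) + list(TE)))
# -> 0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
```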
### Common Block Weight Presets

| Preset Name | Vector | Description |
| --- | --- | --- |
| `attn-mlp` (kohya preset) | `0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,1,1,1,1,1,1` | Targets all transformer blocks, including attention and MLP layers |
| `attn-only` | `0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0` | Focuses solely on attention layers within the transformer blocks |
| `full` | `1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1` | Applies training to all layers in both the UNet and CLIP |
| `unet-transformer-only` | `0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0` | Targets only the transformer blocks within the UNet, excluding the text encoder |
| `unet-convblock-only` | `1,1,0,0,0,1,1,1,1,0,0,0,1,1,1,1,0,0,0,1,1,0,0,0,0,0,0` | Focuses on the ResBlock, UpSample, and DownSample layers within the UNet |
### My Personal Block Weight Presets

| Preset Name | Vector |
| --- | --- |
| Character Focus | `1,0,0,0,0,0,0,1,1,0,0,0,1,1,1,1,1,1,0,0,0` |
| hamgas | `1,0,0,0,0,0,0,1,1,0,0,0,1,0,1,1,1,1,0,0,0` |
| kenket | `1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1` |
| serpent_x | `1,0,0,0,0,1,0,1,1,0,0,0,1,1,1,1,1,1,0,0,0` |
| BEEG LAYERS | `1,0,0,0,1,1,0,1,1,0,1,0,1,1,1,0,0,0,0,0,0` |
| All The Layers | `1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1` |
| All-In | `1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0` |
| All-Mid | `1,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0` |
| All-Out (Wolf-Link) | `1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1` |
| Style Transfer | `1,0,0,0,0,0,0,0,1,0,0,0,1,1,1,0,0,0,0,0,0` |
| Ringdingding (Stoat) | `1,0,0,0,0,0,0,0,1,0,0,1,1,1,1,0,0,0,0,0,0` |
| Garfield (Character+Style) | `1,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,0` |
| Rutkowski | `1,1,1,1,1,1,0,1,1,0,0,0,1,1,1,1,1,1,1,1,1` |
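As a worked example, assuming the same `chop_blocks.py` invocation shown earlier, you could bake the Style Transfer preset permanently into a copy of a LoRA like this:

```bash
python chop_blocks.py --model input.safetensors --save_to output_style.safetensors \
  --vector "1,0,0,0,0,0,0,0,1,0,0,0,1,1,1,0,0,0,0,0,0"
```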
## LyCORIS Preset/Config System
The Preset/Config system was added after LyCORIS 1.9.0 for more fine-grained control during training.
### Built-in Presets

LyCORIS provides several presets that can be applied with `--network_args "preset=NAME"`:

- `preset=full`: Default preset; trains all layers in the UNet and CLIP
- `preset=full-lin`: Same as `full`, but skips convolutional layers
- `preset=attn-mlp`: The “kohya preset”; trains all transformer blocks
- `preset=attn-only`: Only attention layers will be trained (common in research papers)
- `preset=unet-transformer-only`: Same as kohya_ss/sd_scripts with the text encoder disabled, i.e. the `attn-mlp` preset with `train_unet_only` enabled
- `preset=unet-convblock-only`: Only ResBlock, UpSample, and DownSample layers will be trained
### Custom Configs with TOML

For more detailed control, you can create a `config.toml` file and specify module targets and algorithms. This allows you to:
- Choose different algorithms for specific module types/modules
- Use different settings for specific module types/modules
- Enable training for specific module types/modules
The config system supports the following arguments:

- `enable_conv` (bool): Enable training for convolution layers or not
- `unet_target_module` (list[str]): Classes of modules to train in the UNet
- `unet_target_name` (list[str]): Names of modules to train in the UNet (regex supported)
- `text_encoder_target_module` (list[str]): Classes of modules to train in the Text Encoder
- `text_encoder_target_name` (list[str]): Names of modules to train in the Text Encoder (regex supported)
- `module_algo_map` / `name_algo_map` (dict): Apply different settings to different module classes/names
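For illustration, here is a sketch of what such a `config.toml` could look like. The module class names (`Transformer2DModel`, `ResnetBlock2D`, `CLIPAttention`, `CLIPMLP`) and the per-class algorithm settings are assumptions for the sake of example; check the LyCORIS preset documentation for the exact schema your version supports:

```toml
# Hypothetical config.toml sketch; class names and settings are illustrative.
enable_conv = true

# Module classes and names to train in the UNet
unet_target_module = ["Transformer2DModel", "ResnetBlock2D"]
unet_target_name = ["conv_in"]

# Module classes and names to train in the text encoder
text_encoder_target_module = ["CLIPAttention", "CLIPMLP"]
text_encoder_target_name = []

# Apply a different algorithm and settings per module class
[module_algo_map.Transformer2DModel]
algo = "lora"
dim = 16

[module_algo_map.ResnetBlock2D]
algo = "loha"
dim = 8
```

The file is then loaded by pointing the `preset` network argument at it, e.g. `--network_args "preset=path/to/config.toml"`.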
## Implementing Block Weights
There are several ways to apply block weights to your LyCORIS models.
### During Training

When training with sd-scripts, you can specify block weight vectors using the `--block_weights` argument:

```bash
accelerate launch train_network.py \
  --network_module=lycoris.kohya \
  --network_args "algo=locon" \
  --block_weights="0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,1,1,1,1,1,1" \
  [other training arguments]
```
Alternatively, you can apply presets during training using the `--network_args` parameter:

```bash
accelerate launch train_network.py \
  --network_module=lycoris.kohya \
  --network_args "algo=locon" "preset=attn-mlp" \
  [other training arguments]
```
### During Generation

#### In ComfyUI

In ComfyUI, you can use the LoRA Loader (Block Weight) node from the Inspire Pack to input block vectors directly.
#### In Automatic1111 (A1111)

In A1111, you can incorporate block weight vectors into your prompts using this syntax:

```
<lyco:model_name:1:1:lbw=0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,1,1,1,1,1,1>
```

Replace `model_name` with the actual name of your LyCORIS model.