LyCORIS Chopping and Block Weighting

LyCORIS Chopping

LoRAs can pick up unwanted elements or behaviors during training. “Chopping” lets you selectively enable or disable different parts of a LoRA model to fine-tune its effect. This can help control style transfer, character consistency, and other attributes.

Quick Solution: Block Weighting

For a temporary fix, you can apply block weights at generation time, either with the LoRA Loader (Block Weight) node in ComfyUI or with the lbw prompt syntax in A1111; both are covered under “During Generation” below.

Permanent Solution: Chopping

For a permanent solution, you can use chop_blocks.py by Gaeros to modify the LoRA file itself:

git clone https://github.com/elias-gaeros/resize_lora
cd resize_lora

How to Use

python chop_blocks.py --model input.safetensors --save_to output.safetensors --vector "1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1"
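
Conceptually, chopping rescales or zeroes the LoRA weight tensors belonging to the blocks you switch off. The following is a rough, hypothetical Python sketch of the idea only; chop_blocks.py itself does more work, such as mapping vector indices to block names:

from safetensors.torch import load_file, save_file
import torch

# Hypothetical sketch: zero every tensor belonging to the UNet down blocks.
# Kohya-style SDXL LoRA keys look like "lora_unet_input_blocks_4_1_...".
lora = load_file("input.safetensors")
for key, tensor in lora.items():
    if "input_blocks" in key:
        lora[key] = torch.zeros_like(tensor)
save_file(lora, "output.safetensors")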

Understanding Block Weights

Block weight vectors control which layers are kept or removed using values from 0 to 1:

  • 1.0 = Keep layer completely
  • 0.0 = Remove layer completely
  • Values in between = partial effect (e.g., 0.5 halves that layer's contribution)

SDXL Block Structure (27 Values)

SDXL models use a 27-element vector with this mapping:

  • Indices 0-8: UNet Down Blocks
  • Indices 9-11: UNet Mid Block
  • Indices 12-20: UNet Up Blocks
  • Indices 21-26: CLIP Text Encoder

Setting an index to 1 keeps that block at full strength; setting it to 0 removes its contribution entirely. The same vectors can also restrict which blocks are trained (see “Implementing Block Weights” below).
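
To make the mapping concrete, here is a small Python sketch (the describe helper is hypothetical, not part of chop_blocks.py or LyCORIS) that reports which indices a 27-value vector keeps in each block group:

# Label each entry of a 27-value SDXL vector with its block group,
# following the index mapping listed above.
GROUPS = [
    ("UNet Down Blocks", range(0, 9)),
    ("UNet Mid Block", range(9, 12)),
    ("UNet Up Blocks", range(12, 21)),
    ("CLIP Text Encoder", range(21, 27)),
]

def describe(vector_string: str) -> None:
    weights = [float(w) for w in vector_string.split(",")]
    assert len(weights) == 27, "expected a 27-value SDXL vector"
    for name, indices in GROUPS:
        kept = [i for i in indices if weights[i] > 0]
        print(f"{name}: keeps indices {kept if kept else 'none'}")

# The attn-mlp preset below keeps down-block indices 2-4, the whole
# mid block, up-block indices 16-18, and all six text encoder entries:
describe("0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,1,1,1,1,1,1")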

Common Block Weight Presets

| Preset Name | Vector | Description |
| --- | --- | --- |
| attn-mlp (kohya preset) | 0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,1,1,1,1,1,1 | Targets all transformer blocks, including attention and MLP layers |
| attn-only | 0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0 | Focuses solely on attention layers within the transformer blocks |
| full | 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 | Applies training to all layers in both the UNet and CLIP |
| unet-transformer-only | 0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0 | Targets only the transformer blocks within the UNet, excluding the text encoder |
| unet-convblock-only | 1,1,0,0,0,1,1,1,1,0,0,0,1,1,1,1,0,0,0,1,1,0,0,0,0,0,0 | Focuses on the ResBlock, UpSample, and DownSample layers within the UNet |

My Personal Block Weight Presets

| Preset Name | Vector |
| --- | --- |
| Character Focus | 1,0,0,0,0,0,0,1,1,0,0,0,1,1,1,1,1,1,0,0,0 |
| hamgas | 1,0,0,0,0,0,0,1,1,0,0,0,1,0,1,1,1,1,0,0,0 |
| kenket | 1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 |
| serpent_x | 1,0,0,0,0,1,0,1,1,0,0,0,1,1,1,1,1,1,0,0,0 |
| BEEG LAYERS | 1,0,0,0,1,1,0,1,1,0,1,0,1,1,1,0,0,0,0,0,0 |
| All The Layers | 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 |
| All-In | 1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0 |
| All-Mid | 1,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0 |
| All-Out (Wolf-Link) | 1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1 |
| Style Transfer | 1,0,0,0,0,0,0,0,1,0,0,0,1,1,1,0,0,0,0,0,0 |
| Ringdingding (Stoat) | 1,0,0,0,0,0,0,0,1,0,0,1,1,1,1,0,0,0,0,0,0 |
| Garfield (Character+Style) | 1,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,0 |
| Rutkowski | 1,1,1,1,1,1,0,1,1,0,0,0,1,1,1,1,1,1,1,1,1 |

Note that these vectors have 21 values, which matches the length of the UNet portion (indices 0-20) in the mapping above.
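
For example, to bake the Style Transfer preset into a LoRA permanently, pass its vector to chop_blocks.py exactly as in the earlier example (file names here are placeholders):

python chop_blocks.py --model input.safetensors --save_to output_style.safetensors --vector "1,0,0,0,0,0,0,0,1,0,0,0,1,1,1,0,0,0,0,0,0"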

LyCORIS Preset/Config System

The Preset/Config system was added after LyCORIS 1.9.0 for more fine-grained control during training.

Built-in Presets

LyCORIS provides several presets that can be applied with --network_args "preset=NAME":

  • preset=full: Default preset, trains all layers in the UNet and CLIP
  • preset=full-lin: Same as full but skips convolutional layers
  • preset=attn-mlp: The “kohya preset”, trains all transformer blocks
  • preset=attn-only: Only attention layers will be trained (common in research papers)
  • preset=unet-transformer-only: Same as kohya_ss/sd_scripts with the text encoder disabled, i.e. the attn-mlp preset with train_unet_only enabled
  • preset=unet-convblock-only: Only ResBlock, UpSample, and DownSample will be trained

Custom Configs with TOML

For more detailed control, you can create a config.toml file and specify module targets and algorithms. This allows you to:

  • Choose a different algorithm for specific module types or named modules
  • Apply different settings (e.g., dim or alpha) to specific modules
  • Enable or disable training for specific modules

The config system supports the following arguments:

  • enable_conv (bool): Enable training for convolution layers or not
  • unet_target_module (list[str]): Classes of modules to train in UNet
  • unet_target_name (list[str]): Names of modules to train in UNet (regex supported)
  • text_encoder_target_module (list[str]): Classes of modules to train in Text Encoder
  • text_encoder_target_name (list[str]): Names of modules to train in Text Encoder (regex supported)
  • module_algo_map / name_algo_map (dict[str, dict]): Apply a different algorithm or settings to specific module classes or names
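
As a rough sketch, a config.toml using these options might look like the following. The module class names (Transformer2DModel, ResnetBlock2D, CLIPAttention, CLIPMLP) come from the underlying diffusers/transformers models; the algorithms and dimensions are purely illustrative:

# config.toml - illustrative values, not recommendations
enable_conv = true
unet_target_module = ["Transformer2DModel", "ResnetBlock2D"]
unet_target_name = []
text_encoder_target_module = ["CLIPAttention", "CLIPMLP"]
text_encoder_target_name = []

# Train transformer blocks with LoKr and conv blocks with plain LoRA
[module_algo_map.Transformer2DModel]
algo = "lokr"
factor = 8

[module_algo_map.ResnetBlock2D]
algo = "lora"
dim = 8

A custom config is passed the same way as a built-in preset, by pointing the preset argument at the file: --network_args "preset=path/to/config.toml".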

Implementing Block Weights

There are several ways to apply block weights to your LyCORIS models.

During Training

When training with sd-scripts, you can specify block weight vectors using the --block_weights argument:

accelerate launch train_network.py \
  --network_module=lycoris.kohya \
  --network_args "algo=locon" \
  --block_weights="0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,1,1,1,1,1,1" \
  [other training arguments]

Alternatively, you can apply presets during training using the --network_args parameter:

accelerate launch train_network.py \
  --network_module=lycoris.kohya \
  --network_args "algo=locon" "preset=attn-mlp" \
  [other training arguments]

During Generation

In ComfyUI

In ComfyUI, you can use the LoRA Loader (Block Weight) node (provided by the ComfyUI Inspire Pack) to input block vectors directly.

In Automatic1111 (A1111)

In A1111, you can pass block weight vectors in your prompt using the lbw syntax provided by the sd-webui-lora-block-weight extension:

<lyco:model_name:1:1:lbw=0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,1,1,1,1,1,1>

Replace model_name with the actual file name of your LyCORIS model.