
Hi all, I'm a brand new Patreon supporter. I'm following the tutorial to make a LoRA training of myself in Kaggle, but I get the following error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.76 GiB total capacity; 13.17 GiB already allocated; 7.75 MiB free; 13.46 GiB reserved in total by PyTorch)

Here is my training command:

accelerate launch --num_cpu_threads_per_process=2 "./sdxl_train_network.py" \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-xl-base-1.0" \
  --train_data_dir="/kaggle/working/results/img" \
  --reg_data_dir="/kaggle/working/results/reg" \
  --resolution="1024,1024" \
  --output_dir="/kaggle/working/results/model" \
  --logging_dir="/kaggle/working/results/log" \
  --network_alpha="1" \
  --save_model_as=safetensors \
  --network_module=networks.lora \
  --text_encoder_lr=0.0004 \
  --unet_lr=0.0004 \
  --network_dim=32 \
  --output_name="kaggle_glpr123" \
  --lr_scheduler_num_cycles="8" \
  --no_half_vae \
  --learning_rate="0.0004" \
  --lr_scheduler="constant" \
  --train_batch_size="1" \
  --max_train_steps="6400" \
  --save_every_n_epochs="1" \
  --mixed_precision="fp16" \
  --save_precision="fp16" \
  --cache_latents \
  --optimizer_type="Adafactor" \
  --optimizer_args scale_parameter=False relative_step=False warmup_init=False \
  --max_data_loader_n_workers="0" \
  --bucket_reso_steps=64 \
  --full_fp16 \
  --xformers \
  --bucket_no_upscale \
  --noise_offset=0.0 \
  --lowram \
  --max_grad_norm=0.0
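(Side note, not from the original post: if you want to see how much of the GPU is already occupied before launching training, a quick check from a Kaggle cell looks like the sketch below. It assumes a CUDA build of PyTorch on the Kaggle GPU runtime; torch.cuda.mem_get_info is standard PyTorch and returns free/total bytes for a device.)

# quick sanity check: free vs. total memory on GPU 0 before training starts
python -c "import torch; f, t = torch.cuda.mem_get_info(0); print(f'GPU 0: {f/2**30:.2f} GiB free of {t/2**30:.2f} GiB')"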
Greg · 12mo ago
Seems like the issue was that --gradient_checkpointing had somehow been dropped from my list of params.
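For anyone hitting the same error: the sketch below is just the command from the original post with the flag restored, not an official snippet from the tutorial. --gradient_checkpointing is a standard kohya sd-scripts option; gradient checkpointing recomputes activations during the backward pass instead of keeping them all in memory, trading extra compute (slower steps) for a much smaller footprint, which is usually what lets SDXL LoRA training fit on a ~16 GB card like Kaggle's T4.

# same command as above, with the missing flag added back
# (every other argument unchanged, elided here for brevity)
accelerate launch --num_cpu_threads_per_process=2 "./sdxl_train_network.py" \
  --gradient_checkpointing \
  ...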