i'm starting with the simplest thing - restarting the computer .. lol will share shortly
computer restarted and same error. which settings should i share?
Python revision: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Dreambooth revision: 9f4d931a319056c537d24669cb950d146d1537b0
SD-WebUI revision: 15e89ef0f6f22f823c19592a401b9e4ee477258c
Checking Dreambooth requirements...
[+] bitsandbytes version 0.35.0 installed.
[+] diffusers version 0.10.2 installed.
[+] transformers version 4.25.1 installed.
[+] xformers version 0.0.14.dev0 installed.
[+] torch version 1.12.1+cu113 installed.
[+] torchvision version 0.13.1+cu113 installed.
@uzmenesi i am checking now
sorry for delay
thank you appreciate it
ok your batch size is 6
that uses more vram
i have 24, shouldn't be an issue
your gradient accumulation is 4
also did you pick xformers?
if not that is why
24 images total
i suggest you first try with
batch size 1 and gradient accumulation 1
later, as an experiment, you can try gradient accumulation 24
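For context on why these two settings get suggested together (a sketch; the 6 and 4 are the values mentioned above): the effective batch size is batch size times gradient accumulation steps, so 6 × 4 = 24, which happens to cover the entire 24-image dataset in a single optimizer step.

```python
# Effective batch size: how batch size and gradient accumulation combine.
# Values taken from the settings discussed above.
batch_size = 6
gradient_accumulation_steps = 4

effective_batch = batch_size * gradient_accumulation_steps
print(effective_batch)  # 24 -- one optimizer step spans the whole 24-image set

# The suggested low-VRAM starting point:
safe_effective = 1 * 1
print(safe_effective)  # 1
```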
Launching Web UI with arguments: --xformers --no-half
==============================================================================
i did try with 1
don't use --no-half
if i change my batch size and grad acc steps to 1, i get a different error
Applying xformers cross attention optimization.
Training at rate of 0.005 until step 3000
Preparing dataset...
100%|█████████████████████████████████████████████████████████████████████████████████████| 48/48 [00:01<00:00, 28.56it/s]
Training textual inversion [Epoch 0: 6/24] loss: 0.2058506: 0%| | 3/2998 [00:00<12:27, 4.01it/s]Traceback (most recent call last):
File "C:\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 502, in train_embedding
scaler.step(optimizer)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\amp\grad_scaler.py", line 336, in step
assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
AssertionError: No inf checks were recorded for this optimizer.
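For what it's worth, that AMP assertion typically fires when `scaler.step(optimizer)` runs without a scaled backward pass beforehand - i.e. the loss never went through `scaler.scale(loss).backward()`, so no gradients were checked for inf/NaN. A simplified pure-Python mock (NOT the real `torch.cuda.amp.GradScaler`, just an illustration of the ordering dependency):

```python
# Simplified mock of AMP GradScaler bookkeeping (NOT the real
# torch.cuda.amp.GradScaler) showing why the assertion fires: step()
# requires that a scaled backward pass ran first and recorded inf checks.
class MockGradScaler:
    def __init__(self):
        self.found_inf_per_device = {}  # filled during the backward pass

    def scale(self, loss):
        # In real AMP, backward() on the scaled loss records, per device,
        # whether any gradient overflowed (inf/NaN).
        self.found_inf_per_device["cuda:0"] = False
        return loss

    def step(self, optimizer):
        # This is the check that raised in the traceback above.
        assert len(self.found_inf_per_device) > 0, \
            "No inf checks were recorded for this optimizer."
        # Real AMP skips the optimizer step if an overflow was found.
        return not any(self.found_inf_per_device.values())

scaler = MockGradScaler()
try:
    scaler.step(optimizer=None)  # step() before scale(): the reported error
except AssertionError as e:
    print(e)  # No inf checks were recorded for this optimizer.

scaler.scale(loss=0.2)              # correct order: scale/backward first...
print(scaler.step(optimizer=None))  # ...then step() succeeds -> True
```

In the webui this usually means the training loop produced no usable gradients that step (e.g. an fp16/precision mismatch), rather than anything wrong with the optimizer itself.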
it is necessary for sd 2.1
and don't apply xformers
you don't need it
your vram is good
if you open AnyDesk i can quickly set the parameters for you
i also want to do embeddings in 2.1, that's why i have the no half
dreambooth 1.5 was running fine with half though.
ye it works fine but it increases vram usage
question - if i remove the no-half, do i have to manually add it back when switching models from 1.5 to 2?
yes
but it's only necessary for
training
not generating images
i'm doing training w dreambooth
ye for 2.1 you need no half
i'll take out no half for 1 run and see if it works with 1.5.
i'm puzzled with the reserved memory though
Tried to allocate 3.00 GiB (GPU 0; 23.99 GiB total capacity; 16.73 GiB already allocated; 0 bytes free; 21.40 GiB reserved in total by PyTorch)
ye it is pretty wild
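The puzzling gap between "reserved" and "allocated" is PyTorch's caching allocator: it holds freed blocks in reserve, so reserved minus allocated is cached-but-possibly-fragmented memory that a new 3 GiB contiguous request may still not fit into. Rough arithmetic from the numbers in the error message (a sketch, values copied from the message above):

```python
# Rough breakdown of the OOM numbers from the error message above.
total_gib     = 23.99  # GPU 0 total capacity
allocated_gib = 16.73  # backing live tensors
reserved_gib  = 21.40  # held by PyTorch's caching allocator
request_gib   = 3.00   # the failed allocation

# Memory cached by PyTorch but not backing live tensors:
cached_gib = reserved_gib - allocated_gib
print(round(cached_gib, 2))  # 4.67

# Left outside PyTorch's reserve ("0 bytes free" once the driver,
# display, and other processes take their share):
outside_reserve_gib = total_gib - reserved_gib
print(round(outside_reserve_gib, 2))  # 2.59

# The 3.00 GiB request exceeds what remains outside the reserve, and the
# 4.67 GiB cache may be too fragmented to yield one contiguous block.
print(request_gib > outside_reserve_gib)  # True
```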
took out no half, ran with batch & gradient 1, same error
if you open AnyDesk
i can check
one sec.
do you by any chance have TeamViewer?
yes
can i send you dm?