
i'm starting with the most simple thing - restarting computer .. lol will share shortly
uzmenesi
uzmenesiOP2y ago
computer restarted and same error. which setting should i share?
Python revision: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Dreambooth revision: 9f4d931a319056c537d24669cb950d146d1537b0
SD-WebUI revision: 15e89ef0f6f22f823c19592a401b9e4ee477258c
Checking Dreambooth requirements...
[+] bitsandbytes version 0.35.0 installed.
[+] diffusers version 0.10.2 installed.
[+] transformers version 4.25.1 installed.
[+] xformers version 0.0.14.dev0 installed.
[+] torch version 1.12.1+cu113 installed.
[+] torchvision version 0.13.1+cu113 installed.
uzmenesi
uzmenesiOP2y ago
Furkan Gözükara SECourses
@uzmenesi i am checking now, sorry for the delay
uzmenesi
uzmenesiOP2y ago
thank you appreciate it
Furkan Gözükara SECourses
ok, your batch size is 6, and that uses more vram
uzmenesi
uzmenesiOP2y ago
i have 24, shouldn't be an issue
Furkan Gözükara SECourses
your gradient accumulation is 4 as well. also, did you pick xformers? if not, that is why
uzmenesi
uzmenesiOP2y ago
24 images total
Furkan Gözükara SECourses
i suggest you first try with batch size 1 and gradient accumulation 1. later, as an experiment, you can try gradient accumulation 24
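The suggestion above works because the effective batch size is the per-step batch size times the gradient accumulation steps, while activation memory scales only with the per-step batch size. A minimal sketch of that relationship (just the arithmetic, no real VRAM measurement):

```python
# Effective batch size = per-step batch size x gradient accumulation steps.
# With batch 6 and accumulation 4, the optimizer averages gradients over
# 24 images per update; with batch 1 and accumulation 24, the update sees
# the same 24 images but peak activation memory is roughly 6x smaller.
def effective_batch(batch_size: int, grad_accum_steps: int) -> int:
    return batch_size * grad_accum_steps

print(effective_batch(6, 4))   # the settings from the error above -> 24
print(effective_batch(1, 24))  # the suggested experiment -> also 24
```

So the batch-1 / accumulation-24 experiment keeps the same effective batch while cutting the per-step memory footprint.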
uzmenesi
uzmenesiOP2y ago
Launching Web UI with arguments: --xformers --no-half
i did try with 1
Furkan Gözükara SECourses
dont use no-half
uzmenesi
uzmenesiOP2y ago
if i change my batch size and grad acc steps to 1, i get a different error:
Applying xformers cross attention optimization.
Training at rate of 0.005 until step 3000
Preparing dataset...
100%|█████████████████████████████████████████████████████████████████████████████████████| 48/48 [00:01<00:00, 28.56it/s]
Training textual inversion [Epoch 0: 6/24] loss: 0.2058506: 0%| | 3/2998 [00:00<12:27, 4.01it/s]
Traceback (most recent call last):
  File "C:\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 502, in train_embedding
    scaler.step(optimizer)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\amp\grad_scaler.py", line 336, in step
    assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
AssertionError: No inf checks were recorded for this optimizer.
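That AssertionError comes from torch's AMP GradScaler: `scaler.step(optimizer)` expects that a scaled backward pass (`scaler.scale(loss).backward()`) ran for that optimizer in the same iteration, so that inf/nan checks are recorded; with `--no-half` the fp16 path can be skipped and nothing gets recorded. A toy sketch of the invariant (not the real torch class, just the control flow that trips the assert):

```python
class ToyGradScaler:
    """Mimics the invariant behind torch.cuda.amp.GradScaler.step():
    step() refuses to run unless a scaled backward recorded inf checks."""

    def __init__(self):
        self._found_inf_per_device = {}

    def scale_and_backward(self, loss):
        # In real AMP this is scaler.scale(loss).backward(), which records
        # an inf/nan check per device the gradients live on.
        self._found_inf_per_device["cuda:0"] = False
        return loss

    def step(self):
        assert len(self._found_inf_per_device) > 0, \
            "No inf checks were recorded for this optimizer."
        self._found_inf_per_device.clear()
        return "stepped"

scaler = ToyGradScaler()
scaler.scale_and_backward(loss=0.2)
print(scaler.step())  # prints "stepped"

try:
    scaler.step()  # no new scaled backward -> same assertion as the traceback
except AssertionError as e:
    print(e)  # prints "No inf checks were recorded for this optimizer."
```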
Furkan Gözükara SECourses
it is necessary for sd 2.1. and don't apply xformers, you don't need it, your vram is enough. if you open AnyDesk i can quickly set parameters for you
uzmenesi
uzmenesiOP2y ago
i also want to do embeddings in 2.1, that's why i have the no-half flag. dreambooth 1.5 was running fine with half though.
Furkan Gözükara SECourses
ye it works fine but increases vram
uzmenesi
uzmenesiOP2y ago
question - if i remove the no-half flag, do i have to manually add it back when switching models from 1.5 to 2?
Furkan Gözükara SECourses
yes, but it's only necessary for training, not for generating images
uzmenesi
uzmenesiOP2y ago
i'm doing training w dreambooth
Furkan Gözükara SECourses
ye for 2.1 you need no half
uzmenesi
uzmenesiOP2y ago
i'll take out no half for 1 run and see if it works with 1.5. i'm puzzled by the reserved memory though:
Tried to allocate 3.00 GiB (GPU 0; 23.99 GiB total capacity; 16.73 GiB already allocated; 0 bytes free; 21.40 GiB reserved in total by PyTorch)
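The puzzling part of that OOM line is that 16.73 GiB allocated plus the requested 3 GiB is well under the 24 GiB card, yet PyTorch reports 0 bytes free: the caching allocator has already reserved 21.40 GiB, its cached-but-unallocated blocks are too fragmented to serve one contiguous 3 GiB request, and the unreserved headroom left on the GPU is smaller than 3 GiB. Redoing the arithmetic from the message:

```python
# Numbers straight from the CUDA OOM message above (in GiB).
total_capacity      = 23.99
already_allocated   = 16.73
reserved_by_pytorch = 21.40
requested           = 3.00

# Memory the allocator holds but has not handed out (may be fragmented).
reserved_headroom = reserved_by_pytorch - already_allocated
# Memory the driver could still give PyTorch to grow its reserve.
unreserved = total_capacity - reserved_by_pytorch

print(round(reserved_headroom, 2))  # 4.67 -> cached blocks, possibly fragmented
print(round(unreserved, 2))         # 2.59 -> less than the 3.00 GiB request
```

So even though ~4.67 GiB sits inside PyTorch's reserve, a single contiguous 3 GiB block can't be carved out of it, and the 2.59 GiB of unreserved VRAM is also short of the request.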
Furkan Gözükara SECourses
ye it is pretty wild
uzmenesi
uzmenesiOP2y ago
took out no half, ran with batch & gradient 1, same error
Furkan Gözükara SECourses
if you open anydesk i can check
uzmenesi
uzmenesiOP2y ago
one sec. do you by any chance have teamviewer?
uzmenesi
uzmenesiOP2y ago
can i send you dm?