they use a bigger batch size and fewer steps
actually when I tested before, they did the training at 512x512 haha
the second question is the answer: you want each dataset to be trained for an equal total number of steps
that is the logic of repeating
otherwise, if the dataset is already balanced, put everything into a single folder
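That balancing logic can be sketched in a few lines (a minimal illustration, assuming the kohya convention of encoding the repeat count in the folder name, e.g. `5_myset`; the helper name is mine, not from sd-scripts):

```python
from math import lcm

def balanced_repeats(image_counts):
    """Pick per-folder repeat counts so every dataset contributes the same
    number of samples per epoch (kohya folder name: '<repeats>_<name>')."""
    target = lcm(*image_counts)          # smallest equal per-epoch sample count
    return [target // n for n in image_counts]

# e.g. folders with 10 and 25 images -> repeats 5 and 2
# (both then contribute 50 samples per epoch)
print(balanced_repeats([10, 25]))  # [5, 2]
```

In practice you would round to small repeat counts rather than use the exact LCM if the counts are awkward, but the idea is the same: repeats exist to equalize how often each dataset is seen.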
this is not an error
just a warning
make a video and send me
How much does the image set matter? Two photos were duplicated, so my model may have come out too strong. After deleting those two, only 23 photos were left, and I had to change the d_coef value: 0.75 was too much. I went all the way down to 0.5, which seems to have been fine, although since it never reached the normalization limit of 1, it was probably a bit under-trained. Based on the tests, not bad, but it never reached a score above 0.9, so really under-trained.
i never got any useful info from these statistics for stable diffusion :/
okay
These graphs are only displayed when normalization (max norm regularization) is enabled for the LoRA (it is not enabled for DreamBooth). Based on the developer's description, I watch these two values to see whether the maximum normalization reaches 1 (that is supposed to be good) without over-normalizing, i.e. how far the LoRA drifts from the original model and how much it still needs to be reined in. From my observations, for cosine-type training under Prodigy, these two graphs give very useful results.
Well I would prefer full DreamBooth and extract LoRA
Here is a topic on the kohya repo about normalization:
https://github.com/kohya-ss/sd-scripts/pull/545
GitHub: "Dropout and Max Norm Regularization for LoRA training" by AI-Casanov…
This PR adds Dropout and Max Norm Regularization [Paper] to train_network.py. Dropout randomly removes some weights/neurons from calculation on both the forward and backward passes, effectively trai…
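In spirit, the max norm regularization that PR adds can be sketched like this (a simplified NumPy illustration of the idea, not the actual train_network.py code; the real implementation scales the combined LoRA up/down weights during training and reports how many keys were scaled):

```python
import numpy as np

def apply_max_norm(weights, max_norm=1.0):
    """Max norm regularization: if a weight matrix's Frobenius norm exceeds
    max_norm, rescale it down to max_norm. Returns the new weights and how
    many matrices were rescaled (the 'keys scaled' statistic in the graphs)."""
    scaled = 0
    out = []
    for w in weights:
        norm = np.linalg.norm(w)
        if norm > max_norm:
            w = w * (max_norm / norm)
            scaled += 1
        out.append(w)
    return out, scaled

# Two matrices: one over the limit (norm 2.0), one under it (norm 0.2).
ws, n = apply_max_norm([np.full((2, 2), 1.0), np.full((2, 2), 0.1)], max_norm=1.0)
print(n)  # 1 matrix rescaled
```

This is why "max norm reaching 1" reads as healthy in the graphs: the weights are pushing against the cap rather than sitting far below it (under-trained) or getting rescaled constantly (over-normalizing).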
That's what I was looking for in DreamBooth training, because it helps a lot in making a good model; since I use it with cosine under Prodigy, I get very good results when I watch the graph. I can tell at a glance how good or bad the LoRA will be (see above), and tests have confirmed this.
Hi everyone,
I'm having serious problems with DreamBooth: I always get terrible results, and even in the good ones the face is not consistent across the dataset. I'm training everything on epicPhotogasm.
Of course it's probably my fault; this is my setup (apart from the A100 80GB):
20 pictures of the same person (always portrait shot, not fullbody) in 1024x1024
2000 train steps
100 class images
lr 1e-6
100 validation steps
(please let me know if you need more details on the settings, but everything else is set as default)
Do you suggest using 100/200 pictures and increasing both the train steps and the class images? Or if you have better suggestions, I'm open to anything!
Thank you!
And here's the graph of the winning LoRA, with d_coef=0.6 and 23 images. While I like to train a bit stronger, I also let Hires Fix do its work, which boosted the initial face score to 0.96 on the very first generation. You can see in the image that normalization reached 1 on the right graph (yellow), so it made a good model and only adjusted the normalization a few times.
are you using my patreon config?
unfortunately no 😅 I sent you a DM btw, I'd appreciate it if you could reply, thank you!
checking DMs in a moment hopefully, sorry for the delay