For LoRA SDXL training, I notice a lot of people have training image sets of around 30. Is this lore, or has there been actual testing involved?
Highly depends on what you want the LoRA to generate. If you need portrait images of a person (shoulders up), you can use 5-10 images.
If you want upper body and rotations, aim for 15-20.
If you want full flexibility, go for 30+.
The more images you use, the more varied poses you should add while keeping the "main" pose (usually portrait) dominant.
e.g. 15 portrait (incl. rotations of the head) + 5 upper body + 5 activities + 5 full body ...
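A rough sketch of how you could sanity-check a dataset against that portrait-heavy mix. The `dataset` folder and the category names are just illustrative assumptions, not something from the messages above:

```python
# Illustrative only: tally a LoRA dataset against the suggested mix
# (15 portrait / 5 upper body / 5 activities / 5 full body).
from pathlib import Path

# Hypothetical category folders -- adjust to your own layout.
TARGETS = {"portrait": 15, "upperbody": 5, "activities": 5, "fullbody": 5}
EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def count_images(root: Path) -> dict[str, int]:
    """Count images in each category subfolder under `root`."""
    return {
        cat: sum(1 for p in (root / cat).glob("*") if p.suffix.lower() in EXTS)
        for cat in TARGETS
        if (root / cat).is_dir()
    }

if __name__ == "__main__":
    counts = count_images(Path("dataset"))
    for cat, target in TARGETS.items():
        have = counts.get(cat, 0)
        print(f"{cat:12s} {have:3d} / {target} images")
```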
It's a bit of a manual process in A1111; I don't know of an extension that does this. I created the workflow in Blockey and published it as a template, so you can just click Try Block or select the template and build off it. (It will load the same thing as in the video.) Here is the template doc: https://www.blockeyai.com/blog/infographic-generator
workspace/stable-diffusion-webui/models/Lora/me-test-000005.safetensors
Anyone know why these are not showing up in the LoRAs?
I even added a wtf folder, and that one shows up.
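For debugging cases like this, a quick sanity check (the path below is just taken from the message above; adjust it if the WebUI was launched with a custom Lora directory) is to confirm the file actually sits under the folder the WebUI scans and has an extension it recognizes, then hit refresh on the Lora tab:

```python
# Rough sanity check, not an official A1111 tool: list what is actually
# inside the Lora directory the WebUI is expected to scan.
from pathlib import Path

# Assumed default location from the message above.
lora_dir = Path("workspace/stable-diffusion-webui/models/Lora")

for p in sorted(lora_dir.rglob("*")):
    if p.is_file():
        print(p.relative_to(lora_dir), p.stat().st_size, "bytes")
```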
@ashleyk @Dr. Furkan Gözükara I was working on Kohya LoRA with SDXL https://www.youtube.com/watch?v=-xEwaQ54DI4&t=282s and successfully downloaded images from https://www.patreon.com/posts/4k-2700-real-84053021, but I was not sure about the next steps for cropping, since it is not mentioned in the notes. Please advise on how to proceed as mentioned in the videos.
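As a hedged sketch of the cropping step (paths and the 1024x1024 target are assumptions; Kohya's aspect-ratio bucketing can also handle mixed sizes without manual cropping), a simple center-crop-and-resize with Pillow would look like this:

```python
# Minimal sketch: center-crop each source image to a square and resize
# to 1024x1024 for SDXL training. Folder names are placeholders.
from pathlib import Path
from PIL import Image

SRC = Path("raw_images")
DST = Path("cropped_1024")
DST.mkdir(exist_ok=True)

for src in SRC.glob("*"):
    if src.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    img = Image.open(src).convert("RGB")
    side = min(img.size)                      # largest centered square
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((1024, 1024), Image.LANCZOS)
    img.save(DST / f"{src.stem}.png")
```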
I tried to install the Dreambooth extension. I installed it, clicked Apply, and restarted my pod on RunPod, but the extension is not showing up.
I wondered if anyone knows why.
@ashleyk @Dr. Furkan Gözükara good news, everyone. Looks like we can train Kohya_ss Dreambooth with <20GB of VRAM now. I'm not sure, though.
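If that holds up, the savings usually come from the standard memory-reducing options in Kohya's sd-scripts. A hedged sketch of the kind of flag set involved (the script name and exact flag names vary by sd-scripts version, so verify against your install):

```python
# Illustrative only: typical VRAM-saving options for Kohya sd-scripts
# Dreambooth training; check flag names against your sd-scripts version.
import subprocess

cmd = [
    "accelerate", "launch", "train_db.py",   # script name is version-dependent
    "--pretrained_model_name_or_path", "model.safetensors",
    "--mixed_precision", "fp16",             # half-precision training
    "--gradient_checkpointing",              # trade compute for memory
    "--optimizer_type", "AdamW8bit",         # 8-bit optimizer states
    "--xformers",                            # memory-efficient attention
    "--train_batch_size", "1",
]
subprocess.run(cmd, check=True)
```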
Will you please PM me an invite to that Discord?
I want to train Dreambooth (not LoRA) and copy the style of a cartoon series using 500 style photos, and also add a character based on a YouTuber, but I only have up to 30 photos of him. Does it make sense to copy-paste these 30 photos up to 300 photos to create a balance between the many style photos and the few YouTuber photos? Or should I generate lots of images instead (I already have a LoRA based on him)? What would you suggest?
I heard it's not good to have lots of style photos but very few character photos.
My end goal is basically to be able to train an animation style and create characters based on real people or drawings, then create little movie clips using the batch function. Later on I would use this process for my own drawings.
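Rather than literally copy-pasting the 30 photos, one common way to get the same balance is to repeat the smaller set more often per epoch via Kohya-style `N_name` folder prefixes. The folder names and numbers below are made-up placeholders, and repeat semantics depend on the trainer version, so treat this as a sketch:

```python
# Hedged sketch: pick a repeat count so a small character set and a large
# style set contribute comparably per epoch, using Kohya-style folder naming.
import math

style_images = 500      # e.g. img/1_cartoonstyle
character_images = 30   # e.g. img/17_youtuber person

style_repeats = 1
char_repeats = math.ceil(style_images * style_repeats / character_images)

print(f"style folder:     {style_repeats}_cartoonstyle "
      f"({style_images * style_repeats} samples/epoch)")
print(f"character folder: {char_repeats}_youtuber "
      f"({character_images * char_repeats} samples/epoch)")
```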
It's annoying how much I break stuff.
I'm getting this error:
def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA):
AttributeError: module 'cv2' has no attribute 'INTER_AREA'
This was after I ended an img2img batch run before it finished.
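A hedged first check for that kind of error (common culprits are a half-removed or duplicated opencv-python package, or a stray local file named `cv2.py` shadowing the real module) is to see which cv2 Python is actually importing:

```python
# Quick diagnostic: confirm which cv2 module Python loads and whether
# INTER_AREA is present. A shadowing cv2.py or a broken opencv-python
# install in the venv would show up here.
import cv2

print("cv2 version:", getattr(cv2, "__version__", "unknown"))
print("loaded from:", getattr(cv2, "__file__", "unknown"))
print("has INTER_AREA:", hasattr(cv2, "INTER_AREA"))
```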
😢
New woman reg images are awesome. @Dr. Furkan Gözükara much thanks!! This is a result of 8,000 steps of style training with mostly your settings.