The Very Best OneTrainer Workflow & Config For SD 1.5 Based Models

Hi @Dr. Furkan Gözükara, what speed are you getting using tier1_SD15_fastest_48GB.json from this post? https://www.patreon.com/posts/very-best-config-97381002?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_fan&utm_content=web_share
papanton (OP)
I am getting 4 s/it on a 4090.
Furkan Gözükara SECourses
What is your batch size? The 48 GB config won't work on a 4090, @papanton. Also, a 4090 gets about 1.2 seconds/it on Kohya with batch size 1.
papanton (OP)
Oh, it's actually 48 GB? I thought that was a typo; it seems to work, unless you think OneTrainer is somehow throttling instead of throwing an out-of-memory error. I am using batch size 1; I don't want to sacrifice likeness for speed. 1.2 s/it with which workflow? Oh, actually I am using tier2_SD15_fast_15GB.json ("Uses 14.5 GB VRAM and 1.03 second / it on RTX 3090 TI") on a 4090. Since you were getting about 1 s/it on a 3090, I was hoping for as good or better.
Furkan Gözükara SECourses
I never got 1 s/it for SDXL on a 3090. 1.03 seconds/it for SDXL is great.
papanton (OP)
It's for SD 1.5, using the epiCRealism checkpoint.
Furkan Gözükara SECourses
I don't remember the latest speed, but I think that is an OK speed.
papanton (OP)
I am getting 4 s/it; I am referring to your numbers.
Furkan Gözükara SECourses
Yeah, this is wrong. Restart the computer and use a lower-VRAM config.
papanton (OP)
I am using a RunPod.
Furkan Gözükara SECourses
Did you kill the web UI?
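If it is unclear whether something else (for example a leftover web UI) is still holding VRAM on the pod, a minimal sketch like the one below can confirm how much memory is actually free before training; it assumes PyTorch with CUDA is available in the training environment.

```python
# Minimal sketch: check how much VRAM is actually free before starting training.
# Assumes PyTorch with CUDA is installed in the training environment.
import torch

free, total = torch.cuda.mem_get_info()
print(f"Free VRAM: {free / 1024**3:.1f} GiB of {total / 1024**3:.1f} GiB")
```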
papanton (OP)
With 15 training images at 768x768.
Furkan Gözükara SECourses
Did you update relauncher.py?
papanton (OP)
I am using a Docker OneTrainer through the API, with no UI.
Furkan Gözükara SECourses
Then I don't know.
papanton (OP)
Could it be due to `"weight_dtype": "FLOAT_32", "output_dtype": "FLOAT_32"`?
Since I am using https://civitai.com/models/25694/epicrealism, which is fp16.
Furkan Gözükara SECourses
Weight dtype makes it slower; output dtype doesn't change anything. But I already train SD 1.5 with fp32; bf16 won't work there.
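For reference, a minimal sketch of flipping those two fields in the tier2 config to compare training speed. The config path is illustrative, and "FLOAT_16" as an accepted value is an assumption about OneTrainer's config schema, not confirmed here.

```python
# Sketch: toggle the dtype fields in the config to compare training speed.
# "FLOAT_16" as an accepted enum value is an assumption, not confirmed here.
import json

config_path = "tier2_SD15_fast_15GB.json"  # illustrative path

with open(config_path) as f:
    config = json.load(f)

print("current:", config["weight_dtype"], config["output_dtype"])

# Per the reply above, weight_dtype is what affects speed; output_dtype does not.
config["weight_dtype"] = "FLOAT_16"

with open(config_path, "w") as f:
    json.dump(config, f, indent=4)
```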
papanton (OP)
@Dr. Furkan Gözükara do you happen to know what CUDA version you are using locally? And what Python version?
Furkan Gözükara SECourses
CUDA 11.8, Python 3.10.11.
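A quick sketch for comparing a local or RunPod environment against that setup (CUDA 11.8, Python 3.10.11), assuming PyTorch is installed there:

```python
# Print the Python version and the CUDA version the installed PyTorch was built with.
import sys
import torch

print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA (torch build):", torch.version.cuda)
print("GPU available:", torch.cuda.is_available())
```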