The Very Best OneTrainer Workflow & Config For SD 1.5 Based Models
Hi @Dr. Furkan Gözükara, what speed are you getting using tier1_SD15_fastest_48GB.json from this post? https://www.patreon.com/posts/very-best-config-97381002?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_fan&utm_content=web_share
I am getting 4 s/it on a 4090
what is your batch size?
the 48 GB config won't work on a 4090
@papanton
also a 4090 gets about 1.2 s/it on Kohya
with batch size 1
oh, it's actually 48 GB? I thought that was a typo
it seems to work? Unless you think OneTrainer is somehow throttling instead of throwing an out-of-memory error
I am using batch size 1. I don't want to sacrifice likeness for speed
1.2 s/it with which workflow?
oh, actually I am using tier2_SD15_fast_15GB.json: uses 14.5 GB VRAM and 1.03 s/it on an RTX 3090 Ti
on a 4090. Since you were getting ~1 s/it on a 3090, I was hoping for as good or better
I never got 1 s/it for SDXL with a 3090
1.03 s/it for SDXL is great
it's for SD 1.5
using the epiCRealism checkpoint
I don't remember the latest speed, but I think it is an okay speed
I am getting 4 s/it
I am referring to your numbers
yeah, this is wrong
restart the computer
use a lower-VRAM config
I am using a RunPod
did you kill the web UI?
with 15 training images at 768x768
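For scale, those numbers translate directly into per-epoch wall time. A quick back-of-envelope sketch (assuming one optimizer step per batch and no gradient accumulation — neither is confirmed in the thread):

```python
import math

def seconds_per_epoch(num_images: int, batch_size: int, sec_per_it: float) -> float:
    """Wall time for one pass over the dataset, assuming one step per batch."""
    steps = math.ceil(num_images / batch_size)
    return round(steps * sec_per_it, 2)

# 15 images, batch size 1, at the two speeds discussed in this thread
print(seconds_per_epoch(15, 1, 4.0))   # 60.0
print(seconds_per_epoch(15, 1, 1.03))  # 15.45
```

So the 4 s/it run spends roughly four times longer per epoch than the ~1 s/it run on the same dataset.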
updated relauncher.py?
I am using OneTrainer in Docker, through the API
no UI
then I don't know
could it be due to the `"weight_dtype": "FLOAT_32", "output_dtype": "FLOAT_32"` settings?
the weight dtype makes it slower
the output dtype doesn't change it
but I already train SD 1.5 with FP32
BF16 won't work there
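The speed difference comes down to width: BF16 is just the top 16 bits of an FP32 value (same sign and 8-bit exponent, but 7 instead of 23 mantissa bits), so weights take half the memory bandwidth at the cost of precision. A stdlib-only sketch of the conversion:

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16: keep sign + exponent + top 7 mantissa bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 16  # drop the low 16 mantissa bits

def bf16_bits_to_f32(b: int) -> float:
    """Widen bfloat16 bits back to float32 by zero-padding the mantissa."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

x = 1.2345
rt = bf16_bits_to_f32(f32_to_bf16_bits(x))
print(rt)  # close to 1.2345, but the low mantissa bits are gone
```

That lost precision is also why some models train fine in BF16 and others (as claimed above for SD 1.5 here) do not.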
@Dr. Furkan Gözükara do you happen to know which CUDA version you are using locally?
and which Python version?
CUDA 11.8
Python 3.10.11
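To answer version questions like this from inside the training environment itself, a small stdlib-only report works; the torch import is guarded, since the package may not be installed where this runs:

```python
import sys

def runtime_report() -> dict:
    """Collect Python and, when available, torch/CUDA version strings."""
    report = {
        "python": "{}.{}.{}".format(*sys.version_info[:3]),
    }
    try:
        import torch  # only present in a training environment
        report["torch"] = torch.__version__
        report["cuda"] = torch.version.cuda  # None for CPU-only builds
    except ImportError:
        report["torch"] = None
        report["cuda"] = None
    return report

print(runtime_report())
```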