I was starting to play around with OneTrainer and an XL checkpoint and noticed the learning rate in the tier_1 10GB json preset was "learning_rate": 1e-05, which seems higher than the other learning rates you've been moving towards. Is that accurate, or should I move it to something like 1e-06 or 8e-07?
devinthenull · 3mo ago
or does the Adafactor optimizer auto-tune the LR, so it doesn't matter what you put?
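For context on the auto-tuning question: in the Adafactor paper (and in the common Hugging Face implementation), when relative step sizes are enabled the optimizer derives its step size from the update count rather than from a configured lr. A minimal sketch of that schedule, assuming the paper's rule rho_t = min(1e-2, 1/sqrt(t)); this is illustrative, not OneTrainer's actual code:

```python
import math

def adafactor_relative_step(step: int, warmup_init: bool = False) -> float:
    """Relative step size rho_t from the Adafactor paper:
    rho_t = min(1e-2, 1/sqrt(t)).
    With warmup_init, the cap ramps up as 1e-6 * t instead of
    being fixed at 1e-2."""
    cap = 1e-6 * step if warmup_init else 1e-2
    return min(cap, 1.0 / math.sqrt(step))

# Example: after 40k steps the relative step has decayed to 1/sqrt(40000).
print(adafactor_relative_step(40000))
```

So a configured lr only "doesn't matter" when relative stepping is actually turned on; with a fixed lr (as in these presets), the value in the json is the value that gets applied.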
Maxivi · 3mo ago
try 7e-6 or 8e-6 for unet
Furkan Gözükara SECourses
@devinthenull which link did you download the config from? 1e-5 is accurate for the U-Net; for the text encoder we use a lower LR. This LR is good for Adafactor + mixed BF16 precision. It was like this for Kohya as well, but later something changed with Kohya, and after doing new trainings I found that the LR is now applied more strongly, so I had to reduce it.
Furkan Gözükara SECourses
[Patreon link] OneTrainer Stable Diffusion XL (SDXL) Fine Tuning Best Presets | SE...
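The split described above (a higher LR for the U-Net, a lower one for the text encoder) would look roughly like this in a preset. Note these key names are illustrative only, not the exact OneTrainer schema, and the text-encoder value is a placeholder, not a recommendation from the thread:

```json
{
  "optimizer": "ADAFACTOR",
  "train_dtype": "BFLOAT_16",
  "unet": { "learning_rate": 1e-5 },
  "text_encoder": { "learning_rate": 3e-6 }
}
```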
devinthenull · 3mo ago
@Dr. Furkan Gözükara from the Patreon link: tier1_10.4GB_slow.json and tier1_15.4GB_fast.json