So what do we conclude from this?
unet becomes your character
any noise will be denoised as your character
don't prompt anything
and generate
you will see yourself
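A minimal way to try this empty-prompt test with diffusers (a sketch; the checkpoint path is a placeholder and assumes the Dreambooth output is in diffusers format):

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder path: assumes the Dreambooth result was exported to diffusers format.
pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-output", torch_dtype=torch.float16
).to("cuda")

# Empty prompt: denoising is driven only by what the UNet has learned,
# so an overtrained UNet tends to pull every sample toward the trained subject.
image = pipe(prompt="", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("empty_prompt.png")
```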
I can't wait to experiment with that
if it's trained enough :d
Ok, so 200 steps were used
wow
wowowowow
200 epoch
wow
what did you use for prompt?
photo of man art by tomer hanuka
😮
photo of man art by craig paton
😄
interesting
try photo of castle
you didn't train any tenc?
this is not expected
it sort of is, it's affecting photos
I think it's impacted by the weights, look
yeah the unet will become you
i think you didn't overtrain that much
that is why
did you use BLIP to do the tags for the dataset?
yeah it's not overtrained
tags are not important
he didn't do tags training
I did this
when text encoder training is 0
if the code is working as expected
tags are not used at all
captions
only the unet is trained
so text files are irrelevant?
yep
because they are not used
in unet
training
yup
unet trains only based on image and noise
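Roughly what a single UNet training step looks like in a diffusers-style trainer (a sketch, not the exact Dreambooth script): the loss is plain noise prediction on the image latents, with the caption embedding passed in only as conditioning.

```python
import torch
import torch.nn.functional as F

def unet_training_step(unet, latents, text_embeddings, noise_scheduler):
    # Sample random noise and a random timestep for each image in the batch.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=latents.device,
    )
    # Corrupt the image latents with that noise ...
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
    # ... and ask the UNet to predict the noise back, conditioned on the caption embedding.
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states=text_embeddings).sample
    return F.mse_loss(noise_pred, noise)
```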
ok so then
this worked
nice
so it appears
code is not working as expected
i will ask in dreambooth channel now
I will be on standby
Epoch 100 vs 200 same seed
photo of Cyborg man cyberpunk
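To reproduce this kind of comparison, the same prompt and seed can be run against both checkpoints so the model is the only variable (a sketch; the checkpoint paths and seed are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

prompt = "photo of Cyborg man cyberpunk"
seed = 1234  # any value, as long as it is identical for both checkpoints

for ckpt in ("./dreambooth-epoch-100", "./dreambooth-epoch-200"):  # placeholder paths
    pipe = StableDiffusionPipeline.from_pretrained(ckpt, torch_dtype=torch.float16).to("cuda")
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator, num_inference_steps=30).images[0]
    image.save(f"sample_epoch_{ckpt.split('-')[-1]}.png")
```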
when you don't use man
what happens
this is vanilla
one second will test
Prompt: photo of Cyborg cyberpunk
what happens if you use woman?
photo of woman cyborg
Does that look like zack?
ok i found the reason
even though text encoder is not trained
it is still used in how the unet is trained
in dreambooth
so it will recognize the face as a man's face?
it will still be assigned to the used prompts
so in this case since he uses man
in file descriptions
yes man and also other used words
yes
will be like subject
right
if you use castle
castle will become like your subject
yup
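That is the key point: even with text-encoder training disabled, the caption still goes through the frozen text encoder and conditions the UNet through cross-attention, so whatever tokens appear in the captions get bound to the subject. A sketch of that step (the function name is illustrative; it pairs with the training-step sketch above):

```python
import torch

@torch.no_grad()  # the text encoder is frozen; no gradients flow into it
def encode_caption(caption, tokenizer, text_encoder):
    tokens = tokenizer(
        caption,
        padding="max_length",
        max_length=tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    # The frozen encoder still turns the caption into the embeddings that
    # condition the UNet during training.
    return text_encoder(tokens.input_ids.to(text_encoder.device))[0]

# text_embeddings = encode_caption("photo of man art by tomer hanuka", tokenizer, text_encoder)
# loss = unet_training_step(unet, latents, text_embeddings, noise_scheduler)
```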
ok
blurry photo
so this will only make it subpar if you don't do tenc training
if you don't provide any tokens
i think the entire unet would become
you
if it works haha :d
let me run one then
Subpar,
So still better to run at 0.7 when using filewords?
i think yes
i think 1 could be better as well
they said they came up with 0.7
based on experiment
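For reference, a 0.7 text-encoder step ratio just means the text encoder is trained alongside the UNet for the first 70% of the steps and then frozen (a sketch with illustrative names, not the trainer's actual setting names):

```python
# Illustrative variable names, not the actual trainer's settings.
total_steps = 10000          # placeholder: total training steps of the run
text_encoder_ratio = 0.7     # the value discussed above; 1.0 would train it the whole run
stop_text_encoder_at = int(total_steps * text_encoder_ratio)  # 7000

for step in range(total_steps):
    train_text_encoder = step < stop_text_encoder_at
    # the UNet is updated every step; the text encoder only while train_text_encoder is True
```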
when you don't train the unet, only the text encoder will be trained
But for next experiment
Step ratio 0, only PNG files
no filewords either, right?
filewords setting yes, but no fileword files in the folder
ok, in the last experiment you used filewords with .txt file next to png, right?
yes
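The "filewords" convention referred to here: each training PNG has a .txt file with the same basename holding its caption. A small sketch that shows how such a folder pairs up (the path is a placeholder):

```python
from pathlib import Path

dataset = Path("./dataset")  # placeholder folder of training PNGs
for image in sorted(dataset.glob("*.png")):
    caption_file = image.with_suffix(".txt")   # "filewords": same basename, .txt extension
    caption = caption_file.read_text().strip() if caption_file.exists() else ""
    print(image.name, "->", caption or "(no caption file)")
```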
k
awaiting result
@Furkan Gözükara SECourses
holy cow
as expected
this is because
text encoder still there
has your opinion on best practice changed?
not yet
i think both unet training and text encoder training with dreambooth is still best
but i can't do
experiments
so i can't say
so follow your long stable diffusion video
this is best
my gpu is always full :d
I need to buy a motherboard to plug in the other GPU
wow you're going for more power :d
I have 2100 watts i think
wow 😄