This is very useful: transformer-explainer: https://poloclub.github.io/transformer-explainer/

Furkan Gözükara SECourses
Quantizations can also be applied to FLUX.
Furkan Gözükara SECourses
SwarmUI extra args: --fast (for RTX 4xxx cards)
Furkan Gözükara SECourses
GitHub: DeepSpeed/blogs/windows/08-2024/README.md at master · microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Furkan Gözükara SECourses
SwarmUI inpaint face syntax:
<segment:face-1,creativity,confidence threshold>
or, for YOLOv8 models:
<segment:yolo-filename.pt-1,creativity,confidence threshold>
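A concrete usage sketch of the syntax above, inside an ordinary prompt (the prompt text and both numeric values are illustrative, not from the original message):

```
a portrait photo of a man in a cafe <segment:face-1,0.6,0.5> highly detailed face, sharp eyes
```

Here 0.6 stands in for the creativity value and 0.5 for the confidence threshold; the text after the segment tag is the prompt applied to the detected face region.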
This is also where you can continue training a LoRA, but I never tested whether it actually works.
Furkan Gözükara SECourses
Better face model for SwarmUI: https://huggingface.co/Bingsu/adetailer/resolve/main/face_yolov9c.pt - use as <segment:yolo-face_yolov9c.pt,0.7>
Furkan Gözükara SECourses
Batch SUPIR Upscale Settings For FLUX
Furkan Gözükara SECourses
Accelerate extra args for multi-GPU training: --main_process_port 30000
GGUF (quantized) models on SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI/blob/master/docs/Model%20Support.md#gguf-quantized-models
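A minimal sketch of how that port flag fits into an accelerate launch command (the script name and process count are placeholders, not from the original message):

```shell
# placeholder script name; set num_processes to your actual GPU count
accelerate launch --num_processes 2 --main_process_port 30000 train_network.py
```

Overriding --main_process_port matters when the default port is already in use, e.g. when running two multi-GPU trainings on the same machine.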
Furkan Gözükara SECourses
You can set up SwarmUI model files like this: put them into the diffusion_models folder.
Furkan Gözükara SECourses
'D:/Kohya-SS-GUI/kohya_ss' is on a file system that does not record ownership. To add an exception for this directory, call:
git config --global --add safe.directory D:/Kohya-SS-GUI/kohya_ss
Guys, read this to learn a lot of important stuff about SwarmUI masking: https://github.com/mcmonkeyprojects/SwarmUI/issues/292
Furkan Gözükara SECourses
Currently, --highvram only affects caching of latents, and --lowvram only affects model loading. Training speed remains unchanged. - Kohya
Triton for Windows: https://github.com/jakaline-dev/Triton_win/releases/tag/3.0.0
Furkan Gözükara SECourses
python -m xformers.info
