this post was submitted on 19 Aug 2024

StableDiffusion

This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/terminusresearchorg on 2024-08-18 23:48:55+00:00.


Thanks to one of the members of my research group, jimmycarter, SimpleTuner now supports actual attention masking on Flux for full fine-tuning and LoRAs.

More work has also gone into making the trainer easier to use: for example, how many images you have and what batch size you pick now matter far less.

The attention-masking issue was mentioned a while ago in a post I made here, lamenting that BFL shipped Flux this way and that AuraFlow had the same problem.

No results to share back yet, but initial testing shows that this makes the loss look better during training, seems to speed up convergence, and reduces the bias toward your captions' length.
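For anyone unfamiliar with the technique: attention masking here means the padding tokens appended to short captions are excluded from attention, so the model stops attending to them. A minimal PyTorch sketch of the idea — all function names, shapes, and the toy data below are my own assumptions for illustration, not SimpleTuner's actual code:

```python
import torch
import torch.nn.functional as F

def masked_attention(q, k, v, pad_mask):
    """Attention that ignores padding tokens.

    q, k, v:   (batch, heads, seq, dim)
    pad_mask:  (batch, seq) bool — True for real tokens, False for padding.
    """
    # Broadcast to (batch, 1, 1, seq) so the same mask applies to
    # every head and every query position.
    attn_mask = pad_mask[:, None, None, :]
    # A boolean attn_mask with False entries masks those keys out of
    # the softmax, so padded caption tokens contribute nothing.
    return F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)

# Toy example: 1 sample, 2 heads, 4 tokens where the last 2 are padding.
q = torch.randn(1, 2, 4, 8)
k = torch.randn(1, 2, 4, 8)
v = torch.randn(1, 2, 4, 8)
pad_mask = torch.tensor([[True, True, False, False]])

out = masked_attention(q, k, v, pad_mask)
print(out.shape)  # torch.Size([1, 2, 4, 8])
```

Without the mask, every query position mixes in the padding tokens' values, which is why caption length can bias the result; with it, only the real tokens matter.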

More details here:
