StableDiffusion


/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and...

226
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/ericreator on 2024-10-02 19:41:18+00:00.

227
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Total-Resort-3120 on 2024-10-02 23:52:42+00:00.

228
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Total-Resort-3120 on 2024-10-02 22:30:48+00:00.

229
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/ExpressWarthog8505 on 2024-10-02 19:45:34+00:00.

230
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/jenza1 on 2024-10-02 17:05:54+00:00.

231
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Cute_Ride_9911 on 2024-10-02 16:59:30+00:00.

232
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Devajyoti1231 on 2024-10-02 16:35:15+00:00.

233
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/CeFurkan on 2024-10-02 14:26:37+00:00.


The text below is quoted from: https://huggingface.co/ostris/OpenFLUX.1

Beta Version v0.1.0

After numerous iterations and spending way too much of my own money on compute to train this, I think it is finally at the point I am happy to consider it a beta. I am still going to continue to train it, but the distillation has been mostly trained out of it at this point. So phase 1 is complete. Feel free to use it and fine tune it, but be aware that I will likely continue to update it.

What is this?

This is a fine-tune of the FLUX.1-schnell model that has had the distillation trained out of it. Flux Schnell is licensed Apache 2.0, but it is a distilled model, meaning you cannot fine-tune it. However, it is an amazing model that can generate images in 1-4 steps. This is an attempt to remove the distillation and create an open-source, permissively licensed model that can be fine-tuned.

How to Use

Since the distillation has been fine-tuned out of the model, it uses classic CFG, which means it requires a different pipeline than the original FLUX.1 schnell and dev models. This pipeline can be found in open_flux_pipeline.py in this repo. I will be adding example code in the next few days, but for now, a CFG of 3.5 seems to work well; a rough sketch follows below.
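Until that example code lands, inference might look roughly like the following, assuming the pipeline in open_flux_pipeline.py follows the usual diffusers interface. The class name OpenFluxPipeline and the step count are assumptions, not the repo's documented API:

    # A minimal sketch, assuming open_flux_pipeline.py provides a
    # diffusers-style pipeline class (OpenFluxPipeline is a hypothetical name).
    import torch
    from open_flux_pipeline import OpenFluxPipeline  # hypothetical class name

    pipe = OpenFluxPipeline.from_pretrained(
        "ostris/OpenFLUX.1", torch_dtype=torch.bfloat16
    ).to("cuda")

    image = pipe(
        prompt="a red fox standing in fresh snow",
        guidance_scale=3.5,      # classic CFG; 3.5 is the value the author suggests
        num_inference_steps=20,  # assumption: more steps than schnell's 1-4, since distillation is removed
    ).images[0]
    image.save("openflux_sample.png")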

234
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/ampp_dizzle on 2024-10-02 14:24:00+00:00.

235
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/ThinkDiffusion on 2024-10-02 13:23:01+00:00.

236
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/ItsCreaa on 2024-10-02 11:16:43+00:00.

237
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/ZyloO_AI on 2024-10-02 15:09:24+00:00.

238
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/a_beautiful_rhind on 2024-10-02 12:25:37+00:00.

239
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Viking_Baboon on 2024-10-02 11:13:31+00:00.

240
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Devajyoti1231 on 2024-10-02 06:02:13+00:00.

241
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/ninjasaid13 on 2024-10-02 05:15:57+00:00.

242
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/xavier047 on 2024-10-02 03:17:37+00:00.

243
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Total-Resort-3120 on 2024-10-01 23:56:31+00:00.

244
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/terminusresearchorg on 2024-10-01 21:02:31+00:00.


Performance

  • Improved launch speed for large datasets (>1M samples)
  • Improved speed for quantising on CPU
  • Optional support for directly quantising on GPU near-instantly (--quantize_via)

Compatibility

  • SDXL, SD1.5 and SD2.x compatibility with LyCORIS training
  • Updated documentation to make multi-GPU configuration a bit more obvious.
  • Improved support for torch.compile(), including automatically disabling it when e.g. fp8-quanto is enabled
    • Enable via accelerate config, or set TRAINER_DYNAMO_BACKEND=inductor in config/config.env
  • TorchAO for quantisation as an alternative to Optimum Quanto for int8 weight-only quantisation (int8-torchao); see the sketch after this list
  • f8uz-quanto, a compatibility level for AMD ROCm users to experiment with FP8 training dynamics
  • Support for multi-GPU PEFT LoRA training with Quanto enabled (not fp8-quanto)
    • Previously, only LyCORIS would reliably work with quantised multi-GPU training sessions.
  • Ability to quantise models when full-finetuning, without warning or error. Previously, this configuration was blocked. Your mileage may vary; this is an experimental configuration.
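For reference, here is roughly what those two options correspond to in plain PyTorch/TorchAO terms. This is not SimpleTuner's internal code; the model is a stand-in, and only the library calls themselves (torchao's quantize_ / int8_weight_only and torch.compile with the inductor backend) are real APIs:

    # Plain PyTorch/TorchAO illustration of int8 weight-only quantisation plus
    # torch.compile() with the inductor backend. The model is a stand-in, not
    # the trainer's own code.
    import torch
    from torchao.quantization import quantize_, int8_weight_only

    model = torch.nn.Sequential(
        torch.nn.Linear(4096, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 4096),
    ).to("cuda", torch.bfloat16)

    quantize_(model, int8_weight_only())              # roughly what int8-torchao selects
    model = torch.compile(model, backend="inductor")  # what TRAINER_DYNAMO_BACKEND=inductor toggles

    x = torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16)
    y = model(x)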

Integrations

  • Images now get logged to tensorboard (thanks u/anhi)
  • FastAPI endpoints for integrations (undocumented)
  • "raw" webhook type that sends a large number of HTTP requests containing events, useful for push notification type service

Optims

  • SOAP optimiser support
    • Uses fp32 gradients; accurate, but more memory-hungry than other optims. By default it slows down every 10 steps as it preconditions.
  • New 8-bit and 4-bit optimiser options from TorchAO (ao-adamw8bit, ao-adamw4bit etc.); a sketch follows below
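A quick illustration of using one of those TorchAO optimisers directly. The import path is an assumption and has moved between torchao versions (newer releases expose it as torchao.optim.AdamW8bit):

    # Using TorchAO's 8-bit AdamW directly, analogous to what ao-adamw8bit
    # selects. The import path is an assumption; it has changed between
    # torchao versions.
    import torch
    from torchao.prototype.low_bit_optim import AdamW8bit

    model = torch.nn.Linear(1024, 1024).cuda()
    optim = AdamW8bit(model.parameters(), lr=1e-4)

    loss = model(torch.randn(4, 1024, device="cuda")).pow(2).mean()
    loss.backward()
    optim.step()
    optim.zero_grad()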

Schnell

Recently we discovered that a LyCORIS LoKr trained on Flux.1 Dev works perfectly fine on Flux.1 Schnell at just 4 steps, and that the problems with transferring adapters over are specific to LoRA.

No special training is needed, other than simply training on Dev instead of Schnell; a rough inference sketch follows below.
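A hedged sketch of that inference path, using diffusers plus the lycoris-lora package. The adapter filename is a placeholder, and the loader call follows the lycoris project's published examples rather than anything specific to this release:

    # Merging a LoKr trained on Dev into the Schnell pipeline and sampling at
    # 4 steps. The adapter filename is a placeholder.
    import torch
    from diffusers import FluxPipeline
    from lycoris import create_lycoris_from_weights

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    ).to("cuda")

    # merge the Dev-trained LoKr weights into Schnell's transformer
    wrapper, _ = create_lycoris_from_weights(
        1.0, "my_dev_lokr.safetensors", pipe.transformer
    )
    wrapper.merge_to()

    image = pipe(
        "a watercolor painting of a fox",
        num_inference_steps=4,  # Schnell's usual step count
        guidance_scale=0.0,     # Schnell is guidance-distilled; no CFG
    ).images[0]
    image.save("schnell_lokr.png")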

The release:

The quickstart:

Some docs have been updated for v1.1, mostly OPTIONS.md and the FLUX quickstart.

245
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/an303042 on 2024-10-01 14:31:17+00:00.

246
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/uisato on 2024-10-01 16:34:24+00:00.

247
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Corinstit on 2024-10-01 16:11:44+00:00.

248
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/missing-in-idleness on 2024-10-01 12:42:17+00:00.

249
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/3deal on 2024-10-01 11:58:55+00:00.

250
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/FortranUA on 2024-10-01 11:58:52+00:00.
