This is an automated archive made by the Lemmit Bot.
The original was posted on /r/stablediffusion by /u/danamir_ on 2024-08-21 13:58:13+00:00.
Following the various tests on CFG & FLUX (like this one for example), I was wondering if I could use the same trick as in SDXL: switching settings mid-rendering in ComfyUI by passing the latent between two SamplerCustomAdvanced nodes.
The answer is a resounding yes. You can set the CFG to any value you want, limiting it to the first few steps to reap the benefit of greater prompt adherence (and optionally use the negative prompt, to some extent), and only pay the doubled rendering time for those few steps.
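Conceptually, the split amounts to a per-step CFG schedule: classifier-free guidance (which needs two model evaluations per step) runs only for the first few steps, then sampling continues with CFG 1 (one evaluation per step). Here is a minimal toy sketch of that idea; the denoiser, step counts, and update rule are placeholders, not the actual FLUX model or ComfyUI samplers:

```python
import numpy as np

def toy_denoiser(x, cond):
    # Placeholder for the real model: nudges x toward the conditioning vector.
    return (cond - x) * 0.1

def sample(x, cond, uncond, total_steps=18, cfg_steps=4, cfg=2.0):
    """Run `cfg_steps` steps with classifier-free guidance, then the
    remaining steps with CFG 1 (conditional prediction only)."""
    for step in range(total_steps):
        eps_cond = toy_denoiser(x, cond)
        if step < cfg_steps:
            # CFG > 1 needs a second, unconditional forward pass,
            # roughly doubling the per-step cost for these steps.
            eps_uncond = toy_denoiser(x, uncond)
            eps = eps_uncond + cfg * (eps_cond - eps_uncond)
        else:
            eps = eps_cond  # CFG 1: a single forward pass per step
        x = x + eps
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
cond = np.ones(4)
uncond = np.zeros(4)
out = sample(x, cond, uncond)
```

In ComfyUI terms, the two branches of the loop correspond to the two chained SamplerCustomAdvanced nodes: the first runs steps 0 to 3 at the chosen CFG, the second takes the resulting latent and finishes the schedule at CFG 1.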
single rendering vs. split rendering
Single rendering, CFG 1
Split rendering at 4 steps, CFG 2
The increased CFG adds details, but depending on the prompt the result can look overly contrasted. This can be somewhat balanced by lowering the Guidance. You can push the CFG much higher; 4 and 5 can give interesting results.
The single rendering (72s):
100%|█████| 18/18 [01:12<00:00, 4.02s/it]
Versus the split rendering (88s):
100%|█████| 4/4 [00:32<00:00, 8.04s/it]
100%|█████| 14/14 [00:56<00:00, 4.04s/it]
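The timings are consistent with CFG > 1 roughly doubling the per-step cost (two model evaluations instead of one). A quick back-of-the-envelope check from the per-iteration times logged above:

```python
# Per-step times taken from the logs above (seconds per iteration).
base_it = 4.02   # CFG 1: one model evaluation per step
cfg_it = 8.04    # CFG 2: two evaluations, hence ~2x the per-step time

single = 18 * base_it              # 18 plain steps
split = 4 * cfg_it + 14 * 4.04     # 4 CFG steps + 14 plain steps

print(f"single: {single:.1f}s, split: {split:.1f}s")
```

This lands at roughly 72s versus 89s, matching the logged totals: the split only costs the extra evaluation on the 4 guided steps, not across the whole schedule.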
Here is the full workflow: Danamir Flux v14.json
[edit]: A simplified version of the workflow, with two outputs to compare the two rendering methods: Flux Danamir Split v15.json
[edit2]: It has been brought to my attention that it may be possible to do just that without the need for a full set of nodes; to be tested.
Note that this workflow was used to test many things, so you'll also find in it: every checkpoint, UNet, and CLIP loader (including GGUF & NF4), an upscale pass (optionally tiled), a second pass with an SDXL model at base resolution or upscaled, and detailers for both FLUX and SDXL, supporting any detector.