I'd heard it was harder, but that may only apply to the base model. It's certainly been much easier so far when using SDXL Unstable Diffusers ☛ YamerMIX! It doesn't work with tools like ControlNet, though, so it's limited to prompting and img2img. It might work for some limited use cases in Deforum that don't require ControlNet, like zooming with high strength. It's hard to get good videos in Deforum without zooming, though. (I'm trying to make videos with Deforum now.)
Probably about as effective given how little their armor seems to protect them. At least it would distract the rebels!
They look great! Very upset about being cyborgs though...
If you can find the key that goes in her navel keyhole!
Seems to have reverted again?
Thanks for all your hard work!
Rendering at larger sizes can also make it more likely SD will generate multiple subjects. You can use the Regional Prompter or Latent Couple extensions for stable-diffusion-webui to control the subjects separately.
You can also generate subjects separately using txt2img, paste them into a single scene, then do a pass with img2img to make the image look seamless. That is about the hardest approach, though. ControlNet with OpenPose plus Regional Prompter is my go-to solution.
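The compositing step of that approach can be sketched with Pillow. This is just an illustration: the sizes, colors, and the suggested denoising strength are placeholders, and the two solid-color images stand in for subjects you'd actually generate with txt2img.

```python
from PIL import Image

# Placeholders for two subjects generated separately with txt2img.
subject_a = Image.new("RGB", (512, 768), "lightblue")
subject_b = Image.new("RGB", (512, 768), "lightgreen")

# Paste both subjects side by side into a single scene.
scene = Image.new("RGB", (1024, 768), "gray")
scene.paste(subject_a, (0, 0))
scene.paste(subject_b, (512, 0))

# You'd then feed this scene to img2img at a moderate denoising
# strength (roughly 0.4-0.6 in stable-diffusion-webui, in my
# experience) so SD blends the seams into one coherent image.
```

In practice you'd paste cut-out subjects with rough placement rather than full rectangles; the img2img pass is what hides the seams.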
This is why "keeping it centralized" is not really practical; people who make posts that get downvoted are going to find different communities. People have different preferences, which is the whole point of a community-oriented system like Lemmy.
She's beautiful!
Maybe I should just wait the 3 days until ControlNet support is released!