StableDiffusion


/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and...

726
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Zugzwangier on 2024-08-31 23:02:36+00:00.


Be it AI images or GPT/Llama stuff or otherwise.

I'm no expert here, but it strikes me as a pretty straightforward thing to set up.

  • There are all of these open source codebases to build on.
  • The organizers can be disillusioned tech guys from the usual suspects, and they'd get to pay themselves healthy salaries while making their resumes look great.
  • The development process can (unlike e.g. video game development) be largely very transparent, so the risks of mismanagement and theft could be minimized. (I'm thinking probably the only thing that needs to be kept under wraps is the exact composition of training data, to avoid misguided IP lawsuits.)
  • The GPU time ain't super cheap but based on estimates I've seen, it's certainly achievable with crowdfunding.
  • Appealing to whale donors--i.e. the most important type of donor--seems very easy: large donations allow you to have some say over some key concepts and training data (within reason). Consider for a moment the fact that many people dropped four or even five figures for exclusive Star Citizen DLC...

Or, if not a full, ground-up training, at least some major high-quality checkpoints or finetunes.

I mean, the demand is there. The fact that Flux is almost universally adored, despite its current limitations, surely speaks to the hunger for better products.

There will always be a lot of controversy about safety/bad press/etc. but that's gonna happen anyway--see the bill that California may be set to pass soon.

It's clear the anti-AI backlash is coming regardless of how carefully people tiptoe here.

...and I'm really starting to wonder if the best defense against this is to get the highest quality possible AI tools developed and out there in the wild ASAP. The best defense here might just be to reach the "nah, it's too late" point sooner rather than later.

If we wait too long, significant legal hurdles could easily pop up that would make this difficult or impossible.

This would be sort of a... "Unpause AI" crowdfunding project, if you will.

(For the record: I do have my own serious misgivings about AI's long term effects on society... but I think allowing corporations and governments to dictate the AI narrative and ecosystem is bound to be worse.)

727
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/CrasHthe2nd on 2024-08-31 22:50:20+00:00.

728
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/CrisMaldonado on 2024-08-31 20:50:33+00:00.

729
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Major_Specific_23 on 2024-08-31 20:39:01+00:00.

730
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/tubbymeatball on 2024-08-31 20:37:55+00:00.

731
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/NES64Super on 2024-08-31 18:56:40+00:00.

732
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Abject-Recognition-9 on 2024-08-31 18:48:55+00:00.

733
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/MY7AH7 on 2024-08-31 16:46:06+00:00.

734
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/vmandic on 2024-08-31 16:22:18+00:00.


SD.Next Release 08/31/2024

Summer break is over and we are back with a massive update!

Support for all of the new models (see the changelog linked below for the full list).

What else? Just a bit... ;)

New fast-install mode, new Optimum Quanto and BitsAndBytes based quantization modes, new balanced offload mode that dynamically offloads GPU<->CPU as needed, and more...

And from previous service-pack: new ControlNet-Union all-in-one model, support for DoRA networks, additional VLM models, new AuraSR upscaler

Breaking Changes...

Due to internal changes, you'll need to reset your attention and offload settings!

But... for a good reason: the new balanced offload is magic when it comes to memory utilization, while sacrificing minimal performance!
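
For readers who want a feel for what the new quantization and offload modes do under the hood, here is a minimal, hypothetical sketch using the underlying libraries (diffusers and optimum-quanto) directly. SD.Next exposes equivalent options through its settings UI; this is not SD.Next's actual implementation, just the general idea:

    import torch
    from diffusers import FluxPipeline
    from optimum.quanto import freeze, qfloat8, quantize

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )

    # Quantize the transformer weights to 8-bit floats (the same idea as the
    # Optimum Quanto quantization mode) to roughly halve its memory footprint.
    quantize(pipe.transformer, weights=qfloat8)
    freeze(pipe.transformer)

    # Keep model components in CPU RAM and move them to the GPU only while
    # they are needed (the same general idea as offloading GPU<->CPU).
    pipe.enable_model_cpu_offload()

    image = pipe("a photo of a cat", num_inference_steps=20).images[0]
    image.save("cat.png")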

Best place to post questions is on our Discord server which now has over 2.5k active members!

For more details see: Changelog | ReadMe | Wiki | Discord

735
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/PixarCEO on 2024-08-31 15:29:23+00:00.

736
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/tom83_be on 2024-08-31 14:33:20+00:00.


Intro

There are a lot of requests on how to do LoRA training with Flux.1 dev. Since not everyone has 24 GB VRAM, interest in low-VRAM configurations is high. Hence, I searched for an easy and convenient but also completely free and local variant. The setup and usage of "ComfyUI Flux Trainer" seemed like a good match and allows training with 12 GB VRAM (I think even 10 GB and possibly even below). I am not the creator of these tools nor am I related to them in any way (see credits at the end of the post). Just thought a guide could be helpful.

Prerequisites

git and python (for me 3.11) are installed and available on your console

Steps (for those who know what they are doing)

  • install ComfyUI
  • install ComfyUI manager
  • install "ComfyUI Flux Trainer" via ComfyUI Manager
  • install protobuf via pip (not sure why it's needed; it was probably forgotten in the requirements.txt)
  • load the "flux_lora_train_example_01.json" workflow
  • install all missing dependencies via ComfyUI Manager
  • download and copy Flux.1 model files including CLIP, T5 and VAE to ComfyUI; use the fp8 versions for Flux.1-dev and the T5 encoder
  • use the nodes to train using:
    • 512x512
    • Adafactor
    • split_mode needs to be set to true (it basically splits the layers of the model, training a lower and upper part per step and offloading the other part to CPU RAM)
    • I got good results with network_dim = 64 and network_alpha = 64
    • fp8 base needs to stay true, and gradient_dtype and save_dtype need to stay at bf16 (at least I never changed them, although I used different settings for SDXL in the past)
  • I had to remove the "Flux Train Validate"-nodes and "Preview Image"-nodes since they ran into an error (annoyingly late in the process, when sample images were created): "!!! Exception during processing !!! torch.cat(): expected a non-empty list of Tensors". I was unable to find a fix
  • If you like, you can use the configuration provided at the very end of this post
  • you can also train using captions; just place txt-files with the same name as the corresponding image in the input folder (a small helper sketch follows this list)
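
A tiny helper sketch for the caption option above (plain Python; the folder path and trigger word match the examples used later in this guide, adjust them to your setup). It creates a caption txt file next to every image that doesn't have one yet:

    from pathlib import Path

    input_dir = Path("../ComfyUI_training/training/input")  # your dataset folder
    trigger = "loratrigger"                                  # example trigger word

    for img in input_dir.glob("*"):
        if img.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        caption = img.with_suffix(".txt")
        if not caption.exists():
            # Start each caption with the trigger word; refine by hand afterwards.
            caption.write_text(trigger + "\n")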

Observations

  • Speed on a 3060 is about 9.5 seconds/iteration, hence 3,000 steps (the proposed default here, which is ok for small datasets with about 10-20 pictures) take about 8 hours
  • you can get good results with 1,500-2,500 steps
  • VRAM stays well below 10GB
  • RAM consumption is/was quite high; 32 GB are barely enough if you have some other applications running; I limited usage to 28 GB, and it worked; hence, if you have 28 GB free, it should run; it looks like there have been some recent updates that are better optimized, but I have not tested that in detail yet
  • I was unable to run 1024x1024 or even 768x768 due to RAM constraints (will have to check with recent updates); the same goes for ranks higher than 128. My guess is that it will work on a 3060 / with 12 GB VRAM, but it will be slower
  • using split_mode reduces VRAM usage as described above at a loss of speed; since I have only PCIe 3.0 and PCIe 4.0 is double the speed, you will probably see better speeds if you have fast RAM and PCIe 4.0 using the same card; if you have more VRAM, try to set split_mode to false and see if it works; it should be a lot faster

Detailed steps (for Linux)

  • mkdir ComfyUI_training
  • cd ComfyUI_training/
  • mkdir training
  • mkdir training/input
  • mkdir training/output
  • git clone
  • cd ComfyUI/
  • python3.11 -m venv venv (depending on your installation it may also be python or python3 instead of python3.11)
  • source venv/bin/activate
  • pip install -r requirements.txt
  • pip install protobuf
  • cd custom_nodes/
  • cd ..
  • systemd-run --scope -p MemoryMax=28000M --user nice -n 19 python3 main.py --lowvram (you can also just run "python3 main.py", but using this command you limit memory usage and priority on the CPU)
  • open your browser and go to the ComfyUI address (by default ComfyUI listens on http://127.0.0.1:8188)
  • Click on "Manager" in the menu
  • go to "Custom Nodes Manager"
  • search for "ComfyUI Flux Trainer" (white spaces!) and install the package from Author "kijai" by clicking on "install"
  • click on the "restart" button and agree on rebooting so ComfyUI restarts
  • reload the browser page
  • click on "Load" in the menu
  • navigate to ../ComfyUI_training/ComfyUI/custom_nodes/ComfyUI-FluxTrainer/examples and select/open the file "flux_lora_train_example_01.json"

(you can also use the "workflow_adafactor_splitmode_dimalpha64_3000steps_low10GBVRAM.json" configuration I provided at the very end of this post)

  • you will get a Message saying "Warning: Missing Node Types"
  • go to Manager and click "Install Missing Custom Nodes"
  • install the missing packages just like you did for "ComfyUI Flux Trainer" by clicking on the respective "install"-buttons; at the time of writing this, it was two packages ("rgthree's ComfyUI Nodes" by "rgthree" and "KJNodes for ComfyUI" by "kijai")
  • click on the "restart" button and agree on rebooting so ComfyUI restarts
  • reload the browser page
  • download "flux1-dev-fp8.safetensors" from and put it into ".../ComfyUI_training/ComfyUI/models/unet/
  • download "t5xxl_fp8_e4m3fn.safetensors" from and put it into ".../ComfyUI_training/ComfyUI/models/clip/"
  • download "clip_l.safetensors" from and put it into ".../ComfyUI_training/ComfyUI/models/clip/"
  • download "ae.safetensors" from and put it into ".../ComfyUI_training/ComfyUI/models/vae/"
  • reload the browser page (ComfyUI)

if you used the "workflow_adafactor_splitmode_dimalpha64_3000steps_low10GBVRAM.json" I provided, you can skip straight to the "Queue Prompt" step below once you have put your images into the correct folder; here we use the "../ComfyUI_training/training/input/" folder created above

  • find the "FluxTrain ModelSelect"-node and select:

=> flux1-dev-fp8.safetensors for "transformer"

=> ae.safetensors for vae

=> clip_l.safetensors for clip_c

=> t5xxl_fp8_e4m3fn.safetensors for t5

  • find the "Init Flux LoRA Training"-node and select:

=> true for split_mode (this is the crucial setting for low VRAM / 12 GB VRAM)

=> 64 for network_dim

=> 64 for network_alpha

=> define an output path for your LoRA by putting it into outputDir; here we use "../training/output/"

=> define a prompt for sample images in the text box for sample prompts (by default it says something like "cute anime girl blonde..."; this will only be relevant if that works for you; see below)

  • find the "Optimizer Config Adafactor"-node and connect the "optimizer_settings" output with the "optimizer_settings" of the "Init Flux LoRA Training"-node
  • find the three "TrainDataSetAdd"-nodes and remove the two ones with 768 and 1024 for width/height by clicking on their title and pressing the remove/DEL key on your keyboard
  • add the path to your dataset (a folder with the images you want to train on) in the remaining "TrainDataSetAdd"-node (by default it says "../datasets/akihiko_yoshida_no_caps"; if you specify an empty folder you will get an error!); here we use "../training/input/"
  • define a triggerword for your LoRA in the "TrainDataSetAdd"-node; for example "loratrigger" (by default it says "akihikoyoshida")
  • remove all "Flux Train Validate"-nodes and "Preview Image"-nodes (if present I get an error later in training)
  • click on "Queue Prompt"
  • once training finishes, your output is in ../ComfyUI_training/training/output/ (4 files for 4 stages with different steps)
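
If you want to sanity-check a resulting LoRA outside of ComfyUI, a minimal sketch with diffusers looks roughly like this (the LoRA filename is hypothetical - use one of the four output files; depending on your diffusers version it may not load the trainer's LoRA format directly and you may need to convert it first):

    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # keeps VRAM usage manageable on smaller cards

    # Load the LoRA produced by the training run (example filename).
    pipe.load_lora_weights("../training/output/my_lora.safetensors")

    image = pipe("photo of loratrigger person, outdoors",
                 num_inference_steps=25).images[0]
    image.save("lora_test.png")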

All credits go to the creators of the tools used here (ComfyUI and the ComfyUI Flux Trainer nodes by kijai).

===== save as workflow_adafactor_splitmode_dimalpha64_3000steps_low10GBVRAM.json =====

https://pastebin.com/CjDyMBHh

737
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/hardmaru on 2024-08-31 13:08:44+00:00.


See Clem's post:

SD 1.5 is by no means a state-of-the-art model, but given that it is the one with arguably the largest collection of derivative fine-tuned models and the broadest tool set developed around it, it is a bit sad to see.

738
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Raphael_in_flesh on 2024-08-31 08:57:04+00:00.

739
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/aziib on 2024-08-31 08:27:58+00:00.

740
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/x0rchid on 2024-08-31 08:24:53+00:00.

741
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/cgpixel23 on 2024-08-31 08:20:26+00:00.

742
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/YentaMagenta on 2024-08-31 03:19:09+00:00.


I'm not including a TLDR because the title of the post is essentially the TLDR, but the first 2-3 paragraphs and the call to action to contact Governor Newsom are the most important if you want to save time.

While everyone tears their hair out about SB 1047, another California bill, AB 3211 has been quietly making its way through the CA legislature and seems poised to pass. This bill would have a much bigger impact since it would render illegal in California any AI image generation system, service, model, or model hosting site that does not incorporate near-impossibly robust AI watermarking systems into all of the models/services it offers. The bill would require such watermarking systems to embed very specific, invisible, and hard-to-remove metadata that identify images as AI-generated and provide additional information about how, when, and by what service the image was generated.

As I'm sure many of you understand, this requirement may not even be technologically feasible. Making an image file (or any digital file for that matter) from which appended or embedded metadata can't be removed is nigh impossible—as we saw with failed DRM schemes. Indeed, the requirements of this bill could likely be defeated at present with a simple screenshot. And even if truly unbeatable watermarks could be devised, that would likely be well beyond the ability of most model creators, especially open-source developers. The bill would also require all model creators/providers to conduct extensive adversarial testing and to develop and make public tools for the detection of the content generated by their models or systems. Although other sections of the bill are delayed until 2026, it appears all of these primary provisions may become effective immediately upon codification.
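
To make the point about embedded metadata concrete, here is a minimal sketch (filenames are hypothetical) showing that simply re-encoding an image with Pillow produces a file without the original text chunks or EXIF data unless they are deliberately copied over:

    from PIL import Image

    img = Image.open("ai_generated_with_provenance.png")
    print(img.info)  # any text chunks / provenance metadata loaded from the file

    # Re-saving the pixel data without explicitly passing the metadata along
    # writes a new file that no longer carries the embedded information.
    img.save("re_encoded.png")
    print(Image.open("re_encoded.png").info)  # typically empty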

If I read the bill right, essentially every existing Stable Diffusion model, fine tune, and LoRA would be rendered illegal in California. And sites like CivitAI, HuggingFace, etc. would be obliged to either filter content for California residents or block access to California residents entirely. (Given the expense and liabilities of filtering, we all know what option they would likely pick.) There do not appear to be any escape clauses for technological feasibility when it comes to the watermarking requirements. Given that the highly specific and infallible technologies demanded by the bill do not yet exist and may never exist (especially for open source), this bill is (at least for now) an effective blanket ban on AI image generation in California. I have to imagine lawsuits will result.

Microsoft, OpenAI, and Adobe are all now supporting this measure. This is almost certainly because it will mean that essentially no open-source image generation model or service will ever be able to meet the technological requirements and thus compete with them. This also probably means the end of any sort of open-source AI image model development within California, and maybe even by any company that wants to do business in California. This bill therefore represents probably the single greatest threat of regulatory capture we've yet seen with respect to AI technology. It's not clear that the bill's author (or anyone else who may have amended it) really has the technical expertise to understand how impossible and overreaching it is. If they do have such expertise, then it seems they designed the bill to be a stealth blanket ban.

Additionally, this legislation would ban the sale of any new still or video cameras that do not incorporate image authentication systems. This may not seem so bad, since it would not come into effect for a couple of years and apply only to "newly manufactured" devices. But the definition of "newly manufactured" is ambiguous, meaning that people who want to save money by buying older models that were nonetheless fabricated after the law went into effect may be unable to purchase such devices in California. Because phones are also recording devices, this could severely limit what phones Californians could legally purchase.

The bill would also set strict requirements for any large online social media platform that has 2 million or greater users in California to examine metadata to adjudicate what images are AI, and for those platforms to prominently label them as such. Any images that could not be confirmed to be non-AI would be required to be labeled as having unknown provenance. Given California's somewhat broad definition of social media platform, this could apply to anything from Facebook and Reddit, to WordPress or other websites and services with active comment sections. This would be a technological and free speech nightmare.

Having already preliminarily passed unanimously through the California Assembly with a vote of 62-0 (out of 80 members), it seems likely this bill will go on to pass the California State Senate in some form. It remains to be seen whether Governor Newsom would sign this draconian, invasive, and potentially destructive legislation. It's also hard to see how this bill would pass Constitutional muster, since it seems to be overbroad, technically infeasible, and to represent both an abrogation of 1st Amendment rights and a form of compelled speech. It's surprising that neither the EFF nor the ACLU appear to have weighed in on this bill, at least as of a CA Senate Judiciary Committee analysis from June 2024.

I don't have time to write up a form letter for folks right now, but I encourage all of you to contact Governor Newsom to let him know how you feel about this bill. Also, if anyone has connections to EFF or ACLU, I bet they would be interested in hearing from you and learning more.

743
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/piggledy on 2024-08-30 22:24:19+00:00.

744
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/terminusresearchorg on 2024-08-30 21:30:30+00:00.


I'm sure it's all a big mystery how Fal is managing to train a 100 step LoRA in 5 minutes.

They've given precious few hints on Twitter so far:

  • Using H100 for training
  • Uses the GPUs "more efficiently", "among other changes"
  • "Totally new training process"

We considered a few different potential approaches, but the one that felt the most eerily similar is HyperDreambooth.

When reading the HyperDreambooth paper, you discover the main improvement they propose is a hypernetwork that predicts better starting weights for a rank-1 LoRA. But this has some other limitations, including the need to train a hypernetwork with its own issues.

Simply guessing isn't really enough for me, so I went ahead and contacted someone who had trained a model on their API. Upon viewing the weights, it looks like a normal LoRA; there is no metadata of the training config saved into the safetensors. That's an obvious place to look, so it's not there anymore.
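
For anyone who wants to replicate that kind of inspection, here is a minimal sketch using the safetensors library (the filename is hypothetical) that prints whatever training metadata and tensor shapes a LoRA file contains:

    from safetensors import safe_open

    with safe_open("downloaded_lora.safetensors", framework="pt") as f:
        print(f.metadata())             # training-config metadata, if any (empty in this case)
        for key in list(f.keys())[:5]:  # peek at a few tensor names and shapes
            print(key, f.get_slice(key).get_shape())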

However, there's a notorious issue with Fal's API where error messages manage to reveal more information than the successful outputs will tell you. I crafted a special image for training that passes the image loading but then does not process correctly, leading to an error message that contained a link:

[removed for privacy reasons]

It's a 172M, rank 16 LoRA in the Diffusers state_dict format. But there was more...

At the bottom is the downloaded LoRA, and just above that you see the LoRA config and .... what the hell? A folder of images?!

[removed for privacy reasons]

The filenames give it away - these are just a random person from Facebook.

Here's the LoRA config:

{
    "images_data_url": "[removed for privacy reasons]",
    "trigger_word": "women danish 36y",
    "disable_captions": false,
    "disable_segmentation_and_captioning": false,
    "resolution_buckets": "512",
    "iter_multiplier": 1.0,
    "is_style": false,
    "is_input_format_already_preprocessed": false,
    "instance_prompt": "women danish 36y"
}

That's super interesting.

  • There is an option to set resolution_buckets which is 512px here for some reason
  • The trigger phrase is women danish 36y
  • My requested LoRA is then continued from this

Who the heck is she?

a real person!

So, we did an experiment (we being JimmyCarter / uptightmoose, from my research group):

  • retrieve 4 images of a subject that Flux doesn't know (Dr. Seuss)
  • rent an A100-80G for $1/hr
  • train on a very high batch size (16) with the 4 images over 100 steps

The result:

a fully generalised Dr. Seuss

We didn't even use starting LoRA weights. Not sure why they do that - it's entirely unnecessary. Too bad they didn't explain this fast LoRA training process, leaving the rest of us to reverse engineer the process and explain it to others.

The optimiser states and random states don't even seem to be reused from the initial LoRA - it's literally just using it as starting weights.

Cost comparison

Fal's training costs $2 for 2 minutes of runtime and produces LoRAs inferior to a full hour of training with a proper set of hyperparameters. At that rate their compute would cost roughly $60/hr, compared to renting your own H100 for $3.99/hr. Heck - you can rent an MI300X for $3.99 on RunPod and get 192G VRAM.
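
The arithmetic behind that estimate, as a quick sketch (numbers are the ones quoted above):

    fal_cost_usd = 2.00          # price per training job
    fal_runtime_min = 2.0        # runtime per job in minutes
    fal_rate_per_hour = fal_cost_usd / fal_runtime_min * 60   # -> 60.0 USD/hr

    h100_rental_per_hour = 3.99  # renting your own H100
    print(fal_rate_per_hour / h100_rental_per_hour)           # ~15x the price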

You can achieve similar results locally by using segmented training masks and high batch size.

Hope this helps others and makes you worry less that you're missing out on some great new advancement.

745
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Lishtenbird on 2024-08-30 20:35:42+00:00.


Some rule updates have just been published, and in short, they appear to focus on making this a more productive place, with a stronger focus on open-source technology.

Personally, I believe that there is still a category of posts that already were of debatable value in this community, and would fit even less under the new rules: the "No Workflow" posts - often also hiding behind the indiscriminate "Animation" flair. (I raised this in the comments to the announcement, but was advised to make it a separate post instead.)

Yes - a "workflow" can have a very definite technical meaning (as in, a reusable ComfyUI workflow file), but I'd rather treat this term in a more broad sense. The idea of communicating "how to get this" itself is more important in the context.

So, let's broadly categorize (applicable - not news, not discussion...) posts by the amount of "workflow" they have, how useful they are to the community, and how much it asks from the poster.

A post with a complete, downloadable ComfyUI workflow.

  • Having this is great; it's immediately actionable for others...
  • But it's also very restrictive and cumbersome. What if it wasn't Comfy? What if it was a multi-step process? What if it included manual work in other applications? What if it's an older work, and exact details were lost? This does sound excessive.

A post mentioning the general steps and prompts and models/LoRAs/special processing used.

  • This is still very useful to anyone who would like to build upon this - so, in the collaborative spirit of open source.
  • This is permissive enough to not be a hassle: a single sentence can be enough to describe what's being done, and it can be useful even without an exact prompt ("Using this model with this LoRA for this character in this setting"). Edge cases shouldn't be much of a problem as long as the post is made in good faith ("I'm still training this LoRA for this model, any feedback?"). A requirement like this is not uncommon in hardware communities: everyone knows people will ask for specs anyway, so why not save everyone time and require sharing them in the first place? And as a side effect, that displays a minimum level of dedication from the side of the poster.

A post with no details whatsoever - just pictures (or a single picture) with an artistic title.

  • This is content that the poster decided was worth sharing here instead of Civit, or Discord, or any other generative art community around. What is the value of having such a post specifically here, instead of all those other places, where it can be found in hundreds or thousands? So, an average post like that has very low usefulness - often even lower than one posted under a specific model or LoRA elsewhere.
  • A good post like that still wastes time for everyone going on a wild goose chase trying to guess the information that the poster already had, but didn't provide. Often these posts will never get communication from the poster anyway because it's considered an artistic secret (or was cross-posted to Midjourney), or because it's only shared here because it'll get more attention (eventually - to their linked Instagram, or their online service, or Patreon...) than in other places. Often these posts are also made by users with little activity, they may be really skirting the rules, and are heavily mislabeled - which suggests that those posters are not very interested in the subreddit in the first place. Is that really collaborative, and contributing back to the community that made creating that content possible in the first place?

I think there is a happy middle ground between demanding everyone to share everything (a complete "workflow"), and allowing absolutely anything that formally fits a description. And I think since the rules are being updated and the community, seemingly, refocused - maybe it's time to shift that needle, too, towards a more useful-for-everyone level.

TL;DR: I believe that all "work" posts should be required to provide a baseline minimum of information about how they were created; otherwise they don't belong in this community and should be shared elsewhere.

746
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Yacben on 2024-08-30 18:14:23+00:00.

747
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/R34vspec on 2024-08-30 18:11:06+00:00.

748
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/toidicodedao on 2024-08-30 17:08:31+00:00.

749
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Pretend_Potential on 2024-08-30 16:47:23+00:00.


Hi everyone! I'm happy to be part of the new moderation team within this dynamic community. Huge thanks to u/mcmonkey4eva and u/SandCheezy for their amazing work so far. The new mod team is here to support them in keeping this space safe, welcoming, and enjoyable for everyone.

We've updated the rules based on community feedback to clarify and expand on existing guidelines. Our goal is to maintain a neutral stance while ensuring the rules are clear for all. If you are unsure about the content you want to post, feel free to message us.

As volunteers, we are here to help maintain the standards of the subreddit. We rely on your support to keep the space positive and inclusive for everyone. So, please remember to report posts that break any rules. If you have questions, please post them in the comments below. Feel free to message me or the modteam for any other help you may require.

With that said, the rules for this subreddit are:

All posts must be Open-source/Local AI image generation related

Posts should be related to open-source and/or Local AI image generation only. These include Stable Diffusion and other platforms like Flux, AuraFlow, PixArt, etc. Comparisons and discussions across different platforms are encouraged.

Be respectful and follow Reddit's Content Policy.

This Subreddit is a place for respectful discussion. Please remember to treat others with kindness and follow Reddit's Content Policy.

No X-rated, lewd, or sexually suggestive content 

This is a public subreddit and there are more appropriate places for this type of content such as r/unstable_diffusion. Please do not use Reddit’s NSFW tag to try and skirt this rule

No excessive violence, gore or graphic content

Content with mild creepiness or eeriness is acceptable (think Tim Burton), but it must remain suitable for a public audience. Avoid gratuitous violence, gore, or overly graphic material. Ensure the focus remains on creativity without crossing into shock and/or horror territory.

No repost or spam

Do not make multiple similar posts, or post things others have already posted. We want to encourage original content and discussion on this Subreddit, so please make sure to do a quick search before posting something that may have already been covered.

Limited self-promotion

Open-source, free, or local tools can be promoted at any time (once per tool/guide/update). Paid services or paywalled content can only be shared during our monthly event. (There will be a separate post explaining how this works shortly.)

No politics

General political discussions, images of political figures, or propaganda is not allowed. Posts regarding legislation and/or policies related to AI image generation are allowed as long as they do not break any other rules of this subreddit.

No insulting, name-calling, or antagonizing behavior

Always interact with other members respectfully. Insulting, name-calling, hate speech, discrimination, threatening content and disrespect towards each other's religious beliefs is not allowed. Debates and arguments are welcome, but keep them respectful—personal attacks and antagonizing behavior will not be tolerated. 

No hateful comments about art or artists

This applies to both AI and non-AI art. Please be respectful of others and their work regardless of your personal beliefs. Constructive criticism and respectful discussions are encouraged. 

Use the appropriate flair

Flairs are tags that help users understand the content and context of a post at a glance

750
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/RunDiffusion on 2024-08-29 14:00:31+00:00.
