This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/miaoshouai on 2024-09-05 16:14:37+00:00.


With the release of the FLUX model, using an LLM for prompting has become much more common, because FLUX can understand natural language through its combination of the T5 and CLIP_L text encoders. However, most LLMs require a lot of VRAM, and the results they return are not optimized for image prompting.
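As background, here is a minimal sketch of how those two text encoders can be fed different prompts. It uses Hugging Face diffusers' FluxPipeline, which is not part of the original post, and the prompt strings are placeholders: `prompt` goes to the CLIP_L encoder and `prompt_2` goes to T5.

```python
# A minimal sketch (assumes the diffusers library and access to FLUX.1-dev):
# give CLIP_L a short tag-style prompt and T5 a long natural-language caption.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # reduces VRAM usage

image = pipe(
    prompt="portrait, woman, window light, film grain",        # short prompt -> CLIP_L
    prompt_2="A close-up portrait of a woman lit by soft "
             "window light, shot on grainy film.",             # long prompt -> T5
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("out.png")
```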

I recently trained PromptGen v1 and got a lot of great feedback from the community, and I have just released PromptGen v1.5, a major upgrade based on much of that feedback. Version 1.5 was trained specifically to solve the issues mentioned above in the era of Flux. PromptGen is fine-tuned from the Microsoft Florence2 base model, so it is only about 1 GB, generates captions extremely fast, and uses much less VRAM.

PromptGen v1.5 handles image captioning in 5 different modes, all in a single model: danbooru-style tags, one-line image description, structured caption, detailed caption, and mixed caption, each of which targets a specific prompting scenario. Below are some of the features of this model (a minimal inference sketch follows the list):

  • When using PromptGen, you won't get annoying boilerplate like "This image is about..."; I know many of you have tried hard in your LLM prompts to get rid of these words.

  • Detailed image captions. The new version has greatly improved both its ability to capture details in the image and its accuracy.

  • With a general LLM, it's hard to get the model to name the position of each subject in the image. The structured caption mode reports exactly this position information, e.g. it will tell you that a person is on the left or right side of the image. This mode also reads text from the image, which can be super useful if you want to recreate a scene.

  • Memory efficient compared to other models! As mentioned above, this is a really lightweight caption model, and its quality is really good. In a comparison of PromptGen vs. Joy Caption, PromptGen even captures the character's facial expression (looking down) and the camera angle (shot from the side).

  • V1.5 is designed to produce image captions for the Flux model for both the T5XXL and CLIP_L encoders. ComfyUI-Miaoshouai-Tagger is the ComfyUI custom node created to make this model easier to use. Miaoshou Tagger v1.1 adds a new node called "Flux CLIP Text Encode", which eliminates the need to run two separate tagger passes for caption creation in "mixed" mode: you can populate both CLIPs in a single generation, significantly boosting speed when working with Flux models. This node also provides an empty conditioning output, so you no longer need a separate empty CLIP text encode just for the negative prompt in the KSampler for FLUX.
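For reference, here is a minimal inference sketch outside of ComfyUI, using the standard Florence2 loading pattern from Hugging Face transformers. The repo id and the task tokens for the five modes are assumptions based on the description above; check the Huggingface page for the exact names.

```python
# A minimal sketch, not the official usage: loads PromptGen v1.5 with the
# standard Florence2 pattern. Repo id and task tokens are assumptions;
# check the model card for the exact values.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "MiaoshouAI/Florence-2-base-PromptGen-v1.5"  # assumed repo id

model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# One task token per caption mode (assumed names mirroring the five modes):
# "<GENERATE_TAGS>", "<CAPTION>", "<DETAILED_CAPTION>",
# "<MORE_DETAILED_CAPTION>", "<MIXED_CAPTION>"
task = "<MORE_DETAILED_CAPTION>"
image = Image.open("example.jpg").convert("RGB")

inputs = processor(text=task, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=512,
    num_beams=3,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
caption = processor.post_process_generation(
    raw, task=task, image_size=(image.width, image.height)
)
print(caption[task])
```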

So please give the new version a try. I'm looking forward to your feedback and to continuing work on the model.

Huggingface Page:

Github Page for ComfyUI MiaoshouAI Tagger:
