
[Stable Diffusion] Prompt Sharing and Learning Thread

Sharinel

Member
Dec 23, 2018
469
1,941
Always experiment with your generations. Here's a 1.5 model generation on Dreamshaper8
01280.png

And here it is after I put it through a SDXL checkpoint for hires (a pony one no less, without all that score_9 shite)

01281.png

Just in case you weren't aware that you can mix'n'match 1.5 and SDXL/Pony in some cases. Here's what I used in Forge


1714774844896.png
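Outside Forge, the same "SD1.5 base, SDXL hires" trick can be sketched with the diffusers library. This is only a minimal sketch, not the poster's exact setup: the model ids, the 2x scale, and the 0.5 denoising strength are all assumptions you would tune.

```python
# Sketch of the two-pass "SD1.5 base, SDXL hires" trick using diffusers.
# Model ids, scale, and strength are illustrative assumptions, not the
# poster's exact Forge settings.

def hires_size(width: int, height: int, scale: float = 2.0) -> tuple[int, int]:
    """Upscaled target size, snapped to multiples of 8 (a latent-grid requirement)."""
    snap = lambda v: int(round(v * scale / 8)) * 8
    return snap(width), snap(height)

def two_pass(prompt: str, seed: int = 0):
    # Heavy imports are deferred so hires_size() works without torch installed.
    import torch
    from diffusers import StableDiffusionPipeline, StableDiffusionXLImg2ImgPipeline

    g = torch.Generator("cuda").manual_seed(seed)
    # Pass 1: compose with an SD1.5 checkpoint at its native 512px.
    base = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
    img = base(prompt, width=512, height=512, generator=g).images[0]
    # Pass 2: img2img with an SDXL checkpoint at the upscaled size.
    # strength ~0.5 keeps the 1.5 composition but redraws detail in SDXL style.
    w, h = hires_size(512, 512)
    hires = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16).to("cuda")
    return hires(prompt, image=img.resize((w, h)), strength=0.5,
                 generator=g).images[0]

# two_pass("close-up portrait, detailed skin")  # needs a CUDA GPU and the weights
```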
 

EvylEve

Newbie
Apr 5, 2021
20
29
Always experiment with your generations. Here's a 1.5 model generation on Dreamshaper8
View attachment 3600573

And here it is after I put it through a SDXL checkpoint for hires (a pony one no less, without all that score_9 shite)

View attachment 3600575

Just in case you weren't aware that you can mix'n'match 1.5 and SDXL/Pony in some cases. Here's what I used in Forge


View attachment 3600577
Very nice result, mind a silly question? Have you experimented with just hires fix? Never played with "Refine"?
 

Sharinel

Member
Dec 23, 2018
469
1,941
Very nice result, mind a silly question? Have you experimented with just hires fix? Never played with "Refine"?
Refiner doesn't work as well. From what I can work out, it's because the VAE would need to change midway through and there is no dropdown to choose it. I have my VAE set to automatic, which I suspect is why it works for hires: hires is amending a completed image, whereas the refiner is amending an ongoing image, so it doesn't seem to work.

Although that's just me, maybe I'm missing something. Adetailer works fine though, as it does have a VAE dropdown, and it's very useful for changing the look of faces - a particular checkpoint can have a 'default' face, so just use another checkpoint to change it (also handy if you have a 1.5 Lora whose face you want to use in SDXL...)
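For what it's worth, the "no VAE dropdown" limitation is UI-specific: if you build the pipeline yourself with diffusers, you can pin an explicit VAE per stage. A minimal sketch; the VAE repo ids below are common community choices, not something taken from this thread.

```python
# Pinning an explicit VAE per pipeline stage with diffusers, instead of
# relying on an "automatic" dropdown. Repo ids are common community picks
# and are assumptions, not the poster's setup.

VAE_FOR_FAMILY = {
    "sd15": "stabilityai/sd-vae-ft-mse",      # widely used SD1.5 VAE finetune
    "sdxl": "madebyollin/sdxl-vae-fp16-fix",  # SDXL VAE patched for fp16
}

def vae_repo(model_family: str) -> str:
    """Map a checkpoint family to a compatible VAE; mixing families is
    what produces the broken decodes described above."""
    if model_family not in VAE_FOR_FAMILY:
        raise ValueError(f"no known VAE for family {model_family!r}")
    return VAE_FOR_FAMILY[model_family]

def load_sdxl_with_vae(checkpoint: str):
    # Deferred imports so vae_repo() works without torch installed.
    import torch
    from diffusers import AutoencoderKL, StableDiffusionXLPipeline

    vae = AutoencoderKL.from_pretrained(vae_repo("sdxl"), torch_dtype=torch.float16)
    return StableDiffusionXLPipeline.from_pretrained(
        checkpoint, vae=vae, torch_dtype=torch.float16)
```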
 

devilkkw

Member
Mar 17, 2021
282
962
Always experiment with your generations. Here's a 1.5 model generation on Dreamshaper8
View attachment 3600573

And here it is after I put it through a SDXL checkpoint for hires (a pony one no less, without all that score_9 shite)

View attachment 3600575

Just in case you weren't aware that you can mix'n'match 1.5 and SDXL/Pony in some cases. Here's what I used in Forge


View attachment 3600577
Nice, some parts get really good details, but it seems hires ruined the work, especially the hands. Have you tried a lower denoise?
 

EvylEve

Newbie
Apr 5, 2021
20
29
Refiner doesn't work as well. From what I can work out, it's because the VAE would need to change midway through and there is no dropdown to choose it. I have my VAE set to automatic, which I suspect is why it works for hires: hires is amending a completed image, whereas the refiner is amending an ongoing image, so it doesn't seem to work.

Although that's just me, maybe I'm missing something. Adetailer works fine though, as it does have a VAE dropdown, and it's very useful for changing the look of faces - a particular checkpoint can have a 'default' face, so just use another checkpoint to change it (also handy if you have a 1.5 Lora whose face you want to use in SDXL...)
From my little experience I have to wholly agree with you. The VAE is actually one of the issues I'm fighting with refining; the only semi-workaround I've found so far is using models with a baked-in VAE and letting Stable Diffusion manage it via Automatic, yet sometimes I get messy blobs of random noise/colors.

I'm actually trying to generate with anime/cartoonish-style models (way easier to get what I really want), then make the result look a bit more realistic through further steps, avoiding heavy inpainting or other Photoshop re-edits.

So far the only positive results I've got were through hires fix and with prompts from file, but there's still a long way to go.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,791
You can use the refiner for SD1.5 models too; it's not limited to XL models. This means you can use a cartoon-oriented ckpt and then switch to a photorealistic one with the refiner function. You can also do this with hires fix, meaning you use a cartoon ckpt as the main model and then switch to a photorealistic one in hires fix. I made a post about this a while ago. I wish it were possible to select the VAE in hires fix and in the refiner; that way you could mix and match SD1.5 models with XL models etc.
 

devilkkw

Member
Mar 17, 2021
282
962
A fully automated workflow for "image 2 text 2 image" or "image 2 image"
kkw-automated.png

This workflow loads an image; if there's a prompt it uses it to make the new image, and if not it uses a BLIP model to generate a prompt.
I've made it as simple as possible; you only have to select a few switches for generation.
It can also load a LoRA and swap things in the prompt.
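The prompt-or-caption branch in that workflow can be sketched outside ComfyUI too. A rough equivalent in Python, assuming the Salesforce BLIP captioning model (the actual BLIP model the node uses may differ):

```python
# The workflow's branch -- "if there's a prompt use it, otherwise BLIP-caption
# the image" -- as plain functions. The BLIP model id is an assumption.

def resolve_prompt(user_prompt, image, caption_fn):
    """Return the user's prompt if non-empty, else a caption of the image."""
    if user_prompt and user_prompt.strip():
        return user_prompt.strip()
    return caption_fn(image)

def blip_caption(image) -> str:
    # Heavy imports deferred so resolve_prompt() works without transformers.
    from transformers import BlipProcessor, BlipForConditionalGeneration

    processor = BlipProcessor.from_pretrained(
        "Salesforce/blip-image-captioning-base")
    model = BlipForConditionalGeneration.from_pretrained(
        "Salesforce/blip-image-captioning-base")
    inputs = processor(image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=40)
    return processor.decode(out[0], skip_special_tokens=True)

# resolve_prompt("", img, blip_caption)       -> BLIP caption of img
# resolve_prompt("foxy", img, blip_caption)   -> "foxy"
```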

image 2 text 2 image sample:
kkw-automated-i2t2i-_00002_.png
changing subject kkw-automated-i2t2i-_00003_.png

Image 2 Image sample (subject change):
foxy
kkw-automated-i2i-_00003_.png
dog kkw-automated-i2i-_00002_.png
cat kkw-automated-i2i-_00001_.png
tom cruise
kkw-automated-i2i-_00004_.png
emma watson
kkw-automated-i2i-_00005_.png

All images have the workflow included.
Hope you like it and have fun experimenting with it.
 

felldude

Member
Aug 26, 2017
467
1,430
I have been working on a 2k Pony training.
It was around 10GB of training data, roughly 1,000 high-quality images.

Here are some test post images if someone wanted to compare the results with their favorite pony model.

ComfyUI_00078_.png ComfyUI_00972_.png ComfyUI_00684_.png
 

JValkonian

Member
Nov 29, 2022
153
138
A fully automated workflow for "image 2 text 2 image" or "image 2 image"
View attachment 3610374

This workflow loads an image; if there's a prompt it uses it to make the new image, and if not it uses a BLIP model to generate a prompt.
I've made it as simple as possible; you only have to select a few switches for generation.
It can also load a LoRA and swap things in the prompt.

image 2 text 2 image sample:
View attachment 3610395
changing subject View attachment 3610388

Image 2 Image sample (subject change):
foxy
View attachment 3610392
dog View attachment 3610393
cat View attachment 3610394
tom cruise
View attachment 3610391
emma watson
View attachment 3610390

All images have the workflow included.
Hope you like it and have fun experimenting with it.
 
  • Like
Reactions: devilkkw

mailpa

New Member
Sep 5, 2021
2
3
Is there a better way to use LoRAs in ComfyUI? I'm used to placing a LoRA loader right next to the base model and linking it to every sampler I use. Has anyone tested other architectures, like placing it only on a second sampler that finishes the image?
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,520
3,583
Is there a better way to use LoRAs in ComfyUI? I'm used to placing a LoRA loader right next to the base model and linking it to every sampler I use. Has anyone tested other architectures, like placing it only on a second sampler that finishes the image?
Yes, I did that. The LoRA changes the very first latent you get so much that not having it on your first sampler produces very different results in the final image.
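One way to test that claim outside ComfyUI, sketched with diffusers: run the same seed twice, once with the LoRA loaded before the first pass and once loading it only for the finishing pass. The model id, LoRA path, and img2img strength are placeholders, not anyone's real setup.

```python
# Comparing "LoRA on both samplers" vs "LoRA only on the finishing sampler".
# Model id, lora_path, and strength are illustrative placeholders.

def lora_load_points(lora_in_first_pass: bool) -> list[str]:
    """Which samplers the LoRA is active in under each scheme."""
    return ["pass1", "pass2"] if lora_in_first_pass else ["pass2"]

def run(prompt: str, lora_path: str, lora_in_first_pass: bool, seed: int = 0):
    # Deferred imports so lora_load_points() works without torch installed.
    import torch
    from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
    if lora_in_first_pass:
        pipe.load_lora_weights(lora_path)  # LoRA shapes the very first latent
    g = torch.Generator("cuda").manual_seed(seed)
    img = pipe(prompt, width=512, height=512, generator=g).images[0]

    if not lora_in_first_pass:
        pipe.load_lora_weights(lora_path)  # LoRA only finishes the image
    # Reuse the already-loaded components for the second, img2img sampler.
    i2i = StableDiffusionImg2ImgPipeline(**pipe.components)
    return i2i(prompt, image=img, strength=0.4, generator=g).images[0]
```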
 
  • Like
Reactions: mailpa

devilkkw

Member
Mar 17, 2021
282
962
Is there a better way to use LoRAs in ComfyUI? I'm used to placing a LoRA loader right next to the base model and linking it to every sampler I use. Has anyone tested other architectures, like placing it only on a second sampler that finishes the image?
If you download this:
A fully automated workflow for "image 2 text 2 image" or "image 2 image"
View attachment 3610374

This workflow loads an image; if there's a prompt it uses it to make the new image, and if not it uses a BLIP model to generate a prompt.
I've made it as simple as possible; you only have to select a few switches for generation.
It can also load a LoRA and swap things in the prompt.

image 2 text 2 image sample:
View attachment 3610395
changing subject View attachment 3610388

Image 2 Image sample (subject change):
foxy
View attachment 3610392
dog View attachment 3610393
cat View attachment 3610394
tom cruise
View attachment 3610391
emma watson
View attachment 3610390

All images have the workflow included.
Hope you like it and have fun experimenting with it.
you'll see there's a "CLIP set layer" node connected after the LoRA; I set it at -24 and the LoRA seems to work better.
Just use it to see how to connect it, and do some tests, like fixing the seed and changing the value. Also, the LoRA's values matter most: weight and clip strength make a difference.

Also, with multiple samplers, have you tried using "re-encode VAE" and then reapplying the LoRA in the next sampler? There are so many tricks CUI lets you test; maybe sharing a simple workflow and letting us modify and check it is a good idea, because everyone has their own skills, and with these tools there's something new to learn every day.
 

hkennereth

Member
Mar 3, 2019
224
726
Is there a better way to use LoRAs in ComfyUI? I'm used to placing a LoRA loader right next to the base model and linking it to every sampler I use. Has anyone tested other architectures, like placing it only on a second sampler that finishes the image?
That's my usual process. In my workflow I run the prompt first without any loras on a SDXL checkpoint, which gives me more original composition and poses, and then using a combination of ControlNet and img2img I run a second SD1.5 checkpoint with a character lora to get the likeness of the person I'm making images of. Works great.
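That two-stage recipe (SDXL for composition, then SD1.5 + ControlNet + character LoRA for likeness) might look roughly like this in diffusers. Everything here is an assumed stand-in: the canny ControlNet, model ids, LoRA path, and strengths, and the control-image preprocessing is left to the caller.

```python
# SDXL composes, then an SD1.5 ControlNet img2img pass applies a character
# LoRA -- a sketch of the two-stage recipe above. All ids, paths, and
# strengths are illustrative assumptions.

def sd15_pass_size(w: int, h: int, target_long_edge: int = 768) -> tuple[int, int]:
    """Scale an SDXL-sized image down to an SD1.5-friendly size,
    keeping aspect ratio and snapping to multiples of 8."""
    scale = target_long_edge / max(w, h)
    snap = lambda v: max(8, int(round(v * scale / 8)) * 8)
    return snap(w), snap(h)

def likeness_pass(prompt: str, sdxl_image, control_image, lora_path: str):
    # control_image (e.g. a canny edge map of sdxl_image) is prepared by the
    # caller; deferred imports keep sd15_pass_size() usable without torch.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16).to("cuda")
    pipe.load_lora_weights(lora_path)  # character LoRA supplies the likeness

    w, h = sd15_pass_size(*sdxl_image.size)
    return pipe(prompt, image=sdxl_image.resize((w, h)),
                control_image=control_image.resize((w, h)),
                strength=0.6).images[0]
```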
 

Nano999

Member
Jun 4, 2022
147
66
How do you turn 1.5 into SDXL or Pony? Do you just download the model, load it as the base model, and that's it?
And if you want to use 1.5, you just load any other model, right?
Because when I try to use SDXL models they just generate ugly nonsense:
1715777733677.png
 
Last edited:

Sharinel

Member
Dec 23, 2018
469
1,941
How do you turn 1.5 into SDXL or Pony? Do you just download the model, load it as the base model, and that's it?
And if you want to use 1.5, you just load any other model, right?
Because when I try to use SDXL models they just generate ugly nonsense:
View attachment 3636655
Check your VAE; that is normally the cause of the SDXL nonsense generations. If you are using Auto1111/Forge, set it to automatic. Not sure what you do in Comfy, but I'm sure you just need to add another 20 flowcharts and some spaghetti linking them and you'll be fine.
VAE.jpg
 
  • Like
Reactions: VanMortis

Nano999

Member
Jun 4, 2022
147
66
flowcharts?
spaghetti?

I'm trying to use this lora, but nothing works:


It never generates good quality and it's super slow, like 10 minutes per image

1715783186097.png
1715783191537.png

1715782994679.png
1715783023805.png
1715785589592.png
Original:
1715785465038.png

My result:
1715785730167.png
1715785748877.png
1715785756279.png
 

EvylEve

Newbie
Apr 5, 2021
20
29
flowcharts?
spaghetti?

I'm trying to use this lora, but nothing works:


It never generates good quality and it's super slow, like 10 minutes per image

View attachment 3637020
View attachment 3637022

View attachment 3637010
View attachment 3637013
View attachment 3637116
Original:
View attachment 3637112

My result:
View attachment 3637122
View attachment 3637124
View attachment 3637125
Mmm, weird. Have you tried increasing the dimensions of your target image? The default on A1111 is 512x512, which is perfect for SD 1.5, but SDXL needs something bigger.

One more tip: don't bother with the refiner initially; hires fix is way better (tweak the settings to enable model swap for it).

Remove Pony's specific tokens (if you're not using a Pony-based model).

However, with your exact same prompt, using the base SDXL model (but the one with the VAE fix, you can get it ) and no refiner, just increasing the size to 768x768, I get something like:

grid-0088.png

Increasing to 1024x1024:


grid-0089.jpg


And if I switch to a pony model (ponyRealism v11):



grid-0090.png
 
Last edited:

spikor

New Member
Dec 11, 2021
9
82
I'm a noob with SD, but I had the same issue: switch to 1024x1024 images and it will work. SDXL is trained at that resolution and apparently breaks at 512x512.
On the other hand, SD1.5 works way better at 512x512 for the same reason.
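That rule of thumb fits in one tiny helper, for anyone scripting their generations (the family names are just labels for this sketch, not API values):

```python
# Native training resolutions per model family -- the reason 512x512 breaks
# SDXL while 1024x1024 wastes time (and VRAM) on SD1.5.

NATIVE_EDGE = {
    "sd15": 512,   # SD 1.5 was trained around 512x512
    "sdxl": 1024,  # SDXL was trained around 1024x1024
    "pony": 1024,  # Pony models are SDXL-based, same native size
}

def default_size(model_family: str) -> tuple[int, int]:
    """Square starting resolution for a model family; raises on unknown names."""
    edge = NATIVE_EDGE.get(model_family.lower())
    if edge is None:
        raise ValueError(f"unknown model family: {model_family!r}")
    return edge, edge

# default_size("sdxl") -> (1024, 1024); go bigger via hires fix afterwards,
# not by sampling the first pass far above the native size.
```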
 

Sharinel

Member
Dec 23, 2018
469
1,941
flowcharts?
spaghetti?

I'm trying to use this lora, but nothing works:


It never generates good quality and it's super slow, like 10 minutes per image

View attachment 3637020
View attachment 3637022

View attachment 3637010
View attachment 3637013
View attachment 3637116
Original:
View attachment 3637112

My result:
View attachment 3637122
View attachment 3637124
View attachment 3637125
Right, let's go through this:

1. You are running a Pony prompt and trying to use a Pony LoRA, but with the base SDXL. That doesn't work. It's like trying to put diesel into a petrol vehicle: both do the same thing, slightly differently, and they don't play nice with each other. (How do I know it's a Pony prompt? All that score_9 stuff at the start of the prompt.)

2. Go back to civitai, download the following checkpoint -
This is the base Pony model.

3. As EvylEve said above, swap to 1024x1024 and untick the 'enable refiner' box. For the SD VAE section, change it to automatic (most new checkpoints have the VAE included, so the automatic setting uses theirs by default). Don't bother with hires fix just yet.

4. If it still doesn't work, take screenshots again and we'll (try to) see what's going on
 
Last edited:
  • Like
Reactions: EvylEve