Hi there, I’ve been toying around with Stable Diffusion for some time now and I’m starting to get a feel for it. So far I’m quite happy with my workflow for generating image compositions (i.e. getting the characters, items, poses, … that I want), but the resulting images sometimes look quite crude. The question is: what tools can I use to iron out these images and make them look crisper, while keeping the composition the same throughout? Any tips are highly welcome

  • Salad@lemmy.fmhy.ml

    Civitai has a bunch of LoRAs that might help with crispness; my favourites are Denoise & AddDetail. I’d also use a popular SD 1.5 model like Deliberate (until SDXL is released). I always use highres fix, maybe scaling 2x from 512 at a low denoising strength, and see what you think.
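
    In diffusers terms (rather than the webui), that two-pass highres-fix-style flow might look roughly like the sketch below; the model id, prompt, and strength are illustrative assumptions, not the commenter’s exact settings.

    ```python
    # Hypothetical two-pass flow: generate at 512, then upscale 2x and
    # re-render at a low denoising strength.
    import torch
    from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

    model_id = "runwayml/stable-diffusion-v1-5"  # stand-in for e.g. Deliberate
    prompt = "a knight in ornate armor, detailed"  # placeholder prompt

    # Pass 1: base composition at 512x512.
    txt2img = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    base = txt2img(prompt, width=512, height=512).images[0]

    # Pass 2: 2x upscale, then img2img with low strength so the
    # composition survives while details sharpen.
    img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    refined = img2img(prompt, image=base.resize((1024, 1024)), strength=0.3).images[0]
    refined.save("refined.png")
    ```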

    There’s also the ADetailer (After Detailer) extension, which I use for extra processing passes on hands, faces, etc.

  • bttoddx@lemmy.dbzer0.com

    Honestly, I go through a process where I take the image into img2img and re-render it at increasingly lower denoising strengths until I get a result that looks good. I tend to alternate between samplers too, depending on what I’m going for. I haven’t quite got the hang of inpainting yet, but I’ve seen other people’s workflows of upscaling the image and then addressing problem areas individually.
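
    As a rough illustration of that decreasing-denoise loop in diffusers (the file name, prompt, and strength schedule are placeholders, not the commenter’s exact settings):

    ```python
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = Image.open("draft.png").convert("RGB")
    prompt = "same prompt used for the original render"  # placeholder

    # Each pass injects less noise, so later passes refine detail
    # without redrawing the composition.
    for strength in (0.6, 0.45, 0.3):
        image = pipe(prompt, image=image, strength=strength).images[0]

    image.save("refined.png")
    ```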

    • Scrubbles@poptalk.scrubbles.tech

      Yeah, it’s really kind of a process right now. Most of the time I can get an okay image with a really good prompt, but if I want a great image it’s usually:

      • Form the perfect prompt, a couple hours of fine tuning
      • Move to img2img, re-render to make it a bit more cohesive
      • Pick one from the batch, move to inpainting
      • For each round of inpainting:
        • Paint the area
        • Alter the prompt
        • Generate a batch that looks good
        • If good, move to the next round of inpainting, else repeat
      • A couple of final rounds of img2img
      • Scale up

      It’s quite a process right now to make it perfect. Overall, if you want something really specific, it may take a few hours of fine tuning and adjustment to get it there; a rough sketch of one inpainting round is below.
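
      In code form, one round of the inpainting loop above might look like this diffusers sketch (in the webui this is just the Inpaint tab; the mask, prompt, and model id here are placeholders):

      ```python
      import torch
      from diffusers import StableDiffusionInpaintPipeline
      from PIL import Image

      pipe = StableDiffusionInpaintPipeline.from_pretrained(
          "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
      ).to("cuda")

      image = Image.open("picked_from_batch.png").convert("RGB")
      mask = Image.open("painted_area.png").convert("L")  # white = repaint

      # Alter the prompt to describe only the masked region, render a
      # batch, keep the best one, then repeat for the next problem area.
      batch = pipe(
          prompt="a detailed hand, five fingers",  # placeholder round prompt
          image=image,
          mask_image=mask,
          num_images_per_prompt=4,
      ).images
      for i, candidate in enumerate(batch):
          candidate.save(f"candidate_{i}.png")
      ```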

  • voluntaryexilecat@lemmy.dbzer0.com

    You mean the step after the ludicrous amount of inpainting one has to do sometimes?

    Apart from the mentioned AddDetail LoRA (it works in the negative prompt as well), maybe try rerunning the image through Ultimate SD Upscale with the ControlNet extension? (Go easy on the denoising level here, or your image becomes a surrealist’s dream.)
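
    For a rough idea of what that combo does, here’s a hedged diffusers sketch of a tile-ControlNet upscale pass; Ultimate SD Upscale proper processes the image in chunks, so this single global pass is only an approximation, and the prompt and strength are illustrative.

    ```python
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
    from PIL import Image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    image = Image.open("inpainted.png").convert("RGB")
    big = image.resize((image.width * 2, image.height * 2))

    # Keep the denoising strength low: the tile ControlNet pins the
    # content, but a high strength still drifts toward that
    # surrealist's dream.
    result = pipe(
        prompt="high quality, sharp details",  # placeholder prompt
        image=big,
        control_image=big,
        strength=0.35,
    ).images[0]
    result.save("upscaled_2x.png")
    ```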

  • tryingnottobefat@lemmy.world

    I use SD to make character portraits for TTRPGs online, and I’ve had decent luck fixing images with Photoshop (Beta) generative fill. It’s good at some things, like replacing one eye so that it better matches the other. It’s okay at noses. It’s very bad at mouths. It’s good at removing backgrounds or replacing something with background. Extending images is 50-50, but it can be nice for getting a character centred in frame, or for filling in the top of their head if it was cut off. It’s also pretty good at blending two images. For example, I often have a good full-body image with a really weird face; generative fill makes it a lot easier to paste a different face on top and blend the edges.

    I know this is a paid option for what you’re asking, but hopefully it helps.
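
    If you want a free approximation of just the paste-and-blend step, PIL can do a feathered composite; this is only a sketch with placeholder file names and a hand-picked face box, not the commenter’s Photoshop workflow.

    ```python
    from PIL import Image, ImageDraw, ImageFilter

    body = Image.open("full_body.png").convert("RGB")    # good body, weird face
    face = Image.open("better_face.png").convert("RGB")  # must match body's size

    # White ellipse over the face region, blurred so the seam fades out.
    mask = Image.new("L", body.size, 0)
    ImageDraw.Draw(mask).ellipse((180, 60, 330, 240), fill=255)  # placeholder box
    mask = mask.filter(ImageFilter.GaussianBlur(15))

    # Where the mask is white, take pixels from the new face; elsewhere
    # keep the original body shot.
    Image.composite(face, body, mask).save("blended.png")
    ```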