Anyone have any tips on recreating perchance’s text-to-image output on a local model? Like which exact checkpoint the generator uses, or any additional unseen parameters that go into its image generation?
They say the model is Chroma (https://huggingface.co/lodestones/Chroma/tree/main), but I have my doubts. I've tried all kinds of sampler/scheduler combinations using XY Plot, and nothing came close to the anime style here. The closest was euler_ancestral with the beta or normal scheduler. There may also be a stack of anime-style LoRAs applied on top; I'm not sure. I'd like to know as well.
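As a side note, the kind of sweep an XY Plot node performs can be sketched in plain Python. This is just an illustration of the grid being searched, not anything perchance-specific; the sampler and scheduler names below are common ComfyUI options I'm assuming for the example, and only euler_ancestral + beta/normal come from the comment above.

```python
from itertools import product

# Hypothetical XY Plot grid: every sampler/scheduler pair to try.
samplers = ["euler", "euler_ancestral", "dpmpp_2m", "dpmpp_sde"]
schedulers = ["normal", "beta", "karras", "exponential"]

grid = list(product(samplers, schedulers))

for sampler, scheduler in grid:
    # In a real workflow, each pair would be fed to the KSampler
    # with a fixed seed and prompt so outputs are comparable.
    print(f"{sampler} + {scheduler}")
```

With 4 samplers and 4 schedulers that's already 16 renders per prompt, which is why narrowing it down to one or two promising pairs (like euler_ancestral + beta) matters before trying LoRA combinations on top.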

