

deleted by creator






Not after he changed the text model without warning…




??? This is just the example workflow from their HF model card. I’m not sure what you mean here, as it doesn’t come out well at all with the same settings I used with Perchance.



So I just tried what you suggested. Perchance’s is on the left and mine on the right.

As you can see mine is still not there yet and I need more info. Here is the workflow:

If you could replicate Perchance’s exactly and provide your workflow, I would be glad to see it.


You probably want a GPU with 12 GB of VRAM, as that’s enough to fit the entire fp8-scaled version. Also a PCIe 4 or 5 NVMe SSD if you don’t want model load times to be a problem, but that’s about it. Linux is great for AI too.
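A quick back-of-envelope check of the 12 GB claim, assuming a parameter count of roughly 8.9 billion (the figure Chroma advertises; treat it as an assumption here). fp8 stores one byte per weight, so:

```python
# Rough VRAM estimate for an fp8-scaled checkpoint.
# ~8.9B parameters is an assumed figure; fp8 = 1 byte per weight.
params = 8.9e9
bytes_per_param = 1  # fp8
weights_gib = params * bytes_per_param / 1024**3
print(f"weights alone: {weights_gib:.1f} GiB")  # ≈ 8.3 GiB
```

That leaves a few gigabytes of headroom on a 12 GB card for activations and the text encoder, which is why the fp8 version fits where the fp16 one would not.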


Does this even work??? I advise you to try my options here and post your results. If you can replicate this exact image locally I’d be pleased.


This is what happened when I ran Chroma locally at CFG 7; it doesn’t work.



Schnell because of the speed, I assume?
Haven’t found good LoRAs to make that intricate style.
Chroma does accept negative prompts! Even though it is somewhat based on Flux.
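The reason negative prompts and a CFG of 7 go together: classifier-free guidance mixes two predictions, one conditioned on the prompt and one on the negative (or empty) prompt, and the scale amplifies their difference. A minimal sketch of the standard formula, with illustrative numbers only:

```python
import numpy as np

def cfg_combine(uncond, cond, scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional / negative-prompt branch by `scale`."""
    return uncond + scale * (cond - uncond)

# Toy per-element noise predictions (made-up values for illustration).
uncond = np.array([0.1, 0.2])
cond = np.array([0.3, 0.1])

print(cfg_combine(uncond, cond, 1.0))  # scale 1 just returns `cond`
print(cfg_combine(uncond, cond, 7.0))  # scale 7 strongly amplifies the prompt
```

This is also why a distilled model like Flux Schnell ignores CFG: it only runs the conditional branch, so there is nothing for a negative prompt to subtract from, whereas Chroma keeps both branches.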