Another experiment in using Stable Diffusion on a Poser render, as if it were a Photoshop filter.
1. Load and pose the figure and any figure-props in Poser. Here the standard Meshbox H.P. Lovecraft figure is being used, with a brass telescope from Steampunk Balloon. The M4 pose applied has him gripping some rigging rope (not visible here) on a steampunk airship.
2. In Poser, use the Materials Python script that ships with every copy of Poser to lift the scene’s Gamma to 1. For comics you might also apply, as I did here, a good bright even light that tends to flatten things out (a ‘flat light’, as I call them). These measures mean the dark suit can now be seen properly; one of the most fatal problems in making comics from 3D is the unfathomable tendency for makers to accept lots of ‘black on black’. Add a Comic Book outline via the Comic Book Preview panel, and render in Preview. This will help the Canny edge detection later on. Output to .PNG format at 768px square.
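For the scripting-minded, a similar brightness nudge can also be applied to a saved render outside Poser. To be clear, this is not the bundled Materials script (which adjusts the scene before rendering); it is just a minimal Pillow sketch of gamma as a lookup-table operation, with placeholder file names, for tweaking a .PNG after the fact.

```python
# Minimal gamma tweak on a saved render, using Pillow (pip install Pillow).
# A sketch only: the proper route is Poser's own Materials script, which
# changes the scene Gamma before rendering. File names are placeholders.
from PIL import Image

def apply_gamma(src_path: str, dst_path: str, gamma: float = 2.2) -> None:
    """gamma > 1 brightens the midtones; gamma = 1 leaves the image alone."""
    img = Image.open(src_path).convert("RGB")
    # One 256-entry lookup table, repeated for the R, G and B bands.
    lut = [round(255 * (i / 255.0) ** (1.0 / gamma)) for i in range(256)]
    img.point(lut * 3).save(dst_path)

apply_gamma("preview_render.png", "preview_brightened.png")
```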
3. In Poser’s good old Firefly renderer, use my lineart-only render preset to get just the lines. This type of render gives you all the lines, not only the ones Comic Book Preview chooses to show. Render to a 768px square .PNG file. We do lose the hair, which is just an image texture, but the next step will bring it back.
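If you don’t have a lineart-only preset, a rough software substitute (an assumption on my part, not what was done here) is to pull edges from the Step 2 render with OpenCV’s Canny detector, the same operator the ControlNet applies in Step 5. Thresholds and file names are illustrative:

```python
# Rough lineart substitute: Canny edges pulled from the Preview render.
# Requires opencv-python; thresholds will need tuning per image.
import cv2

grey = cv2.imread("preview_render.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(grey, threshold1=100, threshold2=200)
# Invert so the lines are black on white, like the Firefly lineart render.
cv2.imwrite("lineart.png", 255 - edges)
```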
4. Combine the two .PNGs in Photoshop. Do this by dropping the Firefly lineart on top of the Gamma-lifted Preview render, setting the layer to Multiply, and adjusting to taste. Here I also set a white backdrop layer, since the PNGs otherwise have embedded transparency. To lighten things up a bit more, I also blended the result a little into the white background.
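The same composite can be scripted if you prefer. A minimal Pillow sketch, assuming both renders are 768px PNGs with transparency; ImageChops.multiply stands in for Photoshop’s Multiply blend, and the lighten factor is a placeholder to taste:

```python
# Script version of the Step 4 composite: white backdrop, Multiply blend,
# then a slight mix toward white. File names are placeholders.
from PIL import Image, ImageChops

def composite(preview_path, lineart_path, out_path, lighten=0.15):
    white = Image.new("RGBA", (768, 768), (255, 255, 255, 255))
    # Flatten each PNG's embedded transparency onto the white backdrop.
    preview = Image.alpha_composite(white, Image.open(preview_path).convert("RGBA"))
    lineart = Image.alpha_composite(white, Image.open(lineart_path).convert("RGBA"))
    combined = ImageChops.multiply(preview, lineart)
    # 'Blend a little into the white background': mix slightly toward white.
    Image.blend(combined, white, lighten).convert("RGB").save(out_path)

composite("preview_brightened.png", "lineart.png", "combined.png")
```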
5. Now start the free InvokeAI. Import your final Step 4 .PNG and use it both for Img2Img and in the Canny ControlNet. Use the settings seen in the screenshot-combo above, making sure to get them all. You may of course need to juggle the prompt and negative prompt if using your own test render. The Stable Diffusion model used is the free Another Damn Art Model (ADAM) 4.5, available at CivitAI.
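InvokeAI is GUI-driven, but for anyone who would rather script this step, roughly the same combination (Img2Img plus a Canny ControlNet at low strength) can be set up with Hugging Face’s diffusers library. This is a sketch under assumptions, not InvokeAI’s internals: the checkpoint file name, prompts and numbers are placeholders, and you would substitute the values from the screenshots.

```python
# Img2Img + Canny ControlNet, sketched with diffusers (not InvokeAI itself).
# Assumes a recent diffusers with from_single_file support, a CUDA GPU, and
# the ADAM 4.5 checkpoint downloaded from CivitAI. All file names, prompts
# and numbers below are placeholders.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_single_file(
    "adam_v45.safetensors",            # placeholder local checkpoint name
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("combined.png").convert("RGB")   # the Step 4 composite
# InvokeAI runs the Canny preprocessor for you; diffusers wants it explicit.
edges = cv2.Canny(cv2.imread("combined.png", cv2.IMREAD_GRAYSCALE), 100, 200)
canny = Image.fromarray(np.stack([edges] * 3, axis=-1))

images = pipe(
    prompt="1930s man in a dark suit with a brass telescope, comic-book style",
    negative_prompt="photo, blurry, deformed",
    image=init,
    control_image=canny,
    strength=0.45,                     # low Img2Img strength, as in the post
    num_images_per_prompt=4,           # pick the best of 4 (or 8)
).images
for i, im in enumerate(images):
    im.save(f"sd_out_{i}.png")
```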
That’s it. Upscale the best 1024px result 2x so you can mask, cut out and defringe cleanly in Photoshop, if planning to composite the character onto a background. The intended destination is one frame of a comic-book page, so the roughness and a few imperfections (visible when the image is scrutinised at a large size) don’t really matter. The lack of contrast and colour vibrancy is also a good thing, as both can be tweaked up later on; it would be trickier to try to subdue garish colours or high contrast.
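If you have no dedicated upscaler to hand, a plain Lanczos resize is the simplest scripted stand-in (a sketch with placeholder file names; a proper AI upscaler will do better on the lines):

```python
# Plain 2x Lanczos upscale of the chosen generation, via Pillow.
from PIL import Image

best = Image.open("sd_out_2.png")   # whichever of the generations won
best.resize((best.width * 2, best.height * 2), Image.LANCZOS).save("sd_best_2x.png")
```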
This should also work nicely (not yet tested) if you start with a figure plus a lighting-matched backdrop render. But obviously keeping the figure and backdrop separate can make adjustments on the comics page easier in Photoshop (slightly blur or lighten the background to make the characters stand out, etc.). You may also want two very different characters interacting, in which case you would likely deal with them separately and then bring them together in Photoshop.
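As a sketch of that separate-backdrop idea (untested here, and all file names are placeholders): soften and lift the background render slightly, then drop the cut-out character, with its alpha intact, on top.

```python
# Soften and lighten a backdrop render, then composite a cut-out character.
# A sketch of the idea only; radii, factors and file names are placeholders.
from PIL import Image, ImageEnhance, ImageFilter

bg = Image.open("backdrop_render.png").convert("RGB")
bg = bg.filter(ImageFilter.GaussianBlur(radius=2))    # slight blur
bg = ImageEnhance.Brightness(bg).enhance(1.1)         # slight lighten

frame = bg.convert("RGBA")
character = Image.open("character_cutout.png").convert("RGBA")
frame.alpha_composite(character)                      # respects the alpha edge
frame.convert("RGB").save("frame.png")
```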
The Stable Diffusion result is of course not perfect, but you can pick the best from four or even eight image-generations. Here he has acquired a ring on his finger, and the jawline is too ‘1930s heroic’ and not really ‘Lovecraft deformed’ enough. But the silhouette of the figure and prop matches the Poser renders perfectly, which means you can get consistent colour throughout a comic-book page (here’s how: greyscale the SD result, get a full-colour render from Poser, size it to fit and lay it on top, then set Photoshop’s blending mode to Colour).
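That recolouring trick can also be scripted. A minimal sketch: take hue and saturation from the full-colour Poser render and brightness from the greyscaled SD result. Pillow’s HSV value channel only approximates Photoshop’s ‘Colour’ mode, which works on luminosity, but it is close enough to test the idea; file names are placeholders.

```python
# Approximation of Photoshop's 'Colour' blend: hue + saturation from the
# colour render, brightness from the greyscaled SD result. HSV value is a
# stand-in for true luminosity. File names are placeholders.
from PIL import Image

grey = Image.open("sd_best_2x.png").convert("L")
colour = (Image.open("poser_colour_render.png")
          .resize(grey.size)                  # 'size it to fit'
          .convert("RGB"))

h, s, _ = colour.convert("HSV").split()
Image.merge("HSV", (h, s, grey)).convert("RGB").save("recoloured.png")
```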
One thing I tried along the way was prompting for Cary Grant (the 1930s movie actor). It does pretty well; SD must have been trained on plenty of his images. Consider using an M4 with a ‘somewhat-Cary’ morph and just prompting for the old movie star, for a more or less consistent head. Or try some other big movie star of the 1920s and 30s. I think the difficulty I had in getting an exact Lovecraft likeness was that the ADAM model doesn’t really ‘know’ him well. But it’s the best model I’ve yet found for this sort of ‘SD as Photoshop filter’ workflow, being very strong and thus working well at low Img2Img settings.
Part two: Successful test – the ‘proof of workflow’.